However, whilst I can determine whether or not a point lies within the sphere, I have no clear algorithm for determining a contiguous surface. I don't care about efficiency. Does anyone have any idea how to proceed?

Thanks

]]>

I want to give the users of my API the option to add their own effects. By an effect I mean a combination of vertex/geometry/pixel shaders and render states.

My first (and, I think, absolutely wrong) idea was to write a lot of "mega-]]>
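One plug-in-style alternative to "mega-shaders" is to let API users implement an effect interface themselves and hand the engine the pieces. All names below are invented for illustration and are not part of any real API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical sketch: an effect is a bundle of shader sources plus the
// render states the engine should apply while the effect is bound.
struct RenderState {
    std::string name;   // e.g. "blend", "depth_test"
    std::string value;  // e.g. "additive", "off"
};

class Effect {
public:
    virtual ~Effect() {}
    // Shader sources supplied by the user (empty string if unused).
    virtual std::string VertexShader()   const = 0;
    virtual std::string GeometryShader() const = 0;
    virtual std::string PixelShader()    const = 0;
    // Render states to apply while the effect is active.
    virtual std::vector<RenderState> States() const = 0;
};

// A trivial user-defined effect.
class RedTintEffect : public Effect {
public:
    std::string VertexShader()   const { return "vs_main"; }
    std::string GeometryShader() const { return ""; }
    std::string PixelShader()    const { return "ps_red_tint"; }
    std::vector<RenderState> States() const {
        return { {"blend", "alpha"} };
    }
};
```

The engine then iterates over registered `Effect` objects instead of branching inside one giant shader.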

I'm using Borland C++ Builder as my environment to generate 3D graphics, but now I have a problem rendering the surfaces of a sphere.

It would be a great help if anybody could look into my problem!

//---------------------------------------------------------------------------

#include <vcl.h>

#pragma hdrstop

#include "math.h"

#include "matrix_and_vector.h"

//---------------------------------------------------------------------------

#pragma package(smart_init)

#pragma resource "*.dfm"

#define pi 3.141592654

#define ALPHA ((pi/180)*63.4)

#define GAMMA ((pi/180)*60)

#define L (1/tan(ALPHA))

#define sNmax 30

TForm1 *Form1;

//---------------------------------------------------------------------------

__fastcall TForm1::TForm1(TComponent* Owner)

: TForm(Owner)

{

}

//---------------------------------------------------------------------------

void matrix_multiply(float vrtx[4],float trnx_matrix[4][4])

{

float ans [4]={0,0,0,0};

int i,j;

for (i=0;i<4;i++)

{

for (j=0;j<4;j++)

{

ans[i]+=trnx_matrix[i][j]*vrtx[j];

}

}

for (i=0;i<4;i++)

{

vrtx[i]=ans[i];

}

}

struct vertex

{

float vrtx[4];

float xp,yp;

public:

void vertex_input(float x1,float y1,float z1)

{

vrtx[0]=x1;vrtx[1]=y1;vrtx[2]=z1;vrtx[3]=1;

}

void operator =(vertex v1)

{

vrtx[0]=v1.vrtx[0];

vrtx[1]=v1.vrtx[1];

vrtx[2]=v1.vrtx[2];

vrtx[3]=v1.vrtx[3]; // copy the homogeneous w component as well

}

void projected_vertex()

{

xp=vrtx[0]+vrtx[2]*L*cos(GAMMA);

yp=vrtx[1]+vrtx[2]*L*sin(GAMMA);

}

};

void translation(vertex v[sNmax+1][sNmax+1],float x,float y,float z)

{

int i,j;

float trns_matrix[4][4]={1,0,0,x,

0,1,0,y,

0,0,1,z,

0,0,0,1};

for (i=0;i<=sNmax;i++)

{

for (j=0;j<=sNmax;j++)

{

matrix_multiply(v[i][j].vrtx,trns_matrix);

}

}

}

class edge

{

private:

vertex v1;

vertex v2;

public:

void edge_input(vertex v11,vertex v22)

{

v1=v11;v2=v22;

}

void plot_edge()

{

v1.projected_vertex();

v2.projected_vertex();

Form1->Canvas->Pen->Color=clRed;

Form1->Canvas->MoveTo(v1.xp,v1.yp);

Form1->Canvas->LineTo(v2.xp,v2.yp);

}

};

class edge_table

{

private:

edge edge_lat[sNmax][sNmax];

edge edge_long[sNmax][sNmax];

public:

void edge_table_lat(vertex v[sNmax+1][sNmax+1])

{

int a,b;

float phi,theta,xs,ys,zs;

for (phi=90,a=0;phi>=-90 && a<=sNmax;phi-=(180/sNmax),a++)

{

for (theta=180,b=0;theta>=-180 && b<=sNmax;theta-=(360/sNmax),b++)

{

xs=r*cos((pi/180)*phi)*cos((pi/180)*theta);

ys=r*cos((pi/180)*phi)*sin((pi/180)*theta);

zs=r*sin((pi/180)*phi);

v[a][b].vertex_input(xs,ys,zs);

}

}

translation(v,x,y,z);

}

void sphere_edge_table()

{

sphere_edge.edge_table_lat(v);

sphere_edge.edge_table_long(v);

}

};

void __fastcall TForm1::Button1Click(TObject *Sender)

{

sphere s(200,200,200,150);

s.sphere_vertex_table();

s.sphere_edge_table();

}

//---------------------------------------------------------------------------]]>
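For reference, here is a standalone sketch of the latitude/longitude vertex generation from the listing above, with the comma operators in the loop conditions replaced by `&&` (a comma expression only evaluates to its last operand, so the first condition was being silently ignored). Names other than `sNmax` and the sphere parameters are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

const double PI = 3.141592653589793;
const int sNmax = 30;

struct Vtx { double x, y, z; };

// Generates (sNmax+1) x (sNmax+1) vertices on a sphere of radius r
// centred at (cx,cy,cz), sweeping latitude phi from 90 down to -90 and
// longitude theta from 180 down to -180.
std::vector<Vtx> sphere_vertices(double cx, double cy, double cz, double r)
{
    std::vector<Vtx> v;
    int a, b;
    double phi, theta;
    for (phi = 90.0, a = 0; phi >= -90.0 && a <= sNmax; phi -= 180.0 / sNmax, ++a) {
        for (theta = 180.0, b = 0; theta >= -180.0 && b <= sNmax; theta -= 360.0 / sNmax, ++b) {
            double xs = r * cos(PI / 180 * phi) * cos(PI / 180 * theta);
            double ys = r * cos(PI / 180 * phi) * sin(PI / 180 * theta);
            double zs = r * sin(PI / 180 * phi);
            v.push_back({xs + cx, ys + cy, zs + cz});  // translate to the centre
        }
    }
    return v;
}
```

Every generated vertex lies exactly at distance r from the centre, which is an easy sanity check before wiring the edges up.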

I have to make some nice controls for touch screen :buttons with round corners and images, transparent panels..

The developing language is VB.Net, .NetFramework 3.5.

Are there any (third-party) libraries?

I tried Resco Controls for Mobile, but the performance is not good. For example, loading a form with just 6 buttons takes quite a long time, and the application runs slowly.

Please help..

Thank you!

]]>

]]>

Google isn't publicly releasing the specification, and their SketchUp tool doesn't export to any formats except Google's other proprietary ones. The program has a great user interface, and it seems a waste that other software can't make use of the models that come out of it.

Do you know of any reverse engineering of the format? Are there any unofficial specs available somewhere?

]]>

void myGlutDisplay(void)

{

// glClearColor(1.0,1.0,1.0,1.0);

glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glColor3f(1,0,0);

glBegin(GL_POLYGON);  // GL_POLYGONS is not a valid enum; GL_QUADS would also work for this quad

glVertex3f(250,250,1);

glVertex3f(280,250,1);

glVertex3f(280,280,1);

glVertex3f(250,280,1);

glEnd();

glFlush();

glutSwapBuffers();

}

int main(int argc, char **argv)

{

// setup glut

glutInit(&argc, argv);

glutInitDisplayMode( GLUT_RGB | GLUT_DOUBLE );

glutInitWindowPosition( 50, 50 );

glutInitWindowSize( WIN_WIDTH, WIN_HEIGHT);

glutCreateWindow( "scene" );  // a window (and GL context) must exist before any gl* calls; the title is arbitrary

// Initialize my Scene

initScene();

glMatrixMode(GL_PROJECTION);

glLoadIdentity();

glOrtho(0,500,0,500,-1,60);

glMatrixMode(GL_MODELVIEW);

glutDisplayFunc(myGlutDisplay);  // register the display callback, or nothing is ever drawn

glutMainLoop();

return EXIT_SUCCESS;

}

]]>

I have to create a concave mesh out of an unsorted point cloud.

I know this is a common problem, and I have already done some research on the topic. Nearly everybody suggests using Delaunay triangulation. I tried 3D Delaunay, which gives you a number of tetrahedra and the convex hull of the points.

What I need instead is the concave hull: a mesh that tightly wraps around the outer points of my set. I read that one could extract these triangles by using 3D alpha shapes. This is where I'm stuck; I don't think I completely understand how to use them.

How do I utilize the resulting alpha value?

Could anyone tell me whether this is the right direction, or whether there is a simpler way to do what I need to do? I really appreciate your help.
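One common way to use the alpha value, sketched under assumed names: keep exactly those Delaunay tetrahedra whose circumradius is below alpha, then take as the surface every triangle that belongs to exactly one kept tetrahedron. The circumradius part might look like this:

```cpp
#include <cassert>
#include <cmath>

struct P3 { double x, y, z; };

static double det3(double m[3][3]) {
    return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
}

// Circumradius of the tetrahedron (p0,p1,p2,p3): solve the 3x3 system
// 2*(pi-p0).c = |pi|^2 - |p0|^2 for the circumcentre c by Cramer's rule,
// then measure the distance from c to any vertex.
double circumradius(P3 p0, P3 p1, P3 p2, P3 p3)
{
    P3 p[3] = {p1, p2, p3};
    double A[3][3], b[3];
    for (int i = 0; i < 3; ++i) {
        A[i][0] = 2*(p[i].x - p0.x);
        A[i][1] = 2*(p[i].y - p0.y);
        A[i][2] = 2*(p[i].z - p0.z);
        b[i] = (p[i].x*p[i].x + p[i].y*p[i].y + p[i].z*p[i].z)
             - (p0.x*p0.x + p0.y*p0.y + p0.z*p0.z);
    }
    double D = det3(A);
    double c[3];
    for (int j = 0; j < 3; ++j) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int k = 0; k < 3; ++k)
                M[i][k] = (k == j) ? b[i] : A[i][k];
        c[j] = det3(M) / D;
    }
    double dx = c[0]-p0.x, dy = c[1]-p0.y, dz = c[2]-p0.z;
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}

// The alpha-complex test: small tetrahedra survive, large ones are removed.
bool keep_in_alpha_complex(P3 a, P3 b, P3 c, P3 d, double alpha) {
    return circumradius(a, b, c, d) < alpha;
}
```

Shrinking alpha carves away the big "bridging" tetrahedra of the convex hull, which is what exposes the concave surface.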

Thanks.]]>

]]>

I am a newbie to computer graphics, so please excuse my ignorance. I have a 3D mesh of a scanned object, and I have images taken from heaps of known positions. I am currently doing a simple texture mapping procedure which results in a simple VRML file. This is done offline using C++. The output is a VRML file with a list of facets, where each facet is mapped to a portion of an image.

Currently the texture mapping process is simply: for each facet

- find the list of images (i.e. camera positions) where the facet is facing the camera (i.e. backface culling);

- filter that list of images to keep the ones where the angle between the camera and the facet's normal is less than a given threshold.

For now the winning image is simply the one that is most normal to the facet. For most facets the results are acceptable, but I need to develop this further, since some facets are mapped to incorrect images due to occlusion. Now I need to test for visibility (occlusion culling). I have been doing some research, and it seems like the z-buffer is the way to go (i.e. the most common approach); please correct me if that's wrong. But I am not sure I fully understand it. It seems to be:

for each image

    for each pixel

        find depth of closest 3D point (on a facet)

But I am somewhat lost as to how that helps in texture mapping. At the end of my visibility filter I want, for each facet, a list of images in which the whole facet is visible; I will then choose the winner based on some other criteria to be defined later. I fail to see how using a z-buffer will tell me whether a facet is fully visible in an image. I might be mistaken, but isn't the z-buffer mainly used to create an image from a new viewpoint using existing images (which I am not interested in)? I would appreciate any help in clarifying how having a z-buffer for each image will help in deciding whether a facet is fully visible in that image. Is ray tracing more suitable for my application?
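The z-buffer does answer the full-visibility question, via one extra step the pseudocode above leaves implicit: after filling the depth buffer from every facet, re-test each facet's samples against it; the facet is fully visible iff every sample still owns (or ties with) the depth stored at its pixel. A minimal sketch with invented names; single sample points stand in for proper triangle rasterisation, and an orthographic camera looks down +z:

```cpp
#include <cassert>
#include <cfloat>
#include <cmath>
#include <vector>

struct Sample { double x, y, z; };        // x,y in pixel units; z = depth from camera
typedef std::vector<Sample> Facet;

struct DepthBuffer {
    int w, h;
    std::vector<double> z;
    DepthBuffer(int w_, int h_) : w(w_), h(h_), z(w_*h_, DBL_MAX) {}
    int pix(double x, double y) const { return (int)y * w + (int)x; }
};

// Pass 1: splat every sample of every facet, keeping the nearest depth.
void render_depth(DepthBuffer& db, const std::vector<Facet>& facets) {
    for (size_t f = 0; f < facets.size(); ++f)
        for (size_t s = 0; s < facets[f].size(); ++s) {
            const Sample& p = facets[f][s];
            int i = db.pix(p.x, p.y);
            if (p.z < db.z[i]) db.z[i] = p.z;
        }
}

// Pass 2: a facet is fully visible iff none of its samples is behind the
// stored depth at its pixel (within a tolerance).
bool fully_visible(const DepthBuffer& db, const Facet& f, double eps = 1e-9) {
    for (size_t s = 0; s < f.size(); ++s)
        if (f[s].z > db.z[db.pix(f[s].x, f[s].y)] + eps)
            return false;                 // something nearer occludes this sample
    return true;
}
```

Doing this once per candidate image gives exactly the per-facet list of images you describe; ray tracing would give the same answer per sample, just without the shared first pass.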

Thanks

Nawar

]]>

I've got some problems with my environment shader. I put it on a sphere.

The following pieces are in place:

I have a working box function which calculates a box around the sphere.

I have a working infinite plane class which is used to get the right side of the hit box. The ray must hit 2 planes, and I take the plane with the shorter distance to the hit point.

I have a working textured triangle class for texturing (sorry, no rectangle class), so I build the hit side of the box out of 2 triangles. But it doesn't work!

The ray hits the plane, but only hits the triangles about 20% of the time. This is really strange. Maybe you can help me. It would be really nice :)

Here is the Picture:

[IMG]http://i44.tinypic.com/2r61icz.jpg[/IMG]

Debug output:

[code]Was in LEFT ((-1.91083,-0.569883,-7.11584) /t = 7.38994)!

Was in DOWN ((-0.0233016,-0.00694942,-0.0867739) /t = 0.0901164)!

Hitpoint: (-2.11247,-0.3,0.0539508) / ON DOWN PLANE

Box: Leftdown (-4,-0.3,-2) / UpRight (0,3.7,2)

ReflectedRay: (-2.08917,-0.293051,0.140725) + t * (-0.258572,-0.0771161,-0.962909) / Hitpoint: (-2.11247,-0.3,0.0539508)

TexturedSmoothTriangle one((-4,-0.3,-2),(0,-0.3,-2),(-4,-0.3,2), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,599,0), Vec3f(799,599,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000)

TexturedSmoothTriangle two((0,-0.3,-2),(0,-0.3,2),(-4,-0.3,2), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(799,599,0), Vec3f(799,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000)

Down: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)

[/code]

Here is the source code:

[code]Vec3f

EnvShader::Shade(Ray& ray)

{

//Thank you for the phong shader

Vec3f N = ray.hit()->GetNormal(ray);

Vec3f reflectedDirection = ray.direction() - N*2*N.dot(ray.direction());

reflectedDirection.normalize();

//Vec3f leftDown = Vec3f(-10,-10,-10);

//Vec3f rightUp = Vec3f(10,10,10);

float oldt = 400020001;

int use = 0;

int maxX, maxY;

//cool even more code that can be used again.

Vec3f hitPoint = ray.origin()+ray.direction()*ray.t();

Ray reflectedRay(hitPoint, reflectedDirection);

//wow this class almost only use allready existent code

m_box = ray.hit()->CalcBounds();

//m_box = m_scene->GetSceneBox();

Vec3f leftDown = m_box.min();

Vec3f rightUp = m_box.max();

//Vec3f leftDown = Vec3f(-100,-100,-100);

//Vec3f rightUp = Vec3f(100,100,100);;

//ok the tnear and tfar stuff is complicated. lets use inf planes

//i know i shouldnt do it. but we are in c and can play with pointer :) ps: java sux :P

InfinitePlane down(leftDown, Vec3f(0,1,0) , (FlatShader*) 0x00000000);

InfinitePlane up(rightUp, Vec3f(0,1,0) , (FlatShader*) 0x00000000);

InfinitePlane back(rightUp, Vec3f(0,0,1) , (FlatShader*) 0x00000000);

InfinitePlane front(leftDown, Vec3f(0,0,1) , (FlatShader*) 0x00000000);

InfinitePlane left(leftDown, Vec3f(1,0,0), (FlatShader*) 0x00000000);

InfinitePlane right(rightUp, Vec3f(1,0,0), (FlatShader*) 0x00000000);

//lets just test every plane. if it hits our reflected ray, we use it if t is the smallest

if(left.Intersect(reflectedRay))

{

std::cout << "Was in LEFT (" << reflectedRay.direction() * reflectedRay.t() << " / " << reflectedRay.t() << ")!\n";

oldt = reflectedRay.t();

use = 1;

}

if(right.Intersect(reflectedRay))

{

if(reflectedRay.t() < oldt)

{

use = 2;

oldt = reflectedRay.t();

}

}

if(up.Intersect(reflectedRay))

{

if(reflectedRay.t() < oldt)

{

use = 3;

oldt = reflectedRay.t();

}

}

if(down.Intersect(reflectedRay))

{

std::cout << "Was in DOWN (" << reflectedRay.direction() * reflectedRay.t() << " / " << reflectedRay.t() << ")!\n";

if(reflectedRay.t() < oldt)

{

use = 4;

oldt = reflectedRay.t();

}

}

if(front.Intersect(reflectedRay))

{

if(reflectedRay.t() < oldt)

{

use = 5;

oldt = reflectedRay.t();

}

}

if(back.Intersect(reflectedRay))

{

if(reflectedRay.t() < oldt)

{

use = 6;

oldt = reflectedRay.t();

}

}

std::cout << "Hitpoint: " << reflectedRay.origin() + reflectedRay.direction() * reflectedRay.t() << " / " << use << std::endl;

std::cout << "Box: " << leftDown << " / " << rightUp << std::endl;

//1 is left

if(use == 1)

{

/*

C - D

| |

A - B

*/

//-1 because we start by 0,0

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a = leftDown;

Vec3f b(leftDown[0], leftDown[1], rightUp[2]);

Vec3f c(leftDown[0], rightUp[1], leftDown[2]);

Vec3f d(leftDown[0], rightUp[1], rightUp[2]);

TexturedSmoothTriangle one(a,b,c, Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

if(one.Intersect(reflectedRay))

{

return(m_left->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_left->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

//std::cout << "Left: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(1,0,1));

}

}

//2 is right

else if(use == 2)

{

/*

C - D

| |

A - B

*/

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a(rightUp[0], leftDown[1], leftDown[2]);

Vec3f b(rightUp[0], leftDown[1], rightUp[2]);

Vec3f c(rightUp[0], rightUp[1], leftDown[2]);

Vec3f d = rightUp;

TexturedSmoothTriangle one(a,b,c, Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(1,0,0), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

if(one.Intersect(reflectedRay))

{

return(m_right->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_right->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

//std::cout << "Right: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(0,0,1));

}

}

//3 is up

else if(use == 3)

{

/*

C - D

| |

A - B

*/

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a(leftDown[0], rightUp[1], leftDown[2]);

Vec3f b(rightUp[0], rightUp[1], leftDown[2]);

Vec3f c(leftDown[0], rightUp[1], rightUp[2]);

Vec3f d = rightUp;

TexturedSmoothTriangle one(a,b,c, Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

if(one.Intersect(reflectedRay))

{

return(m_up->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_up->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

//std::cout << "Up: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(1,0,0));

}

}

//4 is down

else if(use == 4)

{

/*

C - D

| |

A - B

*/

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a = leftDown;

Vec3f b(rightUp[0], leftDown[1], leftDown[2]);

Vec3f c(leftDown[0], leftDown[1], rightUp[2]);

Vec3f d(rightUp[0],leftDown[1], rightUp[2]);

TexturedSmoothTriangle one(a,b,c, Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

std::cout << "ReflectedRay: " << reflectedRay.origin() << " + t * " << reflectedRay.direction() << " / Hitpoint: " << reflectedRay.origin() + reflectedRay.direction() * reflectedRay.t() << "\n";

std::cout << "TexturedSmoothTriangle one("<<a<<","<<b<<","<<c<<", Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,"<<maxY<<",0), Vec3f("<<maxX<<","<<maxY<<",0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000)\n";

std::cout << "TexturedSmoothTriangle two("<<b<<","<<d<<","<<c<<", Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(0,1,0), Vec3f(" << maxX<<","<<maxY<<",0), Vec3f("<<maxX<<",0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000)\n";

if(one.Intersect(reflectedRay))

{

return(m_down->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_down->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

std::cout << "Down: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(0,1,0));

}

}

//5 is front

else if(use == 5)

{

/*

C - D

| |

A - B

*/

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a = leftDown;

Vec3f b(rightUp[0], leftDown[1], leftDown[2]);

Vec3f c(leftDown[0], rightUp[1], leftDown[2]);

Vec3f d(rightUp[0], rightUp[1], leftDown[2]);

TexturedSmoothTriangle one(a,b,c, Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

if(one.Intersect(reflectedRay))

{

return(m_front->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_front->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

//std::cout << "Front: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(1,0.5,0));

}

}

//6 is back

else if(use == 6)

{

/*

C - D

| |

A - B

*/

maxX = m_right->getX() - 1;

maxY = m_right->getY() - 1;

Vec3f a(leftDown[0], leftDown[1], rightUp[2]);

Vec3f b(rightUp[0], leftDown[1], rightUp[2]);

Vec3f c(leftDown[0], rightUp[1], rightUp[2]);

Vec3f d = rightUp;

TexturedSmoothTriangle one(a,b,c, Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,maxY,0), Vec3f(maxX,maxY,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

TexturedSmoothTriangle two(b,d,c, Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(0,0,1), Vec3f(maxX,maxY,0), Vec3f(maxX,0,0), Vec3f(0,0,0) ,(FlatShader*) 0x00000000);

if(one.Intersect(reflectedRay))

{

return(m_back->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else if(two.Intersect(reflectedRay))

{

return(m_back->GetTexel(reflectedRay.u(), reflectedRay.v()));

}

else

{

//std::cout << "BACK: Env Shader error : the ray has not hitten one or two (the triangles of the env cube)\n";

return(Vec3f(0,0.6,0.6));

}

}

else

{

std::cout << "Env Shader error : the ray has not hitten any Infinite Plane\n";

return(Vec3f(1,1,1));

}

}[/code]
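As an aside, the six `InfinitePlane` intersections plus the smallest-t bookkeeping above can be collapsed into a standard ray/AABB slab test. Since the reflected ray starts inside the environment box, the exit parameter and the axis it was clamped on directly identify the face to sample. A hedged sketch with invented types, not the poster's classes:

```cpp
#include <cassert>
#include <cmath>

struct V3 { double x, y, z; };

// Slab test specialised for a ray starting inside the box: returns the exit
// distance tFar, the exit axis (0=x, 1=y, 2=z) and the side (-1 = min face,
// +1 = max face).  Returns false only for a degenerate zero direction.
bool exit_face(V3 orig, V3 dir, V3 bmin, V3 bmax,
               double& tFar, int& axis, int& side)
{
    double o[3]  = {orig.x, orig.y, orig.z};
    double d[3]  = {dir.x,  dir.y,  dir.z};
    double mn[3] = {bmin.x, bmin.y, bmin.z};
    double mx[3] = {bmax.x, bmax.y, bmax.z};
    tFar = 1e30; axis = -1; side = 0;
    for (int i = 0; i < 3; ++i) {
        if (std::fabs(d[i]) < 1e-12) continue;   // ray parallel to this slab
        double t1 = (mn[i] - o[i]) / d[i];
        double t2 = (mx[i] - o[i]) / d[i];
        double tExit = t1 > t2 ? t1 : t2;        // the farther crossing is the exit
        if (tExit < tFar) {
            tFar = tExit;
            axis = i;
            side = (tExit == t1) ? -1 : +1;
        }
    }
    return axis >= 0;
}
```

With the face known exactly, the texel lookup can use the hit point's two in-plane coordinates directly, avoiding the two-triangle intersection that currently misses.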

]]>

Also, here's a link for a great Raycasting tutorial (Wolfenstein engine) if anyone wants it...

http://www.permadi.com/tutorial/raycast/

~Robby

]]>

I have asked many people, but no one has replied to me.

Can you?

Can you write the full program?

]]>

I need a 3-d graphics API whose functions I can call from a VB.Net application that I am trying to build.

The functions that I need are:

1) Draw a translucent sphere.

2) Plot points within the sphere given the x,y,z coordinates.

3) At each point, put a little label next to it.

4) Need to be able to differentiate between points further back or closer up in the sphere. I was thinking that perhaps closer points can be larger than further points, or something like that.

5) Rotate the sphere.

6) Zoom inside the sphere.

7) Allow the user to lasso points in the sphere which can be submitted to a separate function for processing. If not lasso, then at least somehow be able to select multiple points.

8) When the mouse rolls over a point within the sphere, an event should be triggered that I can handle (I want to pop up additional info about that point in a box on the side of the window).

9) There might be a total of 2000 points in the sphere, so the application shouldn't choke while rotating or zooming in/out. I guess 2000 isn't a lot, I think?

10) Should this application be browser based or standalone desktop application based? Well, there will only be less than 50 users, so I was thinking that a desktop application would be fine.

11) Can't use Javascript api's since some administrators may have disabled the use of Javascript in their company (due to virus security threats).

12) Some people have suggested Graphics Server 6, Flex, Direct 3d, etc. Will these work? Or is there something better?

13) I don't care if I have to pay up to $1000 for it. In fact, support would be nice if I have questions.

14) Google Earth api won't work since this is for lat/long points on the surface of a sphere....I need to get inside the sphere.

15) Would a game engine api be overkill?

Thanks so much,

Rolex1000

]]>

I want to learn image processing and develop a project around it. I want to know how to convert a series of related images into a 3D image, and please give specific advice about which language to use for this project.

]]>

I am starting a simple project in 3D graphics. I want to do the following things:

1) create a simple square in 3D;

2) move the square to a given final position through some intermediate positions that are already given.

Please give your suggestions on how to do these things.
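For step 2, the usual approach is to treat the given positions as waypoints and interpolate linearly between consecutive ones; applying the interpolated offset to all four corners moves the whole square rigidly. A minimal sketch with invented names:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pos { double x, y, z; };

// u in [0,1] sweeps the whole path from the first waypoint to the last;
// values outside the range are clamped to the endpoints.
Pos position_on_path(const std::vector<Pos>& waypoints, double u)
{
    if (u <= 0.0) return waypoints.front();
    if (u >= 1.0) return waypoints.back();
    double s = u * (waypoints.size() - 1);  // global parameter -> segment index
    int i = (int)s;
    double t = s - i;                        // local parameter inside segment i
    const Pos& a = waypoints[i];
    const Pos& b = waypoints[i + 1];
    Pos p = { a.x + t * (b.x - a.x),
              a.y + t * (b.y - a.y),
              a.z + t * (b.z - a.z) };
    return p;
}
```

Calling this once per frame with an increasing u, and redrawing the square at the returned position, gives the animation; smoother motion (e.g. Catmull-Rom splines through the same waypoints) is a drop-in replacement later.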

]]>

I have a code that loads an IGES file into memory and translates all the IGES entities in the file into OCC shapes. My code is:

#include "IGESControl_Reader.hxx"

#include "TColStd_HSequenceOfTransient.hxx"

#include "TopoDS_Shape.hxx"

#include <iostream>

using namespace std;

int main()

{

IGESControl_Reader myIgesReader;

Standard_Integer nIgesFaces,nTransFaces;

myIgesReader.ReadFile ("solid.igs");

//loads the file solid.igs

Handle(TColStd_HSequenceOfTransient) myList = myIgesReader.GiveList("iges-faces");

//selects all IGES faces in the file and puts them into a list called //MyList,

nIgesFaces = myList->Length();

nTransFaces = myIgesReader.TransferList(myList);

//translates MyList,

cout<<"IGES Faces: "<<nIgesFaces<<" Transferred:"<<nTransFaces<<endl;

TopoDS_Shape sh = myIgesReader.OneShape();

//and obtains the results in an Open CASCADE shape.

return (0);

}

The file being loaded "solid.igs" is a 3D rectangular box.

I need to display the marked dimensions of the IGES file...

I'm a beginner.

Does anyone have an idea?

Thanks

]]>

I am embarking on a major programming project which is going to revolve around applying various surface reliefs to set geometrical structures, i.e. it is going to involve applying displacement maps to the geometries in order to create a surface texture on the shape. What my system is going to need to do is to have a 3D shape, then the code is going to generate a surface relief to apply, which the code then needs to apply to the shape and then display. So the code is going to generate this stuff itself, not take in any models or anything like that.

What I would like to know is what programming language setup would allow me sophisticated enough control over 3D structures. I'm looking at Java and Java 3D, but I'm not sure displacement mapping is actually possible there, so I may have to look elsewhere, like at C++ using OpenGL or something, which is not ideal as it's much more complicated. Anyhoo, does anyone know what would allow me to do this? (Just the displacement mapping of 3D geometries through code; I know how the rest should work.)
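Whatever language you pick, if you generate the mesh yourself you can always do the displacement on the CPU, which sidesteps the question of whether the scene graph supports it natively: move each vertex along its normal by the sampled relief height, then hand the displaced mesh to the renderer. A minimal sketch with invented names:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vert {
    double px, py, pz;   // position
    double nx, ny, nz;   // unit normal
};

// heights holds the procedurally generated relief, sampled once per vertex;
// scale controls how strong the displacement is.
void displace(std::vector<Vert>& mesh,
              const std::vector<double>& heights, double scale)
{
    for (size_t i = 0; i < mesh.size(); ++i) {
        double h = heights[i] * scale;
        mesh[i].px += mesh[i].nx * h;
        mesh[i].py += mesh[i].ny * h;
        mesh[i].pz += mesh[i].nz * h;
    }
}
```

After displacement the normals should be recomputed from the new geometry before lighting; that step is omitted here.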

Thanks in advance,

Hayden Devlin]]>

I have a series of data points which describe a surface in 3 dimensions. I understand that there is an algorithm, the three-dimensional Delaunay algorithm, for triangulating the surface. It comes down to generating tetrahedra, each with a circumscribing sphere that contains none of the other data points.

However, whilst I can determine whether or not a point lies within the sphere, I have no clear algorithm for determining a contiguous surface. I don't care about efficiency. Does anyone have any idea how to proceed?
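Since efficiency is no concern, the empty-circumsphere test can be done directly: solve for the circumcentre, then compare distances. Once the tetrahedra that pass the test for all other points are collected, one common way to get the surface is to keep every triangle that appears in exactly one accepted tetrahedron. A sketch with invented names (a robust implementation would use exact or adaptive arithmetic for the predicate):

```cpp
#include <cassert>
#include <cmath>

struct Pt { double x, y, z; };

static double det3x3(double m[3][3]) {
    return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
         - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
         + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
}

// True iff q lies strictly inside the circumsphere of (p0,p1,p2,p3).
// The circumcentre c satisfies 2*(pi-p0).c = |pi|^2 - |p0|^2 for i=1..3.
bool in_circumsphere(Pt p0, Pt p1, Pt p2, Pt p3, Pt q)
{
    Pt p[3] = {p1, p2, p3};
    double A[3][3], rhs[3];
    for (int i = 0; i < 3; ++i) {
        A[i][0] = 2*(p[i].x - p0.x);
        A[i][1] = 2*(p[i].y - p0.y);
        A[i][2] = 2*(p[i].z - p0.z);
        rhs[i] = (p[i].x*p[i].x + p[i].y*p[i].y + p[i].z*p[i].z)
               - (p0.x*p0.x + p0.y*p0.y + p0.z*p0.z);
    }
    double D = det3x3(A), c[3];
    for (int j = 0; j < 3; ++j) {
        double M[3][3];
        for (int i = 0; i < 3; ++i)
            for (int k = 0; k < 3; ++k)
                M[i][k] = (k == j) ? rhs[i] : A[i][k];
        c[j] = det3x3(M) / D;
    }
    double r2 = (c[0]-p0.x)*(c[0]-p0.x) + (c[1]-p0.y)*(c[1]-p0.y)
              + (c[2]-p0.z)*(c[2]-p0.z);
    double d2 = (c[0]-q.x)*(c[0]-q.x) + (c[1]-q.y)*(c[1]-q.y)
              + (c[2]-q.z)*(c[2]-q.z);
    return d2 < r2 - 1e-12;   // strictly inside; points on the sphere don't count
}
```

Counting face occurrences in a map keyed on the sorted vertex indices of each triangle then yields the boundary surface.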

Thanks]]>

Best regards

Ricardo Furtado]]>

The format seems fairly simple and I have some files loading perfectly.

In some others there are problems with the "POLS" chunks. According to the spec at http://www.sandbox.de/osg/lightwave.htm , a POLS chunk has the form "POLS { ( numvert[U2], vert[U2] # numvert, surf[I2] )* }", but I've used a hex editor to view some files that show "FACE" chunks starting at the beginning of a POLS chunk.
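For what it's worth, a "FACE" tag right at the start of POLS is what the newer LWO2 flavour of the format looks like: there, POLS begins with a four-byte polygon-type ID ("FACE", "CURV", ...) before the vertex lists, while the older LWOB layout in the linked spec has none. The FORM type at the top of the file should tell the two apart. A minimal big-endian IFF header reader, with invented names:

```cpp
#include <cassert>
#include <cstring>
#include <string>

struct Chunk {
    std::string tag;     // 4-character chunk ID
    unsigned long size;  // big-endian U4 payload length
};

// Reads a generic IFF chunk header: 4 ASCII bytes, then a big-endian U4 size.
Chunk read_chunk_header(const unsigned char* p)
{
    Chunk c;
    c.tag.assign((const char*)p, 4);
    c.size = ((unsigned long)p[4] << 24) | ((unsigned long)p[5] << 16)
           | ((unsigned long)p[6] << 8)  |  (unsigned long)p[7];
    return c;
}

// True if the buffer starts an LWO2 file: FORM <size> LWO2 ...
bool is_lwo2(const unsigned char* file)
{
    return std::memcmp(file, "FORM", 4) == 0
        && std::memcmp(file + 8, "LWO2", 4) == 0;
}
```

Branching the POLS parser on `is_lwo2` (and skipping the 4-byte type ID in the LWO2 case) should make both kinds of file load.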

Do you know how to handle this problem, or where to find a more up-to-date, accurate description of the format?]]>

I am trying to project a point x in 3D space onto a point x' on the surface of a cone. x' should be the point on the cone's surface closest to the original point x, such that the normalized vector x - x' is the cone's surface normal at x'.

The cone is parameterized in two distinct ways...

(1) - the cone's apex a e R]]>
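The post is cut off, so here is a hedged sketch against an assumed version of parameterisation (1): apex a (taken as the origin below), unit axis n, and half-angle alpha. Reducing to the half-plane spanned by the axis and x, the cone surface becomes a ray from the apex at angle alpha to the axis, and the closest point is an orthogonal projection onto that ray, clamped at the apex:

```cpp
#include <cassert>
#include <cmath>

struct Vec { double x, y, z; };

static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Closest point on an infinite one-sided cone (apex at the origin, unit
// axis n, half-angle alpha) for query point x.
Vec project_on_cone(Vec x, Vec n, double alpha)
{
    double t = dot(x, n);                        // height along the axis
    Vec radial = { x.x - t*n.x, x.y - t*n.y, x.z - t*n.z };
    double r = std::sqrt(dot(radial, radial));   // distance from the axis
    // In (axis, radial) coordinates the surface is the ray s*(cos a, sin a);
    // project (t, r) onto that direction and clamp at the apex.
    double s = t*std::cos(alpha) + r*std::sin(alpha);
    if (s < 0) s = 0;
    double ta = s*std::cos(alpha), ra = s*std::sin(alpha);
    if (r < 1e-12) {                             // x on the axis: the radial
        Vec p = { ta*n.x, ta*n.y, ta*n.z };      // direction is ambiguous;
        return p;                                // pick any in practice
    }
    Vec p = { ta*n.x + ra*radial.x/r,
              ta*n.y + ra*radial.y/r,
              ta*n.z + ra*radial.z/r };
    return p;
}
```

By construction x - x' is perpendicular to the surface ray in the reduction plane, which is exactly the normal condition stated above; a general apex just means subtracting a before the call and adding it back after.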

Now I am wondering: is there any method by which we can extract the z coordinates from an image such as a JPG, so that we can draw its 3D projection?

]]>

APRIL 19 and 20!!!

Save the Date!

THE FIRST ANNUAL ECAROcon (Extra-curricular Computer Art Organization Convention)

ECAROcon 2008 is a conference that brings together students and professionals

in the industries of Computer Art, Computer Animation, Computer Gaming, Software

Design, Computer Engineering and many other artistic and computer-related fields.

Modeled loosely after major technology and computer art conferences, this conference will bring the industry back to the people and the students. In recent years the number of computer art, graphics and gaming conferences open to the public has declined, and we hope to create an affordable, worthwhile conference that is targeted at students and industry professionals alike. Hosted in Syracuse, New York, by ECARO, with the support of Syracuse University, we are bringing the industry to you through exhibits, workshops, speakers, debates, live performances, and demo reels all at an extremely affordable admission cost. Held at the Convention Center at Oncenter in downtown Syracuse, there will no doubt be plenty of space for all, and all are welcome!

ECAROcon will play host to many companies in multiple industries that share the common bond of art in technology. Running April 19th to the 20th, ECAROcon's invite list includes all the major companies from around the world. These companies will exhibit everything from demos of their newest games and animations to the cutting edge of technology, and they are always on the lookout for up-and-coming new talent. Come and join the fun; who knows, maybe your work will be displayed at ECAROcon 2008!

www.ECAROcon.org


[img]http://www.talkvideogames.com/attachment.php?aid=232[/img]]]>