Help Visibility test for texture mapping - Programmers Heaven


Nawar Posts: 1 Member
Hi all,

I am a newbie to computer graphics, so please excuse my ignorance. I have
a 3D mesh of a scanned object, and I have images taken from many
known positions. I am currently doing a simple texture mapping procedure
that produces a VRML file. This is done offline using C++. The
output is a VRML file with a list of facets, where each facet is mapped
to a portion of an image.

Currently the texture mapping process is simply, for each facet:

- find the list of images (i.e. camera positions) where the facet is facing
the camera (i.e. backface culling);

- filter that list to keep only the images where the angle between the
camera direction and the facet's normal is less than a given threshold.

For now the winning image is simply the one that is most normal to the facet.
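Roughly, the selection step I have in mind looks like the sketch below (a simplified illustration only; `Vec3`, `bestCamera`, and the camera-position list are made-up names, not my actual code, and it assumes the facet normal is already unit-length):

```cpp
#include <cmath>
#include <vector>

// Hypothetical minimal 3D vector type; the real mesh/camera structures differ.
struct Vec3 { double x, y, z; };

static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
static Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Returns the index of the camera whose viewing direction is most head-on
// to the facet (largest cosine with the facet normal), restricted to
// cameras that pass backface culling and the angle threshold.
// Returns -1 if no camera qualifies.
int bestCamera(const Vec3& facetCenter, const Vec3& facetNormal,
               const std::vector<Vec3>& cameraPositions,
               double maxAngleRadians) {
    int best = -1;
    double bestCos = std::cos(maxAngleRadians);  // alignment threshold
    for (size_t i = 0; i < cameraPositions.size(); ++i) {
        Vec3 toCamera = normalize({cameraPositions[i].x - facetCenter.x,
                                   cameraPositions[i].y - facetCenter.y,
                                   cameraPositions[i].z - facetCenter.z});
        double c = dot(facetNormal, toCamera);
        // c <= 0 means the facet faces away from the camera (backface),
        // so it can never beat the positive threshold.
        if (c > bestCos) {
            bestCos = c;
            best = static_cast<int>(i);
        }
    }
    return best;
}
```

Note that backface culling falls out for free here: a camera behind the facet has a non-positive cosine and can never pass the threshold.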

For most facets the results are acceptable, but I need to develop this
further, since some facets are mapped to incorrect images due to
occlusion. So now I need to test for visibility (occlusion
culling). I have been doing some research, and it seems like a z-buffer is
the way to go (i.e. the most common approach); please correct me if that's
wrong. But I am not sure I fully understand it. It seems to be:

    for each image
        for each pixel
            find the depth of the closest 3D point (on a facet)
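As a concrete sketch of that pseudocode (assuming, for simplicity, that surface points have already been projected to pixel coordinates with a camera-space depth; real code would rasterize each triangle instead of using pre-sampled points):

```cpp
#include <limits>
#include <vector>

// Pixel-space sample of a surface point: projected pixel coordinates plus
// the point's depth along the camera's viewing direction.
struct Sample { int px, py; double depth; };

// Builds a per-image depth buffer: for every pixel, the depth of the
// closest surface point that projects onto it. Pixels hit by nothing
// keep +infinity.
std::vector<double> buildDepthBuffer(int width, int height,
                                     const std::vector<Sample>& samples) {
    std::vector<double> zbuf(width * height,
                             std::numeric_limits<double>::infinity());
    for (const Sample& s : samples) {
        if (s.px < 0 || s.px >= width || s.py < 0 || s.py >= height) continue;
        double& z = zbuf[s.py * width + s.px];
        if (s.depth < z) z = s.depth;  // keep the closest surface point
    }
    return zbuf;
}
```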

But I am somewhat lost as to how that helps in texture mapping. At the end
of my visibility filter, I want, for each facet, a list of images in which
the whole facet is visible; I will then choose the winner based on some
criteria to be defined later. I fail to see how using a z-buffer will
tell me whether a facet is fully visible in an image. I might be
mistaken, but isn't the z-buffer used mainly to create an image from a new
viewpoint using existing images (which I am not interested in)? I would
appreciate any help in clarifying how having a z-buffer for each image
will help in deciding whether a facet is fully visible in that image. Is
ray tracing more suitable for my application?
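To make the question concrete, here is the kind of test I imagine a per-image z-buffer would enable, if I understand the idea correctly (a sketch only; `Sample` and `facetFullyVisible` are illustrative names, and it assumes I can sample the pixels a facet projects onto):

```cpp
#include <vector>

// Pixel-space sample of a facet: projected pixel coordinates plus the
// facet's own depth at that pixel (same conventions as the depth buffer).
struct Sample { int px, py; double depth; };

// Given samples of ONE facet and a depth buffer built from ALL facets,
// the facet is fully visible in this image if, at every pixel it covers,
// its own depth matches the closest recorded depth (within a tolerance
// for numerical error). If any sample is deeper than the stored value,
// some other facet occludes it there.
bool facetFullyVisible(const std::vector<double>& zbuf, int width,
                       const std::vector<Sample>& facetSamples,
                       double epsilon) {
    for (const Sample& s : facetSamples) {
        double stored = zbuf[s.py * width + s.px];
        if (s.depth > stored + epsilon) return false;  // occluded here
    }
    return true;
}
```

Is this the right way to think about it, i.e. the z-buffer serves as a per-image occlusion oracle rather than as a tool for synthesizing new viewpoints?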

Thanks
Nawar