Jason Hoobler (Stereofan) New member Username: Stereofan
Post Number: 1 Registered: 4-2007
Posted on Sunday, April 15, 2007 - 5:01 pm:
I'm trying to think through the problem of converting stereo pair photography into a hologram, even just a partial one (with 2 images; a less partial one for every extra image, I presume). This discussion and all ensuing work is meant to be released into the public domain, as I believe in the power of public prior art and hope to see 3D for the masses, not patents for the few. One of the differences between photography and holography is the preservation of phase information in holograms -- I do not know if this is the whole story, but the fringe pattern recorded on a halide plate from the interference of the reference and object (scatter) beams of a laser reconstructs the object wavefront when the plate is illuminated at the laser's original reference angle. I am wondering if wavefront reconstruction can be calculated separately and independently to match a z-map extracted from stereo pair disparity, recreating a partial depth map of the photographed object as a transform of some kind -- perhaps affine?!? Who knows, I'm still thinking this task through... any feedback would be welcome.
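For the z-map half of that idea, here is a minimal sketch of depth-from-disparity on a rectified stereo pair, leaning on OpenCV's block matcher; the file names, focal length, and baseline below are placeholder assumptions, not a worked-out recipe:

    # Sketch: recover a coarse depth (z) map from a rectified stereo pair.
    # Assumes hypothetical files left.png / right.png and made-up camera numbers.
    import cv2
    import numpy as np

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matching gives disparity in 1/16-pixel units.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    # Triangulation: z = f * B / d  (f in pixels, B = camera baseline in metres).
    focal_px = 1200.0      # assumed focal length in pixels
    baseline_m = 0.065     # assumed eye-like baseline, 65 mm
    valid = disparity > 0.0
    depth = np.zeros_like(disparity)
    depth[valid] = focal_px * baseline_m / disparity[valid]
    # 'depth' is the partial z-map: one value per pixel, and only for
    # surfaces visible to both cameras -- occluded regions stay at zero.

Only surfaces seen by both cameras get a depth value, which is already one sense in which any hologram built from this data will be partial.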
Larry Elie (Ldeliecomcastnet) New member Username: Ldeliecomcastnet
Post Number: 16 Registered: 10-2006
Posted on Monday, April 16, 2007 - 1:48 pm:
No. You could conceive of making a model from the stereo pair (people get paid a lot to come up with this part of the technique), then calculating the proper positions of the interfering information to make a hologram of the model, but there is no technique to put the surface information (the images) back onto the model's surfaces. The best you could hope for is one narrow-range view. One could conceive of this, but it hasn't been done.
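One textbook way to "calculate the proper positions of the interfering information" is the point-source method: treat the model as a cloud of self-luminous points, sum their spherical waves at the plate, and interfere that with an off-axis plane reference. A minimal sketch, where the wavelength, geometry, and toy point cloud are all assumed for illustration:

    # Sketch: point-source computed hologram of a small model (point cloud).
    # Geometry, wavelength and the toy point cloud are assumptions, not a recipe.
    import numpy as np

    wavelength = 633e-9                 # HeNe red, metres
    k = 2.0 * np.pi / wavelength
    pitch = 1.0e-6                      # hologram sample spacing (1 micron)
    n = 1024                            # samples per side -> ~1 mm plate

    # Hologram-plane coordinates (the z = 0 plane).
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)

    # Toy "model": three points behind the plate, as (x, y, z, amplitude).
    points = [(0.0, 0.0, 0.05, 1.0),
              (2e-4, 1e-4, 0.06, 0.8),
              (-1e-4, -2e-4, 0.055, 0.6)]

    # Object wave: sum of spherical wavelets from each model point.
    obj = np.zeros((n, n), dtype=np.complex128)
    for px, py, pz, amp in points:
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        obj += amp * np.exp(1j * k * r) / r

    # Off-axis plane reference wave at a small angle, as in optical recording.
    theta = np.deg2rad(2.0)
    ref = np.exp(1j * k * x * np.sin(theta))

    # The recorded fringe pattern is the interference intensity.
    hologram = np.abs(obj + ref) ** 2

Note that each model point here carries only a single amplitude, which is exactly the missing-surface-information problem raised above.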
Jason Hoobler (Stereofan) New member Username: Stereofan
Post Number: 2 Registered: 4-2007
Posted on Monday, April 16, 2007 - 7:28 pm:
So you would get a narrow-range view of shape-based information, like an autostereogram (Magic Eye) without the prerequisite of crossed eyes? Building a model from range information is photogrammetry, no? Why couldn't a broad perspective be "painted" on using something like Nyquist's sampling theorem, modified for holographic conversion of a multiplexed stereo perspective? I agree it seems like a horrendous problem...
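On the Nyquist point, it is worth running the numbers for a directly sampled hologram: two beams meeting at an angle theta produce fringes spaced lambda / (2 sin(theta/2)) apart, and Nyquist requires at least two samples per fringe. A back-of-envelope sketch, with the wavelength, reference angle, and plate size all assumed values:

    # Sketch: how many samples a Nyquist-sampled hologram of a modest plate needs.
    # Wavelength, reference angle and plate size are assumed for illustration.
    import math

    wavelength = 633e-9                       # HeNe laser, metres
    theta = math.radians(30.0)                # object/reference beam angle
    plate = 0.10                              # 10 cm square plate

    fringe = wavelength / (2.0 * math.sin(theta / 2.0))   # fringe spacing
    pitch = fringe / 2.0                                   # Nyquist sample pitch
    samples_per_side = plate / pitch
    total = samples_per_side ** 2

    print(f"fringe spacing ~ {fringe * 1e6:.2f} um")
    print(f"sample pitch   ~ {pitch * 1e6:.2f} um")
    print(f"samples needed ~ {total:.2e}")    # on the order of 10^10 for 10 cm

Roughly 10^10 samples for a 10 cm plate is one concrete sense in which the problem is horrendous, before any surface "painting" is even attempted.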
Larry Elie (Ldeliecomcastnet) New member Username: Ldeliecomcastnet
Post Number: 17 Registered: 10-2006
Posted on Monday, April 16, 2007 - 8:10 pm:
No. The 'painting' onto the model, even with some sampling method, will destroy the reconstruction. Computational holography even in black and white is intensive; now you are asking to take the model and reconstruct a surface image by making the interference converge onto the model's surface.

Imagine I take a picture of some simple object of only 'one' color, say a plain bronze-colored Oscar statue. There is very little surface data, but there is a ton of reflection data. You could imagine computing the model from two images (people do this for reverse engineering of parts), and you have a model. The model doesn't know what the surface's reflections looked like. Now if you made a real 3D model, say with a 3D lithography printer, you could paint the model gold again. But a computational hologram of the model will just get you a plain object, without any surface info. You want to make it harder? Try painting old Oscar to make him look more life-like before you take the stereo pair photo. Now it's an order of magnitude harder to reconstruct.

Again, you can do the model, and probably even a hologram of the model, but beyond that it gets pretty dicey. Then you can begin to think up all the other things that make up an image: surface and texture, etc. Don't forget there may be more than one object. It gets pretty hopeless pretty fast.
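The "model doesn't know what the reflections looked like" point can be made concrete with a toy shading calculation: a Lambertian (matte) term depends only on geometry, so it could be baked onto the model, but a specular term such as Phong's changes with the viewer and cannot be recovered from two fixed photographs. A minimal sketch with made-up vectors and material constants:

    # Sketch: why baked-on surface colour misses view-dependent reflections.
    # All vectors and material constants are made-up illustration values.
    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    normal = normalize(np.array([0.0, 0.0, 1.0]))     # surface normal
    light  = normalize(np.array([0.3, 0.2, 1.0]))     # direction to the light

    def phong(view_dir, kd=0.6, ks=0.9, shininess=40.0):
        """Diffuse + specular brightness at one surface point."""
        view = normalize(np.array(view_dir))
        diffuse = kd * max(np.dot(normal, light), 0.0)          # view-independent
        reflect = 2.0 * np.dot(normal, light) * normal - light  # mirrored light
        specular = ks * max(np.dot(reflect, view), 0.0) ** shininess
        return diffuse, specular

    for v in ([0.0, 0.0, 1.0], [0.4, 0.0, 1.0], [-0.4, 0.2, 1.0]):
        d, s = phong(v)
        print(f"view {v}: diffuse {d:.3f}  specular {s:.5f}")
    # The diffuse number is the same from every viewpoint; the specular
    # highlight (the bronze glint on the statue) moves and changes, so it
    # is absent from any texture painted onto a model built from two photos.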