Ray Price
Posted on Wednesday, January 30, 2002 - 1:23 am:
Testing the stereo separation on the NVidia drivers, I notice that close to an object you get two widely separated object images (as expected), and as you back away the object images merge (as expected), but as you back away further the images separate again. This last stage does not sound right to me, as I would have thought that past a certain distance the object images would always stay merged. Would anyone with a better understanding than me care to comment? Thanks, Ray
The_Moses_Monkey
Posted on Wednesday, January 30, 2002 - 4:03 am:
Hmm, that's what my eyes do too, and it kind of hurts to focus on where your focal point is; try it and tell me what you see. Remember that your eyes' focal point is variable. Put a finger up, focus on it, and take note of the background objects (they're separated, aren't they?). Now put a finger in front of the focal finger (wow, three fingers): the new finger is separated, as well as the background objects. I suspect the NVidia drivers are functioning fine; they just operate at a fixed, non-variable focal point, whereas the eye is mobile and needs to vary its focal point. The CRT or HMD is stationary and cannot anticipate your choice of focal point (hardware vs. living flesh). Hope this helps.
David C. Qualman
Posted on Wednesday, January 30, 2002 - 5:58 am:
The logic does make sense. When things are at just the right distance, they will be exactly matched up. If they are closer, they will be separated. If they are further away, they will also be separated, but in the other direction. The objects should be merged at the surface of the screen. Separation one way makes things look like they are out in front of the screen, and separation the other way makes them look like they are inside the screen. Either way, you don't want them to be separated by too much, roughly a half inch at most, or you will get headaches.
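For anyone who wants to put numbers on this, below is a minimal Python sketch of the standard off-axis stereo geometry. The eye separation and convergence distance are made-up example values, not anything read out of the NVidia drivers; the point is just the shape of the curve Ray describes.

```python
# Screen parallax of a point at distance z, for off-axis stereo with the
# convergence plane on the screen. Standard geometry, not NVidia's code;
# eye_sep and conv are illustrative values.
def screen_parallax(z, eye_sep=0.065, conv=1.0):
    """Separation (metres) of the left/right images of a point z metres
    away. Negative = crossed parallax (object pops out of the screen),
    zero = merged at the convergence plane, positive = behind the screen,
    approaching eye_sep (never merging again) as z grows."""
    return eye_sep * (z - conv) / z

for z in (0.3, 0.6, 1.0, 2.0, 10.0, 100.0):
    print(f"z = {z:6.1f} m -> parallax = {screen_parallax(z) * 1000:+7.1f} mm")
```

The behind-screen parallax never exceeds the eye separation, so far objects do separate again, but the separation levels off instead of growing without bound; David's half-inch rule of thumb is about keeping both branches of this curve small.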
Levent Acar
Posted on Friday, February 01, 2002 - 3:09 pm:
- Set stereo separation to just "1".
- Then increase convergence with the shortcut key.
- Toggle stereo on and off several times if you don't feel the depth (this helps if there is a need to swap lens sync).
Now you have moved every object to the front of your monitor. The distant objects are merged, but close ones are separated. This is also useful for reducing the ghosting effect. But be aware that HUD objects and menus also pop out of the monitor, so they are separated too. This setup works with my ASUS VR100 glasses. Please let me know if it also works for you.
Ray Price
Posted on Wednesday, February 13, 2002 - 4:02 pm:
Interesting, thanks for the replies, guys. Re the focal point: do you think we will ever see HMDs where the focal point can dynamically change depending on where the eyes are looking? Could small cameras tell from the position of the eyes where the focal point is and pass this to the software? Ray
David C. Qualman
Posted on Wednesday, February 13, 2002 - 7:10 pm:
Cool idea, but pretty complex. Unless you want to wear contact lenses that are mechanically gimballed to the HMD, you would probably need a vision system to see where the eyes are aiming: another set of optics and a camera system. If we assume both eyes aim in the same direction, then only one vision system is needed. Then a computer is needed to analyze the eye direction. Then some servos, or dynamically focusable lenses, would be needed to adjust the HMD's focal plane. With a really good book of white magic, or a better book of black magic, all of this could be made to fit into a lightweight HMD and not require a military budget to purchase. It's a lot easier just to join the Society for Creative Anachronism, or the Army. Remember, sometimes the best word processor is a pencil.
M.H.
Posted on Thursday, February 14, 2002 - 11:38 am:
Re: dynamic focal point change... Another problem: even when you know the direction of the eye, you do not know which object in that direction the user wants to focus on. It would be necessary to read the observer's mind; maybe a direct plug into the brain would be easier. But I really appreciate this idea, because it shows how complicated such a simple thing is in reality.
Ray Price
Posted on Thursday, February 14, 2002 - 4:58 pm:
Is that true? Couldn't you track the eye with cameras, feed the gaze position back to the software to find out what Z position the object you are looking at is at, and then feed this back to the HMD to drive servos on the lenses?
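As a concrete illustration of the loop Ray describes, here is a small Python sketch; every class and method name below is invented for the example, not a real eye tracker, renderer, or HMD API.

```python
# Hypothetical gaze -> depth -> focus loop. All classes are stand-ins.

class EyeTracker:
    def read_gaze_pixel(self):
        # A real tracker would image the pupil; here we just return the
        # screen pixel the user is assumed to be looking at.
        return (320, 240)

class Scene:
    def depth_at(self, pixel):
        # The renderer's depth buffer already stores a Z value for every
        # pixel, so finding the looked-at distance is a single lookup.
        return 2.5  # metres, dummy value

class HMD:
    def set_focal_distance(self, z):
        # Would command the servo (or tunable lens) to refocus at z.
        print(f"refocus optics to {z:.2f} m")

def autofocus_step(tracker, scene, hmd):
    pixel = tracker.read_gaze_pixel()
    hmd.set_focal_distance(scene.depth_at(pixel))

autofocus_step(EyeTracker(), Scene(), HMD())  # would run once per frame
```

Note that a depth-buffer lookup always returns the nearest visible surface at the gaze pixel, which is one answer to the ambiguity M.H. raises next, though it cannot tell whether the user is fixating on the near object or trying to look past it.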
M.H.
Posted on Friday, February 15, 2002 - 11:11 am:
I think the problem is the following: there are too many objects at different Z depths in a given direction. You cannot know whether the user wants to see the tree in the foreground or the mountains in the background behind the tree...
David C. Qualman
Posted on Friday, February 15, 2002 - 9:19 pm:
A really cool idea would be to aim a camera into the eye, to see the light that is reflected from inside of the eye. Shine a beam of light into the eye, and give the camera a filter so it only sees the beam of light that is reflected from the back of the eye. This camera would then see the beam of light after it has gone through the eye's lens, reflected from the back of the eye, and gone through the lens again. Thus, the camera sees a beam of light that was defocused (i.e. spread) twice by the same lens. If we already know the shape of the beam before it enters the eye, and we know the shape of the beam after it exits the eye, we can compute the point-spread function of the lens. In effect, we could measure how much the lens defocused the light, and thus see where the eye is trying to focus. If we can measure the spread of the light beam accurately enough, we could watch the user adjust the focal length of the lens in their eye, and the HMD could quickly adjust to put the right depth of the scene in focus. We wouldn't even need to know where the eye is aiming; we would just measure how far away the eye is trying to look. Of course, this assumes perfectly designed lenses and retinas. Few of us have those.
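In geometric-optics terms, the beam reflected back out of the eye converges toward the distance the eye is accommodated to (this is essentially how retinoscopes and autorefractors work), so one way to turn David's measurement into a number is to sample the return beam's width at two planes and extrapolate to its focus. The sketch below is purely illustrative; the sensor positions and radii are invented example numbers, and real eyes add the diffraction and aberrations David mentions.

```python
# Geometric-optics sketch of the double-pass measurement: the return
# beam narrows toward the distance the eye is focused at, so two
# beam-radius samples along the return path let us extrapolate the
# fixation distance. All numbers are invented examples.
def fixation_distance(r1, z1, r2, z2):
    """Beam radii r1, r2 (metres) measured at distances z1 < z2 from the
    eye; returns the distance where the linearly extrapolated radius
    reaches zero, i.e. where the eye is focused."""
    slope = (r2 - r1) / (z2 - z1)   # negative for a converging beam
    return z1 - r1 / slope          # solve r1 + slope * (z - z1) = 0

# Example: a 4.0 mm beam at 2 cm shrinking to 3.9 mm at 4 cm implies the
# eye is focused roughly 0.82 m away.
print(f"{fixation_distance(0.004, 0.02, 0.0039, 0.04):.2f} m")
```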
Richard Scullion
Posted on Monday, February 18, 2002 - 1:02 pm:
I think it would actually be possible, but you would have to work out where BOTH eyes are looking and triangulate the position. You couldn't do it with focus alone, as the focus is always on the surface of the screen.
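For completeness, here is a short sketch of the triangulation Richard suggests: take one gaze ray per eye and find the point where the two rays pass closest to each other. The eye positions and gaze directions would come from an eye tracker; the numbers below are invented for the example.

```python
import numpy as np

# Triangulate the fixation point from one gaze ray per eye.
def fixation_point(p_left, d_left, p_right, d_right):
    """Midpoint of the shortest segment between the rays p + t*d (with
    tracker noise the rays never intersect exactly)."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w = p_left - p_right
    b = d1 @ d2                 # cosine of the angle between the rays
    denom = 1.0 - b * b         # ~0 when the rays are nearly parallel
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    return (p_left + t1 * d1 + p_right + t2 * d2) / 2.0

# Eyes 65 mm apart, both verged on a point 1 m straight ahead:
left, right = np.array([-0.0325, 0.0, 0.0]), np.array([0.0325, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(fixation_point(left, target - left, right, target - right))
```

One caveat: as the fixation point moves out to a few metres, the two rays become nearly parallel and the depth estimate gets very noisy, so vergence only resolves depth well at close range.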
Traco
Posted on Monday, July 12, 2004 - 4:18 am:
Has anyone thought about whether there is an auto-focus HMD? Nowadays there are only HMDs with a fixed focal length. Why doesn't anyone investigate this? Would it be useful?