ToxicX
Posted on Monday, March 12, 2001 - 11:47 pm:
Our latest review takes a look at the 3DPlus package from SSC, with a pair of wired Eye3D glasses running on a manual switch with the Wicked3D driver and the brand-new Soft4D MPEG/VideoCD player. The VideoCD player converts any MPEG movie to a stereoscopic movie in real time; the effect is stunning. http://www.stereovision.net/
Christoph Bungert (Admin)
Posted on Tuesday, March 13, 2001 - 12:12 pm:
Toxic, the demo videos on the 3Dplus CD are basically Pulfrich material with horizontal camera movement. They even work with TV cardboard glasses most of the time. Try some other VCDs with lots of static shots, vertical movement, shaky hand-held camera, etc. Christoph
ToxicX
Posted on Tuesday, March 13, 2001 - 10:20 pm:
It works as long as there is ANY movement, not restricted to horizontal. The explosion is a static shot, where the moving ball comes out of the background. Zooms work, static shots too. The Bond gallery shows this: http://www.stereovision.net/toxicx/shots/drno/drno.htm I don't remember any horizontal camera movement in the Bond movie, some in The Matrix, and not much in the music video VCDs or the web MPEG clips I viewed. While the demo streams are probably optimized, only 3 of the shots were caught while moving horizontally, and not in a straight left-to-right line. I don't have The Blair Witch Project or other shaky handycam MPEGs around, but I'll look around on the web and make some galleries later.
Eric Lindstrom
Posted on Saturday, March 17, 2001 - 9:52 pm:
I'd assume the software builds 3D geometry off of objects in a scene from one frame to the next. The software approximates the depth values by comparing one frame to the next and extrapolating the differences in the scene. Sort of like the Pulfrich effect on steroids. This is actually a good technique, and the better the software's algorithms, the better the stereo effect gets. I'd also assume that the 3D looks better in an actual moving scene, since the software could correct the stereo in real time as it "learns" more about the scene (i.e., the longer the scene, the better the software can figure out its depth values). -Eric L.
Brightland
Posted on Sunday, March 18, 2001 - 5:06 am:
Hey guys, Christoph is correct. One of the Bond images shows strong vertical parallax on James' head. The software appears to take the current frame and the next frame to create a "stereo" pair. No motion, no stereo. Vertical motion, vertical parallax. This can cause eyestrain over time. There is no fully automated way to extract 3D depth when there is no movement. To fix vertical parallax problems, you'd need to compute some form of image correlation (such as LCC, etc.), which is rather CPU-heavy and not going to be real-time in the near future. The only way to get decent results would be to have an operator create depth outlines (using some custom software), in much the same way that B&W movies are colorized (operators select areas manually, and the software automates some of the process). Regarding Eric's learning idea, neural networks might be trainable for such tasks (they do OK for pattern-recognition apps). John
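For illustration, a minimal sketch of the frame-pair ("time-shift") scheme John describes, assuming OpenCV for video I/O; this shows the principle only, not Soft4D's actual code:

```python
import cv2  # assumed available for reading the video

def frame_delay_pairs(path, delay=2):
    """Pair each frame with one 'delay' frames earlier. Horizontal
    camera motion then yields horizontal parallax (Pulfrich-style),
    but vertical or shaky motion yields vertical parallax -- the
    eyestrain problem discussed above."""
    cap = cv2.VideoCapture(path)
    recent = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        recent.append(frame)
        if len(recent) > delay:
            yield recent[-1], recent[-1 - delay]  # (left eye, right eye)
            recent.pop(0)
    cap.release()
```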
ToxicX
Posted on Sunday, March 18, 2001 - 1:36 pm:
The software uses the motion vectors in the MPEG data, which makes it usable on a PC; without this info it would require a lot more CPU power. The 25 fps makes it troublesome to get good shots, since fast movement displays weirdly on a screenshot, much like taking a photo in a cinema: you get blur if you shoot between two frames. No, it does not just display the stereo by taking every other frame for the depth effect. Look at the explosion: it's a static camera, only the fireball moves toward you. There is no left/right or up/down movement there. I guess I should post more shots and some clips to better explain what I mean.
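One way such a player could turn motion into depth even for radial movement like the explosion is to use only the magnitude of each motion vector, not its direction. A sketch of that idea, with dense optical flow standing in for the MPEG motion vectors (how the player actually works is speculation):

```python
import cv2
import numpy as np

def motion_magnitude_depth(prev_gray, curr_gray):
    """Depth proxy from motion magnitude: faster-moving pixels are
    assumed nearer. Because only |v| is used, a fireball expanding
    toward the camera registers just like a horizontal pan.
    Inputs are 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)     # |v| per pixel
    return magnitude / (magnitude.max() + 1e-6)  # 0 = far, 1 = near
```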
Christoph Bungert (Admin)
Posted on Sunday, March 18, 2001 - 3:25 pm:
>>>I'd assume the software builds 3D geometry off of objects

This is science fiction. According to Soft4D the program looks for horizontal movement and adjusts the time-shift accordingly. The time-shift applies to the whole image, I guess; I don't think there's manipulation within a single frame, i.e. object manipulation. There are 8 different possible time-shifts, 4 in each direction. Soft4D claims to try to avoid vertical parallax. The only way I can imagine achieving this is by setting the time-shift to zero as long as too much vertical movement is going on. I see lots of vertical parallax in the software, but my machine doesn't fulfill the minimum system requirements (PII-450 vs. PIII-550). Also, I was asked to wait for the next release, which should be enhanced.

>>>It works as long as there is ANY movement, not restricted to horizontal.

Well, it comes down to the question of what 'it works' means. Basically I think this statement is utter nonsense! Christoph
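A guess at what that shift-selection logic might look like, given dominant per-frame motion estimates; the 8 discrete shifts and the zeroing rule follow Christoph's description above, while the thresholds are made up:

```python
import numpy as np

def choose_time_shift(dx, dy, max_shift=4, v_ratio=0.5):
    """Quantize dominant horizontal motion dx (pixels/frame) into one
    of 8 possible time-shifts (+-1..+-4 frames). Fall back to 0 when
    motion is negligible or the vertical component dy is too strong,
    to avoid vertical parallax."""
    if abs(dx) < 0.5 or abs(dy) > v_ratio * abs(dx):
        return 0                            # no shift: avoids vertical parallax
    shift = int(np.clip(round(abs(dx)), 1, max_shift))
    return shift if dx > 0 else -shift      # sign = which eye gets the delayed frame
```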
Brightland
Posted on Monday, March 19, 2001 - 12:21 am:
Using the MPEG motion data is a good idea. It's true that you can compute depth from *any* motion, and then convert said motion into horizontal parallax. So, given all these velocity vectors, how do you determine which objects are close in and which are far out? What about far-away objects moving quickly, and nearby objects moving slowly? If you assume that all motion is "camera relative", caused by camera motion only, then it can work OK (nearby objects move fast, far-away objects move slower). Objects moving relative to the camera are going to be problematic (like James Bond's body in the sample image). If someone has figured out a way to compute such motion properly, fully automated in real time, that would be very impressive. While anything is possible, some things are highly unlikely on today's hardware ;-). Regards, John
Christoph Bungert (Admin)
Posted on Tuesday, March 20, 2001 - 12:39 pm:
>>>>It's true that you can compute depth from *any* motion, and then convert said motion into horizontal parallax.

How? Well, I guess you mean diagonal movement. Where do you get your stereo pair from when it's a clean vertical movement, with no horizontal elements? BTW, even if it is theoretically possible, I doubt 3dplus does this - yet.

>>>>Using the MPEG motion data is a good idea.

I must admit I'm not an MPEG-compression expert, but I bet that MPEG has no idea of objects and no idea about distances. I assume it divides the screen into squares and compares those squares to those of the prior key image. Then it tries to recycle as many squares (pixel arrays) as possible, even if they moved. I think it's hard enough to shoot a good stereo photograph, not to mention a good stereo movie with REAL 3D equipment. Christoph
Eric Lindstrom
Posted on Tuesday, March 20, 2001 - 10:32 pm:
Christoph, I'd appreciate it if you didn't quote my posts out of context. Cutting a sentence in half halves its meaning as well. For example:

>>>I must admit, I'm not an MPEG-compression expert

...Then don't talk about MPEG compression. See what I mean? -Eric L.
Brightland
Posted on Thursday, March 22, 2001 - 6:56 am:
Christoph, You can convert *any* velocity (in any direction) to a horizontal offset. Low velocity = into the screen (positive offset), high velocity = out of the screen (negative offset). That's how the explosion example could look decent (a smoke front moving at relatively constant velocity in all directions, with nearer particles moving faster due to perspective). It's true, shooting real stereo 3D is challenging, but it's cool to see people getting enthusiastic about stereo3D, regardless of how it is generated. Regards, John
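A sketch of that velocity-to-parallax rule plus a crude synthetic right-eye view (the constants are illustrative, and the naive forward warp ignores occlusions, which hints at why doing this well is hard):

```python
import numpy as np

def velocity_to_parallax(magnitude, max_parallax=8.0):
    """Map per-pixel motion magnitude to a signed horizontal offset:
    slow pixels go behind the screen (positive offset), fast pixels
    come out of it (negative offset), regardless of motion direction."""
    m = magnitude / (magnitude.max() + 1e-6)   # 0 = slow, 1 = fast
    return (0.5 - m) * 2.0 * max_parallax      # +max .. -max pixels

def synthesize_right_view(left, parallax):
    """Shift each pixel of the left view horizontally by its parallax.
    A crude forward warp: holes keep the original left-view pixels."""
    h, w = left.shape[:2]
    right = left.copy()
    xs = np.arange(w)
    for y in range(h):
        x_new = np.clip(xs + parallax[y].astype(int), 0, w - 1)
        right[y, x_new] = left[y, xs]
    return right
```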
Christoph Bungert (Admin)
Posted on Thursday, March 22, 2001 - 11:58 am:
>>>Christoph, I'd appreciate it if you didn't quote my posts out of context.

So what was the context in this case?

>>>I must admit, I'm not an MPEG-compression expert
>...Then don't talk about MPEG compression.

If you know better, please tell us. The reason I wrote this is to get more info by discussing this with others. BTW, you talked about MPEG before me, so - are you an expert? Christoph
Christoph Bungert (Admin)
Posted on Thursday, March 22, 2001 - 12:09 pm:
>>>You can convert *any* velocity (in any direction) to a horizontal offset. Low velocity = into the screen (positive offset), high velocity = out of the screen (negative offset).

But this would require manipulation at the object level. Furthermore, a fast object in the distance may travel across the screen at the same speed as a slow object in the foreground. How can an algorithm distinguish between the two? Stereo3D is about distance, and the velocity of an object doesn't tell you its distance. In order to get a parallax for objects you have to cut them out of the background. This leaves gaps in the background which have to be filled with information from prior or later frames. Very demanding, and even if you achieve it, the objects themselves will remain flat, like cardboard cutouts.

>>>It's true, shooting real stereo 3D is challenging, but it's cool to see people getting enthusiastic about stereo3D, regardless of how it is generated.

Bad stereo3D may well hurt the market. There is very little enthusiasm in the general public about good stereo3D, and I'd hesitate to confront them with anything which isn't first-rate. Christoph
Brightland
Posted on Thursday, March 22, 2001 - 9:29 pm:
Hi Christoph, I raised the same points regarding near/far velocities in my 3/18/01 post above. There is no easy solution on today's hardware; that was my point. However, you can use velocity to provide pseudo-3D, which can work for camera-relative motion (or where motion is relatively constant for near and far objects). Velocity is simply distance over time (d/t, m/s, etc.); we're dropping time and taking the instantaneous displacement between frames (distance). We're then converting said distance into a horizontal displacement (generating a synthetic right-eye view from the left, or vertical-parallax-correcting the second frame). Regarding velocity-to-parallax conversion: I'm not talking about any "objects", I'm talking about correlated pixels. I've written Linear Correlation Coefficient code that finds pixel correlation between left/right eye views (you can do the same between two video frames). Using LCC and a 2D spline-warp algorithm, you can easily compute pseudo-depth from velocity without holes or gaps (regardless of direction). This would not even be close to real-time, though. In summary, it is possible to convert any velocity to horizontal parallax, given simple rules (based on absolute velocity magnitude, and even with variations based on direction). On current computer hardware it will not be perfect, but it will work for most camera-relative (motion) shots, and it will eliminate problems such as "James Bond's head" having vertical parallax (it will eliminate all vertical parallax for correlated pixels). Regards, John
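A brute-force sketch of the kind of per-block LCC matching John mentions; the exhaustive search over a window is the CPU-heavy step that keeps this out of real time on 2001-era hardware:

```python
import numpy as np

def lcc(a, b):
    """Linear (Pearson) correlation coefficient between two blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-9
    return float((a * b).sum() / denom)

def match_block(frame0, frame1, y, x, size=16, search=8):
    """Find where the block at (y, x) in frame0 went in frame1 by
    maximizing LCC over a small search window; returns the (dy, dx)
    displacement and its correlation score."""
    block = frame0[y:y + size, x:x + size].astype(np.float64)
    best, best_dxdy = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0:
                continue  # window falls off the frame edge
            cand = frame1[yy:yy + size, xx:xx + size].astype(np.float64)
            if cand.shape != block.shape:
                continue
            c = lcc(block, cand)
            if c > best:
                best, best_dxdy = c, (dy, dx)
    return best_dxdy, best
```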
Eric M. Lindstrom
Posted on Friday, March 23, 2001 - 1:54 am:
Okay, allow me to explain. In my original post, I stated:

>I'd assume the software builds 3D geometry off of objects in a scene from one frame to the next.

I was not referring to 3D objects, but to a given object in a film's scene. By comparing its position on screen from one frame to the next, and taking into consideration the movement in the film's scene, it would be possible to approximate that object's position in terms of depth. Perhaps it was partly my bad; a more appropriate term would be "subject".

Since the start of man's ability to represent objects as drawings, or "subjects" in a hand-drawn form, there have been "rules" to follow when illustrating depth. When drawing an illustration with perspective, an object that is further away from the viewpoint of a scene is drawn smaller than a similar object placed closer to the viewer's "point of view". The ancient Japanese represented depth by placing far-away objects higher on the page than those that were closer to the bottom. Size was also used by them to represent depth. Lastly, in modern times, when you illustrate in perspective, you first draw a "horizon", an imaginary line that separates the sky from the ground, along with a "vanishing point", or in other words, the center of perspective. The further away a subject in the picture is placed in the scene, the smaller it is drawn. Also, the smaller it gets, the more it converges upon the vanishing point. Read some books on illustrating in perspective to see what I mean on this.

Now, while this fools the brain into perceiving depth, it is not "true" depth, since an illustration is not usually in stereo. However, that's not to say you couldn't do it; it just means you would have to draw two images, one for each eye, with the proper perspective. Voila! Stereo perspective illustration.

Now, if a program could compare one frame to the next in a video file, which is in perfect perspective since it's a photograph, then if the camera or an object moves, it would be possible for the software to compare the perspective of the two frames while noting changes between them. Then, using algorithms, it could extrapolate the depth in the scene, based on the changes in the scene itself, using the simple rules of perspective (i.e., if the girl's boobs get bigger because she's walking towards the camera, the computer says "Oh, they're getting bigger and moving away from the vanishing point, that means they're getting closer!"). The computer can then render the changes to two different images (i.e., a stereo pair). This may not be possible with a direct stream, but don't forget, the video is in a file! The app can "read ahead" and compare multiple frames to estimate depth, and render two images based off of those differences, or maybe render a wireframe "bumpmap", which it simply textures with the video file itself. Either method is possible, and John's opinion on the neural network thing is right along my lines of thinking. If you have a fast enough PC and really nice algorithms to handle the chores, it could do this in real time, but it would definitely have better results if you let the machine "convert" or pre-process the file before playing. This is not science fiction; the concept is strictly academic.

Now, to move on to my other argument: you took me out of context. First, I posted the following complete, intelligible phrase:

I'd assume the software builds 3D geometry off of objects in a scene from one frame to the next.

and YOU hacked it up, so it read:

I'd assume the software builds 3D geometry off of objects

changing the meaning of the original phrase altogether. You, sir, are nothing more than a hack, attempting to distort my original text to suit your own needs. The original "context" involved how the software in question was able to convert 2D video to 3D in real time. It wasn't about drawing a dedicated 3D wireframe scene like in Quake II, nor was it about playing MPEG layer 2 videos, or what compression it supported. The whole point of this discussion was to establish how the app is able to make a 2D video into a moving stereo pair. The MPEG compression doesn't mean shit, because this is how the file is packaged, and the software views the video the same way we do, as a flat 2D image. The computer is just doing the work of turning the flat images or "frames" into a stereo pair. A human being can do this too, but it would take forever to painstakingly hand-draw the extra frames. The computer is taking the drudgery out of that chore by doing it using mathematical algorithms and simple rules of perspective. Plain and simple.

As for me being an MPEG expert, does making VCDs count? Does my work in coding count? I don't give a fuck whether or not you think I'm an expert in the field, because when I have a problem understanding something, I open a goddamned book and I read that motherfucker! Hell, sometimes I'll ask advice from people that know shit about the subject too, but I won't act a fool when I get an idea or opinion I don't like from someone else! The point here is, I don't need to be an expert, because any problems that come my way, I solve, and most of the time I solve them on my own. I don't give a shit about labels or self-proclaimed "experts", which is just another word for "high and mighty asshole". I do not pester people for answers, and I definitely don't give anybody shit for their ideas or criticism. I just take it all in and move on. Fucking Anarchy!!! -Eric L.
Michal Husak
Posted on Friday, March 23, 2001 - 4:05 pm:
Eric L: maybe using fewer shits and fucks would be a good idea. All: MPEG compression really does have information about motion inside. Just finding this motion is one of the most time-consuming parts of the MPEG compression algorithm. Instead of giving information for a new group of pixels, the algorithm gives the vector by which to shift blocks of pixels from old (or future) images to construct the current image. I think that just this information about local motion in the scene could be used for artificial stereo creation...
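A simplified sketch of the block shifting Michal describes (real MPEG inter coding additionally adds a coded residual and uses half-pel vectors, both omitted here); these per-macroblock vectors are the "free" motion field a converter could reuse:

```python
import numpy as np

def motion_compensate(reference, vectors, block=16):
    """Rebuild a predicted frame from a reference frame plus one
    (dy, dx) motion vector per 16x16 macroblock: each block of the
    new frame is copied from a shifted block of the old one.
    Frame dimensions are assumed to be multiples of the block size."""
    h, w = reference.shape[:2]
    predicted = np.zeros_like(reference)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            dy, dx = vectors[by // block][bx // block]
            sy = int(np.clip(by + dy, 0, h - block))
            sx = int(np.clip(bx + dx, 0, w - block))
            predicted[by:by + block, bx:bx + block] = \
                reference[sy:sy + block, sx:sx + block]
    return predicted
```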
Eric Lindstrom
Posted on Saturday, March 24, 2001 - 1:18 am:
In response to the above, I will make an effort to refrain from using off-color expletives in future posts. Furthermore, I apologize if I have offended anybody. My bad. I usually make it a point to refrain from cursing, and anybody who regularly reads my posts will back me up on that. Still, I feel it is my obligation to remind you of my constitutional right to free speech and expression. If I feel the only way to get my point across is to use a curse word, then I will do so, without hesitation. However, out of respect for your request, I will do my best to keep it to a minimum. -Eric L. BTW: Don't forget the palette information stored in the file. Color values can also be display cues as to the depth of a subject in a given scene. By all rights, you are correct in assuming it's an "artificial" stereo effect, but by definition, any stereo display method other than viewing a real object with your own eyes could be considered "artificial". However, even though playing Quake II in 3D is artificial in nature, the effect is stunning. 2D-to-3D video conversion can get to this point eventually, when the computers get a bit faster and the software gets a little better. Soft4D is definitely a step in the right direction toward that goal, however.
Christoph Bungert (Admin)
Posted on Saturday, March 31, 2001 - 12:34 pm:
>>>However, even though playing Quake II in 3D is artificial in nature, the effect is stunning. 2D-to-3D video conversion can get to this point eventually, when the computers get a bit faster and the software gets a little better.

Quake 2 contains artificial, but meaningful, 3D data; a monoscopic movie does not. 2D-to-3D movie conversion will reach this point when the computer fully understands the scene, i.e. knows what a car, a house, a person IS, how large it usually is, how it usually moves and how it relates to its environment. It has to determine what the camera lens focal length was and how the camera moved relative to the environment. Then the computer has to recreate the whole scene synthetically and calculate a stereo pair. For this to happen the computers have to become a million times faster and the software has to become a billion times more intelligent. If you just translate movement of objects or color values into some parallax you get random, meaningless, eyestraining, headache-inducing bullshit. Christoph
Anonymous
Posted on Saturday, March 31, 2001 - 12:44 pm:
Eric, like it or not, Christoph knows much more than you; not only that, he owns this board! Why don't you show him a little more respect and quit lashing out to make yourself look big? It really makes you look very, very bad.
Christoph Bungert (Admin)
Posted on Saturday, March 31, 2001 - 1:02 pm:
Maybe such statements should not be posted anonymously. Also, everybody is equal on this board. In the past I was often wrong. The purpose of a discussion is to put one's own opinion to the test. Christoph
Michal Husak
Posted on Saturday, March 31, 2001 - 1:46 pm:
Christoph: You are right that general understanding of the scene and full 3D reconstruction of all objects is something which is impossible to do with existing computers and algorithms. Current computers are unable to solve such easy things as recognizing what is in a picture... The problem is that the method you describe is a 'human-based approach' to solving the problem... Computer-based methods of solving problems usually differ totally from the way a human would choose... I think that motion-vector-based analysis could give the Z-depth of objects, especially for specific camera motions... It will require human control of the conversion (fully automatic conversion at good enough quality will probably be impossible), but it could work, especially when the camera moves... The stereoscopic reconstruction must then be done off-line at the pixel level...
Eric Lindstrom
Posted on Saturday, March 31, 2001 - 9:03 pm:
Bah! Pointless discussion at this point. I don't want to continue, due to the fact that too many people seem to take me out of context around here. I NEVER disrespect Christoph, and even if I did, it would be something between me and Christoph. I respect his opinion, but I also respect my own. The questions posted in this forum are usually hypothetical; I just add to the hypothesis. I don't claim to know everything, and nobody else should either. I reiterate my earlier responses to many posts. 1) I don't like "know-it-alls"; they make me puke with their high and mighty stink. There is no one lower. 2) I will respect another's opinion, so long as they respect mine. 3) At least I have the balls to post here under my REAL name, which is more than can be said for some people who post here. I NEVER said the whole software idea could be fully implemented in this day and age. Need I quote from my earlier posts? I don't think I'll need to, since you have a scroll wheel on your mouse and can re-read them yourselves. If you have a fucking problem with the way I express my ideas, feel free to e-mail me; the address is rave669@usa.net. Even if I don't like what you have to say, I'll happily respond. I have no fear, and I'm secure enough about myself that I don't need to trash others' opinions to make myself feel smarter. As for this forum, it quickly wears on my patience. There are a lot of assholes that post here, piss off the users, and generally waste my time. They intentionally post in threads to start arguments, and they always have "anonymous" as their user name. I understand the philosophy behind allowing anonymous posts, but I think it's time Christoph started requiring user names and VALID e-mail addresses of the posters here. It's getting out of control. When I first happened upon this forum, there were good discussions, and folks like VROne, Michal and Christoph all gave good input and ideas, some of which found their way into my own experiments. Lately, however, the threads have been overflowing with BS posts (anyone who has been viewing the HMD-related threads in General Discussion will know what I mean). The worst part about it is, nobody can do anything to fix the problem, because anonymous posting grants users blanket immunity and does nothing to solve the abuse of the system. I'd respect Christoph even more if he would do more than scold the posters, who obviously aren't intimidated by this tactic. Require a valid e-mail address, and about 50% of the lamer posts will disappear, and the ones that make it through will be easy to trace back to a specific user most times. I know I'm probably going to hell when I die; why should I care? At least I can admit my faults. And before you flame me, remember this: if you poke a rattlesnake in the eye with a stick, it's going to bite you back, because you reap what you sow. -Eric L.
Eric Lindstrom
Posted on Saturday, March 31, 2001 - 9:12 pm:
BTW, if you don't like my attitude on the forum, you can thank my old friend Exile from alt.binaries.emulators.gameboy; he showed me that being spineless about your opinions gives others the opportunity to walk all over you. This is why I never back down and always "stick to my guns". People used to think Einstein was a moron, until his theories were proven correct. Disagree? Hate my guts? You know my e-mail address. Feel free to express your opinion. -Eric L.
Christoph Bungert (Admin)
Posted on Sunday, April 01, 2001 - 12:33 am:
>>>People used to think Einstein was a moron, until his theories were proven correct.

I don't think that's historically correct, but I won't spend hours reading Einstein biographies just to prove you wrong. As far as I know, other scientists believed in Einstein's theories before they were proven, and they actually worked out the experiments to get the proof. I took the time to comment on your larger, earlier post.

>>>Okay, allow me to explain. In my original post, I stated:
>>>I'd assume the software builds 3D geometry off of objects in a scene from one frame to the next.
>>>Not referring to 3D objects, but a given object in a film's scene. By comparing its position on screen from one frame to the next, and taking into consideration the movement in the film's scene, it would be possible to approximate that object's position in terms of depth.

The position and movement of an object within the x- and y-coordinates of the 2D movie frame don't tell the computer the distance, i.e. the z-value or depth, of the object. To do this it would have to identify the object, know the focal length of the camera lens, the field of view, the camera movement and the real size of the object.

>>>Perhaps it was partly my bad; a more appropriate term would be "subject".
>>>Since the start of man's ability to represent objects as drawings, or "subjects" in a hand-drawn form, there have been "rules" to follow when illustrating depth. When drawing an illustration with perspective, an object that is further away from the viewpoint of a scene is drawn smaller than a similar object placed closer to the viewer's "point of view". The ancient Japanese represented depth by placing far-away objects higher on the page than those that were closer to the bottom. Size was also used by them to represent depth.
>>>Lastly, in modern times, when you illustrate in perspective, you first draw a "horizon", an imaginary line that separates the sky from the ground, along with a "vanishing point", or in other words, the center of perspective. The further away a subject in the picture is placed in the scene, the smaller it is drawn. Also, the smaller it gets, the more it converges upon the vanishing point.
>>>Now, while this fools the brain into perceiving depth, it is not "true" depth, since an illustration is not usually in stereo. However, that's not to say you couldn't; it just means you would have to draw two images, one for each eye, with the proper perspective.
>>>Now, if a program could compare one frame to the next in a video file, which is in perfect perspective since it's a photograph, then if the camera or an object moves, it would be possible for the software to compare the perspective of the two frames while noting changes between them. Then, using algorithms, it could extrapolate the depth in the scene, based on the changes in the scene itself, using the simple rules of perspective.

The problem is, the computer would have to identify objects and separate them from the environment. To identify an object and determine its movement is very hard for today's hardware and software. In a movie there are different camera lenses used, there are fast cuts, permanent changes in lighting, all kinds of camera movements. Impossible to analyze for today's equipment. Even if the computer determined that there's an object coming toward the camera, it still wouldn't know the position of this object relative to the environment. Every movie contains static shots without movement, but these should still have depth. How would you treat them? The only way to get stereo out of a 2D movie camera is to move it horizontally at a certain speed. This way you get almost normal stereo pairs. This is how the color 3D documentaries on TV work (with those one-light-eye, one-dark-eye cardboard glasses). And that's the reason why all the demos on the 3dplus CD mainly consist of horizontally moving shots.

>>>The computer can then render the changes to two different images (i.e., a stereo pair). This may not be possible with a direct stream, but don't forget, the video is in a file! The app can "read ahead" and compare multiple frames to estimate depth, and render two images based off of those differences, or maybe render a wireframe "bumpmap", which it simply textures with the video file itself. Either method is possible, and John's opinion on the neural network thing is right along my lines of thinking. If you have a fast enough PC, and really nice algorithms to handle the chores,

If my grandma had a really nice warp core and a really nice quantum flux compensator, she could go to Jupiter.

>>>it could do this in real time, but it would definitely have better results if you let the machine "convert" or pre-process the file before playing. This is not science fiction; the concept is strictly academic.

No, it's not. What you said beforehand is mathematically wrong. You can't do these calculations with the given data from a 2D movie, even if you take movement into account. The required information isn't there.

>>>Now, to move on to my other argument: you took me out of context. First, I posted the following complete, intelligible phrase:
>>>I'd assume the software builds 3D geometry off of objects in a scene from one frame to the next.
>>>and YOU hacked it up, so it read:
>>>I'd assume the software builds 3D geometry off of objects
>>>changing the meaning of the original phrase altogether. You, sir, are nothing more than a hack, attempting to distort my original text to suit your own needs.

That's what you call 'hacked'? Anyway, even the longer statement leads nowhere.

>>>The original "context" involved how the software in question was able to convert 2D video to 3D in real time. It wasn't about drawing a dedicated 3D wireframe scene like in Quake II, nor was it about playing MPEG layer 2 videos, or what compression it supported. The whole point of this discussion was to establish how the app is able to make a 2D video into a moving stereo pair.
>>>The MPEG compression doesn't mean shit, because this is how the file is packaged, and the software views the video the same way we do, as a flat 2D image. The computer is just doing the work of turning the flat images or "frames" into a stereo pair. A human being can do this too, but it would take forever to painstakingly hand-draw the extra frames. The computer is taking the drudgery out of that chore by doing it using mathematical algorithms and simple rules of perspective. Plain and simple.

What algorithms? Anyway, the computer can't apply the rules of perspective, since it doesn't see a perspective. It doesn't see a person standing in front of a car which is in front of a building which is in front of the horizon. It would first have to have means to separate these elements, to identify them, to register when an object is partially hidden by another object; it would need a gigantic database of common object properties, and so on and so on and so on... We are decades of research away from this.

>>>As for me being an MPEG expert, does making VCDs count? Does my work in coding count? I don't give a fuck whether or not you think I'm an expert in the field, because when I have a problem understanding something, I open a goddamned book and I read that motherfucker!

So, how does a book about MPEG compression or perspective help in this case? It doesn't solve anything.

>>>Hell, sometimes I'll ask advice from people that know shit about the subject too, but I won't act a fool when I get an idea or opinion I don't like from someone else! The point here is, I don't need to be an expert, because any problems that come my way, I solve, and most of the time I solve them on my own.

O.K., then go ahead and write those magic 2D-to-3D conversion algorithms yourself, because no one else will be able to within the foreseeable future. Christoph
Michal Husak
Posted on Sunday, April 01, 2001 - 2:05 pm:
Christoph: You are not totally right. After reading a book about MPEG compression, you will find that the MPEG algorithm does motion analysis of the images at the pixel level. Such information could later be partially utilized for faster Z-depth recreation. I agree that an algorithm based on this approach could not work for scenes with no camera motion...
Christoph Bungert (Admin)
Posted on Sunday, April 01, 2001 - 2:25 pm:
Michal, I never said that MPEG compression doesn't contain motion information; at least I hope I didn't. I just say that I believe MPEG doesn't handle objects, but groups of pixels. I assume that 3dplus tries to determine whether there is horizontal movement going on, and in which direction and at what speed. Then it adjusts the time-shift accordingly.

>>>Such information could be partially utilized later for faster Z-depth recreation.

'Z-depth recreation' sounds too optimistic to me. Christoph
Michal Husak
Posted on Sunday, April 01, 2001 - 2:47 pm:
Christoph: You are right. MPEG studies the motion of 16x16 or 16x8 pixel macroblocks. I am only a bit more optimistic than you about the possible ways of conversion... To be honest, I cannot fully participate in this discussion, because I am working on a commercial application doing just this sort of conversion, and I have signed somewhat draconian agreements about not discussing how it works.
Christoph Bungert (Admin)
Posted on Sunday, April 01, 2001 - 9:31 pm:
It hurts me badly that high-caliber people in the stereo field, like you, are participating in this 2D-to-3D conversion business. Another example is Michael Starks of 3DTV. You guys should know better. Christoph (Last Man Standing)
Eric Lindstrom
Posted on Sunday, April 01, 2001 - 10:54 pm:
>>>That's what you call 'hacked'? Anyway, even the longer statement leads nowhere.

You're still skirting the issue. The fact is you misquoted me, and are trying to cover up that fact by making excuses. Pathetic. Obviously you've never attended college; misquoting a source will lead to a failing grade.

>>>So, how does a book about MPEG compression or perspective help in this case? It doesn't solve anything.

It would, if you READ the thing rather than putting it under the leg of your coffee table, you retard. I am done arguing with you. While at one time I respected your opinions, I no longer do. You are apparently too concerned with being "right" to actually accomplish anything. I am done posting in your two-faced forum from here on out. Maybe you read my thread, maybe not. I don't care either way, you ignorant bastard! I don't give a fuck about your opinion or what you "know" is right. Others here have, and still do, support my earlier arguments. They know what's up, and have intelligence. They don't take everything so literally, and they know what the hell I'm talking about, unlike you. I still respect Michal, since he backs up his arguments with solid data, unlike you, whose idea of an intelligent argument consists of "Impossible, that's science fiction". Really? Are you sure, oh all-knowing guru whose shit don't stink? If so, prove it! Remember, PCs and VR were once "science fiction" as well, dumbass. My prior respect for your opinion has been replaced by simple truths. You are a manipulative, egotistical fuck, and I hope you roll under a gas truck and taste your own fucking blood. No amount of posting can change that sad fact. All your base are belong to me. You are weak, spineless, and have no chance to survive, make your time! Hahaha!!! Just wanted to let you know, because I love you so much, Christoph! ^_^

>>>O.K., then go ahead and write those magic 2D-to-3D conversion algorithms yourself, because no one else will be able to within the foreseeable future.

Sorry bub, don't need to. Other viable companies are doing this already. If you bothered to read anything that gets posted here, you'd know this too! Ta! -Eric L. BTW: everybody KNOWS your supporters are actually YOU! They all post as anonymous. A weak attempt at social engineering if I've ever seen one. It didn't take that long for me to figure out why you're so outspoken in favor of anonymous posting. Save yourself the trouble in the future; you're not fooling anyone with a brain.
Christoph Bungert (Admin)
Posted on Sunday, April 01, 2001 - 11:33 pm:
No comment. C.
Eric Lindstrom
Posted on Monday, April 02, 2001 - 12:10 am:
Don't take that the wrong way, Christoph. ...Look at the date... April 01, 2001. April Fools! ^_^ Ha! I knew I'd somehow get a prank in somewhere today ^_- The way I've been acting lately, though, it didn't require too much setup. I beg all on this forum to accept my deepest apologies; I've had a bit o' the Trickster in me lately. All done! Now that it's out of my system, I will leave you in peace. Later all ^_^ -Eric L.
Michal Husak
Posted on Monday, April 02, 2001 - 10:34 am:
Christoph: People often must do what the market requires.
Christoph Bungert (Admin)
Posted on Monday, April 02, 2001 - 3:56 pm:
Michael, I'm just concerned. It's pretty hard to explain stereoscopy to the general public. How do you explain converted materials to them? It's even hard to sell real stereo to them. I see flocks of people comming out of an IMAX-3D movie and are just mildly impressed. I saw quite some photographies which were converted by hand in a long painful process and they're still not satisfying. Often objects look like cardboard-cutouts. The ground isn't expanding smoothly into the distance, but there are often identifiable layers with visible steps. Another question is - do we need colorized versions of black and white movies? Do we need stereo-converted 2D-movies? Does the market require converted material? We have all the games, some movies, lots of photos, cameras, even video-camera-add-ons. There is even the will for producing true 3D-material. I'm contacted sometimes by companies who would like to do a commercial or some multimedia titles in stereo. What's missing though is the infrastructure. There is no common ground, no standard these materials could be distributed for. Pulfrich is too limiting, anaglyph isn't good enough, shutterglasses have lots and lots of compatibility and ergonomic problems, movie theatres are not equipped to show 3D. There is also a lack of stereo-capable software for production, especially editing. Still I believe that someday real stereo will become kind of a standard. Let me explain. For more than half a century there were attempts to sell videophone-systems. All attempts failed due to cost, compatibility. It never reached critical mass. Now this technology comes as a by-product of the internet and new compression technologies. Noone has to force it anymore. It just comes our way naturally. The same will happen with stereo. It will be a by-product of the digital revolution in photography and film - free of charge. Digital cameras and projectors will become standard within the next 10 years. The cost will come down up to the point where stereo doesn't cause much extra-cost anymore. In order to get stereo into the homes projectors and headsets will be required. Hollywood isn't ignorant about stereo. Many influencial in Hollywood know about stereo and they would like to do it, but the technology isn't there. They want a system which works without glasses, without headaches, without ghosting and which works well on all seats in the theatre. The developers of digital projection systems for movie theatres, like JVC, are working on such technologies, but they still have to go a long way. Christoph |
Eric Lindstrom
Posted on Monday, April 02, 2001 - 10:04 pm:
I agree with Christoph's statements. I for one have always been a stereoscopy enthusiast. Remember the big 3-D movie boom of the '80s? I swear, every time a movie came out that had "3-D" in the title, I was there. I was fascinated by techniques that would add depth to a two-dimensional image. I saw them all: Treasure of the Four Crowns, Metalstorm, Jaws 3-D, Starchaser (did this one ever get dumped to frame-sequential video? I LOVED this film!). I still have some of the old paper polarizers from the showings, not to mention a huge collection of anaglyph prints, and the old standby, the View-Master. The problem was, people considered it a gimmick more than anything else, rather than a valid tool for cinematic expression. Alfred Hitchcock's "Dial M for Murder" was originally shot in 3-D, and Hitchcock used it as a storytelling tool. I went to MGM Studios, and they showed a couple of clips of the original stereo print, with the polarizers, and you can tell that, without the stereoscopic effect, something is missing from the film. If only more filmmakers were like Alfred. For things to change, the viewing public has to accept the value of stereoscopy, but at this point in time, most people still label it a "gimmick". -Eric L.
Michal Husak
Posted on Tuesday, April 03, 2001 - 9:42 am:
Christoph: You are right on every point. I am simultaneously working on software for normal stereo video editing (I mean source from 2 cameras or a camera with a NuView attachment), stereoscopic titling, corrections, etc. for the same company... Maybe this code will be more useful than the 2D-to-3D conversion one... It is really a pity that there are not enough good-quality real stereo movies + projection technology (a cheap, fast DLP projector?) available...