We have many senses. To make an alternative reality feel real, we need to engage those senses and fool the brain. Most VR systems make use of two: sight and sound; touch is also used, though not in a full reach-out-and-touch-someone sense (although people are working on it!).
Tor Nørretranders compiled data about the senses and their relative bandwidth in computer terms. This is a bit like comparing apples and motor oil, but it is still useful for seeing how each sense matters to VR.
So, if we can make you see something that is virtually real, we may be able to convince the brain that it is real. Yet simply putting a screen in front of your eyes is not the complete answer.
Giving someone depth perception is most of the answer.
This is a fairly complicated topic, but the main method of conveying the depth of objects is stereoscopic depth perception. Remember those View-Master toys? Here is an example of one:
You put in a disk that holds a left eye and a right eye image. The left and right images look nearly identical, but they represent what the left and right eyes would see if you were standing at that location, each slightly different due to parallax. On the disk shown here, you can see each of the left and right eye images. The lenses in the preceding View-Master focus your eyes on those images.
Your brain looks at these two images and fuses them into something that looks real; this is stereoscopic depth perception at work.
Yes, the View-Master was an early Virtual Reality viewing device!
Now, what is really going on here? How does stereo work?
When you look at something, perspective and the separation between your eyes make your eyes converge (aim inward) differently at something close than at something far away. In this diagram, the yellow lines show our sight lines to a near object, and the orange lines show the sight lines to a distant object. Note that the angle between the yellow lines is larger than the narrow angle between the orange lines:
A friendly robot loaned us the lower half of her eyes to make this image (that's why it shows circuit boards). Your real eye is constructed somewhat similarly; I omitted the light rays and where they fall on the back of the eye for illustration's sake.
Your brain automatically figures out whether your eyes are pointed at a close or a far object from the difference between the angles of the yellow and orange sight lines; this aiming of the eyes is called vergence, a term we will meet again shortly.
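To put rough numbers on this cue, here is a minimal sketch, assuming a typical interpupillary distance of about 63 mm (an assumed average; real IPDs vary from person to person). The vergence angle for a near object comes out noticeably larger than for a far one:

```python
import math

# A minimal sketch of the vergence cue, assuming a typical
# interpupillary distance (IPD) of 63 mm; real IPDs vary per person.
IPD_M = 0.063

def vergence_angle_deg(distance_m: float) -> float:
    """Angle between the two sight lines when both eyes aim at a
    point straight ahead at the given distance (in meters)."""
    # Each eye sits IPD/2 off the midline; the half-angle is
    # atan((IPD/2) / distance), and the full angle is twice that.
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

for d in (0.5, 2.0, 10.0):
    print(f"object at {d:>4} m -> vergence angle {vergence_angle_deg(d):5.2f} degrees")
# object at  0.5 m -> vergence angle  7.21 degrees
# object at  2.0 m -> vergence angle  1.80 degrees
# object at 10.0 m -> vergence angle  0.36 degrees
```

The brain reads that shrinking angle as increasing distance.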
This is just one method our brain uses to distinguish depth. Another that is also vital to Virtual Reality is the use of parallax.
Parallax refers to the fact that not only do the left and right android eyes point differently (as your eyes would, attached to your head), but each eye also sees a slightly different view of the same objects. This works even with one eye if you move your head left and right, and it is how people with mono-vision perceive depth (among other cues).
This is how your left eye sees the scene:
This is how the right eye sees the same scene:
More precisely, an object that is more distant shifts left or right less than a nearby object does when viewed with the other eye, or (by extension) when you move your head from side to side. Our brains (as well as the brains of animals) instinctively read these shifts as further or closer.
The red cube appears next to either the blue cube or the green cube, depending on which eye sees the image. Your brain integrates this, coupled with how the cubes shift if you move your head from side to side, to give you a sense of depth.
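To make the cube example concrete, here is a small sketch that treats each eye as a simple pinhole camera; the eye separation, image-plane distance, and cube positions are all illustrative numbers I have assumed, not measurements from the figure:

```python
# A minimal parallax sketch: two pinhole "eyes" looking down the +z axis.
# All positions are assumed, illustrative numbers.
IPD = 0.063      # eye separation in meters (a typical value)
FOCAL = 0.017    # image-plane distance, ~17 mm, a common simplified eye model

def image_x(eye_x: float, point_x: float, point_z: float) -> float:
    """Horizontal image position of a point as seen from an eye at eye_x."""
    # Perspective projection: shift into the eye's frame, scale by depth.
    return FOCAL * (point_x - eye_x) / point_z

cubes = {"red (near)": (0.0, 0.5), "blue (far)": (-0.1, 5.0), "green (far)": (0.1, 5.0)}

for name, (x, z) in cubes.items():
    left = image_x(-IPD / 2, x, z)
    right = image_x(+IPD / 2, x, z)
    disparity_mm = (left - right) * 1000
    print(f"{name:12s} shift between the two eyes: {disparity_mm:6.3f} mm")
```

The near cube's image jumps about ten times farther between the two views than the 5-meter cubes' images do; that difference in shift (disparity) is exactly what the brain converts into depth.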
With true VR (computer generated, or light-field-based 360 video), if you move your head, you will see this parallax effect, and the scene can seem just as real as it does to someone relying on stereoscopic depth perception.
I have mono-vision because I have a nearsighted eye and a farsighted eye, and VR works great for me. Your mileage may vary, but if you don't like 3D movies, give VR a try (then again, I really like 3D movies).
Parallax depth perception works even if you have only one eye, as long as you move your head from side to side.
There is one additional way that your brain determines the depth of an object: focusing, also called accommodation. (Actually, there are many cues beyond those listed, such as the bluish haze over objects in the far distance, like mountains, and other effects.) Focusing on an object in the real world makes that object, and other objects at roughly the same distance, appear sharp, while objects both further and closer appear blurry. Sort of like this:
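The amount of blur can be roughly quantified: to a first approximation, the blur circle on the retina grows with pupil size and with the difference, in diopters (1/meters), between the focused distance and the object's distance. A rough sketch, assuming a 4 mm pupil (pupil size actually varies with lighting):

```python
import math

# A rough sketch of retinal defocus blur, assuming a 4 mm pupil.
PUPIL_M = 0.004

def blur_angle_deg(focus_dist_m: float, object_dist_m: float) -> float:
    """Approximate angular size of the blur circle for an object seen
    while the eye is focused at focus_dist_m. Small-angle approximation:
    blur (radians) ~= pupil diameter * |defocus in diopters|."""
    defocus_diopters = abs(1 / focus_dist_m - 1 / object_dist_m)
    return math.degrees(PUPIL_M * defocus_diopters)

# Focused at 0.5 m: an object at 5 m is clearly blurred, one at 0.6 m barely.
for obj in (0.5, 0.6, 5.0):
    print(f"focused at 0.5 m, object at {obj} m -> blur {blur_angle_deg(0.5, obj):.3f} degrees")
```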
Current HMDs cannot accurately reproduce focus as an effect. You are looking at a small screen that generally has a fixed focal distance of about 5 feet in front of you. All objects, close and far, appear equally in focus because they are all actually drawn on that same screen. This can cause a mild VR discomfort called the accommodation-vergence conflict. Basically, if you look at the far cube (the salmon-colored one), your eyes will still focus as if the salmon cube were located where the red cube is; your eyeballs will, however, aim stereoscopically at where the cube should be. This effect is most pronounced with very close objects.
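Here is a minimal sketch of that mismatch, assuming a headset with its fixed focus at about 1.5 m (roughly the 5 feet mentioned above). The eyes converge on the virtual object while accommodation stays pinned to the screen, and the gap, measured in diopters, grows quickly for very close objects:

```python
# A minimal sketch of the accommodation-vergence conflict, assuming an
# HMD with a fixed focal distance of ~1.5 m (about 5 feet).
SCREEN_FOCUS_M = 1.5

def conflict_diopters(virtual_dist_m: float) -> float:
    """Gap between where the eyes converge (the virtual object) and
    where they must focus (the fixed screen), in diopters (1/meters)."""
    return abs(1 / virtual_dist_m - 1 / SCREEN_FOCUS_M)

# The conflict is small for distant virtual objects and large for close
# ones, which is why very close objects are the most uncomfortable.
for d in (0.25, 0.5, 1.5, 10.0):
    print(f"virtual object at {d:>5} m -> conflict {conflict_diopters(d):.2f} diopters")
```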
This means you may need to float GUI elements out into the room instead of placing them very close to the viewer, which can in turn cause UI elements to overlap.
VR design is challenging. I'm looking forward to what you design!