Virtual Reality

    Virtual Reality

    Virtual reality (VR) is a technology that allows a user to interact with a computer-simulated environment, whether real or imagined. Most current virtual reality environments are primarily visual experiences, displayed either on a computer screen or through special stereoscopic displays, but some simulations include additional sensory information, such as sound through speakers or headphones. Some advanced haptic systems now include tactile information, generally known as force feedback, in medical and gaming applications. Users can interact with a virtual environment or a virtual artifact (VA) either through standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove, the Polhemus boom arm, and an omnidirectional treadmill. The simulated environment can be similar to the real world, as in simulations for pilot or combat training, or it can differ significantly from reality, as in VR games. In practice, it is currently very difficult to create a high-fidelity virtual reality experience, largely because of technical limitations on processing power, image resolution and communication bandwidth. However, those limitations are expected to be overcome as processor, imaging and data communication technologies become more powerful and cost-effective over time.
    Varun Batish

  • #2
    What is Virtual Reality (VR)?
    Virtual Reality is generally a Computer Generated (CG) environment that makes the user believe that he/she is in a real environment. One may also experience a virtual reality simply by imagining it, like Alice in Wonderland, but we will focus on computer-generated virtual realities in this discussion.

    The virtual world is hosted on a computer in the form of a database (e.g. a terrain database or environment database). The database resides in the memory of the computer and generally consists of points in space (vertices), as well as textures (images). Vertices may be connected to form planes, commonly referred to as polygons. Each polygon consists of at least three vertices. A polygon can have a specific color, which may be shaded, or it can have a texture pasted onto it. Virtual objects consist of polygons. A virtual object has a position (x, y, z), an orientation (yaw, pitch, roll) as well as attributes (e.g. gravity or elasticity).
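    In code, these building blocks might be sketched like this (the class names and fields are illustrative, not from any real engine):

```python
from dataclasses import dataclass, field

@dataclass
class Vertex:
    """A point in space: one entry in the environment database."""
    x: float
    y: float
    z: float

@dataclass
class Polygon:
    """A plane formed by connecting at least three vertices."""
    vertices: list                    # Vertex instances
    color: tuple = (255, 255, 255)    # flat RGB color; a full engine could
                                      # shade it or paste a texture instead

    def __post_init__(self):
        if len(self.vertices) < 3:
            raise ValueError("a polygon needs at least three vertices")

@dataclass
class VirtualObject:
    """Polygons plus a position, an orientation and physical attributes."""
    polygons: list
    position: tuple = (0.0, 0.0, 0.0)       # (x, y, z)
    orientation: tuple = (0.0, 0.0, 0.0)    # (yaw, pitch, roll)
    attributes: dict = field(default_factory=dict)  # e.g. {"gravity": True}

# The simplest possible virtual object: a single triangle.
tri = Polygon([Vertex(0, 0, 0), Vertex(1, 0, 0), Vertex(0, 1, 0)])
obj = VirtualObject([tri], attributes={"gravity": True})
```

    A real terrain database holds many thousands of such polygons, but the structure is the same: vertices, polygons, and objects carrying position, orientation and attributes.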

    The virtual world is rendered with a computer. Rendering is the process of calculating the scene that must be displayed (on a flat plane) for a virtual camera view, from a specific point, at a specific orientation and with a specific field of view (FOV). In the past the central processing unit (CPU) of the computer was mainly used for rendering (so-called software rendering). Today graphics processing units (GPUs) render the virtual world to a display screen (so-called hardware rendering). GPUs are normally situated on graphics accelerator cards, but may also sit directly on the motherboard of the computer. Hardware rendering is generally much faster than software rendering.
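    The heart of rendering, calculating where a point in the scene lands on the flat image plane for a camera with a given FOV, can be sketched with similar triangles (the conventions and the function name here are illustrative assumptions; real renderers, and GPUs, use 4x4 projection matrices):

```python
import math

def project(point, fov_deg=90.0, width=640, height=480):
    """Map a camera-space point (x right, y up, z forward) to pixel coords."""
    x, y, z = point
    if z <= 0:
        raise ValueError("point is behind the camera")
    # Distance from the eye to the image plane for this vertical FOV.
    f = (height / 2) / math.tan(math.radians(fov_deg) / 2)
    # Similar triangles: screen offset = focal length * (offset / depth).
    px = width / 2 + f * x / z
    py = height / 2 - f * y / z   # screen y grows downward
    return (px, py)

# A point straight ahead lands in the middle of the screen:
print(project((0.0, 0.0, 5.0)))   # (320.0, 240.0)
```

    Doing this for every vertex of every polygon, every frame, is what makes rendering so computationally expensive, and why hardware rendering on a GPU wins.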

    The virtual environment (also sometimes referred to as a synthetic environment) may be experienced with a Desktop VR System, or with an Immersive VR System.

    With Desktop VR a computer screen is normally used as the display medium. The user views the virtual environment on the computer screen. In order to experience the virtual environment, the user must look at the screen the whole time.

    With Immersive VR the user is 'immersed in' or 'surrounded by' the virtual environment. This may be achieved by using:
      • A Multi-Display System, or
      • A Head Mounted Display (HMD)
    Immersive VR Systems provide the user with a wider field of view than Desktop VR Systems.

    With Multi-Display Systems the field of view (FOV) of the user is extended by using several computer monitors or projectors. When using projectors, the image may be front-projected or back-projected onto the viewing screen. Many simulators utilize three screens (forward view, left view, right view) to provide an extended FOV. The configuration where the user is surrounded by projection screens is sometimes referred to as a CAVE environment. The image may also be projected onto a dome that may vary in shape and size. With a multi-display system the user may look around as if in the real world.

    A Head Mounted Display (HMD) consists of two miniature displays that are mounted in front of the user's eyes with a headmount. Special optics enable the user to view the miniature screens. The HMD also contains two headphones, so that the user may experience the virtual environment aurally. The HMD is normally fitted with a Head Tracker, which tracks the position (x, y, z) and orientation (yaw, pitch, roll) of the user's head. As the user looks around, the position and orientation information is continuously relayed to the host computer. The computer calculates the appropriate view (virtual camera view) that the user should see in the virtual environment, and this is displayed on the miniature displays.

    For example, let's assume that the virtual environment is the inside of a car, and that the user is sitting behind the steering wheel. If the user looks forward, the head tracker will measure this orientation and relay it to the computer. The computer will then calculate the forward view, and the user will see the windscreen, wipers and bonnet of the car (the user will obviously also see the outside world, or out-of-window (OOW) view). If the user looks down, the computer will present a view of the steering wheel. If the user looks further down, the accelerator pedal, clutch (if present) and brake pedal will be shown.

    The orientation information may also be used to experience stereo and 3-D sound. If the user looks straight ahead, he/she will hear the engine noise of the car, with equal volume and phase in the left and right ear. If the user looks to the left, the volume of the engine noise will be higher in the right ear and lower in the left ear. Trackers that only track the orientation (yaw, pitch, roll) are referred to as 3 degree of freedom, or 3 DOF, trackers, while trackers that also track the position (x, y, z) are referred to as 6 DOF trackers.
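    The engine-noise example can be sketched as a simple panning function driven by head yaw (the yaw convention and the equal-power panning law are assumptions for illustration, not how any particular HMD's audio works):

```python
import math

def stereo_gains(head_yaw_deg):
    """Gains for a sound source straight ahead at yaw 0.

    Turning the head left (positive yaw) puts the source toward the
    listener's right ear, so the right gain rises and the left falls.
    """
    angle = math.radians(head_yaw_deg)
    # Equal-power pan: left^2 + right^2 == 1 at every head angle,
    # so perceived loudness stays constant as the head turns.
    left = math.sin(math.pi / 4 - angle / 2)
    right = math.cos(math.pi / 4 - angle / 2)
    return left, right

print(stereo_gains(0.0))    # equal gains (~0.707, ~0.707): engine centred
print(stereo_gains(90.0))   # (0.0, 1.0): looking left, engine in right ear
```

    A full 3-D audio system would also model phase (inter-aural time difference) and distance attenuation, but the tracker-drives-audio loop is the same.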

    Objects in the virtual world may be manipulated by means of a Data Glove. A data glove measures the flexure (bend) of the user's fingers. The user may grab a virtual object and put it down at a different spot, or throw it. The position (x, y, z) and orientation (yaw, pitch, roll) of the user's hand are measured with a 6 DOF tracker. With a force-feedback data glove, the user will also be able to deform the virtual object and feel the object (e.g. a tennis ball) resisting the deformation.
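    The grab decision a data glove enables can be sketched as a threshold on per-finger flexure (the threshold, finger count and 0-to-1 flexure scale are illustrative assumptions; real gloves report raw sensor values per joint):

```python
def is_grabbing(flexures, threshold=0.6, min_fingers=3):
    """flexures: per-finger bend values, 0.0 = straight, 1.0 = fully curled.

    The hand counts as grabbing when enough fingers are curled past the
    threshold, at which point the engine would attach the nearest virtual
    object to the hand's 6 DOF tracker pose.
    """
    curled = sum(1 for f in flexures if f >= threshold)
    return curled >= min_fingers

open_hand = [0.1, 0.1, 0.2, 0.1, 0.1]
fist = [0.9, 0.8, 0.9, 0.85, 0.7]
print(is_grabbing(open_hand))  # False
print(is_grabbing(fist))       # True
```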

    In order to navigate (e.g. walk or fly) in the virtual world, a Space Controller is used. The space controller could be a normal joystick or a computer mouse. For example, when the mouse is moved forward, the user moves forward in the virtual world; when it is moved to the left, the user moves to the left, and so on. Force-feedback joysticks or mice can provide haptic cues to the user, e.g. when the user moves into a virtual wall. Normal joysticks and computer mice are usually used in Desktop VR Systems. In Immersive VR Systems baseless joysticks are normally used as space controllers. This enables the user to leave the desktop and interact with the virtual world while standing up.
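    The mouse-as-space-controller idea above can be sketched as a function that turns mouse movement into a step through the virtual world (the axis conventions and speed constant are illustrative assumptions):

```python
def step(position, mouse_dx, mouse_dy, speed=0.05):
    """Advance the user's position on the ground plane.

    position: (x, z) in world units; mouse_dy > 0 means the mouse was
    pushed forward, mouse_dx > 0 means it was pushed to the right.
    """
    x, z = position
    return (x + mouse_dx * speed, z + mouse_dy * speed)

pos = (0.0, 0.0)
pos = step(pos, 0, 10)    # push the mouse forward: walk forward
pos = step(pos, -4, 0)    # push it left: step to the left
print(pos)
```

    A force-feedback controller would add a second channel in the opposite direction, pushing back on the user's hand when this step would cross a virtual wall.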

    It is also possible for different users to share the same virtual world. This is normally achieved by connecting the host computers to a computer network. Each user's host computer broadcasts the position and orientation of its user in the virtual world, so the users may 'see' each other there. In fact, users see representations of each other, referred to as avatars, in the virtual world. They are able to interact, working together or competing. The sharing of virtual worlds is generally referred to as 'shared virtual worlds' or 'networked virtual reality'.
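    The pose message each host broadcasts might look like the following sketch (the field names and JSON encoding are assumptions for illustration; production systems typically use compact binary protocols such as DIS over UDP):

```python
import json

def encode_pose(user_id, position, orientation):
    """Serialize one user's pose for broadcast to the other hosts."""
    return json.dumps({
        "user": user_id,
        "pos": list(position),       # (x, y, z)
        "ori": list(orientation),    # (yaw, pitch, roll)
    })

def decode_pose(message):
    """Recover the pose so the receiving host can draw the avatar."""
    data = json.loads(message)
    return data["user"], tuple(data["pos"]), tuple(data["ori"])

msg = encode_pose("avatar42", (1.0, 0.0, 2.5), (90.0, 0.0, 0.0))
user, pos, ori = decode_pose(msg)
print(user, pos, ori)
```

    Each host sends such a message many times per second and renders an avatar at every pose it receives, which is all that is needed for users to 'see' each other.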

    Sight and hearing are the main human senses currently used to experience virtual worlds. Touch (as in tactile and force feedback) is becoming more commonplace. Smell dispensers are entering the marketplace, enabling the user to smell the virtual world as well. Taste dispensers will follow soon.

    Last edited by Varun1; 02-12-2008, 11:08 AM.
    Varun Batish
