In the last decade, there has been a great deal of interest in using real-time interactive 3-D computer graphics to simulate real-world environments. One of the great appeals of these "virtual environments" (VEs) is that they can involve many sensory modalities, and can thus completely immerse users in a computer-generated environment.
To date, however, very few VE systems have fully realized their potential. For example, current cave automatic virtual environment (CAVE) systems typically depict only the external environment, not the user's own body. As a result, CAVEs are best suited to situations that require passive observation or minimal physical interaction with an external environment (e.g., a cockpit simulator). Situations that involve dynamic, body-based interaction between the user and the environment are harder to emulate and feel less natural. Moreover, natural navigation over long distances in a CAVE requires cumbersome locomotion devices such as treadmills, and these devices generally have difficulty simulating turns and orientation changes effectively.
In many ways, fully immersive VEs offer the greatest promise as a truly natural means of interacting with computer-generated environments. In practice, however, their advantages are rarely realized because: