Better Living Through Visual Chemistry
The Silicon Graphics cafeteria, Cafe Iris, serves primo cappuccino. Of course, good coffee to one person tastes like sludge to another. Likewise, while some think Cafe Iris is a good place to have a meeting, others think it's a zoo. This is the kind of subjectivity that perceptual psychologists love.
Subjectivity is a main reason that computer scientists and engineers may experience aggravation when working with perceptual psychologists. But if virtual environment developers want to achieve widespread adoption of immersive technologies, then we must jibe with the psych folks.
That's why I'm sitting here in Cafe Iris drinking cappuccino with Mary and Mike, two psychologists from NASA Ames Research Center. Mary, Mike, and I are discussing better living through visual chemistry. (The scientists call it improved graphical rendering through perceptual tuning.)
Dr. Mary Kaiser is a research psychologist with a Ph.D. in experimental psych; she joined the Aerospace Human Factors Research Division at Ames in 1985. Four years ago, when Mike Montegut was a U.C. Santa Cruz grad student, he wanted to merge his psych studies with his interest in computer science, so he went to Ames. Now he's asking questions like, "Why do we update parts of a visual database that people can't see?"
As perceptual psychologists, Mary and Mike ask these questions, then find answers. At NASA, this engineering helps improve the quality of visual displays in aerospace vehicle simulation and telerobotics. The display quality can make or break accuracy in flight and on the ground, so Mary and Mike help developers understand how people see the world. For instance, our limits of spatial and temporal resolution, stereopsis mechanisms, and attentional focus all filter visual input. VR developers haven't exploited this knowledge to reduce the computational cost required to achieve desired image quality and frame rate. That's what we're discussing now.
As she sugars her mocha, Mary says that during the first decade of immersive VR, we focused on developing and implementing interfaces and toolkits, taking a "brute-force approach." She chuckles, "The first time Mike McGreevy had me slap on a head-mounted display, I saw a virtual environment: monochrome, 100x150 pixels. McGreevy was talking about designing airplanes with it, and I was saying, 'Gee, Mike, this isn't supporting any critical aspect of visual perception that I can see!'"
Well, now that we can build better VR, we must consider which rendering techniques support optimal perceptual experiences inside those environments. That's why VR developers need to work with perceptual psychologists.
Such collaboration does not always come easily. Mary and Mike say you can't get a precise answer from a perceptual psychologist, even to a trivial question such as, "What field of view is best?" A computer scientist wants to hear a definitive response, like "120 degrees minimum." Instead, the psychologist will raise her eyebrows and muse, "That depends on your application. Does it require peripheral flow, like flight simulation, or a task that is primarily foveal, like surgery simulation?"
"We're trained to consider all the relevant parameters of human performance," Mary shrugs. "Consequently, we often seem constitutionally unable to give straightforward guidelines, so engineers have a tough time talking with us. Yet psychologists and system designers need to develop a common taxonomy for system design requirements."
Mike nods. "We need to define what's really necessary in a visual simulation, as opposed to what's efficient."
"But we know it's not always easy to translate perceptual knowledge into computational savings," Mary adds. "Take the fact that you're wasting 80-90% of your pixels once you're outside the region in which a normally sighted person has the clearest vision. Human peripheral spatial resolution is horrible. In principle, you could render peripheral regions in much lower resolution, and the observer wouldn't notice. Unfortunately, it's difficult to measure and predict where the viewer will look, so it's easier to render the entire display in high resolution."
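The peripheral falloff Mary describes can be sketched as a simple resolution schedule keyed to angular distance from the gaze point. This is only an illustration of the idea, not NASA's method: the function name, the 2-degree foveal radius, and the hyperbolic falloff constant are all hypothetical stand-ins, not measured acuity data.

```python
def resolution_scale(eccentricity_deg, fovea_deg=2.0, falloff=0.3):
    """Fraction of full resolution to render at a given eccentricity.

    Acuity is highest within roughly fovea_deg of the gaze point and
    drops off with angular distance beyond it; the hyperbolic shape and
    the falloff constant here are illustrative assumptions.
    """
    ecc = max(0.0, eccentricity_deg - fovea_deg)
    return 1.0 / (1.0 + falloff * ecc)

# Inside the fovea, render at full resolution.
print(resolution_scale(1.0))   # -> 1.0
# Far in the periphery, a fraction of the pixels would do.
print(resolution_scale(30.0))
```

A renderer could use such a schedule to pick texture or mesh level of detail per screen region; the catch, as Mary notes, is knowing where the viewer is actually looking.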
This research has produced rendering techniques that enhance graphical system performance with no perceptible loss of image quality. Working with colleagues at the University of Virginia, Mary and Mike developed level-of-detail modulation techniques that appear far more natural than those in common use. Figure 1 shows varying texture complexity; Figure 2 shows varying polygonal complexity. Most significantly, they developed techniques for rendering images with higher apparent resolution. To create stereo displays with this process, images of different resolutions are shown to each eye; the resulting fused image appears to possess the higher-resolution detail. Figures 3 and 4 show the process applied to a non-stereo image: two low-resolution images are phase-shifted and superimposed to create an image of higher apparent resolution than could be achieved with the sum of the polygons rendered in the component images.
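The phase-shift idea can be shown with a toy one-dimensional sketch: two coarse samplings of the same signal, offset from each other by half a sample, together carry twice the detail of either one alone. This is only a minimal illustration of the principle, not Mary and Mike's actual rendering technique; the function and the sample values are hypothetical.

```python
def combine_shifted(samples_a, samples_b):
    """Merge two half-sample phase-shifted coarse samplings.

    samples_a samples a signal at positions 0, 1, 2, ...;
    samples_b samples the same signal at 0.5, 1.5, 2.5, ...
    Combining them doubles the effective sampling rate.
    """
    out = []
    for a, b in zip(samples_a, samples_b):
        out.extend([a, b])
    return out

coarse_a = [0, 2, 4]   # a ramp signal sampled at x = 0, 1, 2
coarse_b = [1, 3, 5]   # the same ramp sampled at x = 0.5, 1.5, 2.5
print(combine_shifted(coarse_a, coarse_b))  # -> [0, 1, 2, 3, 4, 5]
```

Each coarse sampling alone misses half the ramp's values; together they reconstruct it fully, which is the payoff of superimposing phase-shifted low-resolution renderings.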
"Our goal," Mary says, "is to extend the upper range of graphical rendering performance and to enable lower-end systems to produce visual imagery that currently can be produced only with high-end systems."
Today Mary and Mike are testing and refining these techniques on the famous Mars terrain database gleaned from the Viking missions in the 1970s. "The Mars simulation comes from a database NASA acquired 20 years ago," Mary explains. "There's real excitement about what people will experience via this type of data. This is a shared cultural resource, not bits of data electronically rotting away in NASA warehouses. We're entering the golden era of unmanned planetary exploration!"
I'll drink to that. Coffee, anyone?
I loved talin's description of electric reality, virtual hallucinations. The mixture of fantasy and technoreality is enlivening and fresh in a way that is so hard to achieve for people like me whose thoughts are dominated by hypothetico-deductive scientific empiricism. Your novel world presents an alluring alternative reality.
I have a much stodgier approach to VR, trying to decipher how the human mind works from its interactions with the limited representations of reality that goggles offer. It can be exciting at times when a new insight offers itself from the strange perceptions of these semireal environments. I see now how the sense of self in a seeing person is dominated by the reference frame of space, and how our understanding is derived from metaphoric extensions of spatial structures and dynamics that overpower the temporal and melodic sequences of our lesser senses. How I wish I could sing the sense of space!