Tools for Thought by Howard Rheingold

April, 2000: a revised edition of Tools for Thought is available from MIT Press, including a revised chapter with 1999 interviews of Doug Engelbart, Bob Taylor, Alan Kay, Brenda Laurel, and Avron Barr.
The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry, of orthodox computer science, or even of homebrew computerists; it was rooted in older, equally eccentric, equally visionary work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.


Chapter One: The Computer Revolution Hasn't Happened Yet
Chapter Two: The First Programmer Was a Lady
Chapter Three: The First Hacker and his Imaginary Machine
Chapter Four: Johnny Builds Bombs and Johnny Builds Brains
Chapter Five: Ex-Prodigies and Antiaircraft Guns
Chapter Six: Inside Information
Chapter Seven: Machines to Think With
Chapter Eight: Witness to History: The Mascot of Project Mac
Chapter Nine: The Loneliness of a Long-Distance Thinker
Chapter Ten: The New Old Boys from the ARPAnet
Chapter Eleven: The Birth of the Fantasy Amplifier
Chapter Twelve: Brenda and the Future Squad
Chapter Thirteen: Knowledge Engineers and Epistemological Entrepreneurs
Chapter Fourteen: Xanadu, Network Culture, and Beyond

Chapter Twelve:
Brenda and the Future Squad

To those of us who don't live and work in futurist sanctums like ARC, PARC, Atari, or Apple, such activities as flying through information space or having first-person interactions with a computer are hard to imagine in terms of what one would like to do on a Friday night. There simply aren't any analogous images available in our cultural metaphor-bank: Is it like watching television? Playing a video game? Searching through an infinite encyclopedia? Acting in a play? Browsing through a book? Fooling with fingerpaints? Flying a plane? Swimming?

My initial encounter with Alan Kay led me to several of the people who worked for him at the time, and I eventually ended up spending more time with Brenda Laurel and her colleagues than I did with Alan. Brenda and her friends were interested in the same questions that puzzled me: what would it feel like to operate tomorrow's mind-augmenting information-vehicles? My first experience with their work took place in a guarded, well-equipped room in Sunnyvale, California, home of Atari Systems Research Group. The following brief scenario is taken from my notes of that first observation:

The world was gray and silent before Brenda spoke.
"Give me an April morning on a meadow," she said, and the gray was replaced by morning sunshine. Patches of cerulean sky were visible between the redwood branches. Birds chirped. Brooks babbled.
"Uhhmm . . . scratch the redwood forest," Brenda continued: "Put the meadow atop a cliff overlooking a small emerald bay. Greener. Whitecaps."
Brenda was reclining in the middle of the media room. "The background sounds nice," she added: "Where did you get it?"
"The birds are indigenous to the northern California coast," replied a well-modulated but disembodied female voice: "The babbling brook is from the acoustic library. It's digitally identical to a rill in Scotland."
"There's a wooded island in the bay," continued Brenda, looking down upon the island that instantly appeared below her where only green water had been a moment before. She surveyed the new island from her meadow atop the cliff above the bay, then spoke again: "Monterey pine, a small hill, a white beach. Zoom into the beach. Let's walk up that path. There's a well under that banyan tree. I want to dive in and emerge bone-dry in the Library of Alexandria, the day before it burned."

A few groups on the leading edge of cognitive technology have been trying to find images to help them in their effort to materialize a mass-marketable version of Bush's Memex, Engelbart's Augmentation Workshop, and Kay's Dynabook. Those people who are attempting to design these devices share an assumption that such machines will evolve from today's computer technology into something that will probably not resemble the computers we see today. Ideally, we won't see these hypothetical computers of tomorrow, because they will be invisible, built into the environment itself.

Try to imagine a computer that is nowhere to be seen, and is set up to attend to your every wish, informationally speaking. You enter a room (or put a helmet over your head), and the room (or the helmet) provides multisensory representations of anything, real or imaginary, you can think of to ask it to represent. Science fiction writers of the past decades have done their share of speculating on what one might do in such a representationally capable environment. You could, for example, go skiing in the Alps with wraparound full-color three-dimensional visual display, authentic panphonic soundtrack, biting cold air, ultraviolet-rich high-altitude sunshine, spray of powder snow on your cheeks, the feeling of skis beneath your feet, of being impelled down a slope.

But you shouldn't have to limit your use of such a universal information medium to a real terrestrial experience. You could explore a black hole in a neighboring galaxy, navigate through your nervous system, become a Connecticut Yankee in King Arthur's court. If you want to extend your senses into the real world in real time, you can look at quasars with x-ray radiotelescope vision, CAT scan everything you see, hover above the earth in a weather satellite, zoom down to take an electron microscopic look at the microbes on a dust mote on a license plate in Kenya.

If you want to communicate with one person or an entire on-line network, you have all the media at your disposal, along with additional "dialogue support tools" to augment the interaction. Or the interaction might be private, limited to you and the informationscape -- for reasons of work or play.

Perhaps you want to know something about blue whales. Everything written in every magazine, library, or research data base is available to you, and an invisible librarian is there to help you, if you wish; just focus your eyes on a reference file and it fills the screen. Ask the librarian questions about what you want to know, or allow it to ask you questions. But you don't have to just read about whales. You can listen to them, watch them, visit them. Just ask, and you'll be underwater, swimming among them, or in a helicopter, watching them while you hover above the crystalline Baja waters.

The experience won't be strictly passive. You can act out the role of a whale or Louis XIV (or Genghis Khan, if that is your taste) in a simulated video encounter and make decisions about the outcome of that encounter. Paint palettes, text editors, music and sound synthesizers, automatic programming programs, and animation tools will give you the power to create your own blue whale or ancient Mongolian microworlds and romp around in them.

Since MIT, Lucasfilm, and Evans & Sutherland were in the bidding for Kay's services when he left Xerox, one can safely assume that Atari must have offered him something more. Although his obvious desire was to run an advanced software shop, Kay knew that his next software dream would require very advanced hardware. "You want hardware designers? We'll get you hardware designers," you can imagine them saying. Atari got him nothing but the best -- including Ted Hoff, the legendary Intel scientist who was the leader of the team that invented the microprocessor chip. Kay assembled his own software research team.

Brenda Laurel joined Atari Systems Research Group after a stint in their educational marketing division. When I first met her, she was involved in a research project that she insisted defied verbal description. She invited me to watch a special kind of brainstorming session they were just beginning to explore.

The Atari research building was in a typical Sunnyvale flatland industrial park, with the usual high-tech high-security trappings -- twenty-four-hour guards, laminated color-coded nametags, uniformed escorts. It was here that I joined Brenda and several of her colleagues in a group-imagination exercise connected with what they called a media-room project.

Brenda signed me in, walked me through the gray-walled, gray-carpeted corridors, and brought me to a large room, bare except for a few industrial-modern couches and chairs, a videotape setup, and two whiteboards. Inside the room were Eric Hulteen, the project leader; Susan, a red-haired, soft-spoken young woman; Scott, a quiet, spaced-out preppie type; Don and Ron Dixon, the robotics experts; Craig, a somewhat skeptical, bearded hacker; Jeff, Tom, Brenda, and Rachel, who was videotaping the event.

Rachel was short, had a crewcut, wore a tank-top tee shirt, purple blousy harem pants, and no shoes. Don and Ron were twins. A few in the group could be as young as twenty-three or twenty-four, the oldest was no older than thirty-five. Jeans and sandals were the dominant costume. Nobody wore a tie. Nobody had acne or a speech impediment. Nobody wore a plastic penholder.

As it was explained to me by Brenda and by Eric, whose project it was, a media room is an information terminal that a person can walk around inside -- a place where you can communicate directly with the machine without explicit input devices like keyboards. The room itself is set up to monitor human communication output. This presumes that all the hardware and software that are now in experimental or developmental stages will be working together to do what a good media room does -- without bothering the person who uses it with details of its operations.

Eric came to Atari from MIT's Architecture Machine Group, an innovative group led by Alan Kay's old friend and Atari consultant Nick Negroponte. The idea of "spatial data management" that came from the MIT group was a response to the problem of finding a way to navigate the huge new informational realms opened by computers, by adopting the metaphor of an information space that the user can more or less "fly" through. Until then, the dominant metaphor in software design had been the well-known "file cabinet," in which each piece of information is regarded as part of a "file folder" that the user locates through traditional filing methods. But what if the collection of information could be displayed visually and arranged spatially, so the user could have the illusion of "navigating" through it?

Perhaps the best-known demonstration of this metaphor was the "Aspen Map" created by Negroponte's group. To use this map, you sit in front of a video screen and touch the screen to steer your way down a photographic representation of the streets and houses of Aspen, Colorado.

A computer-directed videodisk connects the video steering controls to a very large collection of photographs of Aspen. The computer translates your position and your commands into the correct sequence of photographs. If you decide to look to the left, the screen shows the streets and houses that are located to the left of this position in the real city. If you decide to stop and take a closer look at one of the houses that are specially marked, or even open the door and look inside, you can do so.
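In modern terms, the lookup at the heart of such a surrogate-travel system can be sketched in a few lines. This is only an illustration of the idea, not the MIT system itself: the street names, headings, and frame numbers below are invented, and the real Aspen Map indexed many thousands of videodisk frames.

```python
# Each (block, heading) pair maps to the run of videodisk frames that was
# filmed while driving that block in that direction. All values are invented.
FRAMES = {
    ("main_100", "N"): range(1000, 1030),
    ("main_100", "S"): range(1030, 1060),
    ("galena_200", "E"): range(2000, 2040),
}

# Turning left rotates the traveler's heading counterclockwise.
TURN_LEFT = {"N": "W", "W": "S", "S": "E", "E": "N"}

def frames_for(block, heading):
    """Translate the traveler's position and heading into frames to play."""
    return list(FRAMES.get((block, heading), []))

# Driving north up the 100 block of Main plays one run of frames;
# a left turn changes the heading, so the next lookup plays a different run.
northbound = frames_for("main_100", "N")
new_heading = TURN_LEFT["N"]
```

The computer's whole job, in this view, is a fast translation from "where you are and which way you are looking" to "which photographs to show next."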

The kind of simple branching structure of a city's streets represents only the most basic kind of information base that can be represented spatially. The most important aspect of this idea doesn't have to do with road maps -- although this is obviously a good way to learn how to get around in a town you've never seen before. The important point is that some information domains can be organized around a spatial metaphor, creating a coherent environment that each user can move around in by following his own particular path. A reference work for someone trying to find the problem in an automobile engine or the plumbing system of a nuclear submarine could just as easily be mapped in such a way.

Whether they came from MIT, Carnegie-Mellon, or another video game manufacturer, every person in Kay's Atari group represented the cream of the crop of young minds in fields ranging from robotics to holography to videodisk technology to artificial intelligence to cognitive psychology to software design. The necessary hardware components of the media room will become available, everyone hopes, by the time the really tricky part -- the software design, construction, and debugging -- is on its way to completion.

The person inside a full-scale media room will have 360-degree visual displays of some sort -- high-resolution video or holographic images -- computer-generated and archived. Images can be retrieved from a library (and added to the library), or they can be constructed by the person or by the computer. There will be a total-sound audio system ranging into ultralow and ultrahigh frequencies. But the most important element is not in the sensory displays, which involve straightforward if now-expensive technology, but in the software -- in the way the room is designed to "know" what to do.

If the media room is to be the universal medium, the room itself must be able to see and hear the person inside, and "understand" what it sees and hears well enough to carry out the person's commands. Ideally, it should understand the person it is dealing with well enough to actively guide the fantasy or the information search, based on its knowledge of personal preferences and past performance. Bioelectronic sensors built into the floors will keep track of the user's mood. The only thing the room is presumed not to do is read minds.

One of the ways to describe a media room is "a computer with no interface," or "a computer that is all interface." When the computer interface disappears, you are not at the control panel of a machine, but walking over the Arctic ice, or flying to Harlem, or looking through a book in a musty old room. How does one envision the capabilities of a technology that doesn't exist yet? How do you deal with an invisible computer? If you don't have to worry about how to tell it what to do, and if its computer-representation capabilities are too large to worry about, the question shifts from the tool to the task: "Okay, now that I can go anywhere, including places that don't exist, where do I want to go?" Brenda, Eric, and their colleagues wanted to know what new communication styles people might adopt in response to such a system. Most of all, they wanted to know how it would feel to use such a system.

The night I watched her and her colleagues fantasize in that room in Sunnyvale, Brenda's idea was to plan the uses of a future technology of this sort by using the same kinds of tricks that actors use to create imaginary spaces: "Magical kinds of things can happen through improvisation," she told the group, "because it can trick you into revealing preverbal ideas. What we each bring to this is our capacity to have inspirations in real time."

The first improvisations were warm-up exercises. Brenda's trip to the Library of Alexandria was followed by Scott's visit to a hypermirror that showed him what he looked like in the infrared and gave him a real-time scan of his brain metabolism in sixteen colors. He watched the colors of his thought processes as he watched the colors of his thought processes.

Then the group decided to make Eric play the role of the person using the system, while everyone else improvised roles as the components of the media room -- input to the user's vision, mobility, hearing, emotions, thoughts. In the first try, everyone got into their role with such enthusiasm that Eric was literally swarmed by people mimicking him, giving him advice, grimacing. He spent his time rather defensively trying to figure out who did what. It was like a combination of twenty questions and charades, but it revealed something about the bewilderment of even a technically sophisticated computer user when faced with a system that does not explain itself but simply acts.

In the next experiment, Susan, acting as the person in the middle of such a system, decided to try to take control of the elements, and discovered that all the roles of the different components could be changed radically by adding a "help agent." The help agent oriented the user by saying things like "ask her about a place," or "ask him -- he knows what to look for." The idea was to create a kind of "informational butler" that would observe both the user and the information system, keep a record of that individual's preferences, strengths, and weaknesses, and actively intervene to help the user find or do what the user wanted to find or do.
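The "informational butler" can be caricatured in a few lines of code. The improvisation produced no software, so everything here is a hypothetical sketch: an agent that remembers what the user has asked about and volunteers a hint based on that history.

```python
from collections import Counter

class HelpAgent:
    """A minimal sketch of the 'informational butler' idea: observe the
    user's requests, keep a record of preferences, and intervene with a
    suggestion. The behavior and phrasing are invented for illustration."""

    def __init__(self):
        self.history = Counter()   # topic -> how often the user asked about it

    def observe(self, topic):
        """Record one request the user made of the information system."""
        self.history[topic] += 1

    def suggest(self):
        """Orient a new user, or steer a returning one toward a known interest."""
        if not self.history:
            return "Ask me about a place."
        favorite, _ = self.history.most_common(1)[0]
        return f"You often ask about {favorite} -- shall I look there first?"

agent = HelpAgent()
agent.observe("whales")
agent.observe("whales")
agent.observe("Alexandria")
print(agent.suggest())
```

A real help agent would watch far more than topic counts, of course; the point of the sketch is only the loop of observation, memory, and active intervention that the improvisation acted out with human players.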

The next day, several of the crew were going to Southern California, to see what a prominent university cognitive science department could tell them about designing machines that people can use. About a week later, Brenda and I talked about what she had learned from the cognitive scientists, and the improvisation exercise.

"The cognitive science people are looking at human-machine interactions. Naturally, the hired hackers got into the act when the subject of the discussion was how to teach secretaries to use a file management system. One of the programmers at the staff meeting summarized the problem by asking, 'How do we get a secretary to understand that slash-single quote-DEL will delete a file?' That was his understanding of the human interface -- a matter of figuring out how to adapt a human to the esoteric communication protocol some programmer built into a machine."

That part of a computer game that makes the user step outside the game world, that doesn't help the user to participate in the pleasure of the game, but acts as a tool for talking to the program -- that's where distance comes in. That's what happens to the secretary when the programmer tells her that slash-single quote-DEL means "erase this." She doesn't want to ask the computer to erase it; she simply wants to erase it.

What Brenda was getting at seemed so strange and so counter to everything I had been taught that it took a while for it to sink in: In essence, she was saying that when it comes to computer software, the human habit of looking at artifacts as tools can get in the way. Good tools ought to disappear from one's consciousness. You don't try to persuade a hammer to pound a nail -- you pound the nail, with the help of a hammer. But computer software, as presently constituted, forces us to learn arcane languages so we can talk to our tools instead of getting on with the task.

"The tool metaphor gets in the way when it is applied at the level of the larger system that includes the human operator," Brenda explained. Even though your programmer gives you a file management system that is functional in a tool-like way, the weird way the human is forced to act in order to use the tool creates an unnecessary distance between the action the human is required to perform and the tool's function.

"We also know, however, that there is another set of computer capabilities that aren't at all tool-like. Games and creating art, for example. So what is it that a computer does, in that case? My answer is that its function is to represent things. Which, in the case of art or games, means that the function is at least the same as the outcome, because in art or games, representation is at least part of the outcome."

Kids don't play video games by the hour because it is a good way to practice hand-eye coordination, or for any other reason besides the sheer pleasure of playing. On the other hand, nobody uses a word processing program out of sheer enjoyment of using the program; they use a word processor because they want to write something. In the case of the word processor, the outcome is most important. In the video game, there is no separation between the user/player and the world represented in the game. In the word processor, the command language of the software creates a distance between the user and the task.

"One strategy in our research is to find out how to eliminate the part that keeps us distanced," Brenda explained. "I want to reach my hands right through the screen and do what I want to do," she added, with the kind of passionate conviction I hadn't encountered since Engelbart got that faraway look in his eyes and started talking about what humankind could do with a true augmentation system. "I don't want to enter a bunch of commands," Brenda insisted. "I might not even want to speak a bunch of commands, if I have to speak them in a way that is different from the way I normally talk. I want first-person interaction. Great. But first I have to do away with all this stuff between me and the outcome.

"What metaphors haven't been used? Maybe the interface is the barrier. I think that it is more than a technological question. You can't expect to solve a problem by building a better interface if the whole idea of interface is based on an incomplete metaphor. To use a real artsy metaphor that will probably break down under scrutiny, I like to look at the computer as a system for making magic portals. Like that moment in The Wizard of Oz when Dorothy opens the door and everything changes from black and white to color. That is what I want to happen -- perceptually, cognitively, emotionally. The portal is an interim metaphor to me. We need something richer. I'm looking for something that will click into place and re-explain the idea of the interface.

"I want to make a fantasy that I can walk through," Brenda explained. "That is what an adventure game tries to do. Long before computers were available to regular folks, hackers on large mainframe computers were hooked on adventure games. Now there are adventure games that you can play on your home computer. What happens when you try to build a first-person adventure game?

"The first thing I do in this game I want to walk around in is to look at it. Maybe there are some graphics on the screen. Perhaps the screen is all around me. Maybe there is some text to read, or a sound track that reads it to me. All of these are important technical aspects, but they are peripheral to my concern. All the screen and speakers do is to establish an environment. Once I look around the environment, however, I want to interact with it.

"Let's say that the environment of this fantasy is something that a science fiction writer of the first caliber invented. Say it's a planet that I'm exploring for the United Federation of Planets. I start walking through this world. Today, with the state of the interface art as it is, if I want to move to the north and turn over a stone, I'd tell the computer, 'Move north. Turn stone.' Note that I have to tell the computer. I've just stepped out of the fantasy. And you destroy a fantasy when you step out of it.

"What kind of system enables me to simply move north and pick up the damn stone? I don't think it's just a question of making the environment lifelike. It isn't just a technical question for a fancier projector to solve. It's a question of how the world is established when it is constructed, how the author establishes the way in which people can relate to it.

"Maybe I can look around the planet until I find a guide. Remember the 'help agent' in the media room improvisation? This description of walking around the world sounds a lot like a theatrical improvisation. You walk up to the stage, and the director says, 'Okay, this is a new planet. You play an explorer. Go.' Nine times out of ten, something like that dwindles away, but if you are lucky you discover something useful about the character. Very rarely do you look back and say, 'That was a wonderful story.'"

According to Brenda's theory, the reason the result is rarely memorable, even in a good improvisation, is that the actors are forced to use part of their minds to think about being playwrights. To achieve an excellent dramatic outcome the actor has to think about his character and manipulate the plot line at the same time, so that it all comes out in an interesting way. Unless you are an acting genius, you have to trade away part of your acting power in order to think about the play. And you can't do a great job of crafting a drama if you have the acting job to juggle.

"This is where I think the computer can assist us," Brenda insists: "I still think one answer is to put the smarts of the playwright into a first-person fantasy-creating system.

"It has to be built into the way the imaginary world is constructed. Sitting on top of all your graphics and voice recognition and speech synthesis is an expert system that can make informed decisions about the potential of dramatic situations, using a large enough base of knowledge about the possible situations that can arise and a set of rules for sifting through the knowledge base."

Less fantastic, but nonetheless powerful versions of the "expert system" Brenda was talking about do exist now -- and in the next chapter we'll take a look at what another infonaut thinks about the potential of these "knowledge-transferring" programs. The hypothetical variation Brenda was describing would be able to learn from experience -- experience with the individual who is using it or with everybody who has ever used it. Brenda thinks that such a program could approach the kind of analysis that a drama critic does. "Maybe we can put Aristotle's rules for good drama in the system to start."

Right now, there are expert systems in existence that can help doctors to diagnose diseases. Those systems are able to apply diagnostic rules adapted from human doctors to a large collection of data, a knowledge base, regarding known symptoms. Substitute drama for disease, and the elements of drama (like universality and causality) for symptoms, and the automatic drama expert in our fantasy will be able to pick out the most dramatic responses and consequences for actions that the player performs, and weave them back into the fantasy. It's an idea that seems to be as far ahead of today's entertainment software as Alan Kay's Dynabook was ahead of the computer hardware of the 1960s.
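The substitution the text describes -- drama for disease, dramatic elements for symptoms -- keeps the same condition-action rule shape that diagnostic expert systems use. A toy sketch, with rules and weights invented purely for illustration:

```python
# Each rule pairs a set of dramatic elements that must be present in the
# current situation with a consequence and a dramatic weight. All rules
# and weights are invented; a real system would hold a large knowledge base.
RULES = [
    ({"hidden_secret", "close_confidant"}, "the confidant betrays the secret", 9),
    ({"long_journey", "old_rival"}, "the rival reappears at the destination", 7),
    ({"picked_up_stone"}, "crawly things scatter from under it", 3),
]

def most_dramatic(situation):
    """Fire every rule whose elements are all present in the situation,
    then weave the weightiest consequence back into the fantasy."""
    fired = [(weight, outcome) for needed, outcome, weight in RULES
             if needed <= situation]          # subset test: all elements present
    return max(fired)[1] if fired else "nothing notable happens"

# When the player both harbors a secret and idly turns over a stone,
# the 'drama critic' picks the betrayal, not the beetles.
print(most_dramatic({"hidden_secret", "close_confidant", "picked_up_stone"}))
```

This is the crudest possible version of the idea, but it shows the shape: a knowledge base of situations and a set of rules for sifting through it, exactly the architecture of the medical diagnosis systems the paragraph above describes.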

Assume that you can simulate a medieval castle and give an audience member a 360-degree, first-person role in the dramatic action, so that every time you step into the Hamlet world as Horatio or Hamlet or Ophelia, you make different choices about the outcome. Artificial intelligence research tells us that you don't have to specifically store all the possible events that could occur in a giant data base if you can structure the representation of the world in such a way that its characteristics are formulated as tendencies to go in certain directions. When you pick up a stone, for example, you are likely to find crawly things under it.

Leaving aside the technical arguments about the feasibility of constructing such a system, Brenda is most concerned about what effects the experience of encountering a system like the one she described might have upon our emotions as well as our cognition: "How does it feel to experience a world like that? How does it change my perception to walk through its portals? How do I find out where the edges are? What kind of transactions can I have with this world?"

The experience Brenda described is the experience at the human interface -- where the mind and machine meet. The interface hardware and software are what computer people call the "front end" of the system. The back end is what the system needs in the way of smarts so that outcomes end up being dramatically pleasurable. Right now, you can wander around in an adventure game and gather treasure and kill monsters and finish by winning or being killed. There isn't a sense of unfolding drama. In order for the front end of an adventure game to convey that sense of direct, first-person drama, it would have to be based on a very sophisticated back end.

"You use existing technology to make scenes branch according to your decisions, but that doesn't converge on a dramatic outcome, except in the most mechanical way. But you could take the same world with the same characters and the same elements and add this sense of drama, and come out with something that would be more like experiencing a drama at first hand.

"The kind of system I'm describing has to be able to find out what I want by remembering what kinds of things I have paid attention to. The system has to have a good enough model of me, and memory of how I have acted in the past, to make good guesses about how I'm likely to act in the future.

"I've tried to describe an element from the simplest thing that I think my colleagues and I will actually be able to do in the near future. Let's look down the road ten years. Say we really get the system working and we know how to synthesize dramatic outcomes and orchestrate sound tracks and images and give the person who uses the system a way to affect these representations.

"We can think of such a system not only as a medium for an interactive fantasy but as a kind of an interface to information that is not a fantasy. What if the world, instead of planet X or Shakespeare's Denmark, is the world of whales or the worlds of chemical reactions? That's a powerful idea that we can see at work right now in the best of contemporary educational software."

She offered the example of a game in which the players experience the fantasy of being cadets on a starship. Each cadet would be responsible for running part of the ship. The players can choose whether they want to specialize in navigation or propulsion or life support or computer systems. In real time, they run their parts of the ship. Then something goes wrong -- the life-support systems are threatened, the reactor is malfunctioning. Or something interesting occurs -- the exobiologists have spotted a planet to investigate. The players have to find out what to do and how to do it. In the first person.

"Now let's look at it from the point of view of drama theory," she proposed. "You accept easily the idea that I am a space cadet. I accept it too. This is what happens when a master actor impersonates a character. When I am impersonating someone, all of me is impersonating that character. What has to go away, to disappear from my own behavior to make that possible? The idea that I am me -- the person who doesn't know what I haven't learned -- has to go away. The same idea that often gets in the way of learning anything new.

"A willing suspension of disbelief that accompanies a first-person simulation enables the person who participates to feel what it would be like to have greater personal power. A world like that shows us what it's like not to have the limitations that we think we have in everyday life. When we see how much a kid learns about predicting simple trajectories and the rules of bodies in motion from playing even simple video games, I think it is easy to see the educational potential in using these 'fiction environments' as the door to worlds of information that are as useful or healthy to know as they are fun to learn about."

Of course, by this time, I was asking the same question that most of the people reading this chapter must be asking: "When are we going to play with these 'fiction environments'? How close is Atari to releasing actual products based on this research?"

The answer, unfortunately, is that it is unlikely that Atari is ever going to translate this research into consumer products. Six months after I talked to Alan Kay and observed Brenda Laurel's research group, the Systems Research Group was fired en masse. Brenda and Eric were given five minutes' notice. Alan went to Apple shortly thereafter. Once again, as in the case of ARC and PARC, the management of the corporation that nurtured the most exciting research in interactive, mind-augmenting computer systems seemed to fail miserably when it came to developing products.

After she was fired, Brenda was a lot more willing to talk about the pressures of doing long-term research for a consumer-product-oriented company. In her opinion, the explanation for the demise of Atari Research, and the dramatic reversal of Atari Corporation's fortunes that led to the drastic cutback, is a simple one. "The Warner people" (who owned Atari), she claims, "never knew anything about innovation. The people they hired to run Atari were from Burlington Industries, Philip Morris, Procter and Gamble -- dog food boys. How often does dog food change?"

Before she was in Systems Research, Brenda was in marketing. She claims that she told Raymond Kassar (former CEO of Atari) that "what people are going to want from us is not more deadhead entertainment, but stuff that helps their minds grow. The largest market of all is the market for personal power, for new equivalents to opposable thumbs."

Augmentation visionaries like Engelbart, prophets of interactive computing like Licklider, and infonauts like Alan Kay and Brenda Laurel tend to talk in grand terms about the ultimate effects of what they are doing -- the biggest change since the printing press or even since the opposable thumb. They all seem convinced that their projections will be vindicated by a technology that will inevitably come into existence despite the myopia of institutions like SRI, Xerox, and Atari.

With the increasing power of home computers, and the growing demand for entertainment and educational software, it seems likely that smaller groups, working in entrepreneurial organizations rather than academic or large-scale product-oriented institutions, will produce the fantasy amplifiers and mind augmentors of the near future. One of the most controversial areas of entrepreneurial research is in the field of applied artificial intelligence. The subject of the next chapter is involved in the commercial development of those intriguing programs that Brenda mentioned, the so-called expert systems that originated in the pure research that is being conducted at MIT and Stanford, and which seem to be invading the world of commercial software.


©1985 howard rheingold, all rights reserved worldwide.