Tools For Thought by Howard Rheingold
April, 2000: a revised edition of Tools for Thought is available from MIT Press, including a revised chapter with 1999 interviews of Doug Engelbart, Bob Taylor, Alan Kay, Brenda Laurel, and Avron Barr. ISBN: 9780262681155 (revised edition); ISBN: 0262681153 (1985 edition)
The idea that people could use computers to amplify thought and communication, as tools for intellectual work and social activity, was not an invention of the mainstream computer industry or orthodox computer science, nor even of homebrew computerists; their work was rooted in older, equally eccentric, equally visionary work. You can't really guess where mind-amplifying technology is going unless you understand where it came from.
- HLR

Chapter Ten:
The New Old Boys from the ARPAnet

Bob Taylor's office window at Xerox Corporation's Palo Alto Research Center (PARC) overlooked the red-tiled towers of Stanford and the flat roofs of research parks stretched out to the horizon. The electronic window next to his desk overlooked another kind of world. While he talked with me, he was also interacting with colleagues in his building and elsewhere in the global information community.

In 1983, it was not unusual to see an executive, especially a manager in a computer research organization, using a personal computer in his office. The unique thing about this personal computer was that it was an Alto — the first personal computer. Taylor and his group had been using it since 1974. A small cable connected the Alto to the Ethernet — a medium that linked the researchers at PARC with each other and with colleagues around the world.

The screen was taller than most computer displays, and it looked different from other computer screens, even when seen from across the room. Instead of a single screen-sized frame filled with numbers or letters or graphs, there were a number of squares of various sizes, known in Xerox parlance as windows, that looked like overlapping pieces of paper on a desk. The symbols and images were also distinctly sharper than what I was accustomed to seeing on a computer screen.

The mouse, an update of Engelbart's innovation, was connected to the Alto with a thin wire. As Taylor slid the mouse around the desk surface next to the screen, a small dark pointer shaped like an arrow moved around the screen. When he clicked one of the buttons on top of the mouse or moved the pointer into a margin, the pointer changed shape and things happened on the screen. In 1984, Apple Computer's Macintosh introduced a mass market to this way of handling an electronic desktop. To Taylor, it wasn't particularly futuristic. Altos and Ethernets had been in operation around here since 1974.

By 1983, Bob Taylor was only half-satisfied with his progress toward what he and a few others had set out to achieve twenty years before, because he believed that the new technology was only halfway built. Despite the fact that the office he was sitting in, the electronic workstation at his fingertips, and the research organization around him were functioning examples of what the augmentation community had dreamed about decades earlier, Taylor thought that it might take another ten or twenty years of hard work before the interactive informational communities foretold by Bush and Licklider would truly affect the wider population.

In 1965, at the age of thirty-three, Robert Taylor worked out of his office in the Pentagon, as deputy director, then as director, of the ARPA Information Processing Techniques Office. His job was to find and fund research projects involving time-sharing, artificial intelligence, programming languages, graphic displays, operating systems, and other crucial areas of computer science. "Our rule of thumb," he remembers, "was to fund people who had a good chance of advancing the state of information processing by an order of magnitude."

Bob Taylor was also responsible for initiating the creation of the ARPAnet — the prototype network community of computers (and minds) created by the Department of Defense, an effort that began in 1966 and became an informal rite of passage for the nucleus of people who are still advancing the state of the computing art. Larry Roberts, who was responsible for getting the network up and running, succeeded Taylor when Taylor left ARPA in 1969. After a year at the University of Utah, Taylor joined the research effort Xerox Corporation was assembling near Stanford.

In 1970, a combination of growing opposition to the Vietnam war and the militarization of all ARPA research meant that an extraordinary collection of talent in the new fields of computer networks and interactive computing was looking for greener pastures, at a time when one corporation decided to provide the greenest pastures imaginable.

In 1969, Peter McColough, CEO of Xerox Corporation, announced his intention to make Xerox "the architect of information" for the future. To that end, a research organization was assembled in Palo Alto, in the early 1970s. McColough put a man named George Pake in charge. One of the first things Pake did was hire the best long-term computer visionary, research organizer, and people-collector he could find — Bob Taylor. At first, the newly recruited engineers, hackers, and visionaries worked in temporary quarters located in the Palo Alto flatlands, near the Stanford University campus. In the mid 1970s construction began on a prime piece of ground above Hewlett-Packard, next to Syntex, in that fertile enclave known as "The Stanford Industrial Park."

If there was ever a model environment for the technological cutting edge of the "knowledge sector," PARC was it. From the physicists in the laser laboratories and the engineer-artisans in the custom microchip shops to the computer language designers, artificial intelligence programmers, cognitive psychologists, video jockeys, sound engineers, machinists, librarians, secretaries, cooks, janitors, and security guards, you got a nice, model-utopian feeling from everybody you encountered.

The physical plant itself is an inescapable exercise in innovation. It took me a while to stop thinking of the place as being upside down. Since the terraced glass-and-concrete structure was built halfway embedded in Coyote Hill, Zuni Pueblo style, the main entrance is on the top floor. To get to the second floor from the ground floor, you go down. The linked quadrangles of offices, laboratories, and meeting rooms wind around atriums and gardens. The cafeteria overlooks Palo Alto; you can take your tray out to the terrace and look down on the bay from the vantage of this twenty-first-century cliff dwelling.

Off the corridors that wind around the quadrangles are office cubicles, many with their doors open. Inside the open cubicles, various people talk on telephones or stare at their distinctively oblong Alto screens. Some cubicles have plants, posters, bean-bag chairs, stereos, bicycles. They all have bookshelves with rows of books and the bright blue and white binders used on the reports PARC publishes for the outside world. Many of the cubicle dwellers are young. A larger proportion of them than you might expect are women. It has always been a multinational-looking crowd.

I had no problem distinguishing Taylor from the assorted scientists, engineers, professors, hackers, longhairs, and boy and girl geniuses around him. The few differences in style were subtle but visible, nevertheless. While many of his colleagues opt for sandals, down jackets, techno-hippie ponytails, blue jeans, and rumpled cords with or without bicycle clips, Taylor is likely to be found in a pressed tweed jacket and unrumpled slacks. His blond hair is casual but neat. When he's trying to see if you are following his line of thought, he tilts his forehead in your direction and targets you with pale blue eyes over what would pass for granny glasses if his shirt were denim instead of oxford cotton. He smiles often, sometimes as a form of punctuation. A trace of Texas drifts into his voice at times.

It is Taylor's belief that the idea of personal computing was a direct outgrowth of what Licklider started in the early 1960s with time-sharing research. Time-sharing, like the first high-level languages, was a watershed for computer science and for the augmentation approach. It also created a new subcommunity within the computation world, a community of interests that cut across the boundaries of military, scientific, academic, and business computing. It was a relatively small subculture within the larger community of computer scientists and computer systems builders. They were bonded by a common desire for a certain kind of computer they wanted for their own use, and by a decade of common experiences as a part of the ARPA research effort to build the kind of computers they were then using. Many of the time-sharing veterans who started out as undergraduate hackers at Project MAC or as ARPA-funded engineers in Berkeley and Santa Monica were to meet later, in the research sanctums of Bell, SRI, Rand, and (mostly) at PARC.

Time sharing was an early and effective application of the philosophy that the existing means of using computers should be tailored to the way people function, rather than forcing people who want to use them to conform to mechanical constraints. Without the development of multiaccess computing in the early sixties, the idea of personal computing would never have been more than a dream.

In the early 1960s, data processing was what one was expected to do with a computer, and one hardly ever did it directly. First, a program and its raw data had to be converted to a shoebox full of punched cards. The cards were delivered to a data processing center, where a system administrator decided how and when they were to be fed into the main computer. (These fellows were, and still are, a rich source of anecdotes in support of the "programming priesthood" mythology.) You came back an hour or a day or a week later and retrieved a thick printout and a hefty bill. The keypunch-submit-wait-retrieve ritual was called "batch processing."

By 1966, groups in California and Massachusetts were well on the way toward raising the art of computer programming to a high enough level to do some truly interesting things with computers. Licklider and a few others suspected that if they could make the power of computers more directly accessible to people writing and running programs, programmers might be able to construct new and better kinds of software at far greater speed than heretofore possible.

Among the capabilities that came with the increasingly sophisticated electronic hardware and software were powers to model, represent, and search through large collections of information. With sufficient speed and memory capacity, computers were gaining the power to assist the creative aspects of communication. But serious obstacles had to be overcome to bring that power out where people could use it.

It is hardly possible to interact dynamically with your program when you have to dump boxes of punchcards into readers, then decipher boxes of printout. Since a large part of the process of building a program is a matter of tracking down subtle errors in complex lists of instructions, the batch processing ritual put an effective limit on how much programmers could do, how fast they could do it, and the quality of the programs they could produce.

Batch processing created two problems: The computers could handle only one program (and one programmer) at a time, and programmers weren't able to interact directly with the computer while their programs were running. Time-sharing was made possible because of the enormous gap between the speed of computer operations and the rate of information transfer needed to communicate with a human. Even the fastest typist, for example, can enter only a single keystroke in the length of time it would take the computer to perform millions of operations. Time-sharing gives each of the 20, 50, or 100 or more people who are using the computer the illusion that he or she has the computer's exclusive "attention" at all times, when in reality the computer is switching from one user's task to another's every few millionths of a second.
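
The mechanism behind that illusion can be suggested with a minimal sketch in Python. This is entirely hypothetical (the user names and slice sizes are invented, and real time-sharing monitors were far more elaborate): a round-robin scheduler gives each pending job a tiny slice of the processor in turn, so fast relative to human typing speed that every user seems to have the machine to themselves.

    from collections import deque

    # Hypothetical sketch of round-robin time-sharing: each user's job is a
    # number of work units; the processor grants every job one small slice
    # in turn, so no single user monopolizes the machine.
    def time_share(jobs, slice_size=1):
        """jobs: dict mapping user -> remaining units of work."""
        queue = deque(jobs.items())
        schedule = []                            # order in which slices are granted
        while queue:
            user, remaining = queue.popleft()
            schedule.append(user)                # this user gets the processor briefly
            remaining -= slice_size
            if remaining > 0:
                queue.append((user, remaining))  # back of the line until finished
        return schedule

    if __name__ == "__main__":
        # Three imaginary users of a 1960s multiaccess system.
        print(time_share({"alice": 3, "bob": 2, "carol": 1}))
        # -> ['alice', 'bob', 'carol', 'alice', 'bob', 'alice']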

When the first programmers gained interactive access to the computer, they also gained a new freedom to create ever more powerful programs and see the results more quickly than ever before. Programmers of the first multiaccess computers of the sixties were able to submit programs a piece at a time and receive responses a piece at a time, instead of trying to make the whole programming job work, for better or worse, in a single batch. By eliminating the "wait and see" aspect of batch-processing, time-sharing made it possible for programmers to treat their craft as a performing art.

"When I became director of the ARPA Information Processing Techniques Office, the time-sharing programs were already running," Taylor recalls, "but they weren't complete, so the work continued while I was director. It was clear, though, that this was an important breakthrough in information processing technology, so I became involved in the technology transfer between the different experimental systems, and eventually to military and civilian computer applications.

"We came up against some rigid attitudes when we talked to many people in the industry. IBM ignored the ARPA stuff at first. They simply didn't take it seriously. Then GE agreed to cooperate with MIT and Bell Laboratories to develop and market a large time-sharing system. IBM said, 'Whoops, something's happening here,' and they went off with a crash project to retrofit one of their 360 systems to time-sharing. They took orders for a few and the system bombed. They couldn't make the software work because hadn't been down the same roads that the ARPA funded groups had been down years before."

Time-sharing research caused a kind of schism in the corporate research field. The first-generation priesthood seemed to be missing out on the inside action, for a change. Companies that paid attention to the time-sharing experience gained in the long run: Digital Equipment hired the ARPA-funded researchers as they got out of school, profited from time-sharing, and became the "second name" in the industry.

The first thing Taylor went after, once the time-sharing project was on its way to completion, was a way of interconnecting the time-sharing communities. He had a privileged overview of the then-fragmented computer research world, since a good deal of his time was spent traveling to universities and think tanks, finding and funding researchers. Progress in the separate subfields of computer research was accelerating through the early 1960s. By 1966, the time was approaching when the pieces of the puzzle would be ready for assembly, and the separated teams would have to be in close communication.

"Within each one of the time-sharing communities people were doing a variety of different kinds of computer research," says Taylor, "so the overall project of making the time-sharing system itself work was much more global than any one of the individual research fields that were being explored by different members of the time-sharing community — AI research, computer hardware architecture, programming languages, graphics, and so forth.

"We were surprised time and time again by applications of the time-sharing system that nobody planned but somebody invented anyway. The ability to have files and resources within a time-sharing system was one difficult problem to be solved. On the way to solving it, people discovered a new way of communicating with each other — something that was unexpected and became a unique medium in the research community." Fifteen years since computer jockeys started having fun with it, that medium has become the commercial version known as "electronic mail."

Taylor saw the necessity of connecting to one another those isolated research communities that Licklider had seeded and Sutherland had nurtured. Many of the people in related fields but different institutions knew of each other, and many more did not. By 1966, ARPA was supporting most of the nonindustrial systems research in the country, and thus Bob Taylor and his colleagues had a more up-to-date and comprehensive picture of the state of computer research than any individual researcher.

The people Taylor funded then undertook the planning and creation of a network of computers, located in different parts of the country, linked by common-carrier communication lines, capable of sharing resources and interacting remotely with the growing community of computer researchers. The people who were to build and ultimately make use of the system began to get together in person to talk about the technology needed to link resources in the manner they envisioned. Instead of working in isolation, a small group of leaders from the time-sharing research effort began to work in concert to design the first on-line, interactive communities.

A truly interoperating community capable of freely sharing resources across the boundaries of individual machines or geographical locations was more difficult to bring into existence than is suggested by the simplified general idea of plugging computers together via telephone lines. Very serious hardware and software problems had to be solved, and the "user interface" where the person meets the machine had to be further humanized.

Every year, starting in 1966, following a tradition established by Licklider and Sutherland, Taylor called a meeting of all the principal investigators of all his projects. It would be held in a dramatic place far removed from the usual locales of Cambridge, Berkeley, or Palo Alto. With all these meetings, Taylor, who was neither an engineer nor a programmer (he was, in fact, a philosophy major and an experimental psychologist by training), began the all-important mixing and sifting of ideas he knew would be necessary to the cohesion of such a large, dispersed, and ambitious project.

"I constructed the meetings so they all had to get to know one another and argue with one another technically in my presence," Taylor recalls. "I would ask questions that would force people to take sides on technical issues. Lasting friendships were built from the give and take. I asked them difficult questions. Then, after they went back to their laboratories and campuses, their communications increased in both quality and quantity, because they knew each other."

Taylor also initiated annual conferences of graduate students. The best graduate students of the old ARPA researchers had meetings of their own, away from the "older" folks like Taylor, who was, after all, in his midthirties. Like the bands of roving builders who planned the Gothic cathedrals of Europe, many of the computer-system builders who participated in the ARPA grad students' meetings were to meet again later at SAIL (Stanford Artificial Intelligence Laboratory) and PARC, and later still at Apple and Microsoft.

Taylor's idea of connecting the researchers by connecting their computers was inspired by a phrase he read in one of Licklider's 1966 papers, in which he proposed the idea of a very large-scale time-sharing system that he called "an intergalactic network." Taylor took it a step farther: If you could build a communication network, why not a computer network?

Instead of building larger numbers of longer-range communication lines between terminals and their time-sharing systems, Taylor saw potentially greater benefits in creating technology for different time-sharing systems to communicate with each other over long distances. Taylor sold ARPA on the idea, then hired a young Lincoln Lab researcher named Larry Roberts as project manager. The meetings and separate research projects continued for three years, before the first bits were sent over the ARPAnet in 1969. By this time, Taylor's opposition to the Vietnam war was growing, and he was reasonably certain that the project he had initiated was nearing completion, so he left ARPA.

While the number crunchers, batch processors, and electronic bookkeepers continued to hold sway over the computer industry, the core members of the interactive computing community were beginning to experiment with their computers — and with themselves — through this unique new prototype of an interconnected computer community. It quickly turned out, to the delight of all participants and to nobody's surprise, that the experimental network was evolving into a stimulating environment for communicating and sharing research information and even for transporting and borrowing computer programs.

The implications for human communication that were beginning to emerge from the experience of this computer-connected research community were discussed in an article published in April, 1968, titled "The Computer as a Communication Device." The principal authors were none other than J.C.R. Licklider and Robert Taylor.

Although the Department of Defense had an obvious interest in fostering the development of the technology it had created in the first place, and the interconnection of computers had certainly become a necessity in conducting advanced weapons research, Licklider and Taylor were not applying the network idea to the Strategic Air Command or nuclear weapons research, but to the everyday communications of civilians.

The authors emphasized that the melding of communication and computation technologies could raise the nature of human communication to a new level. They proposed that the ability to share information among the members of a community and the presence of significant computational power in the hands of individuals were equal components of a new communicating and thinking environment they envisioned for the intermediate future. The implications were profound, they felt, and not entirely foreseeable: "when minds interact, new ideas emerge," they wrote.

The authors did not begin the article by talking about the capabilities of computers; instead, they examined the human function they wished to amplify, specifically the function of group decision-making and problem-solving. They urged that the tool to accomplish such amplification should be built according to the special requirements of that human function. In order to use computers as communication amplifiers for groups of people, a new communication medium was needed: "Creative, interactive communication requires a plastic or moldable medium that can be modeled, a dynamic medium in which premises will flow into consequences, and above all a common medium that can be contributed to and experimented with by all."

The need for a plastic, dynamic medium, and the requirement that it be accessible to all, grew out of the authors' belief that the construction and comparison of informational models are central to human communication. "By far the most numerous, most sophisticated, and most important models," in Licklider's and Taylor's opinion, "are those that reside in men's minds."

Collections of facts, memories, perceptions, images, associations, predictions, and prejudices are the ingredients in our mental models, and in that sense, mental models are as individual as the people who formulate them. The essential privacy and variability of the models we construct in our heads create the need to make external versions that can be perceived and agreed upon by others. Because society, a collective entity, distrusts the modeling done by only one mind, it insists that people agree about models before the models can be accepted as fact.

The process of communication, therefore, is a process of externalizing mental models. Spoken language, the written word, numbers, and the medium of printing were all significant advances in the human ability to externalize and agree upon models. Each of those developments, in their turn, transformed human culture and increased collective control over our environment. In this century, the telephone system added a potent new modeling medium to the human communication toolkit. Licklider and Taylor declared that the combination of computer and communication technologies, if it could be made accessible to individuals, had the potential to become the most powerful modeling tool ever invented.

As an example of how a prototype computer communication system could be used to boost the process of decision-making, Licklider and Taylor described an actual meeting that had taken place on just such a system. It was a project meeting involving the members of a computer-science research team. Although all the participants in the meeting were in the same room, they spent their time looking at their display screens while they talked. A variety of diagrams, blocks of text, numbers, and graphs passed before their eyes via those screens.

The facility was, in fact, Doug Engelbart's Augmentation Research Center. The machine in another room that made the meeting possible was the latest kind of multiaccess computer that the time-sharing research of the last few years had produced.

Using the project meeting as a model, Licklider and Taylor showed how computers could handle the informational housekeeping activities involved with a group process. More importantly, they demonstrated how this subtle kind of communication augmentation could enhance the creative informational activity that took place. The ability to switch from microscopic details to astronomical perspectives, to assemble and reassemble models, to find and replace files, to cut and paste and shuffle, to view some information publicly and make private notes at the same time, to thumb through the speaker's files or check his references while he is talking, made it possible for people to communicate with each other through the computer system in a way not possible in a nonaugmented meeting.

"In a few years," the authors predicted, in the very first words of their article, "men will be able to communicate more effectively through a machine than face to face." Referring to their model technical meeting at SRI, Licklider and Taylor estimated that "In two days, the group accomplished with the aid of a computer what normally might have taken a week."

This small group — the people together with the hardware and software of a multiaccess computer — constituted what Licklider and Taylor identified as one node of a larger, geographically distributed computer network. The key idea, Taylor and Licklider now recall, had been proposed by Wesley Clark in a cab ride to Dulles Airport, after a 1966 meeting about the network Taylor was trying to put together. The problem lay in deciding which levels of the existing computer and communication systems had to be changed to couple incompatible machines and software.

Many of the planners believed that a huge "host" computer in the center of the country would have to be specially designed and programmed to act as a translator. Clark suggested that a small, general-purpose computer at each node could be turned into a "message processor." Through long distance common-carrier communications, these "interface message processors" (known eventually as "imps") and their local multiaccess computer communities could be integrated into a kind of supercommunity.

The imps would take care of all the behind-the-scenes traffic controlling and error-checking functions needed to ensure accurate transmission of data — a significant task in itself — so the individual users wouldn't have to worry about whether the files they wanted to read or the programs they needed to use were a thousand miles away or down the hall.

The resulting communication system became part of a new kind of computing system that was not confined to any single computer. Teams of ARPA-supported scientists found that they could invoke the use of a program residing in a computer located in Berkeley, California, feed the program with data stored in Los Angeles, then display the result in Cambridge, Massachusetts. The network was suddenly more important than the individual computers, as the computers became "nodes" in a geographically distributed supercomputer.

It began to be possible to think of a computer network that was not centrally controlled from any one place, in which the traffic control and data communication and behind-the-scenes number crunching required were invested in the software instead of the hardware. Instead of a huge host computer in the center of it all that received a stream of information from one computer, translated the stream into a form that could be decoded by another computer, and relayed the translated information to the receiving computer, the smaller imps at each node would accept and pass along information packets that had been translated into a common format by the imp connected to the originating computer.

The controlling agent in a "packet-switched" network like the ARPAnet was not a central computer somewhere, nor even the "message processors" that mediated between the computers, but the packets of information, the messages themselves. Like the addresses on letters, the code by which information was packaged for transmission put into each packet all the information necessary for getting the message from origin to destination, and for translating between different kinds of computers and computer languages.
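
The principle can be illustrated with a short, hypothetical Python sketch (the host names, routing table, and packet fields are invented for illustration and are not the actual ARPAnet formats): each packet carries its own source, destination, and sequence information, so any node can forward it one hop closer to its destination and the receiver can reassemble the message, with no central computer supervising the transfer.

    # Hypothetical sketch of store-and-forward packet switching: every packet
    # carries the addressing information needed to route and reassemble it.
    def make_packets(src, dst, message, chunk=8):
        chunks = [message[i:i + chunk] for i in range(0, len(message), chunk)]
        return [dict(src=src, dst=dst, seq=i, total=len(chunks), data=c)
                for i, c in enumerate(chunks)]

    # Each node (an "imp," in ARPAnet terms) knows only which neighbor is the
    # next hop toward each destination (an invented three-node topology).
    NEXT_HOP = {
        "ucla": {"mit": "utah", "utah": "utah"},
        "utah": {"mit": "mit", "ucla": "ucla"},
        "mit":  {},
    }

    def route(packet, here):
        while here != packet["dst"]:
            here = NEXT_HOP[here][packet["dst"]]   # forward one hop closer
        return here                                # delivered

    def reassemble(packets):
        ordered = sorted(packets, key=lambda p: p["seq"])
        return "".join(p["data"] for p in ordered)

    if __name__ == "__main__":
        pkts = make_packets("ucla", "mit", "LOGIN REQUEST FROM UCLA")
        assert all(route(p, "ucla") == "mit" for p in pkts)
        print(reassemble(pkts))                    # -> LOGIN REQUEST FROM UCLA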

While the networking technology was evolving rapidly, the number of computer terminals proliferated and the accepted way of using computers began to change. By 1968, the punchcards and printouts of 1960 were being replaced by ever-more interactive means of communicating with the computer: a keyboard and teletype printer and, in some exotic quarters, a graphic display screen were becoming standard input and output devices for programmers.

To old-timers who were used to submitting punched cards and receiving machine code printouts on huge fanfolds from line printers, the ability to type a command on a keyboard and see the computer's immediate response on their own printer was nothing short of miraculous. Through the rapidly spreading use of time-sharing, many people were able to use individual terminals to interact directly with large computers. To those who knew about the plans to connect their time-sharing communities into a supercommunity, 1968 was a time of exciting and rapid change in a field that was still virtually unknown to the outside world.

The idea of a community that could be brought into existence by the construction of a new kind of computer system was perhaps the most radical proposal in the 1968 paper. The ARPAnet did not go on-line until 1969, but by the time the article was published the time-sharing groups had constructed enough of the superstructure for the outlines of the new network to be known and visible.

Taylor and Licklider were more concerned about the further development of this test-bed for advanced communications and thought amplification than they were dedicated to the use of the network as an operational entity for conducting weapons research. Writing with the knowledge that ARPAnet was to begin operation within a year, and would probably be unknown outside defense or computer science circles, Licklider and Taylor pointed out:

. . . Although more interactive multiaccess computer systems are being delivered now, and although more groups plan to be using these systems within the next year, there are at present perhaps only as few as half a dozen interactive multiaccess computer communities.

These communities are socio-technical pioneers, in several ways out ahead of the rest of the computer world: What makes them so? First, some of their members are computer scientists and engineers who understand the concept of man-computer interaction and the technology of interactive multiaccess systems. Second, others of their members are creative people in other fields and disciplines who recognize the usefulness and who sense the impact of interactive multiaccess computing upon their work. Third, the communities have large multiaccess computers and have learned to use them. And, fourth, their efforts are regenerative.

The authors were looking beyond the networks of their day, and the computer systems that were commercially available, to the technology they knew would be possible and affordable on a large scale within decades. Convinced that the technology they and their colleagues had created, and the community of users that had grown up around that technology, were the forerunners to far more powerful and more widely usable systems, they called for the development of a version of certain time-sharing systems into a tool that could be used to amplify human communications:
. . . These new computer systems we are describing differ from other computer systems advertised with the same labels: interactive, time-sharing, multiaccess. They differ by having a greater degree of open-endedness, by rendering more services, and above all by providing facilities that foster a working sense of community among their users. The commercially available time-sharing services do not yet offer the power and flexibility of software resources — the "general purposeness" — of the interactive multiaccess systems of the System Development Corporation in Santa Monica, the University of California at Berkeley, Massachusetts Institute of Technology in Cambridge and Lexington, Mass. — which have been collectively serving about a thousand people for several years.

The thousand people include many of the leaders of the ongoing revolution in the computer world. For over a year they have been preparing for the transition to a radically new organization of hardware and software, designed to support many more simultaneous users than the current systems, and to offer them — through new languages, new file-handling systems, and new graphic displays — the fast, smooth interaction required for truly effective man-computer partnership.

Time-sharing, tremendously exciting as it was to programmers, was seen as only a means to an end by those who were aiming to build communication amplifiers. To those who were gung-ho about the future of multiaccess computing, Taylor and Licklider talked about the ultimate goal of the various projects they had initiated: the creation of tools to enhance the thinking of individuals and augment communications among groups of people.

Engelbart's group at SRI, Ivan Sutherland's computer graphics work at MIT and Harvard, David Evans and his students at the University of Utah, the Project MAC hackers at MIT, and other groups scattered around the country were constructing pieces of a whole new technology. Foreseeing the day when such systems would be practical on a large scale, Licklider and Taylor reminded their colleagues that the new information processing technology could revolutionize not only research centers and universities, but offices, factories, and ultimately schools and homes.

Looking toward what was then the long-term future, Licklider and Taylor projected a positive attitude about the possible impact of supercommunities that might include not only computer scientists and programmers but housewives, schoolkids, office workers and artists:

But let us be optimistic. What will on-line interactive communities be like? In most fields they will consist of geographically separated members, sometimes grouped in small clusters and sometimes working individually. They will be communities not of common location but of common interest. In each field, the overall community of interest will be large enough to support a comprehensive system of field-oriented programs and data.

In each geographical sector, the total number of users — summed over all the fields of interest — will be large enough to support extensive general-purpose information processing and storage facilities. All of these will be interconnected by telecommunications channels. The whole will constitute a labile network of networks — ever changing in both content and configuration.

The authors envisioned the creation of an interconnected system of software-based tools that would provide "investment guidance, tax counseling, selective dissemination of information in your field of specialization, announcements of cultural, sport, and entertainment events that fit your interests, etc. In the latter group will be dictionaries, encyclopedias, indexes, catalogues, editing programs, teaching programs, testing programs, programming systems, data bases, and — most important — communication, display, and modeling programs." They could have been describing from life the facilities that were available at PARC, ten years later.

Licklider and Taylor were most emphatic that the impact would be great, on both individuals and organizations, when all the elements, which they could only speculate about in 1968, were perfected sometime in the future:

First, life will be happier for the on-line individual because the people with whom one interacts most strongly will be selected more by commonality of interests and goals than by accidents of proximity. Second, communication will be more effective, and therefore more enjoyable. Third, much communication will be with programs and programmed models, which will be (a) highly responsive, (b) supplementary to one's own capabilities, rather than competitive, and (c) capable of representing progressively more complex ideas without necessarily displaying all the levels of the structure at the same time — and which will therefore be both challenging and rewarding. And fourth, there will be plenty of opportunity for everyone (who can afford a console) to find his calling, for the whole world of information, with all its fields and disciplines, will be open to him — with programs ready to guide him or to help him explore.

For the society, the impact will be good or bad, depending mainly on one question: Will "to be on-line" be a privilege or a right? If only a favored segment of the population gets a chance to enjoy the advantage of "intelligence amplification," the network may exaggerate the discontinuity in the spectrum of intellectual opportunity.

On the other hand, if the network idea should prove to do for education what a few have envisioned in hope, if not in concrete detailed plan, and if all minds should prove to be responsive, surely the boon to humankind would be beyond measure.

Strangely lyrical and surprisingly romantic prose coming from two computer-research organizers in the Pentagon. But by 1971, when Taylor recruited fifty or sixty of the best people in the field for the Computer Science Laboratory at PARC, the cream of the interactive computer designers had enough engineering and software research behind them from the time-sharing and ARPAnet projects to make them confident that such a utopian scenario might be possible — especially if a corporation with the resources of Xerox was willing to take the high-stakes gamble.

The people who built the first interactive, multiaccess computers, the first intellectual augmentation systems, and the first packet-switching computer networks were gathering under the same roof for the first time, in order to turn those dreams into prototypes as soon as possible. Butler Lampson, Chuck Thacker, Jim Mitchell, Ed McCreight, Bob Sproull, Jim Morris, Chuck Geschke, Alan Kay, Bob Metcalfe, Peter Deutsch, Bill English — to those who knew anything about the esoteric world of computer design, the PARC computer science founders constituted an unprecedented collection of talents.

It wasn't the kind of shop where old-style hierarchies and pecking orders would do any good. You don't run an outfit like that as much as you mediate it — which is where Bob Taylor came in. The kind of thing they were building, and the kind of people it took to build it, required a balance between vision and pragmatism, the kind of balance that couldn't be enforced by artificially imposed authority.

What they all agreed upon was what they wanted to get their hands on, in the way of a first-rate research facility. The potential of computers as tools to be used by individuals, and the communications possibilities opened by linking computers, were what motivated the PARC team. It was time to demonstrate that the theories about using personal computers to manage personal communications could work in an office like theirs. If they could demonstrate that such devices could speed their own work, they would be on the way to selling the rest of the world on the vision they held from the time-sharing days.

The first thing they needed in order to retool the world of information work was a computer designed for one person to use, something that went far beyond previous attempts. Because they knew that vision was the human sense capable of the most sophisticated informational input, the PARC computerists wanted a sophisticated graphic screen to bring the computer's power to the user. Complex, dynamic, visual models required a large amount of computer power, so the decision to emphasize the visual display meant that the hardware would have a great deal more memory and speed than anyone else in the computer world had heretofore put at any one individual's command.

"We wanted hardware as capable as we could afford to build," Taylor recalls, "because we needed capable computing tools to design an entire software architecture that nobody in the world yet knew how to make. We wanted for our own use what we thought other information workers would eventually want. We needed the computing power and the research environment to build something expensive but very flexible and growable that would someday be much less expensive but even more capable. We all understood when we planned the Alto that the main memory of what we wanted might cost $7000 by the time it was produced, in 1974, but would drop to about $35 ten years later."

The hardware shop at PARC was only set up to produce small batches for the PARC software designers, but eventually 1500 Altos were built for Xerox executives and researchers, for associates at SAIL and SRI, as well as for the U.S. Senate, House of Representatives, certain other government agencies, and even the White House Staff. It was the first machine designed to put significant computing power on a person's desk.

The job the Alto designers did was all the more remarkable when compared with the first "personal computers" the outside world was to learn about years later. The 1975 Altair, the granddaddy of the homebrew computers, had all of 1/4K main memory (also known as RAM, this represents the amount of storage space the computer devotes to "working memory," and thus indicates the rough limit of how much work it can do with reasonable speed). The first Apple models sold, in 1977, had 8K. When IBM introduced its personal computer, in 1981, the standard model had 16K. The Alto, in 1974, started with 64K and was soon upgraded to 256K. The distinctive bit-mapped screen and the mouse pointing device weren't to be seen on a non-Xerox product until 1983, when Apple produced Lisa.

The hardware, of course, was just a part of the story. These devices were built for the people whose job it was to create equally spectacular software innovations. And the personal computers themselves weren't enough for those who longed for the kind of community they had known with the ARPAnet.

"We didn't start talking about the hardware and software until we talked about what we wanted to do personally with such a system," Taylor remembers. "We knew there were technical problems to solve, and we would challenge them in due time. First we had to consider the human functions we wanted to amplify. For example, people use their eyes a great deal to assimilate information, so we wanted a particularly powerful kind of display screen. Then all the time-sharing veterans insisted they wanted a computer that didn't run faster at night."

What Taylor meant was that the time-sharing programmers had all been accustomed in the mid 1960s to doing their serious computing in the middle of the night, when the amount of traffic on the central computer was light enough to perform truly large information processing tasks without delay. The first radical idea they agreed upon was that each Alto had to have as much main memory as one of the central computers from the time-sharing systems of only a few years back. And it had to be fast.

"People can give commands to a computer much more rapidly and easily by seeing and pointing than by remembering and typing, so we adopted and then adapted the mouse," added Taylor. "It is hard for people to learn artificial languages and even harder for machines to learn natural languages. The existing computer languages didn't give first-time users and experimental programmers equal power to interact with the computer, so we created new kinds of languages."

"Most importantly, people often need to do things in groups. There are times when we want to use the Alto as a personal tool, and times when we want to use it as a communication medium, and times when we want to do both. Our purpose in bringing all that computing power to individuals was not to allow them to isolate themselves. We wanted to provide the gateway to a new communication space, and ways to fly around in it, and a medium for community creativity, all at the same time."

When time-sharing first got going, and hackers began to proliferate late at night in the corners of university computer departments, the subcult of computerists found that while they could all communicate with the central computer at the same time, they couldn't all necessarily communicate with each other, or share each other's programs or files. It took some effort, but the time-sharing systems programmers eventually solved the problem.

The solution to the difficult problem of sharing resources among different users of a multiaccess computer became no less difficult when it had to be translated to the problem of sharing resources between many equally powerful, geographically separated, often incompatible computers (as with ARPAnet). The carefully designed connectivity of time-sharing could not be patched onto the new system.

The PARC network had to be built from the ground up, along with the personal workstations and shared servers for filing, printing, and mail. The server notion meant that certain otherwise stock-model Altos would be programmed to control these network services, rather than building separate devices to perform those tasks. The concept of the resulting Ethernet, as it was called, stemmed from the determination to make the network itself a tool at the command of the individual user.

The PARC folks were hungry for personal computing power, but they didn't want to give up that hard-won and effort-amplifying community they were just beginning to know on the ARPAnet. Dan Swinehart, an SRI alumnus who joined PARC early in the game, remembers that "From the day the Alto was proposed, Butler Lampson and Bob Metcalfe pointed out that if we were going to give everybody at PARC a self-contained computer instead of hooking them all into a central time-sharing system, we'd need a connecting network with enough communicating and resource-sharing capability that the people at the personal work stations wouldn't be isolated from each other."

Thus, the companion to the Alto was the Ethernet, the first of the "local area networks." With the advent of network technology, the hardware became less important and the software became more important, because such a network consists of a relatively simple hardware level, where a small box plugs the individual computer into the network, and a series of more sophisticated software levels known as protocols that enable the different devices to interoperate via a communication channel.

With common-carrier networks — the kind where teenage hackers use their telephones to gain access to Defense Department computers — the small box is known as a modem and works by translating computer bits into a pattern of tones that the public telephone system uses to communicate information. A local area network uses a different kind of small box that converts computer data into electrical impulses that travel from computer to computer via a short cable, rather than the audio tones that are sent over common-carrier communication lines.

Local area networks are meant for environments like PARC — any campus or laboratory or group of offices where many machines are distributed over a small geographical area. Several local networks can also be linked over long distances via "message processors" known as gateways to the common-carrier-linked internetwork. This scheme embeds local networks in more global supernetworks.
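
A rough sketch in Python (the host, network, and gateway names are all invented) of that embedding: a message between two machines on the same local network travels directly over the local cable, while a message bound for another network is handed to the local gateway, which forwards it across the internetwork to the destination's own gateway.

    # Hypothetical sketch of local networks joined by gateways into an
    # internetwork: local traffic stays on the local cable; everything else
    # goes out through a gateway.
    LOCAL_NETS = {
        "parc-ether": {"alto-1", "alto-2", "print-server"},
        "mit-net":    {"multics", "ai-lab"},
    }
    GATEWAYS = {"parc-ether": "parc-gateway", "mit-net": "mit-gateway"}

    def net_of(host):
        return next(net for net, hosts in LOCAL_NETS.items() if host in hosts)

    def deliver(src, dst):
        """Return the path a message takes from src to dst."""
        if net_of(src) == net_of(dst):
            return [src, dst]                      # one hop on the local network
        return [src,
                GATEWAYS[net_of(src)],             # out through the local gateway
                GATEWAYS[net_of(dst)],             # across the internetwork
                dst]                               # onto the destination's net

    if __name__ == "__main__":
        print(deliver("alto-1", "print-server"))   # stays on the Ethernet
        print(deliver("alto-2", "multics"))        # crosses two gateways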

Today's network technologies use the packet-switching techniques originally developed during the creation of the ARPAnet — exactly the kind of coding of information that Shannon predicted in 1948. Information is transported and processed in packets of information — bursts of coded on-off pulses — that carry, in addition to the core data of the message, information on how the message is to be transmitted and received. If your computer uses the right kind of hardware and software translators, your data will find its own way through the network according to the control and routing information embedded in the packets.

The technical details of packet switching won't matter to the vast majority of the people who will end up using network systems in the future, but the notion of "distributed computing" signals an important change to a new phase in the evolution of computation. Distributed systems, in which a number of people, each with their own significantly powerful personal computers, join together into even more powerful computational communities, are altogether different from the centrally controlled and highly restricted computers of the early days.

Where we will all choose to go, or be forced to go by human nature or historical circumstances, once we are given access to such a system, is a wide-open question as soon as you get beyond the revolutionary but relatively simple applications to office work. Almost all the augmentation pioneers now use the analogy of the early days of automobiles to describe the present state of the system. Engelbart and Taylor agree that the personal computers millions of enthusiasts are using today are not even at the stage the automobile industry reached with the Model T. More important, there is not yet a widespread transportation support structure for the messages between individuals.

There are no standard ways to build or drive the informational vehicles that have been devised only recently. The existing highways for large-scale, high-bandwidth information transportation cover only a fraction of the countryside. There are no service stations or road maps. The tire industry and the petroleum industry of the knowledge age don't exist yet. There may be prototypes of mind-extending technologies at places like PARC, but there is not yet an infrastructure to support their use in wider society.

The researchers at PARC were wildly successful in their efforts to build powerful personal computers, years before the business and consumer communities were prepared to accept them, but Xerox marketing management failed to take advantage of the head start achieved by their research and development teams by quickly turning the prototypes into products. The failure of Xerox to exploit the research at PARC was partially a result of the lack of the kind of infrastructure described by the automobile analogy. Technology transfer in such a fast-moving field as microelectronic devices is a tough enough gamble. The problem gets more complicated when those devices are intended to affect the way people think. Building a system from scratch and showing that it works is still a long way from convincing most of the people in the work force to change the way they've always done things.

By the mid 1970s, the nation's smartest computer researchers realized that the Alto, Ethernet, and Smalltalk (an equally advanced computer language) prototypes created at PARC had advanced the state of interactive computing far beyond the level achieved by the ARPA-sponsored time-sharing projects that had revolutionized computers a decade previously. By the late 1970s, Xerox management was ready to think about turning PARC's successes into a product.

While the PARC whiz kids raced ahead on advanced research into dozens of information-related sciences and technologies, the Star and the Ethernet were readied for market. Star was designed to be much more than a production-model Alto: The main memory was 512K, twice as much as the enhanced Alto, and the Star's processor was built to run three times as fast as the Alto. The Star's software included a language named Mesa (created in Taylor's lab), along with a whole toolkit of application programs for editing, filing, calculating, computing, creating graphics, and distributing electronic mail.

One of the clichés of the computer industry in the early 1980s was that "if Xerox had marketed the Star when it was technically ready to go, they would have stolen an industry out from under IBM and Apple." As it happened, April, 1981, when the Star 8010 Information System was announced, was still too early for the larger segments of office professionals to realize that they were information workers. Xerox marketing management insisted that the workstation was not only a breakthrough in providing tools for individuals, but a part of an integrated office system of interconnected components that shared mail, printing, and filing services. But nobody outside a few privileged test sites knew what that meant.

Until word processing came out of nowhere (as far as the people in offices were concerned) to replace most of the typing pools in the early 1980s, it wasn't clear to the people who bought office equipment for corporations that computers and office workers were bound to get acquainted rapidly. To the first knowledge workers at aerospace firms, it was very clear that there was a major difference between these machines and the devices they had formerly known as computers.

The place where the mind meets the machine, the long-neglected frontier of computer development, was advanced to a new high level by those at ARC and PARC who created the partially psychological, partially computational engineering of the user interface. The dreams of the augmentation pioneers were finally materialized in the products of their students, who took the first steps with the Star to engineer the machine to the minds of the potential users. The Star designers reiterated the connection between sophisticated visual representation and the ability to amplify thought:

During conscious thought, the brain utilizes several levels of memory, the most important being the "short-term memory." Many studies have analyzed the short-term memory and its role in thinking. Two conclusions stand out. (1) conscious thought deals with concepts in the short-term memory . . . and (2) the capacity of short-term memory is limited. . . . When everything being dealt with in a computer system is visible, the display screen relieves the load on the short-term memory by acting as a sort of "visual cache." Thinking becomes easier and more productive. A well designed computer system can actually improve the quality of your thinking. . . .

A subtle thing happens when everything is visible: the display becomes the reality. The user model becomes identical with that which is on the screen. Objects can be understood purely in terms of their visible characteristics.

The idea that the right kind of computer systems could affect the way people think — the seed planted by Vannevar Bush and nurtured by Licklider and Engelbart — was not lost on the Xerox interface builders. In regard to the principle that they called "consistency," the Star team noted:

One way to get consistency into a system is to adhere to paradigms for operations. By applying a successful way of working in one area to other areas, a system acquires a unity that is both apparent and real. . . .

These paradigms change the very way you think. They lead to new habits and models of behavior that are more powerful and productive. They can lead to a human-machine synergism.

After ten years, PARC had achieved its technological goals, and more. The Mesa and Smalltalk languages were both significant advances in the software art. If bold and imaginative research were all that the success of a company depended on, Xerox would have been in a position to challenge even the dominant force in the information industry. But Peter McColough was no longer the CEO, and Xerox top management failed to comprehend the ten-year technological lead their research division had handed them.

Some of the most important members of the starting team left PARC in the early 1980s to join other companies or to start their own firms. Such job changes at the higher levels of the electronics and computer industries were far from unknown in Silicon Valley; in fact PARC was distinguished from similar institutions for many years because of the unusual lengths of time put in by its principal scientists. But when Xerox failed to become the first name in the industry, and the hobbyist side of personal computing had grown to the point where some of the original hobbyists were recruiting PARC scientists and building their own personal computer empires, the first high-level PARC defectors began to seed the rest of the industry with the user interface concepts embodied in the Star.

Bob Metcalfe, the man responsible for the creation of the Ethernet, left to start 3Com, a company specializing in local area network technology. Alan Kay, whose Smalltalk team made impressive contributions to the Star interface, left to become the chief scientist at Atari. John Ellenby, who helped reengineer the Alto 2, became the chairman of Grid. In the fall of 1983, Bob Taylor resigned, after thirteen years leading the laboratory team he had built.

Several of the PARC alumni became associated with those industry newcomers who had emerged from the homebrew computer days. Some of the former whiz kids from PARC were making alliances with the next generation of whiz kids. Charles Simonyi, who had been in charge of producing the word-processing software for the Alto and was by then in his early thirties, left PARC to join Bill Gates, the twenty-seven-year-old chairman of Microsoft, a company that started out as a software supplier to computer hobbyists in the Altair days of 1975 and is now the second-largest microcomputer software company in the world.

Steve Jobs, chairman of Apple, then in his late twenties, visited PARC in 1979. He was given a demonstration of the Alto. Larry Tesler, the member of the PARC team who gave Jobs that demonstration, left PARC in 1980 and joined Apple's new secret project, which Jobs promised would redefine the state of the art in personal computers. In 1983 Apple unveiled Lisa — a machine that used a mouse, a bit-mapped screen, windows, and other features based on the Star-Alto-Smalltalk interface. The price for the system was around $10,000. This was $6,000 less than the more powerful Star, but still hardly in the range of the consumer market. In 1984, Apple brought out a scaled-down, cheaper version of Lisa, the Macintosh, with the same user interface, and revolutionized the personal computer market.

If time-sharing research had been the unofficial initiation ceremony and the ARPAnet the rite of passage, the PARC era marked the end of the augmentation community's apprenticeship. New generations of researchers and entrepreneurs were entering the software fray through the infant computer industry. By the early 1980s, it didn't take a computer prophet to see that big changes were going to continue as the mass market began to awaken to the potential of personal computing. Although the hardware and the software of the first tens of millions of personal computers fell far short of what the PARC veterans were working toward, the stakes of the game had changed with the emergence of a mass market.

The beginnings of a much wider computer-using community also meant the end of arcane jargon and software designs that required complex interactions with the computer. The design principles demonstrated by the Star and the Lisa pointed the way for future computer designers. At PARC, they were already onto the Dorado, the Dolphin, and other post-Star computers. Now that truly capable computing machinery was becoming available, it was becoming clear that the commercially successful programs of the future would be those that succeeded in bringing the power of the computer out to the person who needed to use it.

The "rule of two" is, incredibly, still in effect, promising even more powerful and less expensive computer hardware in the late 1980s. In 1984, Bob Taylor, now with Digital Equipment Corporation, started doing what he does best — assembling a computer systems research team for a final assault on the objective. Some of the key members of his team were graduate students when ARPA funded time-sharing, and had been involved in the ARC and PARC eras. The latest arena for their ongoing effort to bootstrap interactive computation technology to the threshold of truly powerful personal computing was named "Systems Research Center" — or SRC, pronounced "circ" ("as in circus").

"Come to my office in five years," Taylor challenged me, at the beginning of this gun-lap in the augmentation quest, "and I'll show you a desktop machine twice as fast as the biggest, most expensive supercomputer made today. Then it will become possible to create the software that can take advantage of the capabilities we've known about for a long time."

Taylor now believes that three factors will lead to the most astonishing plateau in information processing we've seen yet: first, a new level of systems software will be able to take advantage of computer designs that make each personal workstation into a kind of miniature distributed network, with multiple parallel processors inside working in coordination; second, large-scale integrated circuits will be small and cheap enough to put fast, vast memory into desktop machines; third, and most important, the people who built time-sharing, graphics, networks, personal computers, intelligent user interfaces, and distributed computing are now at the height of their powers, and they have put hundreds of thousands of person-hours into learning how to build new levels of computer technology.

Advances in network technologies, graphics, programming languages, user interfaces, and cheap, large-scale information storage media mean that the basic capabilities dreamed of by the designers of the first personal computers are likely to become widely available before the turn of the century. One hopes that we will be ready to use them wisely. It would be a sad irony if we were to end up creating a world too complicated for us to manage alone, and fail to recognize that some of our own inventions could help us deal with our own complexity.


© 1985 howard rheingold, all rights reserved worldwide.