Garry Kasparov, the best human chess player in the world, is going to be beaten by a toy, and you can read about it right here on Electric Minds.
A very expensive toy, to be sure, and one with an impressive logo and heritage. But IBM's Deep Blue '97, for all the chess-playing software running on its RS/6000 hardware, is essentially just a toy designed to do only one thing, and do it very well: play a board game. It's fairly likely that the chess-playing capabilities of Deep Blue will be available to consumers within a decade or two, playable on consumer-quality home systems or even in pocket videogames. Toys -- very smart toys. This Future Surf is about what happens when our toys become as intelligent as we are, and then (in the next model year) smarter.
When we build the first consciously aware machine, what will it think of us?
Artificial intelligence is an elusive prize. The term itself has a history of grand pronouncements and even grander failures. Proponents of a limited approach to what computers can do tend to prefer the term "learning systems." This "soft AI" camp doesn't focus on creating self-aware machines; instead, its adherents build expert systems, focused neural networks, and other limited-task projects. Proponents of a more radical vision go with the expression "machine intelligence." The "hard AI" group pushes on to try to build a general-purpose intelligent system, able to learn and adapt, and functioning at human brain equivalence or better. So far, the soft AI perspective is winning -- Deep Blue (and its various non-chess capabilities) is a wondrous example of how powerful a computer can be and still be as dumb as a box of sand, which is, if you think about it, what it is.
Hard AI is a lot more fun.
Minsky and MIT

The traditional artificial-intelligence perspective can best be summed up in a single word: Minsky. Perhaps the best-known AI researcher in the United States (and probably in the world), MIT's Marvin Minsky is a pioneer in cutting-edge approaches to computer learning and machine cognition. His book "The Society of Mind" was among the first nonfiction attempts to explain the ideas and implications of machine cognition to a general audience. He later used many of these ideas in a science fiction novel, "The Turing Option." Minsky's classic article "Why People Think Computers Can't," which appeared in AI Magazine in the fall of 1982, is a nontechnical answer to the standard objections about thinking machines. Although the cognitive sciences and the technology of computing have advanced significantly in the last fifteen years, this article remains quite relevant in the artificial-intelligence debate.
MIT's Artificial Intelligence Lab is, in part because of Minsky, the Mecca for machine cognition and robotics work in the US. Projects currently underway in the lab include the development of a system to communicate location to mobile robots via natural language, and Cog, a humanoid robot project.
Cog is particularly interesting. The idea is that a humaniform robot will more easily interact with humans, as people are accustomed to certain reactions and forms of body language when they speak with others. In addition, some cognitive philosophers posit that part of human consciousness derives from our relationship to the body; a bodiless machine intelligence, therefore, could never manifest a human-style intelligence.
Working closely with the AI lab at MIT is the Center for Biological and Computational Learning. Focusing on learning how intelligent systems learn, the CBCL supports projects in neuroscience, learning and recognition, and visual identification. This last area is an especially deep field of study, as visual recognition is a fundamental aspect of intelligent interaction with the world. Pawan Sinha has a fun and interesting slideshow on visual object recognition available on the site, done as an illustrated conversation between Larry King and Bill Clinton (!).
Cognition, Consciousness, and the Rise of Intelligence

To understand why it is so difficult to build a machine that thinks, it is useful to pay attention to those who study the human brain.
To this end, MIT's Mind Articulation Project is a joint effort involving neurophysiologists, cognitive scientists, linguists (including Noam Chomsky), computer scientists, and philosophers, all seeking to understand the nature of cognition. Their work focuses on determining which parts of the brain control which functions; they make great use of new noninvasive techniques for detailed observations of brain activity. How the brain recognizes speech is a main area of study.
Stanford's John McCarthy is looking at a somewhat more esoteric question. In "Making Robots Conscious of Their Mental States," he examines just what sorts of self-awareness an artificially intelligent system would need, or could have. McCarthy is interested in determining when and how introspection is needed. This is from the introduction to "Making Robots Conscious of Their Mental States":
Suppose I ask you whether the President of the United States is standing, sitting, or lying down at the moment, and suppose you answer that you don't know. Suppose I then ask you to think harder about it, and you answer that no amount of thinking will help ... A certain amount of introspection is required to give this answer, and robots will need a corresponding ability if they are to decide correctly whether to think more about a question or to seek the information they require externally.
To really understand machine cognition, however, you need a grasp of why and how humans think. One of the better ways to spend a weekend is to read William Calvin's "The Ascent of Mind." An exploration of how intelligence emerged and the relationship between evolution and paleoclimatology, "Ascent" is a good introduction to the issues surrounding the brain and the mind (as well as the environment) -- and the entire book is available online. Dr. Calvin's later works (excerpts and chapter summaries are also available on his website) delve deeper into how brains work and, most importantly, how cognition works.
Calvin is no stranger to questions of the relationship between computers and the human brain. While cautious about describing brains as functioning in a manner at all similar to computers (he has observed that the brain is historically described as working just like whichever technology or process is considered "cutting-edge"), he does not immediately dismiss the idea of smarter-than-human machine intelligence as outlandish. A machine need not work just like a human brain to be considered intelligent. Calvin devotes the final chapter of his most recent book, "How Brains Think," to this question of whether we can or should develop superhuman intelligences.
Cyc

Doug Lenat has a fairly novel approach to building a machine intelligence: use common sense. Literally.
Lenat and his programmers at Cycorp believe that they may be able to help achieve a human-style machine intelligence (or, at least, a human-friendly interface for computing systems) by the accretion of building blocks of commonsense rules of thumb about the world. Encoded in a formal representation language, the Cyc (from encyclopedia) system is not a massive neural network, but an "inference engine" intended to be able to draw logical conclusions when presented with novel situations. The Cyc FAQ gives a good overview of the Cyc team's intentions for the system. "Comparing Cyc to Other AI Systems" is an early look at Cyc and other AI projects.
Cyc-based software will soon be available for both Macs and PCs; in the meantime, the public can view the Cyc Ontology, a detailed discussion of the current set of heuristics Cyc uses (free registration is required).
The Cyc project is certainly intriguing. The underlying notion for the group working on Cyc is not that the building blocks of rules will, over time, achieve intelligence per se, but that a Cyc Ontology Guide will be at the root of neural-network or other adaptive learning systems to help prevent the "brittleness" often found in these methods. Cyc is, in many respects, an attempt to create a meta-expert system for everyday interaction with humans, allowing the specialized systems to focus on particular limited tasks.
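To make the idea of an "inference engine" concrete, here is a toy forward-chaining sketch in Python -- vastly simpler than Cyc's actual system, with hypothetical facts and rules invented for illustration:

```python
# A toy forward-chaining inference engine, in the spirit of (but far
# simpler than) Cyc: commonsense rules plus an engine that keeps
# deriving new conclusions until nothing more follows.

def infer(facts, rules):
    """Repeatedly apply rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# A hypothetical commonsense micro-theory: each rule is (premises, conclusion).
rules = [
    ({"Fred is a bird"}, "Fred is an animal"),
    ({"Fred is an animal"}, "Fred is mortal"),
    ({"Fred is a bird", "Fred is not a penguin"}, "Fred can fly"),
]

derived = infer({"Fred is a bird", "Fred is not a penguin"}, rules)
print("Fred can fly" in derived)    # True
print("Fred is mortal" in derived)  # True
```

Even this trivial engine shows the appeal: stated facts chain through general rules to conclusions no one entered explicitly -- which is exactly the kind of everyday inference the Cyc team wants at the base of larger systems.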
Moravec, Penrose, and the Debate over Hard AI

I read Hans Moravec's "Mind Children" soon after it first came out in 1988. Along with Eric Drexler's "Engines of Creation," which is about molecular nanotechnology, it twisted my brain into thinking about the future in ways I had not considered. "Mind Children" is not an easy read; Moravec challenges the reader to follow the leaps of his arguments, and his conclusions will disturb many. He endorses uploading minds from flesh brains to computers, sees no problem with digital copies of "individuals" roaming around, occasionally coming together to share experiential data, and believes that humans as flesh-and-blood creatures don't have much of a future.
For these reasons, Moravec remains obscure to the mainstream and a hero to those on the edge (in particular, the Extropian/trans-humanist types love him). His 1995 essay "Pigs in Cyberspace," for example, describes a future where the physical world is transformed into ultra-efficient data storage and communication media, and where minds uploaded to digital information systems rub shoulders with AIs created just for these cyberspatial realms. Soon the uploaded human consciousnesses abandon unnecessary human traits (such as the need for body sensation, even if simulated) and take their place among the "unhuman" beings. Moravec writes of this future in a way that makes it seem very distant, but all too real.
A recent Moravec essay that's a bit more down to earth is "The Age of Robots." An examination of the impact of robots and artificial intelligence on human society, it is a sober and thoughtful work. The speculation is sometimes radical -- this is clearly a Hans Moravec document -- yet the ideas are clear and the argument persuasive.
Not necessarily persuasive to everyone, however. Moravec's approach to machine intelligence is definitely in the hard AI camp. The British mathematician Roger Penrose has risen to the challenge of the hard AIers with a series of works designed to prove -- in the strict mathematical sense -- that nonbiological intelligence is impossible. The first of Penrose's treatises on this topic was 1989's "The Emperor's New Mind." Dense and filled with the minutiae of math and information theory, it pulls no punches in declaring the hard AI position both technically and philosophically bankrupt.
Unsurprisingly, Moravec is at the forefront of those responding to Penrose's claims. The debate has intensified with the publication of Penrose's follow-up, "Shadows of the Mind"; Penrose's prologue gives a flavor of the book. Psyche: An Interdisciplinary Journal of Research on Consciousness provided a forum for discussion between Penrose and critics of "Shadows," including Moravec, and published a (very technical) review by Solomon Feferman of Stanford's Department of Mathematics. Appropriately, Psyche also gave Penrose ample space to reply to his critics.
The Singularity

If those who hold the hard AI position are right, the next century will likely see more change than the human species has ever witnessed before. If we can construct a machine intelligence based on processor-type technology, the inexorable push of technological development means that these systems will only get more powerful, more sophisticated, and smarter. If Moore's Law continues to hold as it has for the last thirty years, that improvement will amount to a doubling every 18 months for the same cost.
Moravec estimates that the processing power of the human brain is, more or less, 10 teraflops (trillion floating-point operations per second). Late in 1996, Intel and Sandia National Labs constructed a 1-teraflops massively parallel machine. If Moore's Law holds, we should see a 10-teraflops machine within five to six years. But let's be conservative: we could easily not see a functional implementation of 10+ teraflops computing until 2010. Let's be very conservative, and say it takes another twenty years after that to come up with a roughly human-equivalent system. If a human-equivalent machine were invented on January 1, 2030, a machine twice as powerful as the human brain would be available by mid-2031, four times as powerful by January 2033, and so forth. By 2054 -- sixteen doublings later -- a machine over 65,000 times as powerful as the human brain would be available, assuming that the introduction of trans-human intelligence systems doesn't increase the speed at which more powerful systems are developed.
Of course, if we put the 10-teraflops estimate at 2003, and human equivalence at 2005, then we're talking about machines 65,000 times as powerful as the human brain by 2030, or (to turn Moore's Law around) human-equivalent machines 65,000 times less expensive than the Intel/Sandia machine is now (one figure they conveniently neglected to include on the web page). If the Intel/Sandia machine cost $1 billion, a roughly equivalent machine in 2030 could cost around $16,000 -- about the price of a good server.
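The arithmetic behind those figures is easy to check, assuming a clean doubling of power-per-dollar every 18 months (the $1 billion price for the Intel/Sandia machine is the round figure used above):

```python
# Check the Moore's Law arithmetic: one doubling every 18 months.
MONTHS_PER_DOUBLING = 18

def power_multiple(years):
    """How many times more powerful (per dollar) a machine is after `years` years."""
    return 2 ** (years * 12 / MONTHS_PER_DOUBLING)

# Sixteen doublings take 24 years and give the "65,000 times" figure:
print(round(power_multiple(24)))  # 65536

# What a $1 billion machine's worth of computing would cost 24 years later:
print(round(1_000_000_000 / power_multiple(24)))  # 15259 -- about $16,000
```

The same function reads in either direction: 16 doublings make one machine 65,536 times as powerful, or an equally powerful machine 65,536 times cheaper.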
Science fiction writer and computer scientist Vernor Vinge, author of (among others) "True Names" and "A Fire Upon the Deep," has come up with a concept based upon this exponential improvement in processing power. He calls the point at which computing power has increased past our ability to understand it the Singularity. I strongly encourage you to read Vinge's 1993 essay "The Coming Technological Singularity," as it is an excellent example of taking a process with which we are all comfortable today (computers are getting faster) and thinking seriously about what happens if it continues. I also suggest that you read William Calvin's essay on the Singularity.
The concept becomes even more powerful when combined with some other bleeding-edge technological concepts. A person by the name of Eliezer S. Yudkowsky has written an essay called "The Low Beyond," which combines some of Hans Moravec's ideas with Vinge's Singularity, and mixes in a healthy dose of molecular nanotechnology. You may not believe everything that Yudkowsky says about emerging technologies, but if he's even 10 percent right, it is likely that 2047 will bear little resemblance to 1997. This is perhaps the best and scariest spin on the Singularity (and nanotech and machine intelligence) I've ever read.
So we start with Deep Blue and we end up with the Deep Unknown. Although AI's promise has so far exceeded its delivery, the growth of processor speed combined with the continual improvement in process makes the emergence of an apparently conscious machine an increasing likelihood over the next couple of decades. One need not accept the notion of a sociotechnological Singularity to see the extraordinary implications -- for our culture, for our economy, for our identity -- of such a device. The possibilities are both exciting and frightening. Within most of our lifetimes we will probably see a world where our own creations are suddenly much, much smarter than we could ever be without them. What's more, if Moravec and Vinge are right, the twentieth century could well be the coda of the unmodified biological human brain.
Check your seat belts. Things start moving fast from here.
Join the conversation about artificial intelligence, and also follow Deep Blue's quest to defeat Garry Kasparov.