CONDUCTED JULY 2005
Question 1: Tell us about yourself. What is your background, and how did you first learn about the concept of the singularity?
I'm Michael Anissimov, 21 years old as of 2005. I may be relatively young for a futurist, but I grew up reading non-fiction works in areas relevant to the topics I talk about today - nanotechnology, artificial intelligence, bioethics, technological and social change. I've always been interested in science and the future, but what really set me on my present course was when I read Nano by Ed Regis in 1996. The book introduced me to the topic of nanotechnology and the idea of highly accelerated technological change.
In early 2001 I began participating in the transhumanist community, a crossroads for dialogue on these issues. I joined various mailing lists and began to correspond with professionals specializing in technological, scientific, or philosophical fields. Like other serious transhumanists, I read pretty much all the material on the Internet related to transhumanism at the time. (Today, in 2005, this would be much more difficult, if not impossible.) I also scaled up my offline reading, expanding into other scientific fields such as cognitive science, which is closely related to Artificial Intelligence.
Sometime in late 2001, after reading as much as possible about biotechnology, nanotechnology, and Artificial Intelligence, I realized that Artificial Intelligence was the technology that would have the greatest and quickest impact on humanity once developed. It was also the only form of technology that could act with a will of its own rather than merely as a tool of its owners. Based on arguments I had considered, I also reexamined my initial assumptions about Artificial Intelligence forecasting and realized that the technology would be likely to arrive much sooner than most people expect - between 2010 and 2030 rather than later in the century or never. This put me in the same group as prominent futurists like Nick Bostrom, Ray Kurzweil, and Ian Pearson.
I initially learned about the concept of the Singularity while browsing Ray Kurzweil's website, KurzweilAI.net. I noticed that quite a few people had different conceptions of what exactly this Singularity thing was all about, and set to work trying to understand this slippery concept. I discovered the writings of Eliezer Yudkowsky, which explained the Singularity in the clearest, most precise, and best informed terms I could find. Yudkowsky associated the Singularity with the arrival of recursive self-improvement, that is, minds directly improving their own software and hardware: super-smart and super-fast minds building still-smarter and faster minds. This correctly described the Singularity as something more foreign and unusual than the acceleration of technological change or the confluence of human social trends.
I began to volunteer for the Singularity Institute for Artificial Intelligence (of which Yudkowsky is a Research Fellow), and in 2004 was named Advocacy Director. My focus nowadays is on building the Singularity Institute as an organization and securing the funding we need to start an AI project with a realistic chance of success, including the creation of a multi-year donor community akin to the Methuselah Mouse Prize's "The Three Hundred". We have come a long way since our founding in 2000, but there is still a lot of work to be done.
Question 2: What is the concept of the technological singularity? It seems that there are multiple definitions of this term.
In his 1993 presentation at a NASA conference, mathematician Vernor Vinge defined the technological Singularity as "...a change comparable to the rise of human life on Earth. The precise cause of this change is the imminent creation by technology of entities with greater than human intelligence." The arrival of greater-than-human intelligence is the original and only correct definition of the technological Singularity. This intelligence could take the form of an enhanced human, a human upload, or a smarter-than-human Artificial Intelligence; any would qualify as a Singularity. Other definitions, focusing on the acceleration of technological change, the greater global cooperation of human beings, and so on, are contortions of the original definition, made up after the fact. (The most common misunderstanding is viewing the Singularity as an asymptote in the graph of future technological change.) The creation of greater-than-human intelligence would have far-ranging consequences, which are commonly discussed in Singularity dialogues, but none of these consequences are themselves definitions of the Singularity.
It is essential to remember that the Singularity concept is only related to technology insofar as technology will eventually make it possible for us to create minds genuinely smarter than we are (smarter-than-human intelligence). Smarter than us in the way that we are smarter than chimps, not smarter than us in the way a human genius is smarter than a human idiot. This will be especially important to remember as the Singularity concept becomes more mainstream and a variety of pseudo-definitions proliferate. Underlying all current culture, knowledge, civilization, and experience is human-level intelligence. The underlying embryological routine that creates human brains has not varied appreciably in over 50,000 years. Education and experience can only modify the "software" of the brain, never the hardware. We have never seen an artifact, a book, an organizational process, or a piece of technology created by a superhuman intelligence.
Through experience, calculation, and educated guesswork, a human being can come to more fully realize his or her own limitations. The same holds for the human species. Our physical limitations are among the most obvious – no human being can run a mile in under two minutes, lift ten times their weight, hold their breath underwater for more than twenty minutes, and so on. Our cognitive limitations, however, are less obvious. We have limitations on working memory, attention span, our ability to chunk concepts, to abstract critical features from sensory data, to combine symbols in useful ways. Many of these limitations are so universal that we don't recognize them at all, like chimps who possess cleverness in specific domains but lack the ability to formulate intelligent ideas on the human level. Confronted with an actual transhuman intelligence, the results might seem "magical" - underlying their actions and choices would be an intelligence so beyond us that the only natural reaction from a human would be outright shock.
Our cognitive limitations are deep and extensive, and cannot be dispelled by mere education. They are inherent to human brainware, just as the inability to fly is inherent to human anatomy. The idea of the Singularity is to transcend these limitations by reengineering brains or creating new brains from scratch, brains that think faster, with more precision, greater capabilities, better insights, ability to communicate, and so on. These superintelligences would then, by virtue of their intelligence, possess superhuman ability to physically restructure and enhance their own brains and minds, leading to a positive feedback spiral of recursive self-improvement. This original, most useful Singularity definition is suspicious to many people because it requires acknowledging that human beings have deep cognitive limitations that can be eliminated through purposeful, technologically-facilitated redesign. This proposition contradicts a lot of superficially appealing anthropocentric ideas with roots that stretch back to the beginning of human thought. Those who believe the mind derives from something other than the brain will, of course, object to the idea that hardware improvements to the brain would result in more powerful minds.
Fundamental to understanding the Singularity is having a firm grasp on the idea of "smartness". "Smarter-than-human intelligence" would mean intelligence that can imagine things we can't, invent things we would never think of, derive insights that the smartest human beings would never see, and communicate in ways we are simply incapable of. It means greater creativity, ability, flexibility, clarity of thinking, complexity of thinking, speed of thinking, ability to chunk concepts, abstract regularities from noisy data, view things from multiple levels of abstraction simultaneously, introspect more accurately, pick out correlations and analogies between previously unrelated knowledge realms, and thousands or millions of other intellectual abilities we just aren't bright enough to define clearly yet. Given the ability to self-modify, improvements in one area could translate into new ideas for improving other parts of the brain. A domino effect would be sure to ensue. Massively superintelligent entities could emerge over seemingly modest timescales. The moral of the story: if you're going to build a smarter-than-human intelligence, make sure you build something you can trust with your life - and everyone else's lives too.
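To make the feedback dynamic concrete, here is a toy numerical sketch (my own illustration, not a model from the interview, and the gain parameter is arbitrary): a system whose self-improvement step is proportional to its current capability grows faster than exponentially, which is the "domino effect" described above.

```python
def recursive_improvement(initial=1.0, gain=0.1, rounds=10):
    """Toy feedback loop: each round, the system's improvement step is
    proportional to its current capability, so gains compound on gains."""
    capability = initial
    history = [capability]
    for _ in range(rounds):
        capability *= 1 + gain * capability  # smarter -> better at getting smarter
        history.append(capability)
    return history

history = recursive_improvement()
# The per-round growth ratio itself rises every round; the continuous
# analogue dc/dt ~ c^2 diverges in finite time, unlike plain exponentials.
print([round(c, 2) for c in history])
```

The point of the sketch is qualitative only: when improvement feeds back into the capacity to improve, modest-looking early steps compress into an accelerating runaway.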
Having said all that about smarter-than-human intelligence, it's worth pointing out explicitly that some people's definition of the Singularity being an asymptotic spike in technological progress is a hijacking of the original meaning of the term. The acceleration of technological progress is a separate issue from the possible creation of greater-than-human intelligence and should be considered independently of it. Although the creation of such intelligence would certainly accelerate technological progress (in directions towards greater intelligence), the acceleration of technological progress is not strictly required to create superintelligence. Accelerating technology will likely result in a Singularity that arrives sooner than it otherwise would have, but technology created by human minds alone could never result in the explosive discontinuity that accompanies the manufacture of minds that are superintelligent, run on superior hardware, and are recursively self-enhancing. Sometimes I call the prospect of an upward technological runaway a "Spike", as did Damien Broderick in his book of the same name, to distinguish it from the original "technological Singularity" that Vinge was talking about.
Oftentimes people claiming to talk about the Singularity will talk about the Spike instead, because it's easier to talk about everyday technological acceleration than technologically-created superintelligence. Technological acceleration is an idea with a history, associated with computing, post-WWII science, economic prosperity, etc. Superintelligence is much harder to explain. Upon closer examination, so-called "superintelligences" in science fiction stories are nothing of the sort - they are often narrow-minded villains, easily outsmarted by noble protagonists with human-level intelligence. So instead of telling people that the Singularity means the arrival of beings we'd never be able to outsmart in a million years, some futurists tell people that the Singularity means technological progress will accelerate until we all gloriously merge with our handheld computing devices. This understates the risk of the Singularity by sugar-coating it, not to mention counterproductively discarding Vinge's original definition of the term.
Question 3: Artificial Intelligence (AI) proponents have been predicting AI breakthroughs ever since the first AI conference in 1956. As a result, many believe that artificial intelligence is 10 years away, and always will be. How do you respond to such skepticism?
Like psychology and other soft sciences, Artificial Intelligence is a field that has historically contained a lot of quacks. The definition of "Artificial Intelligence," "the ability of a computer or other machine to perform those activities that are normally thought to require intelligence," is so broad that it is hard to tell where AI begins and everyday software programming ends. AI researchers often fall prey to anthropomorphism: they project human characteristics onto their nonhuman programs, much in the same way that mythology projects spirits onto natural phenomena. Researchers put tons of work into creating a doll, and they really want that doll to be a real person. It can be surprising and exciting to create a program vaguely resembling some facet of human intelligence, so AI researchers can easily get carried away and think completion is around the corner when in fact it's a long way off. This situation is exacerbated by strong competition for research grants and public attention, which can encourage researchers to exaggerate their current results and future prospects.
Despite all these false alarms, we must be realistic and acknowledge that Artificial Intelligence will be created eventually. Just because there is a strong human urge to anthropomorphize software programs doesn't mean that no software program will ever become intelligent. The human brain is a structure with a function, or rather a set of functions. Like any other functional structure, the human brain is susceptible to reductive analyses and eventually reverse-engineering. There is no Ghost in the Machine, no immaterial soul, pulling all the switches from some invisible hiding place. Mathematically rigorous metrics of intelligence have been formulated, and computer scientists continue to create programs that display progressively better performance in tasks related to induction, sequence prediction, pattern detection, and other areas relevant to intelligence. If all else fails, we will use high-resolution brain scans to uncover the structure of a specific human brain and emulate it on a substrate with superior performance relative to the original organics. Analog functioning could be perfectly duplicated in a digital context. A computer-emulated human mind with the ability to reprogram itself would be an Artificial Intelligence for most practical purposes. Whether this type of AI would be conscious, or whether any AI could ever be conscious, is somewhat beside the point. An intelligence with the ability to predict future events and make choices to influence them is bound to have an impact on our world, whether or not it has subjective experience.
Part of recognizing progress in Artificial Intelligence is keeping your eyes on the right areas. Oftentimes the areas where real progress is occurring are not called "Artificial Intelligence" at all, but theoretical computer science, evolutionary psychology, information systems, or mathematics. Creating Artificial General Intelligence will require researchers with serious knowledge of Natural General Intelligence. It will require an awareness of the underlying mathematics of intelligence, not just programming savvy. It will require lots of computing power, probably somewhere between one-thousandth and ten times the computing power of the human brain. The overenthusiastic Artificial Intelligence researchers of the 60s and 70s were using computers with the computational power of a cockroach brain. How could we have expected them to create intelligence, even if they had the right program? Conversely, even a bright graduate student might be able to create a functioning Artificial Intelligence with ten or a hundred times human brain power at her disposal. Many so-called "AI skeptics" are simply thinkers afraid of the prospect that the human brain is no more than neurological machinery, much as the biologists of the early 1800s were terrified by the prospect that the patterns of life were entirely rooted in mere chemistry - a fact that has now been established in cognitive science for decades.
Another issue in Artificial Intelligence progress is what we might call a threshold effect. A certain threshold of functional complexity must be assembled before we have anything we can reasonably call an AI. Human-equivalent and human-surpassing Artificial Intelligences will be concrete inventions. This is similar to the way that the light bulb and the steam engine were both concrete inventions. 50% of a light bulb does not produce more light than 25% of a light bulb; they both produce none. There is no such thing as "50% of an Artificial Intelligence". You either have an AI or you don't. Although there may be important intermediary milestones which produce interesting results, it is not fair to expect constant, even progress when technology often proceeds in fits and starts. Technologies that have proven themselves in the past, such as the computing industry, can continue to attract brainpower and funding even if no critical breakthroughs occur, because workers in the field are confident that eventually a threshold will be passed and a breakthrough will occur.
Question 4: Some AI critics have noted that no "Moore's Law" for software exists. There appears to have been little real progress in software since Vernor Vinge's seminal 1993 paper, and Vinge himself admits that software bottlenecks could doom the concept of a technological singularity. Is this dearth of progress in software a major concern for you?
I strongly disagree that little real progress has been made in software between 1993 and 2005. The vast majority of the software industry, along with their end users, would disagree as well. Millions of software programs have been released since 1993, millions of new programmers have entered the business, and millions of new, innovative ideas have been applied to software design in that time frame. The open-source revolution made it easier for anyone with programming knowledge to contribute to software projects and collaborate with huge numbers of other talented programmers. New software is being released all the time, and people do actually switch from old software to new software, not because of some gigantic collective illusion, but because it's truly better for its intended use. This, along with what people actually using the software say about it, leads me to think that software progress is truly occurring.
Part of the problem is a lack of any specific criteria for identifying when progress in software has occurred. Is it the size, in lines of code, of the largest software program in existence? The yearly sales of software in general? The total number of happy end users? The number of new software programs introduced each year? The estimated amount of money saved each year by software users? The list can go on and on. All values of the proposed criteria have been going up over the years, most exponentially. What values are the critics proposing, the values which are rising only linearly rather than exponentially? I have no idea.
Another problem is distinguishing human performance from software performance. In computing hardware, a clear metric (computations per second) exists, and the hardware interacts only indirectly with the user to perform its goals (through software). In software, it's difficult to create metrics for success and then divvy up the credit between the users, the user's advisors and colleagues, a specific software program, other software programs being used, available information on the web, and so on. If a user buys a new software program and says, "This software is really useful for helping me do my job!", then obviously the credit would go to the software. Many users have been saying this for a long while now.
For what it's worth, Ray Kurzweil claims the following: "Both hardware and software have increased enormously in power. Today, a semiconductor engineer sits at a powerful computer assisted design station and writes chip specifications in a high-level language. Many layers of intermediate design, up to and including actual chip layouts, are then computed automatically. Compare that to early semiconductor designers who actually blocked out each cell with ink on paper." Computer-aided design in semiconductors is probably one of the most blatant examples of progress in software. Is there similar software progress in other domains? It's hard to say... some domains may not lend themselves to advances in software. For example, I'm using WordPad to write this interview, because it does just fine for the job. But what if I wanted to design new aerospace hardware? Perhaps advances in software in that domain have been occurring far more rapidly than advances in the domain of word processing, and these advances prove far more useful from a practical point of view.
In any case, if software progress ends up being a huge barrier in Artificial Intelligence development, then we could always scan a human brain in extreme detail and run a simulation of that brain, with similar input-output streams to an actual human being. Sensory data goes in, intelligent decisions go out. This approach would take far longer and require far more advanced technology than we currently have at our disposal, but it would be virtually certain to succeed eventually.
Question 5: Estimates of the computer power needed to match the raw computing power of the brain vary. The AI pioneer Marvin Minsky has claimed that a PDA could achieve sentience, yet Ray Kurzweil argues that current computers are a million times slower than a human brain. How much faster do computers need to be before they match the computing power of the human brain?
Different people give different estimates of this value. The usual cocktail-napkin estimate goes as follows: The human brain has about 10^11 neurons, each with about 5,000 synapses. Neural signals are transmitted along the synapses about 200 times per second or less. An electrochemical neural signal is fairly simple and probably transmits no more than a few bits, say 5. Multiply all the above, and you get about 10^17 operations per second, a realistic upper bound for the computing power of the human brain. The maximum performance achieved by the world's fastest computer in 2005, IBM's Blue Gene/L, is 136,800 gigaflops (billion floating-point operations per second), or about 10^14 operations per second. By this line of reasoning, we could come to the conclusion that the human brain is about a thousand times faster than the fastest current supercomputer, or about a million times faster than a desktop PC. If Moore's law continues at its current pace, we would then reach human-equivalent computing power in a supercomputer sometime around the early 2010s and in personal computers sometime in the early 2020s. (In fact, Japan has plans to create a 10-petaflop (quadrillion floating-point operations per second) supercomputer by 2011.)
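The cocktail-napkin arithmetic can be reproduced in a few lines. Every constant below is one of the rough order-of-magnitude assumptions quoted above, not a measured value, and the Moore's-law timeline is the crudest possible extrapolation:

```python
import math

# Rough order-of-magnitude inputs from the estimate above.
NEURONS = 1e11               # ~10^11 neurons
SYNAPSES_PER_NEURON = 5e3    # ~5,000 synapses per neuron
SIGNALS_PER_SECOND = 200     # upper-bound firing rate
BITS_PER_SIGNAL = 5          # rough information content of one signal

brain_ops = NEURONS * SYNAPSES_PER_NEURON * SIGNALS_PER_SECOND * BITS_PER_SIGNAL
print(f"brain upper bound: {brain_ops:.0e} ops/sec")  # 5e+17, i.e. order 10^17

blue_gene_ops = 136_800e9    # Blue Gene/L (2005): 136,800 gigaflops
gap = brain_ops / blue_gene_ops
doublings = math.log2(gap)
# Note: top-supercomputer performance has historically doubled faster than
# the ~18-month per-chip pace, so this extrapolation is on the slow side.
print(f"gap: ~{gap:.0f}x = ~{doublings:.0f} doublings "
      f"(~{1.5 * doublings:.0f} years at 18 months each)")
```

The exercise is useful mainly for seeing how sensitive the bottom line is: changing any single input by a factor of ten shifts the projected crossover date by roughly five years.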
The problem with the above way of looking at the issue is that it neglects two important and complementary facts: first, not every single neural signal contributes optimally or efficiently to human intelligence, and secondly, just because nature happened to evolve intelligence when brains became a certain size doesn't mean that human programmers, with a theory of intelligence, can't do far better. Intelligence is a relatively recent phenomenon in the biological world, which evolved by exploiting the preexisting neural hardware in hominid brains. This hardware didn't evolve for thinking specifically, but for sensing, performing motor skills, executing adaptive behaviors, and so on. Some of it may be essential for intelligence to work; a lot of it is probably not. "Creating intelligence" is not the same thing as "copying a human brain". The human brain is highly redundant. The fundamental "quanta of thought" may consist of thousands or millions of neural signals firing at once rather than a single neural signal. Human programmers may be able to lessen computing costs by abstracting away large amounts of non-essential cognitive complexity the human brain just happens to use for intelligence. This has already been done in specialized domains such as vision and hearing. Preliminary estimates (by AI researcher Hans Moravec, in his work on the retina) suggest that 0.1% or less of the computation going on in the brain is actually necessary to perform the task that part of the brain is doing. This led Moravec to place his estimate of human brain equivalence at 10^14 ops/sec.
I'm not sure what the processing power of the human brain is, or how much of that processing power actually goes into human general intelligence. So it's hard to say when computers will match the processing power of the human brain. Put concisely, I would say that if it isn't here yet, it will be here very soon.
Question 6: Your website discusses the links between AI and nanotechnology. If, as some skeptics have argued, Drexlerian nanotechnology never becomes feasible, how will that hamper the development of machine intelligence?
I regard it as extremely unlikely that Drexlerian nanotechnology will never become feasible. The feasibility of using nanoscale machine systems to create useful, macro-scale products has been analyzed at length by Drexler and others. The human body is made up of working nanomachines. Among the simplest applications of Drexlerian nanotechnology is the creation of powerful nanocomputers, nanocomputers offering orders of magnitude more computing power than the human brain, even if we use the most liberal estimates for human brain-equivalent computing power and the most conservative estimates for nanocomputers. These nanocomputers will allow the development of Artificial Intelligence to accelerate rapidly, if AI hasn't already been developed yet, and if general-purpose nanocomputers are available to AI researchers.
If Drexlerian nanotechnology is delayed, say, arriving in 2025 instead of 2015, then whether or not AI is delayed will be dependent upon how much computing power is necessary to produce AI, and whether or not Moore's law grinds to a halt due to the absence of nanotechnology. Even if nanotechnology were not developed by 2025, I'm sure that computer manufacturers will exploit every possible alternative technological avenue to make our computers faster. Tens of billions of dollars of R&D are poured into this field on a yearly basis. Proposed ways to keep Moore's law going include 3-D chips, quantum computing, DNA computing, wireless links to centralized processors of huge size, and many others. The semiconductor industry appears confident that Moore's law will continue until 2015 at least.
Will AI be a computing-power-hungry area to do research in, requiring nanocomputers to succeed? Compared to conventional software applications, yes. But how much more? Giving a ballpark estimate would require a theory of intelligence that explicitly says how much computing power would be necessary to produce a prototype that solves challenging problems on human-relevant timescales. For the reasons stated earlier, I doubt that human-equivalent computing power will be necessary to create AI. Getting AI right is more a matter of having the right theory than pouring tons of computing power into theories which don't work. If your theory is garbage, it could require hundreds or thousands of times human brain computing power to produce intelligence. Conversely, if your theory is good, you might be able to create intelligence on the computing hardware of the day, only to see it running in comparatively slow motion. But if you could prove to others that your software really is intelligent, then funds would no doubt be readily available, allowing you to purchase better machines to let your AI run in realtime or faster.
To be perfectly honest, I would prefer that (human-friendly) AI be developed before Drexlerian nanotechnology ever arrives. This is the standard opinion of the Singularity Institute and the majority of Singularity activists. The risk of nanotechnology being introduced to our current society is high, and the safe development of superintelligence is the best way of preparing for that risk. If we confront nanotechnology first, then we face that risk alone, in addition to the risk of creating AI in a possibly chaotic post-nanotech world. In that sense, if it turned out that nanotechnology was harder than we originally thought, then I would be pleased, because it would delay risk.
Question 7: What is your current "best estimate" for when the technological singularity will occur?
This depends on how difficult it will be to create an Artificial Intelligence smart enough to improve the source code underlying its own intelligence independently. Given special sensory modalities for analyzing and upgrading its own code, this may turn out to be not much harder than creating an AI of human-similar intelligence in the first place. But AI in general is a very tough problem. AI researchers will begin having access to computing machines with equivalent computing power to the human brain in the early 2010s or the 2020s, depending on which estimate you use, although based on what I said before, human brain equivalency is a poor metric for judging the difficulty of AI.
According to some researchers in theoretical computer science, the Artificial Intelligence problem has already been solved! The only problem is that the proposed solution assumes unlimited computational resources. Marcus Hutter, a researcher at the Dalle Molle Institute for Artificial Intelligence (IDSIA) in Switzerland, one of the top AI labs in the world, published a book in 2004 entitled Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. The book unifies two very well-known concepts, sequential decision theory and algorithmic information theory, to create an optimal reinforcement learning agent capable of being embedded in an arbitrary external environment. The agent is mathematically optimal and the results are understandable to anyone familiar with sequential decision theory and algorithmic information theory. This is a very significant achievement! In his book, Hutter discusses ways to cut back on the computational demands of his agent, dubbed AIXI, and progress has been made. Hutter and his thirty colleagues at the IDSIA continue to work towards computable Artificial Intelligence.
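For the mathematically inclined, the flavor of Hutter's result can be conveyed in one (schematic, simplified by me - see the book for the precise formulation) expectimax expression. The agent chooses each action by weighing every program q that could be generating its observations o and rewards r, with shorter programs weighted more heavily, a formal Occam's razor:

```latex
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl(r_k + \cdots + r_m\bigr)
       \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here U is a universal Turing machine, ℓ(q) is the length of program q, and m is the planning horizon. The innermost sum ranges over all programs consistent with the agent's history, which is exactly why the agent is uncomputable as stated and why the practical work centers on cutting back its computational demands.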
One might think that I and other AI enthusiasts would be excited and positive about this development, but in fact many of us, myself included, are not. The reason is that a superintelligent AI without just the right type of basic motivations would be a risk to human life rather than a boon to it. Humans have distinct sets of goals we care about, like food, money, sex, intellectual pursuits, etc. A young AI would necessarily have much more basic desires, only the fundamental set of desires it would need to be called an AI at all. These desires include the desire to accumulate sensory information, add to its database of prior probabilities, take actions that increase its ability to learn, perhaps the desire to reprogram itself to learn better, fulfilling cognitive optimization goals, and so on. Protecting humans, peace and love, what we call "reasonableness" or "common sense" - these things are not necessarily part of the picture. These are complex goals with complex motivations underlying them, complex motivations that wouldn't pop up by chance in an AI. They would have to be put there explicitly. If they are not, then disaster could ensue. A superintelligent AI with the ability to design and fabricate robotics for itself could quickly wipe out all humans on Earth unless it cared specifically for their welfare. This is not to say that such a superintelligent AI would be malicious; it just wouldn't care. The desire to acquire knowledge or compile a list of probabilities would override all other concerns, and the entire planet could be at risk, considered simply as raw material to be converted into a gigantic information-processing device for such an AI.
Until more work is done on the question, "Which goals, or motivations, will an AI need in order to make our lives better rather than worse, and how can we put these goals in terms of actual math and code?", I will feel intimidated rather than delighted by advances in Artificial Intelligence. Programming in explicit obedience sounds simple and intuitive, but we forget the large pool of mutual assumptions humans have when ordering around other humans, and the inherently low ability of any one human to change the world in ways that are huge and significant. If you phrase an incorrect order to a human, the negative outcome will be small. But if you phrase an incorrect order to an AI, it might kill everyone on the planet trying to execute that order. The only way we can dodge the responsibility to create human-friendly AI is by believing that no AI, even if recursively self-improving, will ever become powerful enough to defeat human defenses, which I judge to be a false belief.
It's hard to place a concrete estimate on when the Singularity will occur. My snap answer is "soon enough that you need to start caring about it". The rise of superhuman intelligence is likely to be an event comparable to the rise of life on Earth, so even if it were happening in a thousand years, it would be a big deal. Like Vernor Vinge, who said he would be surprised if the Singularity happened before 2005 or after 2030, I'd say that I would be surprised if the Singularity happened before 2010 or after 2020.
Question 8: In his book, After the Internet: Alien Intelligence, James Martin argues that "Much of the future value of computing will, for corporations, lie in creating nonhuman intelligence rather than having computers imitate humans." Martin argues that the Internet will create a unique form of intelligence that is intrinsically different than human intelligence. Do you agree?
I agree that we have much to gain by creating intelligences that are nonhuman in their nature and behavior. Humans, for instance, find it difficult to be consistently altruistic and rational. Do I think that nonhuman intelligence will emerge from the Internet? Not at all. The power of the Internet lies in the people that use it. For the most part, the Internet is just a set of protocols for getting data from one human being to another. Like a telephone network, but of a more general nature. Because using the Internet can allow us to gain a substantial amount of knowledge, we make the mistake of thinking that the Internet itself is capable of containing knowledge or processing it to create new ideas. This is not the case. The Internet will become "smarter" over time, but the vast majority of this smartness will always come from the people actually posting their ideas and communicating about them.
Question 9: Are you aware of any corporations that are actively engaged in AI research? Are there any active Government AI projects? Can the field of AI steadily improve with so little real funding?
Recently, IBM launched the "Blue Brain" project, a huge effort with the goal of simulating the entire human brain down to the molecular level. Even though IBM builds some of the fastest supercomputers in the world, I strongly doubt their simulation will be of the resolution required to produce a functioning virtual intelligence. General Electric Corporate R&D labs research "computing and decisioning systems", with applications ranging from helping physicians diagnose disease, to automating credit decisions for banks, to monitoring jet engines. Researchers from Hewlett-Packard Labs have published dozens of papers in Artificial Intelligence, and their site registers hundreds of search results for the term. Artificial Intelligence is listed by AT&T Labs as one of their primary research areas, as sophisticated, intelligent software must be built to deal with the terabytes of data that flow through AT&T's networks each day. The massive Microsoft Research facilities have hundreds of researchers focused on Artificial Intelligence, with project titles like "Adaptive Systems and Interaction", "Machine Learning and Applied Statistics", "Machine Learning and Perception", and so on. Thousands of other corporations worldwide are actively engaged in AI research, with millions going into the field daily and billions yearly.
Around 1997, the field of AI was declared revived. At an Artificial Intelligence conference held at MIT, development leaders from Microsoft, Netscape, General Electric, Ascent Technology, and Disney discussed numerous examples of AI-enabled products and product enhancements.
Government funding of AI and AI-related projects is readily available. For example, the German Research Center for Artificial Intelligence is a non-profit contract research institute employing over 210 skilled workers and 150 part-time research assistants working on over 60 projects with an annual budget of over 15 million euros. The United States' Defense Advanced Research Projects Agency (DARPA) spends tens of millions of dollars every year on Artificial Intelligence systems playing a part in advanced aerospace systems, embedded software and pervasive computing, tactical technology and decision-making aids, network-centric warfare, and command, control, and communications systems. There are numerous examples of other countries and companies working on AI.
My disclaimer is that many of the "Artificial Intelligence projects" listed above are specialized systems that qualify as intelligent within their narrow domain but are dumb in the general sense. Creating Artificial General Intelligence is often viewed as pie-in-the-sky and therefore has a harder time getting real funding. Nonetheless, I'd say that the field does get a few million dollars per year and is being worked on by maybe a few dozen researchers. This will undoubtedly increase as time goes on, and it has already been increasing in a loose exponential pattern throughout this decade.
Question 10: Are you troubled by the arguments of AI critics such as Roger Penrose and John Searle? Are you disturbed by AI pioneer Marvin Minsky's recent claim that AI research is effectively in a rut?
Roger Penrose and John Searle are not truly "AI critics" in the blanket sense, in that neither is saying that no form of Artificial Intelligence will ever be possible. Both simply object to the idea that an algorithmically programmed computer can be conscious. Penrose takes it a bit further, lumping characteristics such as "insight" into the type of consciousness/intelligence that humans have and computers supposedly cannot. Even if I agreed with their arguments, they would not be showstoppers for Artificial Intelligence in the way that people are fond of portraying them. Sophisticated AIs, even lacking consciousness, could still run their non-conscious thinking algorithms very rapidly and without error, allowing them to manipulate aspects of the world to better fulfill their goal systems. Advanced, non-human-friendly AIs would still constitute a risk to our existence, and advanced, human-friendly AIs could still help us advance our civilization enormously.
In his section on AI in "Nanotechnology and International Security", Mark Gubrud writes the following: "By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be 'conscious' or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle."
Penrose's argument against AI boils down to "human brains use non-algorithmic thinking operations for intelligence, and because present-day computers can only run algorithms, these computers can never be intelligent in the same way that human beings are". Notice this only refers to algorithmic computers, not any type of computer. If it turns out that non-algorithmic computations are necessary to implement intelligence, then surely we can develop new computing architectures to bypass the roadblock. This would undoubtedly cause delays, but who cares? The impact of Artificial Intelligence would be so great (in either the negative direction or the positive direction, but not both) that a delay of a few years or even decades is just a drop in the bucket. In any case, my response to Roger Penrose's argument is summed up well by a phrase once uttered by John von Neumann: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!"
John Searle appeals to our intuition that a person processing data in a purely algorithmic fashion, say manipulating Chinese symbols according to formal rules to produce sensible Chinese replies, does not necessarily understand what they are processing. This is true. But does it mean that all cognitive systems which use a series of computational steps to produce an intelligent output in response to some input don't truly "understand" what they are doing? I don't think so. "Understanding" is about the ability to make comparisons between knowledge domains, procedures for processing and collecting knowledge, applying knowledge to practical problems, and so on. Simple expert systems may create the illusion of understanding when they are just processing rules, and this may not qualify as true "understanding", but a more complex system with the characteristics I listed could eventually be said to "understand" whatever knowledge it possesses, perhaps with even a superhuman level of understanding. I see Searle's arguments as a warning not to assume that a system possesses understanding or intelligence too early, but not as a death knell for creating understanding or intelligence in any artificial system.
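The "illusion of understanding" from pure rule-processing is easy to demonstrate. Below is a minimal, hypothetical sketch (the rules and replies are invented for illustration): a program that answers by literal rule lookup can look conversational for inputs its rules cover, while having no model of meaning at all.

```python
# A minimal rule-based responder in the spirit of Searle's thought
# experiment: inputs are mapped to outputs by string matching alone,
# with no comprehension of what the symbols mean.

RULES = {
    "hello": "Hi there! How can I help?",
    "what is your name": "I am a simple rule-based system.",
    "bye": "Goodbye!",
}

def respond(utterance: str) -> str:
    """Return a canned reply by literal rule lookup; there is no model
    of meaning behind the match, only normalized string comparison."""
    key = utterance.strip().lower().rstrip("?!.")
    return RULES.get(key, "I do not have a rule for that input.")

print(respond("Hello"))                 # rule match: looks conversational
print(respond("What is your name?"))    # another rule match
print(respond("Do you understand me?")) # no rule: the illusion breaks
```

A system like this clearly does not "understand"; the argument above is that a vastly richer system, one that compares knowledge across domains and applies it to novel problems, need not share that limitation merely because it also executes computational steps.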
Cognitive systems perform physical operations which result in intelligent behavior. If the set of physical operations we initially think will produce intelligent behavior doesn't end up working, then we can always choose another set. Eventually, we find a set that works. It's really that simple. The set of physical operations which produce intelligence might, in the most extreme possible case, have to be strictly identical to the physical operations present in a human brain, but it is immensely unlikely that human biological intelligence is the only type of intelligence possible.
Question 11: Is technological progress accelerating or slowing? It seems that a large number of writers and scientists are arguing the latter.
Far more are arguing the former than the latter. By practically any quantitative metric you care to use, nearly every area of technological progress is accelerating. Better tools and theories allow us to create the next round of superior tools and theories more quickly. Arguments that technological progress is slowing often seem to be either thinly-veiled manifestoes about why the acceleration of technological progress is bad, or romantic throwbacks to the legendary founders of scientific or technological fields. On AccelerationWatch.com, John Smart provides a number of defenses of the notion that technological progress is accelerating. Ray Kurzweil provides many arguments as well. Thinkers can examine both sides of the debate and decide for themselves. Of course, accelerating technological progress is not a necessary condition for a technological Singularity, as argued earlier.
Question 12: What are your goals for the next decade?
Tough question! My ultimate goal is to be as useful as possible to the effort to create human-friendly Artificial Intelligence and usher in a beneficial Singularity. Today, in 2005, I see the non-profit Singularity Institute for Artificial Intelligence (SIAI) as the organization most serious about this effort, most focused on it, and most willing to recognize and address the dangers which accompany the potential benefits of AI. I'd like to be Advocacy Director for SIAI for a long time, coordinating volunteer projects, reaching out into the AI field, writing articles that communicate the work of SIAI to the public, raising the money necessary to begin a six-person team programming project, speaking at conferences, making friends with technologists and futurists, and eventually writing a book on the Singularity. I want to see SIAI writing code by 2006 or 2007, so that we can begin the long process of debugging and testing, comparing our code to other code out there, soliciting feedback from others, and developing theoretical ideas about AI design that more clearly delineate the scope and size of the challenge, bringing us closer to our goal.
SIAI has started making Silicon Valley our locus of activity, which I find very exciting. There are many intelligent people in the Valley, with a lot of vision, proven success, optimistic attitudes and a desire to make the world a better place. Shortly, I plan to move from San Francisco to Silicon Valley to play a closer part in the fascinating activities and gatherings which are constantly taking place in the world's greatest technology mecca. I hope to see the Singularity meme move into the mainstream via books like Ray Kurzweil's upcoming The Singularity is Near. I want to work on arguments for the near-term feasibility of Artificial General Intelligence, the massive consequences which would follow its successful development, and the necessity of creating AI which is explicitly benevolent. I believe it is possible to get the high-tech communities of Silicon Valley interested in the Singularity and willing to help, as long as the right approach is used. The fact that Silicon Valley denizens are so concerned about nanotechnology and its safe application makes me think that they would also consider Artificial Intelligence a bigger issue if they knew a bit more about it.
I'm focused on finding people who are Singularitarians (advocates of a beneficial Singularity), but just don't know it yet. We're a technological movement created for philosophical and moral reasons rather than financial reasons, and I think that connects Singularitarians much more closely than the typical bonds you find between people in technology companies. This makes us more like family members than co-workers or simply thinkers with similar interests. If you were to ask me what my single greatest goal for the next decade would be, I'd tell you it's bringing the Singularitarian family closer together.
This interview was conducted by Sander Olson. The opinions expressed do not necessarily represent those of CRN.
Copyright © 2002-2008 Center for Responsible Nanotechnology™. CRN is an affiliate of World Care®, an international, non-profit, 501(c)(3) organization.