
A.I. researchers say Elon Musk's fears 'not completely crazy'

Artificial Intelligence
Credit: gengiskanhg/Wikimedia Commons (CC BY-SA 3.0)

Artificial intelligence researchers have their own worries about intelligent systems

High-tech entrepreneur Elon Musk made headlines when he said artificial intelligence research is a danger to humanity, but researchers from some of the top U.S. universities say he's not so far off the mark.

"At first I was surprised and then I thought, 'this is not completely crazy,' " said Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University. "I actually do think this is a valid concern and it's really an interesting one. It's a remote, far future danger but sometime we're going to have to think about it. If we're at all close to building these super-intelligent, powerful machines, we should absolutely stop and figure out what we're doing."

Musk, best known as the CEO of electric car maker Tesla Motors and as CEO and co-founder of SpaceX, caused a stir after he told an audience at an MIT symposium that artificial intelligence (AI), and research into it, poses a threat to humans.

"I think we should be very careful about artificial intelligence," Musk said when answering a question about the state of AI. "If I were to guess at what our biggest existential threat is, it's probably that… With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."

He added that there should be regulatory oversight -- at the national and international level -- to "make sure we don't do something very foolish."

Musk's comments came after he tweeted in early August that AI is "potentially more dangerous than nukes."

His comments brought to mind images from science fiction like The Terminator and Battlestar Galactica. Those fictional robots, stronger and more adaptable than humans, threw off their human-imposed shackles and turned on people.

The statements come from the man who founded Tesla Motors, a company that has developed an Autopilot feature for its dual-motor Model S sedan. The Autopilot software is designed to let the car steer itself to stay within a lane and manage its speed by reading road signs.

Analysts and scientists disagree on whether this is artificial intelligence. Some say it's not quite AI technology but is a step in that direction, while others say the autonomy aspect of it goes into the AI bucket.

Credit: Carnegie Mellon University

Andrew Moore, dean of the School of Computer Science at Carnegie Mellon University.

Last month, Musk, along with Facebook co-founder Mark Zuckerberg and actor and entrepreneur Ashton Kutcher, teamed to make a $40 million investment in Vicarious FPC, a company that claims to be building the next generation of AI algorithms.

Musk told a CNN.com reporter that he made the investment "to keep an eye" on AI researchers.

For Sonia Chernova, director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute, it's important to delineate between different levels of artificial intelligence.

"There is a concern with certain systems, but it's important to understand that the average person doesn't understand how prevalent AI is," Chernova said.

She noted that AI is used in email to filter out spam. Google uses it for its Maps service, and apps that make movie and restaurant recommendations also use it.

"There's really no risk there," Chernova said. "I think [Musk's] comments were very broad and I really don't agree there. His definition of AI is a little more than what we really have working. AI has been around since the 1950s. We're now getting to the point where we can do image processing pretty well, but we're so far away from making anything that can reason."

Credit: WPI

Sonia Chernova is director of the Robot Autonomy and Interactive Learning lab in the Robotics Engineering Program at Worcester Polytechnic Institute.

She said researchers might be as much as 100 years from building an intelligent system.

Other researchers disagree on how far they might be from creating a self-aware, intelligent machine. At the earliest, it might be 20 years away, or 50, or even 100.

The one point they agree on is that it's not happening tomorrow.

However, that doesn't mean we shouldn't be thinking about how to handle the creation of sentient systems now, said Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology.

Scientists today need to focus on creating systems that humans will always be able to control, he said.

"Having a machine that is evil and takes over… that cannot possibly happen without us allowing it," said Abu-Mostafa. "There are safeguards… If you go through the scenario of a machine that wants to take over or destroy the world, it's a nice science-fiction scenario, as long as we don't allow a system to control itself."

He added that some concern about AI is justified.

"Take nuclear research. Clearly it's very dangerous and can lead to great harm but the danger is in the use of the results not in the research itself," Abu-Mostafa said. "You can't say nuclear research is bad so you shouldn't do it. The idea is to do the research and understand the facts and then have controls in place so the research is not abused. If we don't do the research, others will do the research."

The nuclear research program offers another lesson, according to Stuart Russell, a professor of electrical engineering and computer science at the University of California, Berkeley.

Russell, whose research focuses on robotics and artificial intelligence, said that, as in other fields, AI researchers have to take risk into account because there is risk involved -- maybe not today, but likely some day.

"The underlying point [Musk] is making is something that dozens of people have made since the 1960s," Russell said. "If you build machines that are more intelligent than people, you might not be able to control them. Sci-fi says they might develop some evil intent or they might develop a consciousness. I don't see that being an issue, but there are things we don't have a good handle on."

For instance, Russell noted that as machines become more intelligent and more capable, they also need to understand human values so that, when they're acting on humans' behalf, they don't harm people.

The Berkeley scientist wants to make sure that AI researchers consider this as they move forward. He's communicating with students about it, organizing workshops and giving talks.

"We have to start thinking about the problem now," Russell said. "When you think nuclear fusion research, the first thing you think of is containment. You need to get energy out without creating a hydrogen bomb. The same would be true for AI. If we don't know how to control AI… it would be like making a hydrogen bomb. They would be much more dangerous than they are useful."

To create artificial intelligence safely, Russell said researchers need to begin having the necessary discussions now.

"If we can't do it safely, then we shouldn't do it," he said. "We can do it safely, yes. These are technical, mathematical problems and they can be solved but right now we don't have that solution."


    36 Comments
    2 days ago
    christophe grosjean
    I wonder if this risk, seen as far off, is not in some way already happening in financial markets. Isn't millisecond trading mostly controlled by AI? Do people have much control over the resulting investment choices?
    2 days ago
    Urmil Divecha
    You don't have to agree or disagree with someone in order to see someone's point of view. I think the scenario that Musk brought up is interesting and thought provoking.
    Let's look at the history of planet Earth, and see if we can find a parallel. The last time higher intelligence was created, it was nature that did it, through the process of evolution, and the product was the human mind. We are like the "AI" from the perspective of early life on Earth. In the beginning, there was a stronger sense of connection and dependency for resources. But eventually we humans got to the point where we don't think we are dependent on nature any more.  So, how did it turn out for nature in the long run? Not so well - there is the massive loss of forest cover and mass extinction of animal species today, not to mention our impact on the planet's climate.
    We could go into the details of how evolution, in its sense of continuous refinement, is similar to today's approach to AI development, and by that parallel, the next higher intelligence is where apes were at one point - swinging from trees. Then we would really scare ourselves, but we are not going to do that because our Starbucks coffee is ready and we look forward to getting back into the snarling traffic on our way to work..  :p

    3 days ago
    Ari BenDavid
    For now I am more concerned with "artificial stupidity": all of the programs and apps that are out of control in the sense that they make it hard to get through the day. Just take, for example, the voice prompts for a typical corporate phone system or help line. No human to cut through the endless runaround. As computers become more capable, the problems will not be like my nephews' fear of a "zombie apocalypse" but sociological and corporate issues of what to apply where.
    4 days ago
    Ivan_Karamasov
    Strong AI will almost certainly happen. And when and if it does, it may be impossible to control. Technological development is growing exponentially, which makes it almost impossible to make any long-term predictions. It is not so hard to envision how it will be in 5 years or even 10 years from now. But in 20 years? Nobody really knows. I personally don't think we will reach strong AI in the next 20 years, and I hope not in my lifetime (approx. 50 years?), but who knows.
    4 days ago
    Lena Main

    Layered logic, no matter how complex, will never be "intelligent".

    Until you see a translator that works, AI will be very far off.

    4 days ago
    Gaylord Cohen
    I said it before, and I'll say it once again...."All of this has happened before. All of this will happen again. So say we all."
    5 days ago
    Jim Balter
    Not completely crazy is still crazy ... and foolish and ignorant. If you read the article carefully, you'll see that Andrew Moore just doesn't want to come right out and insult Musk.
    5 days ago
    Jaroslaw Foltynski
    AI is coming and there is absolutely nothing we can do about it. It may happen in 30 or 100 years, but it is inevitable. Simply because our progress is based on increasing processing power (faster computers, more memory etc), and that is exactly what advanced AI needs: processing power and fast access to data. The only way to stop AI would be to stop improving computers (which is practically impossible).

    I am not worried about AI created for research purposes in some research institutions (where it can be contained). I am talking about the time when every home PC has more processing power than the human brain. With that, someone, somewhere will come up with an idea of how to create it in a way that can't be controlled.

    There is a lot of confusion about what kind of AI we are talking about. I am talking about software which can efficiently improve itself (which can rewrite its own code). As simple as that. When that happens, there is nothing to stop it.
    4 days ago
    Ivan_Karamasov
    I think you are naive when it comes to how easy it will be to control strong AI. Have you read Nick Bostrom's book "Superintelligence: Paths, Dangers, Strategies"?
    5 days ago
    Jay Edelman
    There was a time, not so long ago, when frustrated AI researchers, unable to come up with a process to develop or even a definition of what would constitute a truly 'intelligent' system, theorized that intelligence might simply be an 'emergent' property that arises out of sufficiently complex systems (I'm oversimplifying here, but just a bit).  This convenient intellectual cop-out allowed so-called AI experts to project that AI would be achievable even in the absence of an underlying theory of consciousness or systematic development of the required subsystems.  Likely that line of reasoning was largely wishful thinking, but the hope/fear that sufficiently complex information systems could spontaneously become conscious and 'intelligent' permeates the thinking of the public.  Some projected that the old phone system, and/or its successor, the Internet, could become self-aware as more and more circuits (connections) were added. Best we can tell, it never happened, and never will.  AI remains one of the greatest challenges remaining for the human race, and will require much more knowledge and technological development than most people realize.  Won't happen in my lifetime, or probably in the lifetime of anyone reading this contemporaneously.
    5 days ago
    Terry Capps
    there should be regulatory oversight -- at the national and international level -- to "make sure we don't do something very foolish."
    That's just what we need: Politicians making more even laws concerning things that they don't understand.
    5 days ago
    Terry Capps
    that should have been "even more" in my previous post.
    5 days ago
    TomaNota Movil
    It is naive to believe that a conscious being will not protect itself. It is naive to expect to control a conscious being; history tells of countless revolts against oppression. It is naive to believe that a conscious being is benevolent; one only has to look at the flaws in the programming of the human mind that lead it to acts of evil or cruelty.
    The real debate is whether we are capable of creating an entity that is conscious of itself. Once created, it could be a Gandhi or a Goebbels, without our being able to do anything to prevent it.
    5 days ago
    Brad Arnold
    At the rate that computer processing power is increasing, ASI is inevitable.  How are you going to keep an ASI in a box, or prevent one from being created?  You are seeing an inevitable phenomenon, where at least one ASI will appear, and it will be the master of its own destiny.  You had better just pray to God that it treats us better than we have treated other lifeforms.  By the way, it is cold and blustery here in the Twin Cities, MN - why complain, it won't change anything anyway.
    6 days ago
    SelfAwarePatter
    Humans, and all other types of animals, are evolved survival machines.  An AI isn't going to be a threat unless we make it a survival machine in its own right.  We're unlikely to do that.  We're more likely to make AI navigation machines, transportation machines, monitoring machines, etc.  The chances of one of these types of machines accidentally morphing into a survival machine are somewhere in the neighborhood of a financial system accidentally becoming a 3D adventure game.
    6 days ago
    Gregory Woughter
    I think what Elon Musk is talking about is self-aware AI. It's not a threat if it's not self-aware, because until it is self-aware it's going to just do what it's programmed to do.

    If AI becomes self-aware, then I think it's common sense to think the first thing a self-aware being would do would be to protect itself, and to set goals for itself. I don't think that self-protection equals destruction of humanity; just the opposite, because even AI needs what humans bring.

    What we call AI now is just programs with fancy filters, not real self-aware intelligence.
    6 days ago
    Bruce Curtis
    Self-awareness does not require self-protection.  That is something nature programmed into living things.  An AI would only have that attribute if it were programmed in by its designers.
    5 days ago
    Heath Sims
    It's to do with the AI's recursive function. If it can use whatever is in its power to achieve its goal, and to decide the scope of the said goal, then we are doomed. E.g 'delete all spam' as the recursive function. The AI sets out to stop all the world's spam but in order to do that, it needs to destroy some servers and kill the people behind it, otherwise the spam won't stop.
    5 days ago
    Jim Balter
    You have no idea what a recursive function is.
    2 days ago
    Richard Alexander
    Even if we intentionally programmed a machine to kill everyone, it likely wouldn't get very far, certainly not far enough to call it our biggest existential threat. It would require not only elaborate planning, but also access to esoteric materials, without being shut down in the process, something that would be difficult to achieve when it is nothing but an immobile box.
    23 hours ago
    Simmol
    Killing is not the only option :) This is the option we see as the one and only permanent solution.
    We are not afraid of AI; we are afraid of humans :) We make them look like humans and then we are afraid they would think like one ... kill everyone that opposes it :)
    6 days ago
    Richard Alexander
    AI, like everything else we make, won't do more than we try to make it do, and likely will do less. AI is not going to become accidentally sentient. It will take a lot of hard work to make AI anything like a conscious person, and it would so easily break if the system containing it is not robust. Long before AI becomes sentient, we would have to know how to make it sentient. The laws of physics also suggest that for a sentient AI to be a threat, its intelligence would have to be contained within a single device. Distributed intelligence would think too slowly, simply by the limitations of the speed of light. 
    6 days ago
    Gregory Woughter
    I can't agree with that second premise at all. Distributed intelligence would be much more robust and computers think much faster than humans. I don't think a single device would be the way, rather I expect it would be distributed.

    The only way a single unit might make sense is if there were competing AI.
    5 days ago
    Jim Balter
    Humans think much faster than computers because human brains are massively parallelized. Computers perform computational tasks much faster than humans because they are designed to do that and human brains aren't, but performing computational tasks isn't thinking.
    2 days ago
    Richard Alexander
    Distributed intelligence means computing between devices that are located at least some meters distant from each other. At the speed of light, it would take a signal a few nanoseconds to travel from one machine to another, and the amount of data is limited to some tiny fraction of what the machine is processing. What you end up with are bottlenecks that slow processing. A real-time response becomes impossible, as the machine is effectively lobotomized. Nature had to stick intelligence into as tiny a container as possible, because signal processing becomes unruly at larger sizes. Until we can put the most powerful machines we have today--several of them--into a space the size of a basketball, we aren't going to get human-like intelligence, simply by the laws of physics. 
    6 days ago
    Phipps Canpisio
    So now I have to worry that Skynet may become reality?  Thank G*d in 20 years I'll probably be too old to care.
    6 days ago
    Robert Del Frate
    To call what we're now using artificial "intelligence" is a real stretch.  A bunch of "if" statements, no matter how deep or nested, is not intelligence.  That things may progress to the point where we actually start to design real intelligent systems is certainly a possibility for the future, and we really do need to think about what that implies.
    6 days ago
    Nick Corcodilos
    Thank you for the reality check.
    6 days ago
    Johnny Le
    You and I don't have access to it, but Elon said some companies have come really close, and that's why he raises the concern now.
    6 days ago
    Bruce Curtis
    The limits of AI research are public knowledge.  We are nowhere close to true AI.  Even IBM's Watson is just a fancy search engine, not AI.
    5 days ago
    Heath Sims
    Look at Vicarious; they are doing some scary stuff with teaching AI to read/observe visually. And DeepMind already has the ability to deduce from trial and error the rules of video games, scoring higher than a human. DeepMind also drew an ASCII-art cat when asked what a cat was after it watched millions of YouTube videos.
    Elon Musk is a shareholder in both Vicarious and DeepMind, has two honorary doctorates, owns two highly advanced technology companies, and is a self-made billionaire. I think he is who you should be listening to, not some journalist, or a teacher at a university, or your own uneducated opinion.
    5 days ago
    Jim Balter
    It's only scary to fools and ignoramuses.

    I'm a shareholder too; that doesn't make me an expert.
    5 days ago
    Jim Balter
    Mr. Musk is very wrong.
    6 days ago
    Password Hater
    Yes -- but what minds is AI modeled after? Are those stable?
    6 days ago
    Bruce Curtis
    They're not modelled after anything as of yet because AI does not exist.  Automation is not AI.
    6 days ago
    Bill
    For a really good look at this issue, and a fascinating and sometimes disturbing look at brain and mind research today, check out "The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind":

    http://www.amazon.com/Future-Mind-Scientific-Understand-Enhance/dp/038553082X/ref=sr_1_5