An Open Letter To Everyone Tricked Into Fearing Artificial Intelligence

Posted by Government 7 years, 10 months ago to Technology

This article is two years old, but I think it still holds up.

>The history of AI research is full of theoretical benchmarks and milestones whose only barrier appeared to be a lack of computing resources. And yet, even as processor and storage technology has raced ahead of researchers' expectations, the deadlines for AI's most promising (or terrifying, depending on your agenda) applications remain stuck somewhere in the next 10 or 20 years. I've written before about the myth of inevitable superintelligence, but Selman is much more succinct on the subject. The key mistake, he says, is in confusing principle with execution, and assuming that throwing more resources at a given system will trigger an explosive increase in capability. “People in computer science are very much aware that, even if you can do something in principle, if you had unlimited resources, you might still not be able to do it,” he says, “because unlimited resources don't mean an exponential scaling up. And if you do have an exponential scale, suddenly you have 20 times the variables.” Bootstrapping AI is simultaneously an AI researcher's worst nightmare and dream come true—instead of grinding away at the same piece of bug-infested code for weeks on end, he or she can sit back and watch the damn thing write itself.

>At the heart of this fear of superintelligence is a question that, at present, can't be answered. “The mainstream AI community does believe that systems will get to a human-level intelligence in 10 or 20 years, though I don't mean all aspects of intelligence,” says Selman. Speech and vision recognition, for example, might individually reach that level of capability, without adding up to a system that understands the kind of social cues that even toddlers can pick up on. “But will computers be great programmers, or great mathematicians, or other things that require creativity? That's much less clear. There are some real computational barriers to that, and they may actually be fundamental barriers,” says Selman. While superintelligence doesn't have to spring into existence with recognizably human thought processes—peppering its bitter protest poetry with references to Paradise Lost—it would arguably have to be able to program itself into godhood. Is such a thing possible in principle, much less in practice?
SOURCE URL: http://www.popsci.com/open-letter-everyone-tricked-fearing-ai?con=TrueAnthem&dom=fb&src=SOC
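Selman's point about exponential scaling is easy to see concretely. A minimal Python sketch (my own illustration, not from the article) of why "more resources" can't keep up: a brute-force search over n boolean variables must consider 2**n candidate assignments, so doubling the variables squares the search space rather than doubling it.

```python
from itertools import product

def brute_force_assignments(n):
    """Count the candidate assignments a brute-force search over
    n boolean variables must consider: 2**n of them."""
    return sum(1 for _ in product([False, True], repeat=n))

# Doubling the number of variables squares the search space:
small = brute_force_assignments(10)   # 1,024 assignments
large = brute_force_assignments(20)   # 1,048,576 assignments
assert large == small ** 2
```

No realistic hardware speedup closes that gap: 20x the variables means a search space 2**20 times larger per doubling step, which is the "real computational barrier" Selman is gesturing at.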



  • Posted by $ CBJ 7 years, 10 months ago
    Why worry about the future threat of artificial intelligence? At the moment we need to deal with the more immediate threat of natural stupidity.
  • Posted by terrycan 7 years, 9 months ago
    There will always be stupid people and smart people too. My fear of AI is that we will lose the ability to fix our own machines and provide for ourselves.
