Wednesday, March 9, 2011

Kurzweil and Singularity

Raymond Kurzweil is undoubtedly brilliant, but his prediction that we will reach the singularity in 2045 (artificial intelligence so much smarter than humans that it can improve itself) suffers from the same flaws as the 1950s predictions of flying cars.  It turns out that flying is orders of magnitude more complicated than driving, not just the next step in personal transportation.  A moonshot in 1969 gives us outposts on the rings of Saturn by 2009?  Again, the devil's in the details, and the same goes for artificial intelligence.
Will we get there?  Most certainly.  But 2045?  In my opinion, that's too soon; too many devils are waiting in the details.  By then we might build something that can pass the Turing Test 99% of the time, but we won't outpace ourselves (at least on the inventiveness scale) until well after that.  2085?  2125?  2165?  Those sound a little more reasonable to me.

2 comments:

  1. Kurzweil seems to have a simplistic view of intelligence: if we can all connect directly to Google via brain-to-computer interfaces and access the vast quantity of data stored there at lightning speed, we will be super intelligent. However, remembering or being able to access huge chunks of information and reel it off at a moment's notice is not the same as understanding that data. One can imagine someone who, via such an interface, could instantly access all the data on Charles Dickens; but going on to discuss his life and how it influenced his work, rather than just reeling off facts about the author, would be a much more complicated exercise. The latter would require true intelligence, while the former would merely require lightning speed.
    There is also the complicated issue of emotion and how it interacts with intelligence. How would one build emotion (in its true meaning) into a machine?
    Maybe Kurzweil is right, but the jury is still out on that question.

  2. Agreed. Building the "learning-machine" part of a true AI is far simpler than building the structure that makes it all make sense.

    For people, this is our hardwiring: our instincts for language, social interaction, pain avoidance, pleasure-seeking, problem-solving, acquiring knowledge, empathy/sympathy, protection of self/family/group/species/life, etc. Without these, Kurzweil's dream of friendly AI is much less likely.

    How would one program a computer (even an extremely sophisticated one) to understand shame, love, exclusion, satisfaction, fear, yearning, contentment, pride, slapstick humor, creepiness, pain, or regret? For an adult human, these are simple "calculations", coming from both our internal "learning-machine" and our hardwired programming.

    We can certainly get there, but 33 years is way too ambitious. What I could see by 2045 would be personal AIs (living entirely in the world of bits) that could learn, gather and analyze data, interact with us, and even make conjectures and find novel solutions (some programs do that now), all while working on tasks set by people.
