
Kurzweil and futurism


Back in late 2006, I posted an entry here about Ray Kurzweil's 2005 prediction of the Singularity by 2029. The gist of the entry was that I was dubious about strong AI happening by 2029, on the grounds that people have been saying we'd have strong AI "within thirty years" since the 1960s.

Tonight, after following a chain of links I'm not gonna bother writing down, I got curious about the state of this prediction. It's been five years since his book The Singularity Is Near, and specifically since his prediction of Turing Test Capability Within a Quarter Century; I wondered whether he still thinks we're on track to hit that date.

Unfortunately, I'm not seeing any updated predictions from him; I suspect they're out there, but I haven't come across them in a cursory search.

But that search led me to a futurism-blog posting from this past January analyzing some of RK's predictions for 2009. It turns out that in his 1999 book The Age of Spiritual Machines, RK made over a hundred predictions of stuff that would happen by 2009, and futurism blogger Michael Anissimov was taking RK to task over seven of those predictions which he felt had not come true.

RK replied (at the end of Anissimov's entry) that most of his predictions had come completely true by the end of 2009, that most of the rest were essentially true or would be true soon, and that if he was only off by a couple of years on a ten-year prediction, that was a pretty good track record. (Which actually kind of supports Anissimov's point that futurists should focus on probability ranges rather than on specific years.)

One of RK's claims in Spiritual Machines (see above link) was this:

The majority of text is created using continuous speech recognition (CSR) dictation software, but keyboards are still used. CSR is very accurate, far more so than the human transcriptionists who were used up until a few years ago.

That was one of the predictions that Anissimov said hadn't come true. Kurzweil replied:

In November 2009, the idea of large-vocabulary, continuous, speaker-independent speech recognition on a cell phone was still off in the future. Just one month later, this became one of the most popular free apps for the iPhone (Dragon Dictation from Nuance, which used to be Kurzweil Computer Products, my first major company)[....]

Which reminded me that I have Dragon Dictation on my iPhone but haven't used it much. If it's really that good, I thought, perhaps I should start using it to write blog entries. And so I pulled out my iPhone and spoke into the DD app.

It thought for a bit, and here's what it displayed:

In reading this blog post talking about Kurzweil's predictions for 19 sorry for 2009 from meeting mentioning 697 interesting to see him pluckers says that there are seven of his predictions that are totally untrue Criswell says that actually almost all of his hundred eight predictions out were either completely true or essentially true in 2009 if someone coming true now so I may be his exact date of the singularity by 2029 is not intended to be quite so precisely accurate up but I do still wonder whether five years after for five years after class connection she's still expecting that date

Which is rather amusing, but confirms my previous belief that Kurzweil's "CSR is very accurate" prediction was a tad premature. It got a lot right, but boy did it get a lot wrong. It's nowhere near as accurate as a human transcriptionist.

Kurzweil might respond that his prediction was just a tad early; that speech recognition will proceed by leaps and bounds, and a year from now his prediction will have come true. But I'm dubious.

Because: CSR is a very very difficult problem. I wrote a Words & Stuff column about speech recognition—specifically referring to Dragon's transcription software, in fact—back in 1998. We've come a long way since then; the software didn't run on a cell phone back then. But the core problem of doing accurate speaker-independent speech recognition continues to be a thorny one—except in limited domains, like the one I was talking about in my previous entry.

I don't mean to pick on RK about this one thing, nor to be too smug about it. A lot of his ten-years-from-now predictions have indeed arguably come true by 2009, depending on how much slack you're willing to cut him on words like "commonly," and on whether you agree with him that by "computer" he meant something like "anything with a processor," and whether a cell phone in a pocket counts as "embedded in clothing and jewelry," and so on. The original predictions evoked a certain vision of the future, but they came with a bunch of terms that can now be interpreted to mean something other than what a lot of us might have previously assumed they meant.

Here's another one: "The majority of reading is done on displays." Yep, I read an article a few months ago that said that that is indeed true now—if by "reading" you include things like reading blogs and news articles and tweets and email. In the context of that paragraph of his book, though, I would have assumed he meant "The majority of reading of books" (this is bolstered by the phrase, several paragraphs later, "books (those that still exist in paper form)"); whereas I would guess that despite the Kindle and the Nook and the iPad, we're still ten years away from the majority of book-reading being done on screens.

Anyway, I'm willing to cut RK a fair bit of slack; predicting the future is tough, and by most accounts RK has generally done a better job of predicting future tech than most other people have. And I think that probably most of the things he predicted for 2009 in that book will indeed come true eventually, though some may be another ten years or more away.

Which brings me back to the Singularity, and to what I was trying to say in my speech-to-text experiment quoted above: Kurzweil indicates in his response to Anissimov that his real prediction is not so much that strong AI will happen precisely in or by 2029 as that it will happen sometime not too long after that:

My point is that if a computer passes the Turing test by 2033 rather than 2029 my vision of the future would be "essentially correct."

Which, on the one hand, sounds eminently reasonable; what's a few years off in a thirty-year prediction, especially when naysayers are saying it's hundreds of years off or will never happen? And yet, on the other hand, it brings us back to where I started: these predictions of when we'll have strong AI keep slipping outward, generally (on average) by somewhere around one year per year.

In 1993, Vinge said the Singularity would happen "Within thirty years," which suggests 2023 (though he hedged his bet by saying he'd be surprised if it happened "after 2030"). In 2005, Kurzweil predicted strong AI "within a quarter century," and specifically named 2029 as the target date. Now, in 2010, about five years later, he's saying it might not happen 'til 2033—about four years after his previous prediction.

It is entirely possible that I am much too pessimistic about this topic. As I noted last time I wrote about this, Kurzweil knows a lot more about the current state of AI research than I do; and he's a compelling speaker with a pretty good track record in various ways. (His history with inventions is pretty remarkable too; see his Wikipedia entry for details.) So maybe this'll really happen by 2029; or maybe it won't happen by 2029 but will by, say, 2039.

(Tongue firmly in cheek here: it's not really slipping by one year per year; the above numbers suggest the target date has slipped by about ten years in the past seventeen. So by 2027, the target date should be 2043; by 2044, the target date may be 2053; and we'll catch up with the target and finally get strong AI sometime around 2065.)
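(Keeping tongue in cheek, the extrapolation above can be reduced to a few lines of arithmetic. This is a sketch of my joke model, not anything Kurzweil actually claims: it assumes the 2033 figure as of 2010, and a slip rate of ten years of target-date slippage per seventeen calendar years.)

```python
# Tongue-in-cheek model: the predicted date for strong AI slips
# ~10 years for every 17 calendar years that pass.
BASE_YEAR = 2010      # when the 2033 figure was given
BASE_TARGET = 2033    # the "essentially correct" target date as of 2010
SLIP_RATE = 10 / 17   # years of slippage per calendar year

def predicted_target(year):
    """Target date for strong AI, as forecast in a given year."""
    return BASE_TARGET + (year - BASE_YEAR) * SLIP_RATE

# The calendar catches up with the target when year == predicted_target(year):
#   year - 2010 = 23 + (year - 2010) * (10/17)
#   year - 2010 = 23 / (1 - 10/17) = 23 * 17/7 ≈ 55.9
catch_up_year = BASE_YEAR + (BASE_TARGET - BASE_YEAR) / (1 - SLIP_RATE)

print(round(predicted_target(2027)))  # 2043
print(round(predicted_target(2044)))  # 2053
print(catch_up_year)                  # ≈ 2065.9, i.e. "sometime around 2065"
```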

Or maybe, as various of y'all noted last time I wrote about this, "strong AI" isn't well-enough defined to be a very good target. We'll see.

But for now, much as I want benevolent AIs to come along and rescue us from this vale of tears, and much as materialist me sees no reason that we can't eventually produce strong AI, I remain a little skeptical.

1 Comment

Kurzweil's arguments for why those seven predictions didn't really fail are laughable. If we cut him the amount of slack he's asking for, several of those "predictions" were true in 1996.

Sure, most of those predictions will probably come true at some point, but if he was off by a decade when making predictions about the last decade, that's not very encouraging. If the error increases linearly, then he'd only be off by 30 years for his prediction about a Turing-Test capable AI. But if the error increases at a faster rate (which would be the case for any simulation of a chaotic system), he could be off by hundreds of years, meaning that his predictions are essentially useless.

