In Vernor Vinge's seminal 1993 article on the Singularity, he wrote:
Within thirty years, we will have the technological means to create superhuman intelligence.
Later in the piece, he elaborated:
Progress in computer hardware has followed an amazingly steady curve in the last few decades [...]. Based largely on this trend, I believe that the creation of greater than human intelligence will occur during the next thirty years. (Charles Platt [...] has pointed out that AI enthusiasts have been making claims like this for the last thirty years. Just so I'm not guilty of a relative-time ambiguity, let me be more specific: I'll be surprised if this event occurs before 2005 or after 2030.)
Ray Kurzweil's 2005 book The Singularity Is Near: When Humans Transcend Biology says several times that we'll have human-level artificial intelligence by 2029; see also "Why We Can Be Confident of Turing Test Capability Within a Quarter Century," excerpted (at least in part) from that book.
So Kurzweil doesn't contradict Vinge's more specific claim (that the Singularity will arrive before 2030). Still, it's been thirteen years since Vinge's essay, and the countdown to strong AI has apparently moved from "[w]ithin thirty years" to within twenty-five years.
None of what I'm saying is new. I don't mean to say "Look at me, I'm so clever, I found the flaw." Kurzweil knows a lot more about the current state of AI research than I do, and the above-linked article provides a lot of info about the current state of various steps on the road to strong AI.
But I can't shake a certain dubiousness, based almost entirely on the "AI enthusiasts have been making claims like this for the last thirty years" thing. Strong AI has been just around the corner for a long time, and I suspect its proponents have been making strong arguments for it being just around the corner for much of that time. (Preemptive response to obvious next question: I see no particular flaws in Kurzweil's argument, and I personally don't believe there's anything going on in a human brain that's impossible to duplicate by other physical processes. Which I guess means I don't see any reason we can't achieve strong AI eventually. But whether we're anywhere near actually achieving it is another question.)
So I guess what I'm really saying is that I think it'll be interesting to check back in ten years and see whether Kurzweil still thinks we're on target for 2029.
Kurzweil and Mitch Kapor made a Long Now bet on this topic in 2002, btw. So Kurzweil did apparently still think we were on target at least four years after setting his target date. See their wager on the Turing Test for details; see also Kurzweil's Why I Think I Will Win, Kapor's Why I Think I Will Win, and Kurzweil's responses to Kapor's argument.
You may also be interested in Jan-Willem Bats's Singularity FAQ for Dummies (predicts strong AI "well before 2030") and Charles Stross's entertaining Singularity! A Tough Guide to the Rapture of the Nerds, which tongue-in-cheek predicts human-level AI "some time just after 11:14am on Sunday, July 16th, 2017."
And this entry was originally intended to be about three lines long, and I'm now running late, so I'd better run.