Futurists pretty regularly get asked about artificial intelligence, often along with robots, and perhaps some consternation about a Terminator scenario. I will sometimes ask whether they’ve heard of the Singularity (unless I’m reasonably confident they have not), which I [over]simplify as the point at which machine intelligence surpasses human intelligence.
In prepping for a recent interview, I reflected on how my perception of AI has evolved over the years. My earliest memories are of doing research and forecasts on the future of AI as a consultant in the 1990s.
- One was that the definition of AI was a bit slippery. My operational definition, borrowed from someone (sorry, I’ve forgotten who), was along the lines of “what computer science hasn’t figured out yet.”
- Another was Doug Lenat’s project to codify common sense for AI applications. At the time, the verdict was that it was going to be really hard and take a long time. Sure enough, I found a March 2016 piece, “An AI with 30 Years’ Worth of Knowledge Finally Goes to Work.” It did take a long time!
We were fairly cautious in our AI forecasts, basically suggesting great long-term potential but noting that progress would be harder than expected in the short term. Not exactly going out on a limb there, but it proved reasonably accurate.
I often get asked to comment on new tech advances involving AI and robots, usually along the lines of “is this legit?” The answer is almost always the same: yes, we will be technologically capable of doing it, but whether people want it or accept it is another matter, which suggests its application lies in the more distant future.
Another thing we are asked to do is take a position on how AI will turn out. Do we believe in the Singularity or not? Will the Terminator scenario happen? Are humans obsolete? Will we eliminate disease and hunger? And so on. I’d like to make a point about foresight here (check that, my view on foresight here). I try not to take a position on this question for a couple of reasons. First, I’m really not sure…at all. Second, and much more important in my view, is that as a futurist it’s a little dangerous to take a public position on a future, in the sense that it may start, explicitly or implicitly, to cause you to advocate for it. Next thing you know, you’re a proponent, being asked to defend your position against the opposing view. It can be more subtle, too, like when you decide for no particular reason to root for a sports team, and by the end of the contest you’re furious that “your” team lost.

Taking a position also reinforces the “predictions” approach that the media prefers. In fairness to the media, who wants a waffly answer of “it depends” or “several plausible scenarios” compared to someone naming a date and time when something will happen? So I will continue to do my best to stay true to alternatives and not lock in…actually relatively easy in this case, given that I am indeed quite uncertain.

Andy Hines
Jim Dator says
Andy, could this be the source of your definition of Artificial Intelligence?
David Miller (a robotics specialist at the International Space University and the University of Oklahoma) says that “Artificial intelligence is whatever machines haven’t learned to do yet.”
I have quoted Miller in several things I have written, so maybe it came from there. Or maybe Miller was citing someone else?
Jim Dator
Andy Hines says
It sure could be, thanks! I remember using this back in the 1990s when I was working with Joe Coates and we did several pieces that included a look at AI.