How far has artificial intelligence progressed? Google’s latest study shows computers are able to interact with humans just fine when offering IT support, but get annoyed when asked about philosophical concepts they don’t understand.
In a paper entitled “A Neural Conversational Model”, Oriol Vinyals and Quoc Le of Google Brain, the company’s machine learning research project, tested a chatbot built on conversation modelling: the ability to predict what will come next in a conversation.
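To give a flavour of what “predicting what will come next” means, here is a toy sketch of the idea using simple word-pair counts. This is only an illustration with a made-up helpdesk transcript, not the paper’s actual method, which uses a sequence-to-sequence neural network:

```python
from collections import Counter, defaultdict

# Hypothetical miniature helpdesk transcript, used only for illustration.
corpus = (
    "hi how can i help you . "
    "my browser keeps crashing . "
    "have you tried restarting the browser . "
    "yes i have tried restarting the browser ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("restarting"))  # -> "the"
```

A neural model does the same job at a vastly larger scale, scoring whole candidate replies rather than single words, which is how it can answer helpdesk questions it has never seen verbatim.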
The researchers tried out actual IT helpdesk questions from humans, as well as conversation starters made up of dialogue taken from movie scripts, and the results were quite amusing.
The computer knows lots of facts about animals, inanimate objects and famous people, so it was able to answer general knowledge questions and identify that a cat has no wings and cannot fly.
It was also able to recognise that Egyptian queen Cleopatra was “regal” and that Argentinian footballer Lionel Messi “is a good player”.
But when asked about more abstract concepts, such as what the purpose of life is, the computer gave cryptic answers that showed it did not really understand the meaning of living, dying, existence or emotions.
Along with the billions of dollars Google is spending on various cutting-edge companies, in May of 2013 it launched a Quantum Artificial Intelligence Lab to study how quantum computing might advance machine learning and artificial intelligence.

For Google, this obsession with artificial intelligence and robotics may very well be about building better models of the world so they can make more accurate predictions of future outcomes. If Google wants to cure diseases, they need better models of how those diseases develop. If they want cars to drive themselves, they need better models of how transportation networks operate. If they want to create effective environmental policies, they need better models of what’s happening to our climate. And if Google wants to build a more useful search engine, they need to better understand you and how you interact with what’s on the web, so you get the best answer tailored specifically for you.

Or maybe they just want to create an autonomous robot army, but that sounds crazy. Or does it?