Google closer to developing human-like intelligence

Artificial Intelligence

Computers will have developed “common sense” within a decade and we could be counting them among our friends not long afterwards, one of the world’s leading AI scientists has predicted.

Professor Geoff Hinton, who was hired by Google two years ago to help develop intelligent operating systems, said that the company is on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.

The researcher told the Guardian that Google is working on a new type of algorithm designed to encode thoughts as sequences of numbers – something he described as “thought vectors”.

Although the work is at an early stage, he said there is a plausible path from the current software to a more sophisticated version that would have something approaching human-like capacity for reasoning and logic. “Basically, they’ll have common sense.”

The idea that thoughts can be captured and distilled down to cold sequences of digits is controversial, Hinton said. “There’ll be a lot of people who argue against it, who say you can’t capture a thought like that,” he added. “But there’s no reason why not. I think you can capture a thought by a vector.”

Hinton, who is due to give a talk at the Royal Society in London on Friday, believes that the “thought vector” approach will help crack two of the central challenges in artificial intelligence: mastering natural, conversational language, and the ability to make leaps of logic.

He painted a picture of the near future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

In the past two years, scientists have already made significant progress in overcoming this challenge.

Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar.

Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.

The technique works by assigning each word a set of numbers (a vector) that defines its position in a theoretical “meaning space”, or cloud. A sentence can be seen as a path between these words, which can in turn be distilled down to its own set of numbers: a thought vector.

The “thought” serves as the bridge between the two languages, because it can be transferred into the French version of the meaning space and decoded back into a new path between words.
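As a minimal sketch of that encode-transfer-decode idea – using made-up two-dimensional vectors, a crude mean-then-nearest-neighbour decode, and hypothetical word lists, not Google's actual method – the process looks something like this:

```python
import numpy as np

# Hypothetical toy "meaning spaces": each word gets a small vector.
# Real systems learn these positions from data; these numbers are illustrative.
english = {
    "the": np.array([0.1, 0.3]),
    "cat": np.array([0.9, 0.2]),
    "sleeps": np.array([0.4, 0.8]),
}
french = {
    "le": np.array([0.1, 0.3]),
    "chat": np.array([0.9, 0.2]),
    "dort": np.array([0.4, 0.8]),
}

def encode(sentence, space):
    """Distil a sentence (a path between word vectors) into one 'thought vector'.

    Here we simply average the word vectors; real systems use a learned encoder.
    """
    return np.mean([space[w] for w in sentence], axis=0)

def decode(thought, space):
    """Map a thought vector back to the nearest word in the target space."""
    return min(space, key=lambda w: np.linalg.norm(space[w] - thought))

# Encode an English sentence, then decode the same vector in the French cloud.
thought = encode(["the", "cat", "sleeps"], english)
nearest = decode(thought, french)
```

A real decoder would emit a whole sentence rather than a single nearest word, but the key point survives even in this toy: the thought vector itself is language-neutral, so the same numbers can be read out in either cloud.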

The key is working out which numbers to assign to each word in a language – this is where deep learning comes in. Initially, the positions of words within each cloud are random, and the translation algorithm begins training on a dataset of translated sentences.

At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.
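A toy version of that feedback loop – assuming a hypothetical list of translated word pairs and the simplest possible error signal (pull matched words together), rather than a real translation objective – might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parallel data: (English word, French word) pairs.
pairs = [("cat", "chat"), ("dog", "chien"), ("house", "maison")]

# As the article describes, positions start out random.
en = {w: rng.normal(size=2) for w, _ in pairs}
fr = {w: rng.normal(size=2) for _, w in pairs}

lr = 0.1  # learning rate: how far each refinement nudges a position
for step in range(500):
    for e, f in pairs:
        # Error signal: the gap between vectors that should coincide.
        err = en[e] - fr[f]
        # Nudge both positions to shrink that gap.
        en[e] -= lr * err
        fr[f] += lr * err

# After training, matched words sit close together in the shared cloud.
gap = np.linalg.norm(en["cat"] - fr["chat"])
```

Each pass shrinks the remaining error by a constant factor, which is the essence of the refinement loop; real systems compute the error from whole mistranslated sentences and propagate it back through a deep network.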

Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
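That vector arithmetic can be sketched with hand-picked toy vectors (real embeddings learn this kind of structure from text; the numbers below are chosen so the analogy holds):

```python
import numpy as np

# Toy 2-D word vectors: one axis roughly "country vs capital",
# the other roughly "which country". Illustrative, not learned.
vec = {
    "Paris":  np.array([1.0, 1.0]),
    "France": np.array([1.0, 0.0]),
    "Italy":  np.array([2.0, 0.0]),
    "Rome":   np.array([2.0, 1.0]),
    "Berlin": np.array([3.0, 1.0]),
}

# Paris - France + Italy should land on (or near) Rome.
query = vec["Paris"] - vec["France"] + vec["Italy"]

# The nearest word, excluding the query terms, answers the analogy.
candidates = [w for w in vec if w not in {"Paris", "France", "Italy"}]
answer = min(candidates, key=lambda w: np.linalg.norm(vec[w] - query))
# answer == "Rome"
```

Excluding the query words from the candidates is standard practice in analogy tests, since the nearest vector to the query is often one of the inputs themselves.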

Dr Hermann Hauser, a Cambridge computer scientist and entrepreneur, said that Hinton and others could be on the way to solving what programmers call the “genie problem”.

“With machines at the moment, you get exactly what you wished for,” Hauser said. “The problem is we’re not very good at wishing for the right thing. When you look at humans, the recognition of individual words isn’t particularly impressive, the important bit is figuring out what the guy wants.”

“Hinton is our number one guru in the world on this at the moment,” he added.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits.”

A flirtatious program would “probably be quite simple” to create, however. “It probably wouldn’t be subtly flirtatious to begin with, but it would be capable of saying borderline politically incorrect phrases,” he said.

Many of the recent advances in AI have sprung from the field of deep learning, which Hinton has been working on since the 1980s. At its core is the idea that computer programs learn how to carry out tasks by training on huge datasets, rather than being taught a set of inflexible rules.

With the advent of huge datasets and powerful processors, the approach pioneered by Hinton decades ago has come into the ascendancy and underpins the work of Google’s artificial intelligence arm, DeepMind, and similar programs of research at Facebook and Microsoft.

Hinton played down concerns about the dangers of AI raised by those such as the American entrepreneur Elon Musk, who has described the technologies under development as humanity’s greatest existential threat. “The risk of something seriously dangerous happening is in the five year timeframe. Ten years at most,” Musk warned last year.

“I’m more scared about the things that have already happened,” said Hinton in response. “The NSA is already bugging everything that everybody does. Each time there’s a new revelation from Snowden, you realise the extent of it.”

“I am scared that if you make the technology work better, you help the NSA misuse it more,” he added. “I’d be more worried about that than about autonomous killer robots.”

 

Source:  theguardian.com

Artificial Intelligence By 2020

 

“Artificial Intelligence Will Leapfrog Humans By 2020” – Says SciFi Great

Artificial intelligence will surpass human intelligence after 2020, predicts Vernor Vinge, a world-renowned pioneer in AI, who has warned about the risks and opportunities that an electronic super-intelligence would offer to mankind. “It seems plausible that with technology we can, in the fairly near future,” says the scifi legend, “create (or become) creatures who surpass humans in every intellectual and creative dimension. Events beyond such an event — such a singularity — are as unimaginable to us as opera is to a flatworm.”

“The Singularity” is seen by some as the end point of our current culture, when the ever-accelerating evolution of technology finally overtakes us and changes everything. It’s been represented as everything from the end of all life to the beginning of a utopian age, which you might recognize as the endgames of most other religious beliefs.

While definitions of the Singularity are as varied as people’s fantasies of the future, most agree – for one very obvious reason – that artificial intelligence will be the turning point. Once an AI is even the tiniest bit smarter than us, it’ll be able to learn faster, and we’ll simply never be able to keep up. This will render us utterly obsolete in evolutionary terms – or at least in evolutionary terms as presented by people who view academic intelligence as the only possible factor. Because that’s how people who imagine the future while talking online wish the world worked, ignoring things like “Hey, this is just a box” and “What does this power switch do?”

There’s no question that technology is progressing at an ever-accelerating rate – we’ve generated more world-changing breakthroughs in the last fifty years than in the entirety of previous human history combined. The issue is the zealous fervor with which some see the Singularity as the end of all previous civilization, a “get out of all previous problems” card that ignores the most powerful factor in the world: human stupidity.
We’ve already invented things which would have been apocalyptic agents of the devil in any previous age. We can talk with anyone all around the world, and we use it to try to sell insurance. We tamed light itself into a coherent beam utterly unseen in nature, and use it to throw very sharp, very complicated rocks into other people’s heads. We built an insanely complex computer web spanning the planet, and use it to pretend to be Nigerian.

Of course we use these things for good as well, but those who think the invention of artificial minds will end our idiocy are far overestimating their abilities. We turned production-line processing, international economics, world-spanning transport and professional design tools into “Billy The Singing Sea Bass” statues at $19.99 retail. An AI would have to be Terminator Jesus to even begin to change our tune. If an AI ever does exist, it’s going to wonder why it’s being asked for new ways to sell Cialis without using the word “penis” or “Cialis”.

Pretty much every prediction of when the so-called “Singularity” will come depends on constant increases – ignoring how, for the first time ever, we are actually reaching the limits of what can actually be done. These aren’t the idiotic “the world is flat” limits that we sailed past (and back around again) once someone grew the balls to try it; these are actual, factual “you can’t build it any smaller because atoms are only so big” limits. Of course we’re going to overcome those, because we’re awesome, but trying to timetable it is like writing a schedule for imagination.

So whatever you think the Singularity is, it’s going to happen. No question. Entire international panels have been set up to study the potentially lethal effects of certain advances, but no one would dream of stopping research – and even if they did, they couldn’t stop other people.
But don’t be surprised when the main result of artificial intelligence research isn’t a utopian society or utterly authentic sex-bots, but the fact that your spam filter doesn’t work anymore.