
What are progressive neural nets?

This is an excerpt from a very interesting TechWorld article by Scott Carey.

General artificial intelligence, a machine capable of human-level expertise in multiple tasks, was the hot topic during the morning of the Rework Deep Learning Summit in London yesterday, with two of the UK’s best AI companies, Google DeepMind and SwiftKey, weighing in on the advances being made and how far we are from a truly human AI.

In his seminal piece about DeepMind for Wired magazine in June 2015, David Rowan wrote: “[DeepMind] showed that their artificial agent had learned to play 49 Atari 2600 video games when given only minimal background information. The deep Q-network had mastered everything from a martial-arts game to boxing and 3D car-racing games, often outscoring a professional (human) games tester.”
What this obscured was that the deep neural network was learning to master each game one at a time. The same network couldn’t, for example, switch between two different games and maintain its skill the way a human would.

Speaking yesterday morning, DeepMind research scientist Raia Hadsell explained why this is such a challenge as the company works towards creating a general artificial intelligence.
Traditionally, a deep learning network is trained through deep reinforcement learning (DeepRL): it is fed huge amounts of data and given time to learn how to perform a task, such as recognising the elements of an image, mastering Space Invaders, or beating Lee Sedol at Go.
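To make “trained through deep reinforcement learning” concrete, here is a minimal sketch of the temporal-difference update at the heart of a DQN-style agent, written in PyTorch. The network shape, the names (`QNet`, `td_update`) and the hyperparameters are illustrative assumptions, not DeepMind’s actual code.

```python
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Small fully connected Q-network: observation in, one Q-value per action out."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def td_update(q, q_target, optimizer, batch, gamma=0.99):
    """One temporal-difference step on a batch of (s, a, r, s', done) tensors.
    `a` is a LongTensor of action indices; `done` is a float 0/1 mask."""
    s, a, r, s2, done = batch
    q_sa = q(s).gather(1, a.unsqueeze(1)).squeeze(1)   # Q(s, a) for the actions taken
    with torch.no_grad():                              # bootstrapped target, no gradient
        target = r + gamma * (1 - done) * q_target(s2).max(dim=1).values
    loss = nn.functional.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Trained this way, one network becomes expert at one task, which is exactly the limitation Hadsell describes next.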
“These are each really powerful, and each of these can achieve a superhuman level at those tasks,” says Hadsell, “but each network is separate. There is no neural network in the world yet that can be trained to identify images, play Space Invaders and listen to music.”
“We can’t even learn multiple games. Let’s make it easier and say we want one neural network that can learn 10 different Atari games, as probably any self-respecting 10-year-old can do. This is extremely hard. If you try to learn all of them at once, the rules for playing Pong or Q*bert interfere with each other. If you try to learn one at a time that’s fine, but you forget the ones you learned first.”

Continual deep learning
So, unlike a person who never forgets how to ride a bike, a neural network doesn’t retain the ability to perform a task once it is taught a new one. This is where Hadsell’s area of expertise, continual deep learning, comes in.
Hadsell laid out what her research aims to achieve: “We would like to start with a task, get to expert performance on it, then move to sequential tasks, using the same neural network to get to expert performance on all of these tasks without catastrophic forgetting of the earlier ones, and including transfer from task to task. I want task one to have a positive transfer to task four if they are similar. I would like to play one, know that it is safely encoded in my neural network, and move to the next one.”
Hadsell and her team at DeepMind have been working on progressive neural networks to try to get closer to this version of AI than has previously been possible.
The key to progressive neural networks is how they are architected. Instead of a single neural network that performs a single function, DeepMind wants to link many such networks together.
Hadsell calls each neural network a column, and these are linked together “laterally at each layer and I am also going to freeze the weights [parameters of the model] so that when I train the second column I am going to learn how to use the features of column one but I’m not going to overwrite them.”
It’s all pretty technical, but the result would be a cluster of linked-up neural networks which would resemble the way a human brain learns and retains information.
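As a rough illustration of the column idea, here is a minimal two-column sketch in PyTorch: column one is trained on the first task and then frozen, and column two receives a lateral connection from column one’s hidden layer, so it can reuse those features without overwriting them. All names and sizes here are hypothetical, and the published progressive-nets architecture is deeper, with a lateral adapter at every layer.

```python
import torch
import torch.nn as nn

class ProgressiveTwoColumn(nn.Module):
    """Two columns; column 2 taps column 1's frozen features via a lateral link."""
    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        # Column 1: trained on task 1, then frozen.
        self.c1_l1 = nn.Linear(in_dim, hidden)
        self.c1_out = nn.Linear(hidden, out_dim)
        # Column 2: trained on task 2.
        self.c2_l1 = nn.Linear(in_dim, hidden)
        self.c2_out = nn.Linear(hidden, out_dim)
        # Lateral adapter: feeds column 1's hidden features into column 2's output layer.
        self.lateral = nn.Linear(hidden, out_dim, bias=False)

    def freeze_column_one(self):
        # Freezing the weights means task-2 training cannot overwrite task-1 features.
        for p in list(self.c1_l1.parameters()) + list(self.c1_out.parameters()):
            p.requires_grad = False

    def forward_task1(self, x):
        return self.c1_out(torch.relu(self.c1_l1(x)))

    def forward_task2(self, x):
        h1 = torch.relu(self.c1_l1(x))   # frozen task-1 features, reused
        h2 = torch.relu(self.c2_l1(x))   # fresh task-2 features
        return self.c2_out(h2) + self.lateral(h1)

net = ProgressiveTwoColumn(in_dim=8, hidden=32, out_dim=4)
# ...train column 1 on task 1 via forward_task1, then:
net.freeze_column_one()
# Train only the unfrozen parameters (column 2 plus the lateral adapter) on task 2:
optimizer = torch.optim.Adam(p for p in net.parameters() if p.requires_grad)
```

Because the optimizer only ever sees the new column’s parameters, the old column’s skill stays intact while its features remain available to the new task.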

Drawbacks
The main limitation of progressive neural networks comes down to scaling. Hadsell explained, in fairly technical terms: “As I keep on adding columns and adding these lateral connections, then I have a problem of scaling, and I will quickly end up with something that will be too large to be tractable, because the parameter growth is quadratic.”
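To see where the quadratic comes from: with k columns, every new column adds a lateral connection to each of the k − 1 columns before it, at every layer, so the lateral weights grow with k(k − 1)/2. A back-of-the-envelope count (the layer count and width below are purely illustrative):

```python
def approx_params(k: int, layers: int, width: int) -> int:
    """Rough parameter count for k progressive columns of equal width (bias terms ignored)."""
    within = k * layers * width * width                   # each column's own weights
    lateral = layers * width * width * k * (k - 1) // 2   # one lateral link per earlier column per layer
    return within + lateral

for k in (1, 2, 4, 8):
    print(k, approx_params(k, layers=3, width=256))
# The within-column term grows linearly in k; the lateral term grows as k*(k-1)/2,
# i.e. quadratically, and dominates once there are more than a few columns.
```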
The nature of the system does mean the scaling issue partly solves itself, though. Hadsell said: “Our analysis shows that a new column you learn, say the fifth column, or game, actually uses very little of its own capacity, because so many of the features have already been learned and transferred as being useful to this game (task).”
