Google is tackling one of the major challenges in machine learning: recognising objects in an image. This work is part of its Deep Dream program, in which a neural network is asked to alter the images fed into it based on what the network's different layers detect.

The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which passes its results on to the next layer, and so on until the "output" layer is reached; the network's "answer" comes from this final layer. Because machines are not naturally good at understanding patterns, Google fed the network millions of input images and gradually tweaked its parameters until it arrived at the desired conclusions, i.e. until it could recognise a definite pattern from a very large set of training examples.
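The stacked-layer structure described above can be sketched in a few lines. This is a hypothetical toy (tiny dense layers with random weights, ReLU activations); real image classifiers like the one behind Deep Dream use convolutional layers and trained parameters, but the input-to-output flow is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers.
    return np.maximum(0.0, x)

# Ten stacked layers: each entry is a weight matrix mapping one
# layer's activations to the next (toy sizes, chosen for illustration).
layer_sizes = [64] * 11
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image_vector):
    """Feed the input through every layer; the last layer is the 'answer'."""
    activation = image_vector
    for w in weights:
        activation = relu(activation @ w)
    return activation

output = forward(rng.standard_normal(64))
```

Training would consist of nudging every matrix in `weights` so that `output` matches the desired label for each of the millions of example images.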

With the Deep Dream program, Google went a step further. Here, the team fed the system a photo and asked it to recognise any pattern it already knew from its earlier training, and then to enhance it. Pushed far enough, the network begins to identify objects that may not actually be present in the image at all, as in this photo of a blue sky, where the machine picked out the outlines of the clouds and matched them with certain animals.
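The "build on it" step works by running the usual optimization in reverse: instead of adjusting the network to fit the image, the image itself is adjusted so that whatever a chosen layer already responds to gets amplified. Below is a minimal sketch of that idea; the single linear "layer" with random weights is a stand-in for a pre-trained network layer (an assumption for illustration, not Google's actual model).

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((64, 32)) * 0.1   # stand-in for a trained layer
image = rng.standard_normal(64) * 0.01    # start from a nearly blank "photo"

def layer_response(img):
    # How strongly the layer fires on this image (half the squared L2 norm).
    return 0.5 * np.sum((img @ W) ** 2)

start = layer_response(image)
for _ in range(100):
    # Gradient of the response with respect to the *image*, computed
    # analytically for this linear layer: d/d_img 0.5*||img W||^2 = (img W) W^T.
    grad = (image @ W) @ W.T
    # Normalized gradient-ascent step: change the image, not the weights.
    image += 0.1 * grad / (np.linalg.norm(grad) + 1e-8)
```

After the loop, the layer fires far more strongly on the modified image than on the original; iterating this with layers that detect eyes, fur, or feathers is what makes animals emerge from clouds.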

source: research.googleblog.com

So software engineers Alexander Mordvintsev and Mike Tyka, along with software engineering intern Christopher Olah, started this program to visualize how neural networks carry out difficult classification tasks, to improve network architecture, and to check what the network has learned during training. An excerpt from their research blog post, "Inceptionism: Going Deeper into Neural Networks", states:

“Why is this important? Well, we train networks by simply showing them many examples of what we want them to learn, hoping they extract the essence of the matter at hand (e.g., a fork needs a handle and 2-4 tines), and learn to ignore what doesn’t matter (a fork can be any shape, size, color or orientation). But how do you check that the network has correctly learned the right features? It can help to visualize the network’s representation of a fork. 

source: research.googleblog.com

 

There are dumbbells in there alright, but it seems no picture of a dumbbell is complete without a muscular weightlifter there to lift them. In this case, the network failed to completely distill the essence of a dumbbell. Maybe it’s never been shown a dumbbell without an arm holding it. Visualization can help us correct these kinds of training mishaps.”

The team is also wondering whether the neural network can become a tool for artists by providing them with a new way to remix visual concepts, or maybe even shed light on the roots of the creative process in general.

Here is a series of photos by the world's first machine artist:

 

source: deepdreamgenerator.com
Just for fun, you can also try deepdreamgenerator.com, where you can sign up, upload your own image, and create a Deep Dream version of it.