Iterative_Places205-GoogLeNet_12.jpg.sm.jpg
building-dreams.png
Iterative_Places205-GoogLeNet_4.jpg.sm.jpg
Iterative_Places205-GoogLeNet_19.jpg.sm.jpg

Statement from the Creator


"Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little about why certain models work and others don't. So let's take a look at some simple techniques for peeking inside these networks."


"We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer."
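The stacked-layer picture described above can be sketched in a few lines of numpy. This is an illustrative toy, not a trained classifier: the layer sizes, the random weights, and the `tanh` activation are all assumptions chosen for brevity.

```python
import numpy as np

# Minimal sketch of a stacked feed-forward network: an image is fed
# into the input layer, each layer passes its activations to the next,
# and the final output layer gives the network's "answer".
# Layer sizes and random weights are illustrative assumptions.
rng = np.random.default_rng(0)

layer_sizes = [64, 32, 16, 10]  # input -> hidden -> hidden -> output
weights = [rng.standard_normal((m, n)) * 0.1
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(image):
    activation = image.reshape(-1)            # flatten into the input layer
    for w in weights:
        activation = np.tanh(activation @ w)  # each layer feeds the next
    return activation                         # the output layer's "answer"

image = rng.random((8, 8))                    # stand-in for an input image
scores = forward(image)
predicted_class = int(np.argmax(scores))
```

In a real classifier the weights would be adjusted over millions of training examples until `scores` reliably puts the largest value on the correct class; here they are simply random.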


"One way to visualize what goes on is to turn the network upside down and ask it to enhance an input image in such a way as to elicit a particular interpretation."
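"Turning the network upside down" means holding the trained weights fixed and adjusting the *input image* by gradient ascent so that a chosen output grows. A minimal sketch, assuming a single linear layer as a stand-in for a full deep network (so the gradient has a closed form):

```python
import numpy as np

# Enhance an input image so it elicits a particular interpretation:
# gradient ascent on one class score, with the weights held fixed.
# The one-layer linear "network" is an illustrative assumption.
rng = np.random.default_rng(1)
w = rng.standard_normal((64, 10))        # fixed "trained" weights
target_class = 3                         # the interpretation we want to elicit

image = rng.random(64)                   # start from an arbitrary image
initial_score = image @ w[:, target_class]

for _ in range(100):
    grad = w[:, target_class]            # d(score)/d(image) for a linear layer
    image = image + 0.1 * grad           # nudge pixels to raise that score

final_score = image @ w[:, target_class]
```

In a deep network the gradient with respect to the image would come from backpropagation rather than a closed form, but the loop is the same: repeatedly move the pixels in the direction that increases the chosen output.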


"Instead of exactly prescribing which feature we want the network to amplify, we can also let the network make that decision. In this case we simply feed the network an arbitrary image or photo and let the network analyze the picture. We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations."
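The "enhance whatever you detected" variant can be sketched the same way: instead of maximizing one prescribed output, do gradient ascent on the overall magnitude (L2 norm) of a chosen layer's activations. Again a single linear layer is an assumption standing in for the chosen layer of a deep network; in a real network the gradient would come from backpropagation through all preceding layers.

```python
import numpy as np

# Pick a layer and amplify whatever it detected: gradient ascent on
# 0.5 * ||activations||^2 of the chosen layer, adjusting the image.
# The linear layer below is an illustrative assumption.
rng = np.random.default_rng(2)
chosen_layer_w = rng.standard_normal((64, 32)) * 0.1

def layer_activation(image):
    return image @ chosen_layer_w

image = rng.random(64)                     # an arbitrary starting photo
before = np.sum(layer_activation(image) ** 2)

for _ in range(50):
    act = layer_activation(image)
    grad = chosen_layer_w @ act            # d(0.5*||act||^2)/d(image)
    image += 0.05 * grad / (np.linalg.norm(grad) + 1e-8)

after = np.sum(layer_activation(image) ** 2)
```

Choosing a lower layer for `layer_activation` amplifies edge- and stroke-like responses; a higher layer amplifies more abstract, object-like features.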