• Fast and efficient deep neural network trained end to end using gradient-based optimization
• Trained from scratch using randomly initialized parameters — no transfer learning
• No data augmentation or pre-processing
• 100% unsupervised — dataset is just a flat, unordered list of raw images
• Learns straight from the pixels — no extra information, structure, examples, or relations between the images are provided
• No examples of valid variations are provided
• Learns without any feature engineering or heuristics
• Generalizes beyond the training data — does not memorize
• No baked-in knowledge of what constitutes valid or reasonable variations
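The training regime described above can be sketched as a toy unsupervised loop. The actual objective and architecture are not stated here, so the linear autoencoder and mean-squared reconstruction loss below are illustrative stand-ins, meant only to show gradient-based training from random initialization on a flat list of unlabeled images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: the real model's architecture and loss are not given,
# so a linear autoencoder with mean-squared reconstruction error is
# assumed here purely to illustrate the training regime.
D, Z = 48, 8                          # toy dims (real CIFAR-10: 32*32*3)
images = rng.random((256, D))         # flat, unordered list of "images"
W_enc = rng.normal(0.0, 0.1, (Z, D))  # randomly initialized parameters
W_dec = rng.normal(0.0, 0.1, (D, Z))

def recon_loss(W_enc, W_dec, X):
    """Mean squared reconstruction error over a batch of images."""
    X_hat = (W_dec @ (W_enc @ X.T)).T
    return float(np.mean((X_hat - X) ** 2))

loss_before = recon_loss(W_enc, W_dec, images)

lr = 0.01
for step in range(500):
    x = images[rng.integers(len(images))]  # raw pixels only, no labels
    z = W_enc @ x                          # encode
    err = W_dec @ z - x                    # reconstruction error
    # Gradients of 0.5 * ||W_dec @ z - x||^2 w.r.t. both weight matrices
    g_dec = np.outer(err, z)
    g_enc = np.outer(W_dec.T @ err, x)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss_after = recon_loss(W_enc, W_dec, images)
```

Note that no labels, augmentations, or hand-built features enter the loop: the only signal is the pixels themselves.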
The images presented here are from a model trained from scratch, unsupervised, on the CIFAR-10 Image Dataset (Learning Multiple Layers of Features from Tiny Images, Alex Krizhevsky, 2009). Each row represents a set of variations on an input image selected from the validation set. The first image on the left of each row is the original image from the dataset; all other images in the row are synthetically generated by our network.
The neural network accepts a single image as input and produces a single image as output. It has a stochastic element, so running the same image through the network multiple times generates multiple variations on the input. The variational outputs shown in each row were not hand-selected.
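A minimal sketch of such a stochastic forward pass, assuming a VAE-style Gaussian reparameterization (the text does not specify the mechanism, so the linear encoder/decoder maps and latent size below are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in network: the real architecture is not described,
# so linear encoder/decoder maps with Gaussian latent sampling are
# assumed here only to show where the stochasticity enters.
D, Z = 32 * 32 * 3, 64               # CIFAR-10 pixels, assumed latent size
W_mu = rng.normal(0.0, 0.01, (Z, D))
W_logvar = rng.normal(0.0, 0.01, (Z, D))
W_dec = rng.normal(0.0, 0.01, (D, Z))

def generate_variation(x, rng):
    """Single image in, single image out; stochastic in the middle."""
    mu = W_mu @ x                            # deterministic encoding
    std = np.exp(0.5 * (W_logvar @ x))       # per-dimension noise scale
    z = mu + std * rng.normal(size=Z)        # the stochastic element
    return W_dec @ z                         # decoded variation

x = rng.random(D)                            # stand-in "input image"
v1 = generate_variation(x, rng)              # same input, two passes...
v2 = generate_variation(x, rng)              # ...two different outputs
```

Because the noise is resampled on every call, each pass over the same input yields a distinct output of the same shape, which is how a row of variations is produced without any hand-selection.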