
Deep Belief Networks: Hinton Revives Deep Learning

What Happened

In 2006, Geoffrey Hinton, together with Simon Osindero and Yee-Whye Teh, published 'A Fast Learning Algorithm for Deep Belief Nets,' showing that deep neural networks could be trained effectively by greedily pre-training each layer as a restricted Boltzmann machine. This addressed the long-standing problem of training networks with many layers.
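The core of that pre-training step is contrastive divergence. Below is a toy, pure-Python sketch of a single binary RBM learning to reconstruct one 4-bit pattern; biases, mini-batches, and the stacking and fine-tuning stages of the full algorithm are omitted, and names like `cd1_step` and `reconstruct` are illustrative, not from the paper.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    # Draw a binary unit that is 1 with probability p.
    return 1 if random.random() < p else 0

def hidden_probs(W, v):
    n_v, n_h = len(W), len(W[0])
    return [sigmoid(sum(v[i] * W[i][j] for i in range(n_v))) for j in range(n_h)]

def visible_probs(W, h):
    n_v, n_h = len(W), len(W[0])
    return [sigmoid(sum(h[j] * W[i][j] for j in range(n_h))) for i in range(n_v)]

def cd1_step(W, v0, lr=0.2):
    # One contrastive-divergence (CD-1) update for a binary RBM with an
    # n_visible x n_hidden weight matrix W (biases omitted for brevity).
    h0_p = hidden_probs(W, v0)      # positive phase: hidden given the data
    h0 = [sample(p) for p in h0_p]
    v1 = visible_probs(W, h0)       # negative phase: reconstruction
    h1_p = hidden_probs(W, v1)
    for i in range(len(W)):
        for j in range(len(W[0])):
            # Data-driven minus model-driven correlation.
            W[i][j] += lr * (v0[i] * h0_p[j] - v1[i] * h1_p[j])

def reconstruct(W, v):
    return visible_probs(W, hidden_probs(W, v))

# Toy demo: a 4-visible, 3-hidden RBM learns a single pattern.
W = [[random.gauss(0, 0.1) for _ in range(3)] for _ in range(4)]
v0 = [1, 1, 0, 0]
err_before = sum(abs(a - b) for a, b in zip(v0, reconstruct(W, v0)))
for _ in range(500):
    cd1_step(W, v0)
err_after = sum(abs(a - b) for a, b in zip(v0, reconstruct(W, v0)))
```

In the full algorithm, the hidden activities of one trained RBM become the training data for the next, stacking into a deep belief net that is then fine-tuned as a whole.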

Why It Mattered

The paper reignited the deep learning revolution: deep networks weren't dead, they just needed better training techniques. It is widely regarded as the starting gun for modern deep learning.

Related Milestones

Backpropagation Rediscovered

In 1986, Rumelhart, Hinton, and Williams published 'Learning Representations by Back-propagating Errors' in Nature, demonstrating that backpropagation could train multi-layer neural networks effectively. The same year, the PDP (Parallel Distributed Processing) group published its influential two-volume work on connectionism.

David Rumelhart · Geoffrey Hinton · UC San Diego · Carnegie Mellon University
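The mechanics can be sketched in a few lines: the output error is propagated backward through the chain rule to adjust the hidden-layer weights. This toy pure-Python version (network size, learning rate, and seed are arbitrary choices, not from the paper) trains a one-hidden-layer sigmoid network on XOR, a task no single-layer network can learn.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR truth table.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
# Hidden and output weights; the last weight in each row is a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(H)]
w_o = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    xb = x + [1]  # append the bias input
    h = [sigmoid(sum(w * xi for w, xi in zip(row, xb))) for row in w_h]
    y = sigmoid(sum(w * hi for w, hi in zip(w_o, h + [1])))
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

def train_epoch(lr=2.0):
    for x, t in data:
        h, y = forward(x)
        # Output delta: derivative of the squared error through the sigmoid.
        d_o = (y - t) * y * (1 - y)
        # Hidden deltas: the output delta propagated back along w_o.
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j, hj in enumerate(h + [1]):
            w_o[j] -= lr * d_o * hj
        for j in range(H):
            for i, xi in enumerate(x + [1]):
                w_h[j][i] -= lr * d_h[j] * xi

loss_before = total_loss()
for _ in range(10000):
    train_epoch()
loss_after = total_loss()
```

After training, rounding the network's outputs typically recovers the XOR table, which is exactly what a single-layer perceptron cannot do.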

AlexNet: The ImageNet Moment

In 2012, AlexNet, a deep convolutional neural network, won the ImageNet competition by a staggering margin, cutting the top-5 error rate from roughly 26% to 15%. Trained on two NVIDIA GTX 580 GPUs, it was dramatically deeper and more powerful than previous entries. The AI community was stunned.

Alex Krizhevsky · Ilya Sutskever · University of Toronto

First Mathematical Model of Neural Networks

In 1943, McCulloch and Pitts published 'A Logical Calculus of the Ideas Immanent in Nervous Activity,' creating the first mathematical model of an artificial neuron. They showed that simple binary neurons connected in networks could, in principle, compute any function computable by a Turing machine.

Warren McCulloch · Walter Pitts · University of Chicago
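In modern notation their unit is a binary threshold gate, and a handful of them already implement Boolean logic. The weighted-threshold form below is a later simplification of their original excitatory/inhibitory scheme, used here purely for illustration.

```python
def mp_neuron(inputs, weights, threshold):
    # A McCulloch-Pitts unit fires (outputs 1) exactly when the
    # weighted sum of its binary inputs reaches the threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mp_neuron([a], [-1], 0)
# NAND is universal: composing such units yields any Boolean circuit, which
# is the sense in which networks of these neurons match Turing-computable
# functions (given unbounded memory).
NAND = lambda a, b: mp_neuron([a, b], [-1, -1], -1)
```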

The Perceptron

Frank Rosenblatt built the Mark I Perceptron, the first hardware implementation of an artificial neural network. It could learn to classify simple visual patterns. The New York Times reported it as an 'Electronic Brain' that the Navy expected would 'be able to walk, talk, see, write, reproduce itself and be conscious of its existence.'

Frank Rosenblatt · Cornell Aeronautical Laboratory

Perceptrons: The Book That Killed Neural Networks

In 1969, Minsky and Papert published 'Perceptrons,' proving mathematically that single-layer perceptrons cannot solve the XOR problem or other non-linearly separable tasks. While technically correct, the book was widely read as proof that neural networks were fundamentally limited, even though multi-layer networks can solve these problems.

Marvin Minsky · Seymour Papert · MIT
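The gap is easy to reproduce: Rosenblatt's learning rule drives the error to zero on any linearly separable task such as AND, but no weight vector it finds can ever classify XOR perfectly, because no single line separates XOR's classes. A small pure-Python sketch (function and variable names are illustrative):

```python
def predict(w, x1, x2):
    # Single-layer perceptron: a thresholded weighted sum (w[2] is the bias).
    return 1 if w[0] * x1 + w[1] * x2 + w[2] >= 0 else 0

def train_perceptron(samples, epochs=50, lr=1.0):
    # Rosenblatt's rule: nudge each weight by the prediction error.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for (x1, x2), t in samples:
            err = t - predict(w, x1, x2)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

def errors(w, samples):
    return sum(predict(w, x1, x2) != t for (x1, x2), t in samples)

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
```

Training on `AND_DATA` reaches zero errors within a few epochs, while on `XOR_DATA` the error count never reaches zero, no matter how long training runs.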
