[Image: Geoffrey Hinton, pioneer of backpropagation in neural networks]

Backpropagation Rediscovered

What Happened

In 1986, Rumelhart, Hinton, and Williams published 'Learning Representations by Back-propagating Errors' in Nature, demonstrating that backpropagation could effectively train multi-layer neural networks. The same year, the Parallel Distributed Processing (PDP) research group published its influential two-volume work on connectionism.

Why It Mattered

The paper revived neural network research after its decade-long exile. Backpropagation became the standard method for training multi-layer neural networks and underpinned the deep learning wave that followed.
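The core idea can be sketched in a few lines of pure Python: run a forward pass, compute the output error, propagate it backward through the weights, and take a gradient step. This is a minimal illustration (a 2-2-1 sigmoid network trained on XOR); the architecture, learning rate, and variable names are illustrative, not taken from the paper.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(0)
# weights: input->hidden (2x2 plus biases) and hidden->output (2 plus bias)
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0

def forward(x):
    h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(2)]
    y = sigmoid(sum(W2[j] * h[j] for j in range(2)) + b2)
    return h, y

def train_step(x, t, lr=0.5):
    """One gradient step on squared error E = (y - t)^2 / 2."""
    global b2
    h, y = forward(x)
    # output delta: dE/d(net_out) = (y - t) * sigmoid'(net_out)
    d_out = (y - t) * y * (1 - y)
    # hidden deltas: the output error propagated backward through W2
    d_hid = [d_out * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
    for j in range(2):
        W2[j] -= lr * d_out * h[j]
        for i in range(2):
            W1[j][i] -= lr * d_hid[j] * x[i]
        b1[j] -= lr * d_hid[j]
    b2 -= lr * d_out
    return 0.5 * (y - t) ** 2

# XOR: the classic task a single-layer perceptron cannot learn
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
losses = []
for epoch in range(2000):
    losses.append(sum(train_step(x, t) for x, t in data))
```

The key step is computing the hidden-layer deltas from the output delta: that backward pass is what lets gradient descent adjust weights in layers that have no direct error signal.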


Related Milestones

[Image: NETtalk neural network back-propagation diagram]
Research

NETtalk: Neural Network Learns to Speak

NETtalk was a neural network that learned to pronounce English text aloud, starting from babbling sounds and gradually becoming intelligible — mimicking how a child learns to speak. It captured public imagination and demonstrated backpropagation's potential.

Terrence Sejnowski · Charles Rosenberg · Johns Hopkins University
[Image: John Hopfield, inventor of Hopfield networks]
Research

Hopfield Networks: Physics Meets Neural Networks

Physicist John Hopfield showed that a type of recurrent neural network could serve as content-addressable memory, using concepts from statistical physics. The network would converge to stable states that could store and retrieve patterns — connecting neuroscience, physics, and computation.
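The storage-and-retrieval idea can be sketched briefly: Hebbian weights store a binary pattern, and repeated unit updates pull a corrupted probe back to the stored attractor. This is a minimal illustration of the mechanism, not Hopfield's original formulation (which also defined an energy function that guarantees convergence under asynchronous updates).

```python
def train(patterns, n):
    """Hebbian weight matrix: W[i][j] accumulates p[i] * p[j] over stored patterns."""
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, n, sweeps=10):
    """Update units one at a time until the state stops changing."""
    s = list(state)
    for _ in range(sweeps):
        changed = False
        for i in range(n):
            act = sum(W[i][j] * s[j] for j in range(n))
            new = 1 if act >= 0 else -1
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:
            break
    return s

# store one 8-unit +1/-1 pattern and recover it from a corrupted probe
pattern = [1, 1, -1, -1, 1, -1, 1, -1]
W = train([pattern], 8)
probe = list(pattern)
probe[0] = -probe[0]   # flip one bit to simulate a noisy cue
recovered = recall(W, probe, 8)
```

This is what "content-addressable" means in practice: the network is queried with a partial or noisy version of a memory and settles into the complete stored pattern.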

John Hopfield · Caltech
Research

Backpropagation Discovered (Initially Ignored)

Paul Werbos described the backpropagation algorithm in his 1974 PhD thesis: a method for training multi-layer neural networks by propagating errors backward through the network. However, in the anti-neural-network climate of the 1970s, the work went largely unnoticed.

Paul Werbos · Harvard University
[Image: Deep belief network architecture diagram]
Research

Deep Belief Networks: Hinton Revives Deep Learning

Geoffrey Hinton published 'A Fast Learning Algorithm for Deep Belief Nets,' showing that deep neural networks could be effectively trained by pre-training each layer as a restricted Boltzmann machine. This solved the long-standing problem of training networks with many layers.
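The building block of that layer-by-layer pre-training is the restricted Boltzmann machine, trained with contrastive divergence. Below is a minimal sketch of one CD-1 update (positive phase on the data, negative phase on a one-step reconstruction); the layer sizes, learning rate, and omission of bias terms are simplifications for illustration, not details from the paper.

```python
import math, random

random.seed(1)
NV, NH = 6, 3                      # visible and hidden layer sizes (illustrative)
W = [[random.uniform(-0.1, 0.1) for _ in range(NH)] for _ in range(NV)]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_hidden(v):
    """P(h_j = 1 | v) for each hidden unit, plus a binary sample."""
    p = [sigmoid(sum(v[i] * W[i][j] for i in range(NV))) for j in range(NH)]
    return p, [1 if random.random() < pj else 0 for pj in p]

def sample_visible(h):
    """P(v_i = 1 | h) for each visible unit, plus a binary sample."""
    p = [sigmoid(sum(W[i][j] * h[j] for j in range(NH))) for i in range(NV)]
    return p, [1 if random.random() < pi else 0 for pi in p]

def cd1_update(v0, lr=0.1):
    """One contrastive-divergence step: nudge W toward the data
    statistics and away from the reconstruction statistics."""
    ph0, h0 = sample_hidden(v0)      # positive phase
    _, v1 = sample_visible(h0)       # one-step reconstruction
    ph1, _ = sample_hidden(v1)       # negative phase
    for i in range(NV):
        for j in range(NH):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])

data = [1, 0, 1, 0, 1, 0]
for _ in range(100):
    cd1_update(data)
```

In a deep belief net, once one RBM is trained, its hidden activations become the "data" for the next RBM, stacking layers greedily before any global fine-tuning.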

Geoffrey Hinton · Simon Osindero · University of Toronto
[Image: LeNet-5 convolutional neural network architecture]
Research

LeNet: Convolutional Neural Networks

Yann LeCun demonstrated that convolutional neural networks (CNNs) could be trained with backpropagation to recognize handwritten digits. The refined LeNet-5 (1998) achieved 99%+ accuracy on MNIST and was deployed by banks to read checks — running in ATMs for years.
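The operation at the heart of a LeNet-style CNN is the convolution: a small kernel slides over the image and applies the same weights at every position, so the network learns translation-tolerant features with far fewer parameters than a fully connected layer. A minimal sketch of the forward operation (LeNet-5 itself also used subsampling layers and trained its kernels with backpropagation):

```python
def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with a small kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

# A hand-set vertical-edge kernel responds where intensity changes left to right
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
edge_kernel = [[-1, 1], [-1, 1]]
feature_map = conv2d(image, edge_kernel)   # peaks at the 0-to-1 boundary
```

In a trained CNN the kernel values are not hand-set as here; backpropagation learns them from data, which is what LeCun's work demonstrated.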

Yann LeCun · AT&T Bell Labs
