Backpropagation Discovered (Initially Ignored)

What Happened

Paul Werbos described the backpropagation algorithm in his 1974 PhD thesis: a method for training multi-layer neural networks by propagating errors backward through the network. In the anti-neural-network climate of the 1970s, however, the work went largely unnoticed.

Why It Mattered

The thesis showed that multi-layer neural networks could, in principle, be trained end-to-end. Its long neglect became a cautionary example of how important ideas can stall for years when a field turns against an entire line of research.

Related Milestones


Perceptrons: The Book That Killed Neural Networks

Minsky and Papert published 'Perceptrons,' mathematically proving that single-layer perceptrons could not solve the XOR problem or other non-linearly separable tasks. While technically correct, the book was widely interpreted as proving neural networks were fundamentally limited — though multi-layer networks could solve these problems.

Marvin Minsky · Seymour Papert · MIT
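Both halves of the book's argument fit in a few lines of Python: no single threshold unit can compute XOR, while a two-layer network with hand-chosen weights (illustrative values, not from the book) does so easily.

```python
def step(z):
    """Threshold activation: fire iff the weighted sum is positive."""
    return 1 if z > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# No single unit step(w1*x1 + w2*x2 + b) can match XOR: the four cases demand
#   b <= 0,  w1 + b > 0,  w2 + b > 0,  w1 + w2 + b <= 0,
# and adding the middle two gives w1 + w2 + b > -b >= 0, a contradiction.

# Two layers escape the limit: hidden units compute OR and NAND, the output ANDs them.
def two_layer_xor(x1, x2):
    h_or = step(x1 + x2 - 0.5)      # fires unless both inputs are 0
    h_nand = step(1.5 - x1 - x2)    # fires unless both inputs are 1
    return step(h_or + h_nand - 1.5)

print(all(two_layer_xor(*x) == y for x, y in XOR.items()))  # True
```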

Backpropagation Rediscovered

Rumelhart, Hinton, and Williams published 'Learning Representations by Back-propagating Errors' in Nature (1986), demonstrating that backpropagation could train multi-layer neural networks effectively. The same year, the PDP (Parallel Distributed Processing) group published their influential two-volume work on connectionism.

David Rumelhart · Geoffrey Hinton · UC San Diego · Carnegie Mellon University
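The technique the paper demonstrated can be sketched in NumPy: a two-layer sigmoid network learns XOR by pushing the output error backward through the chain rule. Hyperparameters and network size here are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: error times the sigmoid derivative, propagated layer by layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(float(np.mean((out - y) ** 2)))  # small after training
```

The key step is `d_h = (d_out @ W2.T) * h * (1 - h)`: the output error is carried backward through the second weight matrix, which is exactly what a single-layer learner cannot do.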

First Mathematical Model of Neural Networks

McCulloch and Pitts published 'A Logical Calculus of the Ideas Immanent in Nervous Activity,' creating the first mathematical model of an artificial neuron. They showed that simple binary neurons connected in networks could, in principle, compute any function computable by a Turing machine.

Warren McCulloch · Walter Pitts · University of Chicago
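The model itself is simple enough to state directly: a unit fires iff its weighted input sum reaches a threshold. A minimal sketch (the gate weights are illustrative choices):

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: output 1 iff the weighted input sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Basic logic gates, each as a single unit. Composing such gates yields any
# Boolean circuit, which is the combinational core of the universality claim
# (full Turing equivalence additionally needs loops and memory in the network).
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a: mp_neuron([a], [-1], threshold=0)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```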

The Perceptron

Frank Rosenblatt built the Mark I Perceptron, the first hardware implementation of an artificial neural network. It could learn to classify simple visual patterns. The New York Times reported it as an 'Electronic Brain' that the Navy expected would 'be able to walk, talk, see, write, reproduce itself and be conscious of its existence.'

Frank Rosenblatt · Cornell Aeronautical Laboratory
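What made the Mark I learn was Rosenblatt's weight-update rule: nudge each weight by the prediction error times its input. This sketch trains a single unit on the linearly separable AND function; the learning rate and epoch count are illustrative, not hardware parameters.

```python
def predict(w, b, x1, x2):
    """Threshold unit: fire iff the weighted sum is positive."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

def train_perceptron(data, epochs=20, lr=1.0):
    """Perceptron rule: w += lr * (target - prediction) * input."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            err = target - predict(w, b, x1, x2)
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND_DATA)
print([predict(w, b, *x) for x, _ in AND_DATA])  # [0, 0, 0, 1]
```

For linearly separable data like this, the perceptron convergence theorem guarantees the rule finds a separating line in finitely many updates; for XOR-like data it never converges, which is the limitation Minsky and Papert later formalized.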

Hopfield Networks: Physics Meets Neural Networks

Physicist John Hopfield showed that a type of recurrent neural network could serve as content-addressable memory, using concepts from statistical physics. The network would converge to stable states that could store and retrieve patterns — connecting neuroscience, physics, and computation.

John Hopfield · Caltech
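The construction is compact: Hebbian outer-product weights store patterns as attractors, and repeated threshold updates pull a corrupted input back to the nearest stored state. A small sketch with two example patterns (chosen for illustration):

```python
import numpy as np

# Two stored patterns over 8 bipolar (+1/-1) units
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
n = patterns.shape[1]

# Hebbian learning: sum of outer products, with no self-connections
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)

def recall(state, sweeps=10):
    """Asynchronously update units until the network settles into an attractor."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

noisy = patterns[0].copy()
noisy[0] *= -1                      # corrupt one bit
print((recall(noisy) == patterns[0]).all())  # True: the stored pattern is retrieved
```

Each update can only lower the network's energy, which is why the dynamics settle into stable states; this energy-function view is the statistical-physics connection the milestone describes.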
