
LeNet: Convolutional Neural Networks

What Happened

Yann LeCun demonstrated that convolutional neural networks (CNNs) could be trained with backpropagation to recognize handwritten digits. The refined LeNet-5 (1998) achieved 99%+ accuracy on MNIST and was deployed by banks to read checks — running in ATMs for years.
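The core operation LeNet introduced, the learned 2D convolution, can be sketched in a few lines. This is an illustrative toy (plain NumPy, one channel, a fixed hand-written kernel, no bias or training), not LeNet-5 itself:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer.

    Sketch only: real CNN layers add multiple channels, a bias term,
    learned kernels, and a nonlinearity.
    """
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel acts as a vertical-edge detector.
image = np.zeros((5, 5))
image[:, 2:] = 1.0                 # right half of the image is bright
kernel = np.array([[1.0, -1.0]])   # responds where brightness changes
edges = conv2d(image, kernel)
```

The key property this illustrates is weight sharing: the same small kernel is slid over every position, so the layer has few parameters and detects the same feature anywhere in the image.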

Why It Mattered

Invented the convolutional neural network architecture that would later revolutionize computer vision. LeNet proved neural networks could solve real commercial problems, even during the AI winter.

Key People

Yann LeCun

Organizations

AT&T Bell Labs

Related Milestones


AlexNet: The ImageNet Moment

AlexNet, a deep convolutional neural network, won the 2012 ImageNet competition (ILSVRC) by a staggering margin, reducing the top-5 error rate from 26.2% to 15.3%. Trained on two NVIDIA GTX 580 GPUs, it was dramatically deeper and more powerful than previous entries. The AI community was stunned.

Alex Krizhevsky · Ilya Sutskever · University of Toronto

ResNet: Deeper Than Ever

Microsoft Research introduced ResNet with skip connections (residual connections), enabling the training of networks with 152+ layers — 8x deeper than previous networks. ResNet won ImageNet 2015 with 3.57% error, surpassing human-level performance (5.1%) for the first time.

Kaiming He · Xiangyu Zhang · Microsoft Research
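The skip connection itself is a one-line idea: the block computes a transformation F(x) and adds the input x back before the final activation, so the identity mapping is trivially representable and gradients flow directly through the shortcut. A minimal fully-connected sketch (the paper's blocks use convolutions and batch normalization, omitted here):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(F(x) + x): a toy fully-connected residual block.

    Sketch of the ResNet idea only; w1/w2 are illustrative weight
    matrices, not the architecture from the paper.
    """
    out = relu(x @ w1)    # first transform
    out = out @ w2        # second transform: F(x)
    return relu(out + x)  # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal(4)

# With zero weights F(x) = 0, so the block reduces to relu(x):
# the shortcut makes "do nothing" easy for a layer to learn.
w_zero = np.zeros((4, 4))
y = residual_block(x, w_zero, w_zero)
```

This "easy identity" property is why stacking 152 such blocks does not degrade training the way stacking 152 plain layers does.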

Backpropagation Rediscovered

Rumelhart, Hinton, and Williams published 'Learning Representations by Back-propagating Errors' in Nature in 1986, demonstrating that backpropagation could train multi-layer neural networks effectively. The same year, the PDP (Parallel Distributed Processing) group published their influential two-volume work on connectionism.

David Rumelhart · Geoffrey Hinton · UC San Diego · Carnegie Mellon University
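The paper's contribution, propagating error gradients backward through the layers via the chain rule, can be sketched for a tiny two-layer sigmoid network. The data, layer sizes, and learning rate below are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3))                       # 8 samples, 3 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # toy binary target

W1 = rng.standard_normal((3, 4)) * 0.5
W2 = rng.standard_normal((4, 1)) * 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    # Forward pass through both layers.
    h = sigmoid(X @ W1)
    p = sigmoid(h @ W2)
    losses.append(float(np.mean((p - y) ** 2)))

    # Backward pass: apply the chain rule layer by layer.
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)      # sigmoid derivative: p * (1 - p)
    dW2 = h.T @ dz2
    dh = dz2 @ W2.T             # error propagated back to the hidden layer
    dz1 = dh * h * (1 - h)
    dW1 = X.T @ dz1

    # Gradient descent step.
    W2 -= 1.0 * dW2
    W1 -= 1.0 * dW1
```

The step `dh = dz2 @ W2.T` is the "back-propagating errors" of the title: the output layer's error signal is pushed backward through its own weights to assign credit to hidden units.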

TD-Gammon: Reinforcement Learning Plays Backgammon

Gerald Tesauro created TD-Gammon, a neural network that learned to play backgammon at expert level through self-play using temporal difference reinforcement learning. It discovered novel strategies that surprised human experts.

Gerald Tesauro · IBM
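The temporal-difference rule at the heart of TD-Gammon nudges a state's value toward the reward plus the discounted value of the state that follows. A tabular TD(0) sketch on a toy corridor (TD-Gammon itself learned a neural-network value function from self-play, not a lookup table, and the environment below is illustrative):

```python
import numpy as np

# States 0..4 in a corridor; stepping right from state 4 reaches the
# goal and ends the episode with reward 1. All other steps give 0.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)
rng = np.random.default_rng(0)

for _ in range(2000):
    s = 0
    while True:
        # Random policy: move right or left (bounded at 0).
        s_next = s + 1 if rng.random() < 0.5 else max(s - 1, 0)
        if s_next == n_states:                 # goal reached
            V[s] += alpha * (1.0 - V[s])       # TD error, terminal value 0
            break
        # TD(0) update: move V[s] toward r + gamma * V[s_next] (r = 0 here).
        V[s] += alpha * (gamma * V[s_next] - V[s])
        s = s_next
```

Because each update bootstraps from the next state's current estimate, values propagate backward from the goal over many episodes, so states closer to the goal end up with higher values.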

Shakey the Robot

Shakey was the first mobile robot that could reason about its actions. It combined computer vision, natural language processing, and planning to navigate rooms, push objects, and solve simple tasks. It used the A* search algorithm and STRIPS planner.

Charles Rosen · Nils Nilsson · Stanford Research Institute
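A* itself is compact: always expand the node with the smallest f = g + h, where g is the cost paid so far and h is an admissible estimate of the cost remaining. A grid-world sketch with a Manhattan-distance heuristic (Shakey's planning, via STRIPS, operated over symbolic actions rather than grids; the grid here is only for illustration):

```python
import heapq

def astar(grid, start, goal):
    """Shortest path length on a 0/1 grid (1 = obstacle), or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start)]      # entries are (f = g + h, g, node)
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                        # stale queue entry, skip
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],      # a wall forces a detour around the right side
        [0, 0, 0]]
path_len = astar(grid, (0, 0), (2, 0))
```

Because the Manhattan heuristic never overestimates the true remaining cost, A* is guaranteed to return an optimal path while expanding far fewer nodes than uninformed search.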
