Marvin Minsky, co-author of Perceptrons

Perceptrons: The Book That Killed Neural Networks

What Happened

Minsky and Papert published 'Perceptrons,' mathematically proving that single-layer perceptrons could not solve the XOR problem or other non-linearly separable tasks. While technically correct, the book was widely interpreted as proving neural networks were fundamentally limited — though multi-layer networks could solve these problems.
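The core limitation is easy to see concretely: no single threshold unit can compute XOR, but adding one hidden layer fixes it. The sketch below hand-wires a two-layer threshold network (weights chosen by hand for illustration) that computes XOR as AND(OR(a, b), NAND(a, b)):

```python
# A minimal sketch: XOR from a two-layer network of threshold units.
# Weights and thresholds are hand-picked for illustration.

def step(x):
    return 1 if x >= 0 else 0

def xor_net(a, b):
    h1 = step(1.0 * a + 1.0 * b - 0.5)    # hidden unit 1: OR(a, b)
    h2 = step(-1.0 * a - 1.0 * b + 1.5)   # hidden unit 2: NAND(a, b)
    return step(1.0 * h1 + 1.0 * h2 - 1.5)  # output: AND(h1, h2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

No single line in the (a, b) plane separates {(0,0), (1,1)} from {(0,1), (1,0)}, which is why the single-layer version fails; the hidden layer re-maps the inputs into a space where the classes become separable.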

Why It Mattered

The book effectively killed neural network research for over a decade: funding dried up and researchers moved to other approaches. The damage was immense, and the book's conclusions were overgeneralized far beyond what the mathematics actually showed.


Related Milestones

Research

Backpropagation Discovered (Initially Ignored)

Paul Werbos described the backpropagation algorithm in his PhD thesis — a method for training multi-layer neural networks by propagating errors backward through the network. However, in the anti-neural-network climate of the 1970s, the work went largely unnoticed.
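The idea can be sketched in modern notation (a textbook-style reconstruction, not Werbos's original formulation): run the network forward, measure the error at the output, then propagate that error backward layer by layer via the chain rule to get gradients for every weight. Here a tiny 2-4-1 sigmoid network learns XOR by gradient descent:

```python
import numpy as np

# A modern sketch of backpropagation, not Werbos's original notation:
# one hidden layer, sigmoid units, squared-error loss, trained on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: push the output error back through the network
    d_out = (out - y) * out * (1 - out)     # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)      # error propagated to hidden layer
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

loss = float(((out - y) ** 2).mean())
print("final MSE:", loss)
```

The backward pass is exactly what single-layer perceptron learning lacked: a principled way to assign credit to hidden units, which is why its rediscovery in the 1980s revived the field.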

Paul Werbos, Harvard University
Research

SHRDLU: Natural Language Understanding

Terry Winograd created SHRDLU, a program that could understand and respond to English commands about a simulated 'blocks world.' Users could ask it to move objects, answer questions about their arrangement, and even understand pronouns and context within its limited domain.
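A toy illustration of the command-and-query loop (far simpler than SHRDLU, which built full parse trees and reasoned about its world; all names and patterns here are invented):

```python
import re

# A dict-based "blocks world": object -> what it currently sits on.
world = {"red block": "table", "green block": "red block"}

def command(text):
    """Match a command or question against simple patterns and act on the world."""
    text = text.lower()
    m = re.match(r"put the (.+) on the (.+)", text)
    if m:
        obj, dest = m.groups()
        world[obj] = dest
        return f"OK, {obj} is now on the {dest}."
    m = re.match(r"what is on the (.+)\?", text)
    if m:
        dest = m.group(1)
        on_it = [o for o, d in world.items() if d == dest]
        return ", ".join(on_it) if on_it else "nothing"
    return "I don't understand."

print(command("Put the green block on the table"))
print(command("What is on the table?"))
```

SHRDLU's real contribution was going well beyond such surface patterns: it maintained a model of the scene and dialogue state, which is what let it resolve pronouns and answer follow-up questions.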

Terry Winograd, MIT
Regulation

The Lighthill Report

British mathematician James Lighthill published a devastating critique of AI research, concluding that the field had failed to deliver on its promises. 'In no part of the field have the discoveries made so far produced the major impact that was then promised.' The report led to massive funding cuts for AI research in the UK.

James Lighthill, UK Science Research Council
ELIZA chatbot conversation example
Research

ELIZA: The First Chatbot

Joseph Weizenbaum created ELIZA, a program that simulated a Rogerian psychotherapist using simple pattern matching. Despite being purely rule-based with no understanding, users became emotionally attached to it and insisted it truly understood them — a phenomenon Weizenbaum found deeply disturbing.
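The mechanism really was this simple. A minimal ELIZA-style sketch (in the spirit of, but far cruder than, Weizenbaum's DOCTOR script; the rules below are invented examples): match the input against patterns, reflect pronouns, and slot the captured fragment into a canned response.

```python
import re

# Pronoun reflection so "my exams" comes back as "your exams".
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

# (pattern, response template) pairs; first match wins.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(text):
    text = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # fallback when nothing matches

print(respond("I am worried about my exams"))
```

That users attributed understanding to a few dozen such rules is precisely what disturbed Weizenbaum.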

Joseph Weizenbaum, MIT
Artificial neural network diagram representing McCulloch-Pitts neuron model
Research

First Mathematical Model of Neural Networks

McCulloch and Pitts published 'A Logical Calculus of Ideas Immanent in Nervous Activity,' creating the first mathematical model of an artificial neuron. They showed that simple binary neurons connected in networks could, in principle, compute any function computable by a Turing machine.
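A McCulloch-Pitts unit is just a weighted sum compared against a threshold, with binary inputs and output. The sketch below (with illustrative weight and threshold choices) builds AND, OR, and NOT this way; since those gates suffice for any Boolean circuit, this is the sense in which networks of such neurons can, given unbounded resources, compute anything a Turing machine can:

```python
# A McCulloch-Pitts threshold unit: fires (1) iff the weighted input
# sum reaches the threshold. Weights/thresholds here are illustrative.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Basic logic gates as single threshold units.
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a: mp_neuron((a,), (-1,), 0)

print([AND(1, 1), OR(0, 1), NOT(0)])
```

Note what is missing: the weights are fixed by hand. Learning them from data is the problem the perceptron, and later backpropagation, set out to solve.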

Warren McCulloch, Walter Pitts, University of Chicago
