AI Timeline

The complete history of artificial intelligence, from first principles to the agentic era.

81 milestones · 11 eras · 1943–2026


1943–1955

Theoretical Foundations

The mathematical and philosophical groundwork for artificial intelligence was laid by visionaries who dared to ask: can machines think?

4 milestones
Research
Artificial neural network diagram representing McCulloch-Pitts neuron model

First Mathematical Model of Neural Networks

McCulloch and Pitts published 'A Logical Calculus of the Ideas Immanent in Nervous Activity,' creating the first mathematical model of an artificial neuron. They showed that simple binary neurons connected in networks could, in principle, compute any function computable by a Turing machine.
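
The unit they described can be sketched as a binary threshold neuron. The weights and thresholds below are illustrative choices, not the paper's original notation:

```python
def mp_neuron(inputs, weights, threshold):
    """Binary threshold unit: fire (output 1) iff the weighted input sum reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Basic logic gates fall out of particular weight/threshold choices —
# which is why networks of these units can implement arbitrary Boolean circuits.
AND = lambda a, b: mp_neuron([a, b], [1, 1], 2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], 1)
NOT = lambda a: mp_neuron([a], [-1], 0)

print([AND(1, 1), OR(0, 1), NOT(1)])  # → [1, 1, 0]
```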

Research
Portrait of Alan Turing

Turing's 'Computing Machinery and Intelligence'

Alan Turing published his landmark paper in the journal Mind, proposing the 'Imitation Game' (now known as the Turing Test) as a way to evaluate machine intelligence. He asked: 'Can machines think?' and argued the question itself was meaningless — what mattered was whether a machine could convincingly imitate human conversation.

Research

Samuel's Checkers Program

Arthur Samuel created a checkers-playing program at IBM that could learn from experience, improving its play over time. He coined the term 'machine learning' to describe programs that learn without being explicitly programmed.

Research

Logic Theorist: The First AI Program

Newell and Simon created the Logic Theorist, often called the first AI program. It could prove mathematical theorems from Whitehead and Russell's Principia Mathematica — and even found a more elegant proof than the original for one theorem. It debuted at the 1956 Dartmouth Conference.

1956–1969

The Birth of AI

AI was officially born as a field, and early programs showed surprising promise — leading to ambitious predictions about machine intelligence.

9 milestones
Research
John McCarthy, organizer of the Dartmouth Conference

The Dartmouth Conference

A two-month workshop at Dartmouth College where the term 'Artificial Intelligence' was officially coined. The proposal stated: 'Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.' This gathering brought together the founders of the field.

Research
Frank Rosenblatt, inventor of the Perceptron

The Perceptron

Frank Rosenblatt built the Mark I Perceptron, the first hardware implementation of an artificial neural network. It could learn to classify simple visual patterns. The New York Times reported it as an 'Electronic Brain' that the Navy expected would 'be able to walk, talk, see, write, reproduce itself and be conscious of its existence.'
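
The learning rule behind the Mark I can be sketched in a few lines (software stand-in, not Rosenblatt's hardware; the OR task, learning schedule, and zero initialization are illustrative):

```python
def predict(w, b, x):
    """Fire iff the weighted sum plus bias is non-negative."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0

def train_perceptron(data, passes=10):
    """Perceptron rule: on each mistake, nudge weights toward the correct answer."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(passes):
        for x, target in data:
            error = target - predict(w, b, x)   # +1, 0, or -1
            w = [wi + error * xi for wi, xi in zip(w, x)]
            b += error
    return w, b

# A linearly separable task (OR) — the kind of pattern the perceptron can learn.
OR_DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(OR_DATA)
print([predict(w, b, x) for x, _ in OR_DATA])  # → [0, 1, 1, 1]
```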

Infrastructure
Lisp programming language logo

LISP Programming Language

John McCarthy created LISP (LISt Processing), a programming language designed specifically for AI research. Its features — recursion, dynamic typing, garbage collection, and homoiconicity — were decades ahead of their time.

Product
Diagram of an industrial robot arm like Unimate

Unimate: First Industrial Robot

The first Unimate robot was installed on a General Motors assembly line in New Jersey, performing die-casting and spot-welding tasks. It was the first industrial robot to replace humans on a production line.

Research

DENDRAL: The First Expert System

DENDRAL automated chemical structure determination from mass spectrometry data. It used heuristic rules from domain experts to solve problems that normally required PhD-level expertise. Its successor Meta-DENDRAL could even generate new rules automatically.

Research
ELIZA chatbot conversation example

ELIZA: The First Chatbot

Joseph Weizenbaum created ELIZA, a program that simulated a Rogerian psychotherapist using simple pattern matching. Despite being purely rule-based with no understanding, users became emotionally attached to it and insisted it truly understood them — a phenomenon Weizenbaum found deeply disturbing.

Research

SHRDLU: Natural Language Understanding

Terry Winograd created SHRDLU, a program that could understand and respond to English commands about a simulated 'blocks world.' Users could ask it to move objects or answer questions about their arrangement, and it could even resolve pronouns and context within its limited domain.

Cultural
HAL 9000 from 2001: A Space Odyssey

2001: A Space Odyssey — HAL 9000

Stanley Kubrick's film introduced HAL 9000, an AI that could speak naturally, read lips, play chess, and ultimately turn against its human crew. HAL became the defining pop-culture image of artificial intelligence for generations.

Research
Shakey the robot at the Computer History Museum

Shakey the Robot

Shakey was the first mobile robot that could reason about its actions. It combined computer vision, natural language processing, and planning to navigate rooms, push objects, and solve simple tasks. It used the A* search algorithm and STRIPS planner.
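
The A* idea Shakey introduced still underpins modern pathfinding; a minimal 4-connected grid version (hypothetical map, Manhattan-distance heuristic) looks like:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; '#' cells are blocked. Returns shortest path length or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # admissible Manhattan heuristic
    frontier = [(h(start), 0, start)]                        # priority = cost so far + heuristic
    best = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] != '#':
                ng = g + 1
                if ng < best.get((r, c), float('inf')):
                    best[(r, c)] = ng
                    heapq.heappush(frontier, (ng + h((r, c)), ng, (r, c)))
    return None

maze = ["...#.",
        ".#.#.",
        ".#..."]
print(astar(maze, (0, 0), (0, 4)))  # → 8 (must detour around the walls)
```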

1970–1979

First AI Winter

Reality failed to match the hype. Funding dried up, criticism mounted, and AI entered its first period of disillusionment.

3 milestones
Research
Marvin Minsky, co-author of Perceptrons

Perceptrons: The Book That Killed Neural Networks

Minsky and Papert published 'Perceptrons,' mathematically proving that single-layer perceptrons could not solve the XOR problem or other non-linearly separable tasks. While technically correct, the book was widely interpreted as proving neural networks were fundamentally limited — though multi-layer networks could solve these problems.

Regulation

The Lighthill Report

British mathematician James Lighthill published a devastating critique of AI research, concluding that the field had failed to deliver on its promises. 'In no part of the field have the discoveries made so far produced the major impact that was then promised.' The report led to massive funding cuts for AI research in the UK.

Research

Backpropagation Discovered (Initially Ignored)

Paul Werbos described the backpropagation algorithm in his PhD thesis — a method for training multi-layer neural networks by propagating errors backward through the network. However, in the anti-neural-network climate of the 1970s, the work went largely unnoticed.

1980–1987

Expert Systems Boom

Rule-based expert systems brought AI into the corporate world, creating a billion-dollar industry — and reviving neural network research in the background.

5 milestones
Product
Symbolics Lisp machine used for expert systems

R1/XCON: Expert Systems Go Corporate

R1 (later XCON) was deployed at DEC to configure VAX computer systems. It saved DEC an estimated $40 million per year. This commercial success sparked a gold rush: by 1985, companies were spending over $1 billion per year on expert systems.

Research
John Hopfield, inventor of Hopfield networks

Hopfield Networks: Physics Meets Neural Networks

Physicist John Hopfield showed that a type of recurrent neural network could serve as content-addressable memory, using concepts from statistical physics. The network would converge to stable states that could store and retrieve patterns — connecting neuroscience, physics, and computation.
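
The content-addressable idea fits in a toy sketch: store one ±1 pattern with Hebbian weights, then let repeated threshold updates pull a corrupted probe back to the memory (pattern and noise here are invented for illustration):

```python
def sign(v):
    return 1 if v >= 0 else -1

def hopfield_recall(stored, probe, steps=5):
    """Hebbian weight matrix from one stored ±1 pattern (zero diagonal);
    synchronous sign updates converge toward the stored state."""
    n = len(stored)
    W = [[0 if i == j else stored[i] * stored[j] for j in range(n)] for i in range(n)]
    x = list(probe)
    for _ in range(steps):
        x = [sign(sum(W[i][j] * x[j] for j in range(n))) for i in range(n)]
    return x

memory = [1, -1, 1, 1, -1, -1, 1, -1]
noisy  = [-1, -1, 1, 1, -1, -1, -1, -1]   # two bits flipped
print(hopfield_recall(memory, noisy) == memory)  # → True
```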

Infrastructure

Japan's Fifth Generation Computer Project

Japan's Ministry of International Trade and Industry launched a 10-year, $850 million project to build 'fifth generation' computers with AI capabilities — parallel processing machines that could understand natural language and reason like humans.

Research
Geoffrey Hinton, pioneer of backpropagation in neural networks

Backpropagation Rediscovered

Rumelhart, Hinton, and Williams published 'Learning Representations by Back-propagating Errors' in Nature, demonstrating that backpropagation could train multi-layer neural networks effectively. The same year, the PDP (Parallel Distributed Processing) group published their influential two-volume work on connectionism.
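
What they demonstrated can be sketched compactly: a two-layer network trained by backpropagation learns XOR — the very function single-layer perceptrons cannot. The architecture, seed, and hyperparameters below are illustrative choices:

```python
import math, random

random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# 2-input, 4-hidden, 1-output network; last weight in each row is a bias.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [random.uniform(-1, 1) for _ in range(5)]
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]   # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(sum(W2[i] * h[i] for i in range(4)) + W2[4])
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

before, lr = loss(), 0.5
for _ in range(5000):
    for x, t in DATA:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)           # error signal at the output
        for i in range(4):                        # propagate the error back through W2
            dh = dy * W2[i] * h[i] * (1 - h[i])
            W1[i][0] -= lr * dh * x[0]
            W1[i][1] -= lr * dh * x[1]
            W1[i][2] -= lr * dh
            W2[i] -= lr * dy * h[i]
        W2[4] -= lr * dy
print(loss() < before)  # training reduces the squared error
```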

Research
NETtalk neural network back-propagation diagram

NETtalk: Neural Network Learns to Speak

NETtalk was a neural network that learned to pronounce English text aloud, starting from babbling sounds and gradually becoming intelligible — mimicking how a child learns to speak. It captured public imagination and demonstrated backpropagation's potential.

1988–1993

Second AI Winter

Expert systems proved brittle and expensive. The AI bubble burst again, but foundational work on learning algorithms continued quietly.

3 milestones
Cultural

The Second AI Winter Begins

The expert systems bubble burst. LISP machine companies collapsed. The DARPA Strategic Computing Initiative was cut. Japan's Fifth Generation project was failing. Expert systems proved brittle, expensive to maintain, and unable to learn. The AI industry lost billions.

Research
LeNet-5 convolutional neural network architecture

LeNet: Convolutional Neural Networks

Yann LeCun demonstrated that convolutional neural networks (CNNs) could be trained with backpropagation to recognize handwritten digits. The refined LeNet-5 (1998) achieved 99%+ accuracy on MNIST and was deployed by banks to read checks — running in ATMs for years.

Research
Reinforcement learning agent-environment interaction diagram

TD-Gammon: Reinforcement Learning Plays Backgammon

Gerald Tesauro created TD-Gammon, a neural network that learned to play backgammon at expert level through self-play using temporal difference reinforcement learning. It discovered novel strategies that surprised human experts.
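
The temporal-difference update at TD-Gammon's core can be shown on a toy chain (the states, rewards, and step size are invented for illustration — TD-Gammon applied the same rule through a neural network over backgammon positions):

```python
# TD(0) on a deterministic 3-state chain: A -> B -> terminal.
# Stepping A->B pays 1, B->terminal pays 2, so the true values are V(A)=3, V(B)=2.
V = {"A": 0.0, "B": 0.0, "end": 0.0}
transitions = {"A": ("B", 1.0), "B": ("end", 2.0)}
alpha = 0.1

for _ in range(1000):                # replay the same walk many times
    state = "A"
    while state != "end":
        nxt, reward = transitions[state]
        # TD(0): nudge V(s) toward the bootstrapped target r + V(s')
        V[state] += alpha * (reward + V[nxt] - V[state])
        state = nxt

print(round(V["A"], 2), round(V["B"], 2))  # → 3.0 2.0
```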

1994–2005

Quiet Emergence

AI stopped trying to mimic human reasoning and embraced statistical approaches. Machines began beating humans at specific tasks.

5 milestones
Research

Support Vector Machines

Cortes and Vapnik published their work on Support Vector Machines (SVMs), a method for finding maximum-margin decision boundaries in high-dimensional spaces with unusually strong theoretical guarantees. SVMs quickly became one of the leading approaches for classification problems across text, vision, and bioinformatics.
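
The 'maximum margin' objective is easy to state: among separating hyperplanes, prefer the one farthest from its nearest training point. A toy comparison of two candidate separators (invented 2-D data, not a full SVM solver):

```python
import math

def margin(w, b, points):
    """Geometric margin: distance from the hyperplane w·x + b = 0 to the nearest point."""
    norm = math.hypot(*w)
    return min(abs(w[0] * x + w[1] * y + b) / norm for x, y in points)

points = [(2, 2), (3, 3), (-2, -2), (-3, -3)]   # two separable clusters
diagonal = margin((1, 1), 0, points)             # separator perpendicular to the cluster axis
vertical = margin((1, 0), 0, points)             # the y-axis: also separates, but less safely
print(diagonal > vertical)  # → True: the SVM objective prefers the diagonal separator
```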

Research
LSTM recurrent neural network cell diagram

Long Short-Term Memory (LSTM)

Hochreiter and Schmidhuber published the LSTM architecture, solving the vanishing gradient problem that plagued recurrent neural networks. LSTMs could learn long-range dependencies in sequential data by maintaining a memory cell with gates that controlled information flow.
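
One step of that gating arithmetic, sketched for scalar input and state (weights here are illustrative, not a trained model; all-zero weights give a mentally checkable result):

```python
import math

sigmoid = lambda z: 1 / (1 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, Wi, Wf, Wo, Wg):
    """One LSTM step for scalar input/state. Each W is a (w_x, w_h, bias) triple."""
    gate = lambda W, squash: squash(W[0] * x + W[1] * h_prev + W[2])
    i = gate(Wi, sigmoid)          # input gate: how much new content to write
    f = gate(Wf, sigmoid)          # forget gate: how much old cell state to keep
    o = gate(Wo, sigmoid)          # output gate: how much of the cell to expose
    g = gate(Wg, math.tanh)        # candidate content
    c = f * c_prev + i * g         # the gated memory cell that carries long-range state
    h = o * math.tanh(c)
    return h, c

# With all-zero weights, every sigmoid gate is 0.5 and the candidate is 0,
# so the cell simply halves: c = 0.5 * c_prev.
zeros = (0.0, 0.0, 0.0)
h, c = lstm_step(1.0, 0.0, 4.0, zeros, zeros, zeros, zeros)
print(c)  # → 2.0
```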

Competition
IBM Deep Blue chess computer

Deep Blue Defeats Kasparov

IBM's Deep Blue defeated world chess champion Garry Kasparov in a six-game match (3.5-2.5). It was the first time a reigning world champion lost a match to a computer under standard tournament conditions. Deep Blue evaluated 200 million positions per second using brute-force search and hand-crafted evaluation.

Product
Original iRobot Roomba vacuum robot

iRobot Roomba

iRobot released the Roomba, a robotic vacuum cleaner that used sensors and algorithms to autonomously navigate and clean floors. At $200, it brought autonomous robots into millions of homes.

Competition
Stanley, the autonomous vehicle that won the 2005 DARPA Grand Challenge

DARPA Grand Challenge: Self-Driving Cars Begin

DARPA offered a $1 million prize (doubled to $2 million in 2005) for an autonomous vehicle to complete a 150-mile desert course. In 2004, no vehicle finished — the best went 7.4 miles. In 2005, Stanford's 'Stanley' (led by Sebastian Thrun) won by completing the course in under 7 hours. The 2007 Urban Challenge tested autonomous driving in traffic.

2006–2011

Deep Learning Dawn

Geoffrey Hinton's breakthroughs reignited neural networks. GPU computing made deep networks trainable. The revolution was beginning.

6 milestones
Research
Deep belief network architecture diagram

Deep Belief Networks: Hinton Revives Deep Learning

Geoffrey Hinton published 'A Fast Learning Algorithm for Deep Belief Nets,' showing that deep neural networks could be effectively trained by pre-training each layer as a restricted Boltzmann machine. This solved the long-standing problem of training networks with many layers.

Competition
Netflix Prize competition announcement

The Netflix Prize

Netflix offered $1 million to anyone who could improve their recommendation algorithm by 10%. The competition attracted thousands of teams and ran for 3 years (won in 2009). It popularized collaborative filtering, matrix factorization, and ensemble methods.

Infrastructure
Fei-Fei Li, creator of ImageNet

ImageNet: The Dataset That Changed Everything

Fei-Fei Li and her team created ImageNet, a dataset of over 14 million hand-labeled images in 20,000+ categories. Starting in 2010, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) became the benchmark for computer vision progress.

Infrastructure
NVIDIA CUDA GPU computing logo

GPU Computing for Neural Networks

Researchers including Andrew Ng demonstrated that GPUs (graphics processing units) could train neural networks 10-70x faster than CPUs. NVIDIA's CUDA platform made GPU programming accessible. This hardware breakthrough removed the computational bottleneck that had held back deep learning.

Competition
IBM Watson computer system

IBM Watson Wins Jeopardy!

IBM's Watson system defeated the two greatest Jeopardy! champions, Ken Jennings and Brad Rutter, in a televised match. Watson used natural language processing, information retrieval, and machine learning to understand nuanced questions with puns and wordplay.

Product
Apple Siri voice assistant logo

Apple Launches Siri

Apple introduced Siri as a built-in feature of the iPhone 4S — the first major voice assistant integrated into a mainstream consumer device. Users could ask questions, set reminders, and control their phone with natural speech.

2012–2017

Deep Learning Breakthrough

AlexNet shocked the world. Deep learning conquered image recognition, games, and language. The Transformer architecture changed everything.

11 milestones
Research
AlexNet deep neural network architecture diagram

AlexNet: The ImageNet Moment

AlexNet, a deep convolutional neural network, won the ImageNet competition by a staggering margin — cutting the top-5 error rate from 26.2% to 15.3%. Trained on two NVIDIA GTX 580 GPUs, it was dramatically deeper and more powerful than previous entries. The AI community was stunned.

Research
Tomáš Mikolov, lead author of Word2Vec

Word2Vec: Words as Vectors

Google researchers published Word2Vec, showing that relatively small neural networks could efficiently learn meaningful vector representations of words from large text corpora. The famous example `king - man + woman ≈ queen` made the idea vivid: semantic relationships could be captured geometrically in vector space.
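
The arithmetic is easy to demonstrate with hand-built toy vectors (two invented dimensions, roughly 'royalty' and 'gender', standing in for the hundreds of dimensions a real Word2Vec model learns from a corpus):

```python
import math

vec = {
    "king":  [0.9,  0.8],
    "queen": [0.9, -0.8],
    "man":   [0.1,  0.8],
    "woman": [0.1, -0.8],
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# king - man + woman: remove the 'male' direction, add the 'female' one.
target = [k - m + w for k, m, w in zip(vec["king"], vec["man"], vec["woman"])]
nearest = max((w for w in vec if w != "king"), key=lambda w: cosine(vec[w], target))
print(nearest)  # → queen
```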

Research
Google DeepMind logo

DeepMind's DQN Masters Atari Games

DeepMind demonstrated a deep reinforcement learning agent (Deep Q-Network) that learned to play Atari 2600 games directly from pixel inputs, achieving superhuman performance on many games with no task-specific engineering. Google acquired DeepMind for ~$500 million shortly after.
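
DQN is, at heart, tabular Q-learning with a deep network standing in for the table. The tabular update can be sketched on a toy corridor (the environment, discount, and exhaustive sweeps here are invented for illustration — DQN instead explored Atari screens and approximated Q with a CNN):

```python
# Tabular Q-learning on a 4-cell corridor: actions move left (-1) or right (+1),
# reward 1 on reaching the rightmost cell, which is terminal. Discount gamma = 0.9.
N, gamma, alpha = 4, 0.9, 0.5
Q = {(s, a): 0.0 for s in range(N) for a in (-1, +1)}

def step(s, a):
    s2 = min(max(s + a, 0), N - 1)
    return s2, (1.0 if s2 == N - 1 else 0.0)

for _ in range(200):                      # exhaustive sweeps stand in for exploration
    for s in range(N - 1):                # the last cell is terminal
        for a in (-1, +1):
            s2, r = step(s, a)
            future = 0.0 if s2 == N - 1 else max(Q[(s2, -1)], Q[(s2, +1)])
            # Q-learning: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[(s, a)] += alpha * (r + gamma * future - Q[(s, a)])

policy = [max((-1, +1), key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)  # → [1, 1, 1]: the greedy policy always moves right
```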

Research
Generative Adversarial Network architecture diagram

Generative Adversarial Networks (GANs)

Ian Goodfellow introduced GANs — two neural networks (generator and discriminator) competing against each other, one creating fake data and the other trying to detect it. The concept allegedly came to him during a bar conversation. Yann LeCun called GANs 'the most interesting idea in the last 10 years in ML.'

Product
Amazon Alexa voice assistant logo

Amazon Echo & Alexa

Amazon launched the Echo smart speaker with Alexa voice assistant, creating an entirely new product category. Alexa could play music, control smart home devices, answer questions, and run third-party 'skills.' It brought always-on AI into the living room.

Open Source
TensorFlow machine learning framework logo

TensorFlow Open-Sourced

Google open-sourced TensorFlow, its internal machine learning framework. This gave every researcher and developer access to the same tools Google used internally. PyTorch (Facebook, 2016) followed, creating a healthy competition that accelerated the entire field.

Infrastructure
OpenAI logo

OpenAI Founded

OpenAI was founded as a non-profit AI research lab with $1 billion in committed funding, aiming to ensure artificial general intelligence benefits all of humanity. Co-founded by Sam Altman (Y Combinator president), Elon Musk, and top researchers including Ilya Sutskever from Google Brain.

Research
Residual network skip connection block diagram

ResNet: Deeper Than Ever

Microsoft Research introduced ResNet with skip connections (residual connections), enabling the training of networks with 152+ layers — 8x deeper than previous networks. ResNet won ImageNet 2015 with 3.57% error, surpassing human-level performance (5.1%) for the first time.
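
The skip connection itself is one line — the block's output is its input plus a learned correction, so a layer can start out as (or fall back to) the identity. A toy vector sketch (the layers here are illustrative stand-ins for trained convolutions):

```python
import math

def residual_block(x, layer):
    """y = x + F(x): the skip connection carries the signal even when F contributes nothing."""
    return [xi + fi for xi, fi in zip(x, layer(x))]

zero_layer = lambda x: [0.0] * len(x)            # a freshly initialized F ≈ 0
tanh_layer = lambda x: [math.tanh(xi) for xi in x]

x = [1.0, -2.0, 0.5]
print(residual_block(x, zero_layer) == x)  # → True: the block defaults to the identity
print(residual_block(x, tanh_layer))       # identity plus a learned correction
```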

Competition
Go board game, the game AlphaGo mastered

AlphaGo Defeats Lee Sedol

DeepMind's AlphaGo defeated Lee Sedol, one of the greatest Go players ever, 4-1 in a five-game match in Seoul. Go has more possible positions than atoms in the universe — brute force was impossible. AlphaGo used deep reinforcement learning and Monte Carlo tree search. In Game 2, AlphaGo played Move 37 — a move so creative that experts called it 'beautiful' and 'not a human move.'

Research
The Transformer model architecture diagram from Attention Is All You Need

Attention Is All You Need: The Transformer

Eight researchers at Google published 'Attention Is All You Need,' introducing the Transformer architecture. It replaced recurrence with self-attention mechanisms that could process entire sequences in parallel. The paper's title was deliberately bold — and proved prescient.
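
The core operation, scaled dot-product attention, fits in a short sketch (single head, no learned projections — a real Transformer adds those plus multi-head, positional encodings, and feed-forward layers):

```python
import math

def softmax(scores):
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: each query mixes the values,
    weighted by how well it matches each key — for the whole sequence at once."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)       # each row of weights sums to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V)) for j in range(len(V[0]))])
    return out

# Self-attention: queries, keys, and values all come from the same sequence.
seq = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
out = attention(seq, seq, seq)
print(len(out), len(out[0]))  # → 3 2 (one mixed vector per input position)
```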

Research
Go board representing AlphaGo Zero's self-play mastery

AlphaGo Zero: Learning From Scratch

AlphaGo Zero achieved superhuman Go performance with no human knowledge beyond the rules of the game — no training data from human games, no hand-crafted features. It learned entirely through self-play, and within 40 days surpassed all previous versions, including the one that beat Lee Sedol.

2018–2021

The Transformer Era

Transformers scaled to billions of parameters. GPT and BERT redefined NLP. AI began generating text, code, and images that stunned researchers.

9 milestones
Research
OpenAI logo

GPT-1: Generative Pre-training

OpenAI released GPT-1, demonstrating that a Transformer trained on vast amounts of text using unsupervised pre-training could then be fine-tuned for specific NLP tasks. With 117 million parameters, it showed the potential of scaling language models.

Research

BERT: Bidirectional Language Understanding

Google published BERT (Bidirectional Encoder Representations from Transformers), which could understand language context from both directions simultaneously. BERT shattered records on 11 NLP benchmarks. Google integrated it into Search, affecting 10% of all queries.

Competition
Google DeepMind logo, creators of AlphaStar

AlphaStar Masters StarCraft II

DeepMind's AlphaStar reached Grandmaster level in StarCraft II, a real-time strategy game requiring long-term planning, deception, and split-second tactics with incomplete information — far more complex than Go or chess.

Research
GPT-2 language model generating text about itself

GPT-2: 'Too Dangerous to Release'

OpenAI announced GPT-2 (1.5 billion parameters) but initially refused to release the full model, calling it 'too dangerous' due to its ability to generate convincing fake text. The decision was controversial — some praised the caution, others called it a publicity stunt. The full model was eventually released in November 2019.

Research
OpenAI logo

GPT-3: The 175 Billion Parameter Leap

OpenAI released GPT-3 with 175 billion parameters — 100x larger than GPT-2. Without any fine-tuning, GPT-3 could write essays, code, poetry, translate languages, and answer questions through 'few-shot learning' (learning from just a few examples in the prompt). The API launched in beta, enabling thousands of applications.

Research
Protein structure visualization representing AlphaFold's predictions

AlphaFold 2: Protein Folding Solved

DeepMind's AlphaFold 2 solved the 50-year-old protein structure prediction problem, achieving accuracy comparable to experimental methods at CASP14. It could predict how proteins fold from their amino acid sequences — a problem that had stumped biologists for half a century.

Infrastructure
Anthropic AI safety company logo

Anthropic Founded

Former OpenAI VP of Research Dario Amodei and his sister Daniela, along with several other OpenAI researchers, founded Anthropic — an AI safety company focused on building reliable, interpretable, and steerable AI systems.

Research
AI-generated image by DALL-E

DALL-E: Text to Image Generation

OpenAI unveiled DALL-E, a model that could generate images from text descriptions — 'an armchair in the shape of an avocado' became iconic. Built on GPT-3's architecture adapted for images, it showed that language models could bridge the gap between text and visual creativity.

Product
GitHub Copilot AI coding assistant logo

GitHub Copilot: AI Writes Code

GitHub launched Copilot as a technical preview — an AI pair programmer powered by OpenAI Codex that could autocomplete entire functions, write boilerplate, and suggest code from natural language comments. It was trained on billions of lines of public code.

2022–2024

Generative AI Revolution

ChatGPT brought AI to the masses. Generative AI exploded across every industry. The world woke up to a new technological era.

16 milestones
Open Source
Astronaut riding a horse, iconic Stable Diffusion generated image

Stable Diffusion: Open-Source Image Generation

Stable Diffusion was released as a widely available text-to-image model that could run on consumer hardware, with model weights distributed under an open release rather than an API-only product. Unlike DALL-E, anyone could download it, run it locally, and build on top of it. An explosion of community modifications, fine-tunes, and applications followed.

Product
OpenAI logo, creators of ChatGPT

ChatGPT: AI Goes Mainstream

OpenAI released ChatGPT, a conversational AI based on GPT-3.5 fine-tuned with RLHF (Reinforcement Learning from Human Feedback). It reached 1 million users in 5 days and 100 million in 2 months — the fastest-growing consumer application in history. People used it to write emails, debug code, brainstorm ideas, and a thousand other tasks.

Research
OpenAI logo

GPT-4: Multimodal Intelligence

OpenAI released GPT-4, a multimodal model that could understand both text and images. It passed the bar exam (90th percentile), scored 1410 on the SAT, and demonstrated remarkably nuanced reasoning. It was a massive leap from GPT-3.5 in accuracy, safety, and capability.

Product
Anthropic logo, creators of Claude

Claude: Constitutional AI

Anthropic released Claude, an AI assistant built with Constitutional AI (CAI) — a novel approach where the model is trained to follow a set of principles rather than just optimizing for human preference ratings. Anthropic, founded by former OpenAI researchers, positioned Claude as the safety-focused alternative.

Product
Midjourney AI image generation logo

Midjourney V5: Photorealistic AI Art

Midjourney V5 produced images so photorealistic that AI-generated photos went viral and were mistaken for real photographs — including a fake image of the Pope in a puffer jacket and fake photos of Trump's arrest. The line between AI-generated and real imagery effectively dissolved.

Open Source
Meta AI logo

Llama 2: Meta Opens the Floodgates

Meta released Llama 2, a family of widely available large language models (7B, 13B, 70B parameters) distributed as open weights under a custom license that allowed broad commercial use. While not open-source in the strict OSI sense, it gave companies and researchers access to a frontier-quality model they could run, customize, and deploy themselves.

Open Source
Mistral AI logo, creators of Mixtral

Mixtral 8x7B: Efficient Mixture of Experts

French startup Mistral AI released Mixtral 8x7B, a mixture-of-experts model that matched or beat GPT-3.5 while using a fraction of the compute per token. It demonstrated that clever architecture could compete with brute-force scaling.

Product
Google Gemini AI model logo

Gemini: Google's Multimodal Response

Google launched Gemini, its most capable AI model family, natively multimodal across text, code, images, audio, and video. Gemini Ultra matched or exceeded GPT-4 on many benchmarks. It marked Google DeepMind's full response to OpenAI's dominance.

Research
OpenAI logo, creators of Sora

Sora: AI Video Generation

OpenAI previewed Sora, a model that could generate photorealistic videos up to a minute long from text descriptions. The quality stunned the world — realistic physics, complex camera movements, and coherent scenes that looked like professional cinematography.

Research
Google Gemini logo

Gemini 1.5 Pro: Million-Token Context

Google released Gemini 1.5 Pro with a 1 million token context window (later extended to 2M) — able to process entire codebases, books, or hours of video in a single prompt. It could find a needle in a haystack across millions of tokens with near-perfect recall.

Product
Anthropic logo, creators of Claude 3

Claude 3: Approaching Human-Level

Anthropic launched the Claude 3 family (Haiku, Sonnet, Opus), with Claude 3 Opus matching or exceeding GPT-4 on most benchmarks. It featured a 200K token context window, strong reasoning, nuanced instruction-following, and a 'personality' that users found distinctively thoughtful and careful.

Regulation
European Union flag representing the EU AI Act

EU AI Act: First Major AI Regulation

The European Parliament approved the AI Act, the world's first comprehensive AI regulation. It established a risk-based framework: banning 'unacceptable risk' AI (social scoring, indiscriminate surveillance), heavily regulating 'high risk' applications, and requiring transparency for generative AI.

Open Source
Meta AI logo

Llama 3: Open-Source Catches Up

Meta released Llama 3 (8B and 70B, later 405B), closing the gap with closed frontier models. The 405B release put near-frontier open-weight models into more developers' hands, even though Meta's licensing still sat outside a strict open-source definition.

Product
OpenAI logo

GPT-4o: Omni Model

OpenAI released GPT-4o ('omni'), a unified model that natively processed text, audio, images, and video with near-instant response times. It could hold natural voice conversations with emotional expression, sing, laugh, and respond to visual input in real time.

Research
OpenAI logo, creators of o1

OpenAI o1: Reasoning Models

OpenAI released o1, a model trained to 'think before it speaks' using chain-of-thought reasoning at inference time. It could solve complex math, coding, and science problems by spending more compute thinking through multi-step solutions — trading speed for accuracy on hard problems.

Research
Nobel Prize medal

Nobel Prizes Awarded for AI Work

The 2024 Nobel Prize in Physics went to Geoffrey Hinton and John Hopfield for foundational work on neural networks and machine learning. The Nobel Prize in Chemistry went to Demis Hassabis and John Jumper (AlphaFold) alongside David Baker for computational protein design. AI research received the highest scientific recognition.

2025–2026

The Agentic Era

AI systems gained autonomy — reasoning, planning, and executing complex tasks. The age of AI agents arrived.

10 milestones
Product

The Rise of AI Agents

By 2025, frontier models were being wrapped in systems that could browse the web, call tools, edit files, execute code, manage state, and carry multi-step tasks forward with limited supervision. Claude Code, OpenAI's Operator, Google's Project Mariner, OpenClaw, and a wave of agent frameworks turned 'AI agent' from a research label into a practical product category.

Product
OpenAI logo

OpenAI o3: Advanced Reasoning at Scale

OpenAI released o3, the successor to o1, with markedly improved reasoning capabilities. It posted state-of-the-art results on many math and coding benchmarks and handled problems that previously required expert-level multi-step analysis.

Product
Google Gemini logo

Gemini 2.0: Google's Agent Platform

Google launched Gemini 2.0, designed from the ground up for the agentic era — with native tool use, code execution, and multi-step reasoning. Deeply integrated into Google's ecosystem (Search, Workspace, Android), it brought AI agent capabilities to billions of users.

Product
GitHub Copilot logo representing the AI coding agents era

AI Coding Agents Transform Software Development

AI coding agents like Claude Code, Cursor, GitHub Copilot's agentic workflows, and OpenClaw-linked remote coding loops pushed beyond autocomplete into delegated engineering work. These systems could inspect repositories, run tests, edit files, use terminals and browsers, and iterate on tasks over multiple turns.

Open Source
DeepSeek AI logo

DeepSeek R1: Open-Source Reasoning

Chinese AI lab DeepSeek released R1, an openly released reasoning model that approached OpenAI's o1-class performance at a fraction of the cost. Trained with reportedly modest compute budgets, it challenged the assumption that frontier reasoning required the largest Western-scale investment programs.

Product
Anthropic logo

Claude 3.5 Sonnet: A Leading Coding Model

Anthropic's Claude 3.5 Sonnet emerged as one of the strongest widely used models for coding tasks, with developers praising its code generation, debugging, and software engineering capabilities. It powered tools like Claude Code, enabling AI to work directly inside developer environments.

Product
Anthropic logo, creators of Claude 4

Claude 4 / Opus 4: Frontier Reasoning

Anthropic released Claude 4 Opus, a model with significantly enhanced reasoning, extended thinking capabilities, and the ability to sustain complex multi-step problem-solving over long contexts. It excelled at agentic tasks, code generation, and nuanced analysis.

Open Source
OpenClaw GitHub organization avatar

OpenClaw: The Personal AI Assistant Goes Open Source

The `openclaw/openclaw` repository launched on GitHub, framing itself as 'your own personal AI assistant' that ran on users' own devices across the channels they already used, from WhatsApp and Telegram to Slack, Discord, and iMessage. Instead of keeping the assistant trapped in a single app, OpenClaw combined messaging integrations, voice, tools, browser control, local skills, and device-side control into an always-on personal agent.

Product
Anthropic logo, creators of Claude 4.5 Opus

Claude 4.5 / 4.6 Opus: Frontier Agentic Capability

Anthropic released Claude 4.5 and 4.6 Opus, representing the frontier of AI capability in early 2026. These models demonstrated unprecedented reasoning depth, coding ability, and capacity for autonomous multi-step work. They could sustain complex agentic workflows, manage entire projects, and collaborate with other AI agents.

Cultural

AI Agents in the Workforce: March 2026

By March 2026, AI agents were being used in day-to-day operations for coding, research, support, scheduling, and internal automation. Rather than replacing whole teams outright, the clearest pattern was AI taking over narrow but valuable chunks of knowledge work and operating as an always-available teammate inside existing tools and channels.
