GitHub Copilot: AI Writes Code

What Happened

GitHub launched Copilot as a technical preview in June 2021: an AI pair programmer powered by OpenAI Codex that could autocomplete entire functions, write boilerplate, and suggest code from natural-language comments. It was trained on billions of lines of public code.
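
To make the comment-driven workflow concrete, here is a hypothetical example; the completion below is an invented illustration, not actual Copilot output:

```python
from datetime import datetime, timezone

# The developer types only the comment below; the assistant proposes the body.
# Hypothetical completion for illustration, not actual Copilot output.

# parse an ISO 8601 timestamp and return seconds since the Unix epoch
def iso_to_epoch(timestamp: str) -> float:
    dt = datetime.fromisoformat(timestamp)
    if dt.tzinfo is None:
        dt = dt.replace(tzinfo=timezone.utc)
    return dt.timestamp()
```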

Why It Mattered

First AI coding assistant to go mainstream. Changed how millions of developers write code. Sparked debates about code copyright, AI authorship, and the future of software engineering. Over 1M developers adopted it in the first year.

Related Milestones

Product

AI Coding Agents Transform Software Development

AI coding agents like Claude Code, Cursor, GitHub Copilot's agentic workflows, and remote coding loops built around OpenClaw pushed beyond autocomplete into delegated engineering work. These systems could inspect repositories, run tests, edit files, drive terminals and browsers, and iterate on a task over multiple turns.
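
A minimal sketch of the loop these systems share, with hypothetical tool names and a stubbed model call rather than any real product's API:

```python
import subprocess

# Hypothetical tools; real agents expose richer file, terminal, and browser actions.
def run_tests(_arg: str) -> str:
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.stdout + result.stderr

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

TOOLS = {"run_tests": run_tests, "read_file": read_file}

def agent_loop(task: str, call_model, max_turns: int = 10) -> str:
    """Plan -> act -> observe -> repeat, until the model says it is done.

    call_model is a stand-in for an LLM call that returns an action dict,
    e.g. {"tool": "run_tests", "arg": ""} or {"tool": "done", "arg": "summary"}.
    """
    history = [f"Task: {task}"]
    for _ in range(max_turns):
        action = call_model(history)        # model chooses the next action
        if action["tool"] == "done":
            return action["arg"]
        observation = TOOLS[action["tool"]](action["arg"])
        history.append(f"{action['tool']} -> {observation}")  # feed back results
    return "Stopped after max_turns without finishing."
```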

Anthropic · Cursor
Product

ChatGPT: AI Goes Mainstream

OpenAI released ChatGPT, a conversational AI based on GPT-3.5 fine-tuned with RLHF (Reinforcement Learning from Human Feedback). It reached 1 million users in 5 days and 100 million in 2 months — the fastest-growing consumer application in history. People used it to write emails, debug code, brainstorm ideas, and a thousand other tasks.
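
The reward-modeling step at the heart of RLHF is commonly trained with a pairwise preference loss; a minimal sketch in PyTorch, assuming scalar reward scores (not OpenAI's actual training code):

```python
import torch
import torch.nn.functional as F

def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: raise the reward of the human-preferred
    # response above the rejected one for each comparison pair.
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scores for two comparison pairs (dummy values, for illustration only).
loss = preference_loss(torch.tensor([1.2, 0.3]), torch.tensor([0.7, 0.9]))
```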

Sam Altman · OpenAI
Research

GPT-2: 'Too Dangerous to Release'

OpenAI announced GPT-2 (1.5 billion parameters) but initially refused to release the full model, calling it 'too dangerous' due to its ability to generate convincing fake text. The decision was controversial — some praised the caution, others called it a publicity stunt. The full model was eventually released in November 2019.

Alec Radford · OpenAI
Research

GPT-3: The 175 Billion Parameter Leap

OpenAI released GPT-3 with 175 billion parameters — 100x larger than GPT-2. Without any fine-tuning, GPT-3 could write essays, code, poetry, translate languages, and answer questions through 'few-shot learning' (learning from just a few examples in the prompt). The API launched in beta, enabling thousands of applications.
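
'Few-shot' here means the examples live in the prompt itself; a sketch with invented translation pairs (prompt text only, since client libraries have changed since the beta API):

```python
# Few-shot prompting: demonstrate the task inside the prompt and let the
# model continue the pattern. No fine-tuning or gradient updates involved.
# Example pairs are invented for illustration.
prompt = """English: cheese
French: fromage

English: apple
French: pomme

English: house
French:"""
# A completion model is expected to continue with something like " maison".
```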

Tom Brown · OpenAI
Research

DALL-E: Text to Image Generation

OpenAI unveiled DALL-E, a model that could generate images from text descriptions — 'an armchair in the shape of an avocado' became iconic. Built on GPT-3's architecture adapted for images, it showed that language models could bridge the gap between text and visual creativity.

OpenAI
