
A Timeline of Deep Learning

1943

Two researchers in Chicago, Warren McCulloch and Walter Pitts, show that highly simplified models of neurons can be used to compute logical functions.
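
In modern terms, each unit sums weighted binary inputs and fires if the sum reaches a threshold; wire enough of them together and you can build any logic circuit. A minimal sketch (my notation, not the 1943 paper's):

```python
# A McCulloch-Pitts-style unit: sum the binary inputs, fire if the sum
# reaches the threshold. Negative weights act as inhibition.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND: fires only when both inputs are on.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

# OR: fires when either input is on.
assert mp_neuron([0, 1], [1, 1], threshold=1) == 1

# NOT: an inhibitory weight flips the input.
assert mp_neuron([1], [-1], threshold=0) == 0
assert mp_neuron([0], [-1], threshold=0) == 1
```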

1958

Frank Rosenblatt, a psychologist at the Cornell Aeronautical Laboratory, builds a basic neural network into a machine called the Perceptron. It takes in images through a camera and learns to categorize them, with motorized knobs adjusting the weights of the machine's “association cells” after each example. Rosenblatt says it should eventually be possible to mass-produce Perceptrons that are conscious of their own existence.
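
The learning step the Perceptron performed with motors and potentiometers survives as the perceptron rule: nudge the weights toward any example the current weights get wrong. A sketch on a toy task (logical AND standing in for camera images):

```python
import numpy as np

# Perceptron rule: for each misclassified example, nudge the weights
# toward the correct answer, the software analogue of turning the knobs.
def train_perceptron(X, y, epochs=20, lr=1.0):
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred          # -1, 0, or +1
            w += lr * error * xi
            b += lr * error
    return w, b

# Learn logical AND, which is linearly separable, so the rule converges.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])   # [0, 0, 0, 1]
```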

1959

Stanford researchers Bernard Widrow and Ted Hoff show how neural networks can predict upcoming bits in a data stream. The technology proves useful in noise filters for phone lines and other communications channels.
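
Their update, now called the least-mean-squares (LMS) rule, moves each filter weight in proportion to the prediction error. A sketch of one-step-ahead prediction, with a noisy sine wave as an assumed stand-in for the data stream:

```python
import numpy as np

# Least-mean-squares (LMS) adaptive filter: predict the next sample of a
# stream from the last few, then shift the weights against the error.
rng = np.random.default_rng(0)
signal = np.sin(0.1 * np.arange(2000)) + 0.1 * rng.standard_normal(2000)

taps, mu = 8, 0.02                # filter length and step size
w = np.zeros(taps)
sq_errors = []
for t in range(taps, len(signal)):
    x = signal[t - taps:t]        # recent history
    err = signal[t] - w @ x       # prediction error
    w += mu * err * x             # Widrow-Hoff update
    sq_errors.append(err ** 2)

print(f"MSE, first 100 steps: {np.mean(sq_errors[:100]):.4f}")
print(f"MSE, last 100 steps:  {np.mean(sq_errors[-100:]):.4f}")   # should be smaller
```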

1969

Research on neural networks stalls after MIT’s Marvin Minsky and Seymour Papert argue, in a book called “Perceptrons,” that the method would be too limited to be useful even if neural networks had many more layers of artificial neurons than Rosenblatt’s machine did.

1986

David Rumelhart, Geoff Hinton, and Ronald Williams publish a landmark paper on “backpropagation,” a method for training multilayer neural networks by propagating errors backward through the network to adjust the weights of its connections. Versions of the algorithm had been run on computers in the 1970s, but this paper brings it into wide use for neural networks.
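
In outline: run the network forward, measure the error, and apply the chain rule backward, layer by layer, to get a gradient for every weight. A compact sketch on the XOR problem (my example, not the paper's):

```python
import numpy as np

# Forward pass, chain rule backward, then a gradient step on every weight.
# XOR is the classic target a single-layer perceptron cannot represent.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.], [1.], [1.], [0.]])              # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 2.0

for step in range(10000):
    h = sigmoid(X @ W1 + b1)                        # forward: hidden layer
    out = sigmoid(h @ W2 + b2)                      # forward: output
    d_out = (out - y) * out * (1 - out) / len(X)    # backward: output layer (MSE loss)
    d_h = (d_out @ W2.T) * h * (1 - h)              # backward: hidden layer
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())                         # typically approaches [0 1 1 0]
```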

1990

AT&T researcher Yann LeCun, who decades later will oversee AI research at Facebook, uses backpropagation to train a system that can read handwritten numbers on checks.

1992

Gerald Tesauro of IBM uses reinforcement learning to get a computer to play championship-level backgammon.

2006

Hinton and colleagues show how to train a deep neural network efficiently by training it one layer at a time.
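
A structural sketch of the idea, substituting plain autoencoders for the restricted Boltzmann machines the group actually used: fit one layer to reconstruct its input, freeze it, and train the next layer on what it produces.

```python
import numpy as np

# Greedy layer-wise pretraining, sketched with small autoencoders rather
# than Hinton's restricted Boltzmann machines.
rng = np.random.default_rng(0)

def train_layer(data, hidden, steps=1500, lr=0.05):
    """Fit a tanh encoder / linear decoder to reconstruct `data`; return the encoder."""
    W_enc = rng.normal(0, 0.1, (data.shape[1], hidden))
    W_dec = rng.normal(0, 0.1, (hidden, data.shape[1]))
    for _ in range(steps):
        h = np.tanh(data @ W_enc)
        d_recon = 2 * (h @ W_dec - data) / len(data)   # grad of mean squared error
        d_h = (d_recon @ W_dec.T) * (1 - h ** 2)       # backprop through tanh
        W_dec -= lr * h.T @ d_recon
        W_enc -= lr * data.T @ d_h
    return W_enc

X = rng.standard_normal((256, 16))          # stand-in for a real dataset
activations, encoders = X, []
for size in [12, 8, 4]:                     # one layer at a time
    W = train_layer(activations, size)
    encoders.append(W)
    activations = np.tanh(activations @ W)  # freeze and push the data through
print([W.shape for W in encoders])          # [(16, 12), (12, 8), (8, 4)]
```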

2012

“Deep learning” takes off after Hinton and two of his students show that a neural network trained with their methods outperforms other computing techniques on ImageNet, a standard test for classifying images. Their system’s error rate is 15 percent; the next-best entrant is wrong 26 percent of the time.

2014

Google researcher Ian Goodfellow pits two neural networks against each other to create what he calls a “generative adversarial network.” One network, the generator, produces data, such as an image of a face, while the other, the discriminator, judges whether the data is plausibly real. Over time, the generator learns to produce images (or other data) realistic enough to fool the discriminator.
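
A sketch of that adversarial loop, shrunk to one dimension with a linear generator and a two-feature logistic discriminator (my simplification; the networks in the paper were multilayer). Each side takes a gradient step against the other:

```python
import numpy as np

# 1-D GAN: a linear generator g(z) = a*z + b tries to match real data drawn
# from N(2.0, 0.5); a logistic discriminator on features (x, x^2) tries to
# tell real from fake. Gradients are written out by hand.
rng = np.random.default_rng(0)
sigmoid = lambda t: 1 / (1 + np.exp(-t))
feats = lambda x: np.stack([x, x ** 2], axis=1)

a, b = 1.0, 0.0                    # generator parameters
u, c = np.zeros(2), 0.0            # discriminator: D(x) = sigmoid(u . feats(x) + c)
lr, batch = 0.03, 128

for step in range(5000):
    real = rng.normal(2.0, 0.5, batch)
    z = rng.standard_normal(batch)
    fake = a * z + b

    # discriminator ascends log D(real) + log(1 - D(fake))
    s_r, s_f = sigmoid(feats(real) @ u + c), sigmoid(feats(fake) @ u + c)
    u += lr * ((1 - s_r)[:, None] * feats(real) - s_f[:, None] * feats(fake)).mean(axis=0)
    c += lr * ((1 - s_r) - s_f).mean()

    # generator ascends log D(fake): move samples toward where D says "real"
    s_f = sigmoid(feats(fake) @ u + c)
    d_g = (1 - s_f) * (u[0] + 2 * u[1] * fake)
    a += lr * (d_g * z).mean()
    b += lr * d_g.mean()

print(f"fake mean {b:.2f} vs real 2.00; fake std {abs(a):.2f} vs real 0.50")
```

On this toy the generator's statistics usually drift toward the real ones; full-scale GAN training is famously less well behaved.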

2015

DeepMind, an AI company in London, uses reinforcement learning to train a system that masters classic Atari video games such as Breakout. The system starts out playing at random but quickly latches onto the tactics that lead to higher scores.
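
The principle is easiest to see in tabular Q-learning, a deliberately simplified relative of DeepMind's DQN, which swapped the table for a deep network reading raw screen pixels. A toy corridor stands in for Breakout:

```python
import numpy as np

# Tabular Q-learning on a 6-cell corridor: reach the right end for reward 1.
rng = np.random.default_rng(0)
n_states, n_actions = 6, 2                    # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    done = nxt == n_states - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        if rng.random() < epsilon:            # explore at random sometimes...
            action = int(rng.integers(n_actions))
        else:                                 # ...otherwise exploit, breaking ties randomly
            action = int(rng.choice(np.flatnonzero(Q[state] == Q[state].max())))
        nxt, reward, done = step(state, action)
        # nudge Q toward the reward plus the discounted value of what follows
        Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

print(Q.argmax(axis=1))    # typically [1 1 1 1 1 0]: "go right" in every reachable state
```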

2016

A deep learning system called AlphaGo beats human Go champion Lee Sedol after absorbing thousands of past games played by people.

2017

An updated version, AlphaGo Zero, plays 29 million games against itself rather than studying past games played by humans, and demonstrates the power of this form of reinforcement learning by beating the original AlphaGo 100 games to none. A generalized successor, AlphaZero, masters chess and the Japanese game shogi the same way.

2018

The same team develops AlphaFold, a deep learning system that predicts the three-dimensional structure of proteins from their amino acid sequences. The team enters the Critical Assessment of Techniques for Protein Structure Prediction (CASP) competition and places first, producing the best prediction for 25 of the 43 test proteins; the runner-up manages only three.