Artificial Intelligence, or AI, is both a scientific field and a technological revolution. Its journey from a centuries-old philosophical concept to a technology that shapes daily life is a storied one, marked by trailblazing minds, scientific breakthroughs, and decades of persistence.
Who Invented AI?
John McCarthy, an American computer scientist, is generally regarded as the "father of artificial intelligence." He is credited with coining the term "artificial intelligence" in 1956 at the now-famous Dartmouth College summer workshop, an event widely seen as the birth of AI as a field of scientific inquiry.
McCarthy's dream was to investigate whether "every feature of learning or any other aspect of intelligence can in principle be so precisely described that a machine can be made to simulate it."
However, AI's intellectual heritage runs deeper. Alan Turing, the renowned British mathematician, laid the theoretical groundwork with his 1950 paper "Computing Machinery and Intelligence," which proposed what we now call the Turing Test: a measure of a machine's ability to exhibit intelligent behavior indistinguishable from a human's.
Warren McCulloch and Walter Pitts's 1943 model of the artificial neuron and Marvin Minsky's construction of the first artificial neural network machine in 1951 are also foundational stepping stones.
Key Early Figures in AI
| Name | Contribution | Date |
|------|--------------|------|
| John McCarthy | Coined "Artificial Intelligence"; created LISP | 1956, 1958 |
| Alan Turing | Proposed the Turing Test; theorized intelligent machines | 1950 |
| Warren McCulloch & Walter Pitts | Proposed the artificial neuron model | 1943 |
| Marvin Minsky | Built the first artificial neural network machine | 1951 |
History of AI
- Myths and Machines: Ancient Greek myths (such as Pygmalion and Talos) and medieval automata sparked the vision of artificial, human-like beings.
- "Robot": First used by Czech playwright Karel Čapek in 1921 to describe artificial humans.
20th Century: Beginnings and the Emergence of AI
- 1943: McCulloch and Pitts develop a mathematical model of the artificial neuron, an early step toward neural networks.
- 1950: Alan Turing proposes the Turing Test as a measure of machine intelligence.
- 1956 (Dartmouth Workshop): John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon establish AI as a field of study; McCarthy coins the name "Artificial Intelligence."
- 1958: John McCarthy invents LISP, a seminal AI programming language.
- 1960s–1970s: Symbolic AI matures; early expert systems, robots, and chatbots such as ELIZA appear.
- 1980s: Self-driving prototypes, high-performance chess computers, and logic-based reasoning systems emerge.
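Early chatbots like ELIZA worked not by understanding language but by matching the user's input against simple patterns and reflecting fragments of it back. A minimal sketch of that pattern-matching idea in Python (illustrative only; the rules below are invented for the example and are not Weizenbaum's original script):

```python
import re

# Each rule pairs a regular expression with a response template.
# Captured groups from the user's input are substituted into the template,
# creating the illusion of an attentive conversational partner.
RULES = [
    (re.compile(r"i need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"(.*) mother(.*)", re.IGNORECASE), "Tell me more about your family."),
]

def respond(user_input: str) -> str:
    """Return the first matching rule's reflection, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(*match.groups())
    return "Please tell me more."

print(respond("I need a vacation"))  # Why do you need a vacation?
```

The technique contains no model of meaning at all, which is precisely why ELIZA's apparent intelligence surprised its own creator.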
AI Winters and Renaissances
- 1970s–1990s: Cycles of waning interest and funding, the so-called "AI winters," alternating with periods of renewed optimism.
- 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov, a landmark AI milestone.
The Deep Learning Era
- 2000s–2010s: Explosive growth in available data and computing power; machine learning, particularly deep learning (many-layered neural networks), drives advances in speech, vision, and language.
- 2011–present: AI beats human champions at "Jeopardy!" (IBM Watson) and the game of Go (AlphaGo), powers virtual assistants (Siri, Alexa, ChatGPT), and transforms industries.
The Enduring Impact
AI has grown from philosophical musings and theoretical abstractions into a pervasive real-world presence, from autonomous vehicles and medical diagnosis to creative software and intelligent robots. The rivalry and collaboration among figures such as Turing, McCarthy, Minsky, and today's researchers continue to push this ever-widening frontier.
What began as a dream at Dartmouth to replicate human intelligence now underlies transformations in business, science, entertainment, and everyday life.