The History of AI: Pre Nov 2022
Explore the fascinating history of AI, from its early beginnings to the latest breakthroughs in deep learning, big data, and AGI.
8 minutes

May 5, 2024
This content was generated using AI and curated by humans

The History of Artificial Intelligence

Artificial Intelligence (AI) has been around for a long time. It's not just a modern concept - ideas about it can even be found in ancient Greek and Egyptian myths. Here are some key moments in the history of AI, showing its growth from the beginning to the present.

Maturation of Artificial Intelligence (1943-1952)

From 1943 to 1952, artificial intelligence (AI) made big leaps forward. It went from just an idea to something people could actually experiment with and use. Here are some important things that happened during this time:

  • Year 1943: The first work now recognized as AI was done by Warren McCulloch and Walter Pitts in 1943. They proposed a mathematical model of artificial neurons.
  • Year 1949: Donald Hebb demonstrated an updating rule for modifying the connection strength between neurons. His rule is now called Hebbian learning.
  • Year 1950: Alan Turing, an English mathematician and pioneer of machine learning, published "Computing Machinery and Intelligence," in which he proposed a test of a machine's ability to exhibit intelligent behavior equivalent to a human's, now called the Turing test.
  • Year 1951: Marvin Minsky and Dean Edmonds created SNARC, the first artificial neural network (ANN), using 3,000 vacuum tubes to simulate a network of 40 neurons.
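Hebb's 1949 rule, mentioned above, can be written as a one-line weight update: the connection between two neurons strengthens in proportion to their correlated activity. The sketch below is an illustration of that idea with an arbitrary learning rate, not Hebb's own formulation:

```python
# Sketch of Hebbian learning: the connection strength between two
# neurons grows in proportion to their correlated activity.
# delta_w = eta * pre * post  (eta is an illustrative learning rate)

def hebbian_update(w, pre, post, eta=0.1):
    """Return the updated connection weight after one co-activation step."""
    return w + eta * pre * post

w = 0.0
# Repeated co-activation of the pre- and post-synaptic neurons strengthens w
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
# If either neuron is silent (pre or post is 0), the weight is unchanged
```

In this simple form the weight only ever grows, which is one reason later models added normalization or decay terms.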

The birth of Artificial Intelligence (1952-1956)

From 1952 to 1956, AI began as a new area of study. During this time, early leaders started the work that would later turn into a groundbreaking technology field. Here are key events from this time:

  • Year 1952: Arthur Samuel pioneered the creation of the Samuel Checkers-Playing Program, which marked the world's first self-learning program for playing games.
  • Year 1955: Allen Newell and Herbert A. Simon created the first artificial intelligence program, named the "Logic Theorist." It proved 38 of 52 mathematical theorems and found new, more elegant proofs for some of them.
  • Year 1956: The term "Artificial Intelligence" was first adopted by American computer scientist John McCarthy at the Dartmouth Conference, where AI was established as an academic field for the first time.

Around this time, high-level programming languages such as FORTRAN, LISP, and COBOL were invented, and enthusiasm for AI ran very high.


The golden years - Early enthusiasm (1956-1974)

From 1956 to 1974, a time often called the "Golden Age" of artificial intelligence (AI), many amazing things happened. People working in AI were very excited and made big steps forward. Here are some important things that happened:

  • Year 1958: During this period, Frank Rosenblatt introduced the perceptron, one of the early artificial neural networks with the ability to learn from data. This invention laid the foundation for modern neural networks. Simultaneously, John McCarthy developed the Lisp programming language, which swiftly found favor within the AI community, becoming highly popular among developers.
  • Year 1959: Arthur Samuel is credited with introducing the phrase "machine learning" in a pivotal paper in which he proposed that computers could be programmed to surpass their creators in performance. Additionally, Oliver Selfridge made a notable contribution to machine learning with his publication "Pandemonium: A Paradigm for Learning." This work outlined a model capable of self-improvement, enabling it to discover patterns in events more effectively.
  • Year 1964: During his time as a doctoral candidate at MIT, Daniel Bobrow created STUDENT, one of the early programs for natural language processing (NLP), with the specific purpose of solving algebra word problems.
  • Year 1965: The initial expert system, Dendral, was devised by Edward Feigenbaum, Bruce G. Buchanan, Joshua Lederberg, and Carl Djerassi. It aided organic chemists in identifying unfamiliar organic compounds.
  • Year 1966: Researchers emphasized developing algorithms that could solve mathematical problems. Joseph Weizenbaum created the first chatbot in 1966, named ELIZA. Furthermore, the Stanford Research Institute created Shakey, the first mobile intelligent robot to combine AI, computer vision, navigation, and NLP. It can be considered a precursor to today's self-driving cars and drones.
  • Year 1968: Terry Winograd developed SHRDLU, which was the pioneering multimodal AI capable of following user instructions to manipulate and reason within a world of blocks.
  • Year 1969: Arthur Bryson and Yu-Chi Ho outlined a learning algorithm known as backpropagation, which enabled the development of multilayer artificial neural networks. This represented a significant advancement beyond the perceptron and laid the groundwork for deep learning. Additionally, Marvin Minsky and Seymour Papert authored the book "Perceptrons," which elucidated the constraints of basic neural networks. This publication led to a decline in neural network research and a resurgence in symbolic AI research.
  • Year 1972: The first intelligent humanoid robot was built in Japan, which was named WABOT-1.
  • Year 1973: James Lighthill published the report titled "Artificial Intelligence: A General Survey," resulting in a substantial reduction in the British government's backing for AI research.
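Rosenblatt's perceptron (1958, above) learned a linear decision boundary by nudging its weights toward the correct answer after each mistake. The following is a minimal modern sketch of that learning rule, trained on the OR function; the learning rate, epoch count, and data are illustrative, not from Rosenblatt's original implementation:

```python
# Minimal sketch of the perceptron learning rule (after Rosenblatt, 1958).
# Learns the logical OR function from its truth table.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: fire (1) if the weighted sum exceeds zero
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            # Update rule: shift weights toward the correct output
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # OR truth table
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

Because a single perceptron can only draw one line through the input space, it cannot learn XOR, which is exactly the limitation Minsky and Papert highlighted in "Perceptrons" (1969).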

The first AI winter (1974-1980)

The first AI winter, from 1974 to 1980, was a hard time for artificial intelligence (AI). Funding for research dropped a lot, and people were disappointed in AI.

  • The period from 1974 to 1980 was the first AI winter. "AI winter" refers to a period when computer scientists faced a severe shortage of government funding for AI research.
  • During AI winters, public interest in artificial intelligence declined.

A boom of AI (1980-1987)

From 1980 to 1987, AI experienced a revival and fresh energy after the tough period of the First AI Winter. Here's what happened during this time:

  • In 1980, the first national conference of the American Association of Artificial Intelligence was held at Stanford University.
  • Year 1980: After the AI winter, AI returned in the form of expert systems, programs designed to emulate the decision-making ability of a human expert. Additionally, Symbolics Lisp machines entered commercial use, marking the onset of an AI resurgence. In subsequent years, however, the Lisp machine market experienced a significant downturn.
  • Year 1981: Danny Hillis created parallel computers tailored for AI and various computational functions, featuring an architecture akin to contemporary GPUs.
  • Year 1984: Marvin Minsky and Roger Schank introduced the phrase "AI winter" during a gathering of the Association for the Advancement of Artificial Intelligence. They cautioned the business world that exaggerated expectations about AI would result in disillusionment and the eventual downfall of the industry, which indeed occurred three years later.
  • Year 1985: Judea Pearl introduced Bayesian network causal analysis, presenting statistical methods for encoding uncertainty in computer systems.

The second AI winter (1987-1993)

  • The second AI Winter lasted from 1987 to 1993.
  • Once again, investors and governments cut funding for AI research due to high costs and insufficient results, even though some expert systems, such as XCON, had proved cost-effective.

The emergence of intelligent agents (1993-2011)

Between 1993 and 2011, there were big advances in artificial intelligence (AI), especially in creating smart computer programs. In this time, AI experts moved from trying to copy human intelligence to making smart, practical software for special tasks. Here are some important events from this period:

  • Year 1997: IBM's Deep Blue achieved a historic milestone by defeating world chess champion Garry Kasparov, marking the first time a computer triumphed over a reigning world chess champion. Moreover, Sepp Hochreiter and Jürgen Schmidhuber introduced the Long Short-Term Memory (LSTM) recurrent neural network, revolutionizing the ability to process entire sequences of data such as speech or video.
  • Year 2002: For the first time, AI entered the home in the form of Roomba, a robotic vacuum cleaner.
  • Year 2006: AI entered the business world. Companies like Facebook, Twitter, and Netflix started using AI.
  • Year 2009: Rajat Raina, Anand Madhavan, and Andrew Ng published the paper "Large-scale Deep Unsupervised Learning using Graphics Processors," introducing the idea of using GPUs to train large neural networks.
  • Year 2011: Jürgen Schmidhuber, Dan Claudiu Cireșan, Ueli Meier, and Jonathan Masci created the first CNN to attain "superhuman" performance by winning the German Traffic Sign Recognition competition. Furthermore, Apple launched Siri, a voice-activated personal assistant capable of generating responses and executing actions in response to voice commands.

Deep learning, big data and artificial general intelligence (2011-present)

From the year 2011 up to the present day, the field of artificial intelligence (AI) has seen remarkable progress. This progress can largely be credited to the combination of three factors: the advent and advancement of deep learning techniques, the widespread utilization of massive amounts of data, and the unrelenting pursuit of artificial general intelligence (AGI). These have led to significant milestones and breakthroughs in AI. Here are some of the key developments and events that have taken place during this period:

  • Year 2011: IBM's Watson won Jeopardy!, a quiz show in which it had to solve complex questions and riddles. Watson proved that it could understand natural language and answer tricky questions quickly.
  • Year 2012: Google launched an Android app feature, "Google Now", which was able to provide information to the user as a prediction. Further, Geoffrey Hinton, Ilya Sutskever, and Alex Krizhevsky presented a deep CNN structure that emerged victorious in the ImageNet challenge, sparking the proliferation of research and application in the field of deep learning.
  • Year 2013: China's Tianhe-2 system achieved a remarkable feat by doubling the speed of the world's leading supercomputers to reach 33.86 petaflops. It retained its status as the world's fastest system for the third consecutive time. Furthermore, DeepMind unveiled deep reinforcement learning, a CNN that acquired skills through repetitive learning and rewards, ultimately surpassing human experts in playing games. Also, Google researcher Tomas Mikolov and his team introduced Word2vec, a tool designed to automatically discern the semantic connections among words.
  • Year 2014: The chatbot "Eugene Goostman" was claimed to have passed the famous Turing test in a competition. Ian Goodfellow and his team pioneered generative adversarial networks (GANs), a machine learning framework used for producing and altering images and crafting deepfakes, while Diederik Kingma and Max Welling introduced variational autoencoders (VAEs) for generating images, video, and text. Also, Facebook engineered DeepFace, a deep learning facial recognition system capable of identifying human faces in digital images with accuracy nearly comparable to humans.
  • Year 2016: DeepMind's AlphaGo defeated the esteemed Go player Lee Sedol in Seoul, South Korea, prompting comparisons to the Kasparov chess match against Deep Blue nearly two decades earlier. Meanwhile, Uber launched a pilot program for self-driving cars in Pittsburgh for a limited group of users.
  • Year 2018: IBM's "Project Debater" debated complex topics with two master debaters and performed remarkably well. Google also demonstrated "Duplex," an AI virtual assistant that booked a hairdresser appointment over the phone; the person on the other end did not realize she was talking to a machine.
  • Year 2021: OpenAI unveiled the Dall-E multimodal AI system, capable of producing images based on textual prompts.
  • Year 2022: In November, OpenAI launched ChatGPT, offering a chat-oriented interface to its GPT-3.5 LLM.

AI has now developed to a remarkable level. Deep learning, big data, and data science are booming fields, and companies like Google, Facebook, IBM, and Amazon are using AI to create impressive products. The future of artificial intelligence is inspiring, with increasingly capable systems on the horizon.
