Timeline of Artificial Intelligence (AI) — 2022 Update

Let's trace this topic from ancient times to 2022.

Antiquity

Artificial intelligence (AI) began with myths, legends, and stories about artificial beings endowed with intellect or awareness by master craftsmen. Early Greek philosophers later attempted to describe human thinking as the mechanical manipulation of symbols.

Later fiction

Ideas about artificial men and thinking machines appeared in fiction, such as Mary Shelley's Frankenstein and Karel Čapek's R.U.R. (Rossum's Universal Robots); in speculation, such as Samuel Butler's "Darwin among the Machines"; and in accounts of real-world devices, such as Edgar Allan Poe's "Maelzel's Chess Player".

Automata

Artisans from many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen, devised realistic humanoid automata. The earliest known automata were the sacred statues of ancient Egypt and Greece; the faithful believed that artisans had endowed these figures with real minds. During the medieval period, legendary automata were said to answer questions put to them.

Formal reasoning

Artificial intelligence is based on the idea that human thought can be mechanized, and there has been much study of formal, or "mechanical," reasoning. Chinese, Indian, and Greek philosophers developed structured methods of formal deduction in the first millennium BCE. Their ideas were carried forward by thinkers such as Aristotle (who gave a rigorous analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to the word "algorithm"), and European scholastic philosophers such as William of Ockham.

The Spanish philosopher Ramon Llull (1232–1315) built several logical machines devoted to producing knowledge by logical means; he described his devices as mechanical entities that could combine basic, indisputable truths using simple logical operations to generate all possible knowledge. Gottfried Leibniz later revived Llull's ideas.

In the 17th century, Leibniz, Thomas Hobbes, and René Descartes explored the possibility that all rational thought could be reduced to algebra or geometry. Reason, according to Hobbes, is "nothing but reckoning." Leibniz imagined a universal language of reasoning (his characteristica universalis) that would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice for them to take their pencils in hand and say to each other: Let us calculate." These thinkers were the first to articulate the physical symbol system hypothesis, which would eventually become the central belief of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been laid by works such as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead published Principia Mathematica, a formal treatment of the foundations of mathematics, in 1913.

This inspired the question of whether all mathematical reasoning could be formalized, and the answer, supplied by Gödel, Turing, and Church, was unexpected in two respects. First, they proved that there were limits to what mathematical logic could accomplish. Second, and more significantly for AI, their work suggested that, within those limits, any form of mathematical reasoning could be mechanized.

The Turing Test

The Turing test is a long-standing goal of AI research: will we ever be able to build a computer that impersonates a human well enough that a suspicious judge cannot tell the difference? Since its inception it has followed a path similar to much of AI research: at first it appeared difficult yet achievable once hardware technology caught up.

Despite decades of study and significant technological improvements, the Turing test continues to serve as a goal for AI researchers while also revealing how far we are from achieving it.

In 1950, the English mathematician and computer scientist Alan Turing published a paper entitled "Computing Machinery and Intelligence," which kicked off the field that would become known as artificial intelligence, years before John McCarthy coined the term. The article began with a simple question: "Can machines think?" Turing then proposed a method for answering it, which became known as the Turing test. The "Imitation Game" was conceived as a simple test to decide whether machines are thinking: if a computer can be programmed to converse so convincingly that a judge cannot distinguish it from an intelligent human, the argument goes, it has demonstrated that computers can think.
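As a rough sketch of the protocol only (the interfaces below are hypothetical and not taken from Turing's paper), the imitation game can be framed as a judge exchanging text with two hidden respondents and then guessing which one is the machine:

```python
# A rough sketch of the imitation-game protocol behind the Turing test.
# The judge, human, and machine arguments are hypothetical placeholders:
# the judge is assumed to expose ask() and identify_machine(), while the
# human and machine are callables that map a question string to an answer.
import random

def imitation_game(judge, human, machine, rounds=5):
    """Return True if the judge fails to identify the machine after questioning."""
    # Hide the identities behind randomly assigned labels "A" and "B".
    labels = {"A": human, "B": machine}
    if random.random() < 0.5:
        labels = {"A": machine, "B": human}

    transcript = []
    for _ in range(rounds):
        question = judge.ask(transcript)        # judge poses a text question
        for label, respondent in labels.items():
            answer = respondent(question)       # each respondent answers in text
            transcript.append((label, question, answer))

    guess = judge.identify_machine(transcript)  # judge names "A" or "B"
    machine_label = "A" if labels["A"] is machine else "B"
    return guess != machine_label               # the machine "passes" if misidentified
```

In Turing's framing, a machine that is misidentified often enough in sessions like this has, for practical purposes, behaved indistinguishably from a thinking human.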

Although people continue to argue about whether machines can truly think and about whether the test really measures it, it is clear that Alan Turing and his proposed criterion gave the field of AI a powerful and instructive vision. The paper contained Turing's seminal contributions to AI research and paved the way for modern computer science. The Turing test is widely regarded as a landmark of artificial intelligence, and it is likely to remain a goal for many years to come as well as a milestone for tracking the progress of the field as a whole.

Cybernetics and early neural networks

The invention of the computer inspired early investigations into intelligent machines. A confluence of ideas emerged in the late 1930s, 1940s, and early 1950s, drawing on recent work in neuroscience that described the brain as an electrical network of neurons. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital (all-or-nothing) signals. Alan Turing's theory of computation showed that any form of computation could be described digitally. The close links between these ideas suggested that it might be possible to build an electronic brain.

Robots such as W. Grey Walter's turtles and the Johns Hopkins Beast are examples of work in this area. These machines were not driven by computers, digital electronics, or symbolic reasoning; they were controlled entirely by analog circuitry.

In 1943, Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they could perform basic logical operations. They were the first to describe what later researchers would call a neural network. Among those inspired by Pitts and McCulloch was a 24-year-old graduate student named Marvin Minsky, who in 1951 (with Dean Edmonds) built the first neural network machine, the SNARC. For the next 50 years, Minsky would be one of AI's most important leaders and innovators.
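To make the idea concrete, here is a minimal sketch (written in modern Python rather than the original 1943 notation) of a McCulloch-Pitts-style threshold unit computing basic logical operations; the particular weights and thresholds are one conventional choice, not values from the article:

```python
# A minimal McCulloch-Pitts-style threshold neuron: the unit fires (outputs 1)
# when the weighted sum of its binary inputs reaches a threshold.

def threshold_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Both inputs must be active: weights (1, 1), threshold 2.
    return threshold_neuron((a, b), (1, 1), threshold=2)

def OR(a, b):
    # Any active input suffices: weights (1, 1), threshold 1.
    return threshold_neuron((a, b), (1, 1), threshold=1)

def NOT(a):
    # An inhibitory (negative) weight flips the signal: weight -1, threshold 0.
    return threshold_neuron((a,), (-1,), threshold=0)

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}")
    print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```

Networks of such units can be wired together to compute any Boolean function, which is what made the 1943 result so suggestive for later neural network research.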

Game AI

In 1951, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote a chess program for the Ferranti Mark 1 machine at the University of Manchester. Arthur Samuel's checkers program, developed in the mid-1950s and early 1960s, eventually reached amateur-level skill. Game-playing AI would endure throughout the field's history as a measure of progress.

Dartmouth Workshop 1956: the birth of AI

In 1956, the Dartmouth Conference was organized by Marvin Minsky, John McCarthy, and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal asserted that every aspect of learning, or any other feature of intelligence, could be described precisely enough for a machine to simulate it. The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell, and Herbert A. Simon, all of whom would go on to create significant AI projects during the early decades of the field. At the conference, Newell and Simon unveiled the "Logic Theorist," while McCarthy urged the attendees to accept "Artificial Intelligence" as the name of their field. The 1956 Dartmouth Conference was the event that gave AI its name, its mission, and its first success, as well as its key players and defining moments.

Symbolic AI 1956–1974

To most observers, the years following the Dartmouth Workshop were simply "astounding": computers were solving algebra word problems, proving theorems in geometry, and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all. In private and in print, researchers expressed great optimism that a fully intelligent machine would be built in less than 20 years. The new field attracted significant funding from government agencies such as DARPA.

The first AI winter 1974–1980

In the 1970s, AI faced criticism and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, government funding for AI dried up. At the same time, the field of connectionism (or neural networks) lay dormant for ten years after Marvin Minsky's devastating criticism of perceptrons. Despite the public's negative perception of AI in the late 1970s, new ideas were explored in logic programming, commonsense reasoning, and many other areas.

Boom 1980–1987

From the early days of AI, knowledge had been a central concern. Expert systems, a form of AI program, were adopted by corporations around the world in the 1980s, and knowledge became the focus of mainstream AI research. In the same decade, the Japanese government invested heavily in AI through its fifth-generation computer initiative. Another encouraging development was the revival of connectionism in the work of John Hopfield and David Rumelhart in the early 1980s. Once again, AI had achieved success.

The second AI winter 1987–1993

In the 1980s, the business world's enthusiasm for AI followed the classic pattern of an economic bubble. The crash came when commercial vendors proved unable to deliver a wide range of workable solutions. Hundreds of companies folded, and investors stopped backing the field. Many concluded that the technology was not viable, yet research continued to advance, and experts such as Rodney Brooks and Hans Moravec advocated a radically new kind of AI.

AI 1993–2011

The field of artificial intelligence, by then more than half a century old, finally achieved some of its oldest objectives. AI began to be used successfully throughout the technology sector, albeit somewhat quietly. Some of this success was due to increased computing power, and some was achieved by focusing on specific, isolated problems and pursuing them with the highest standards of scientific accountability. Even so, AI's reputation in the business world was less than stellar, and within the field there was little agreement on why AI had failed to deliver on the 1960s promise of human-level intelligence. AI splintered into a number of distinct subfields, each focused on a particular problem or approach, while still giving the impression of working toward the same goal.

The "victory of the neats"

Artificial intelligence researchers began to create and utilize sophisticated mathematical approaches at a greater rate than they ever had before. Many of the issues that AI needed to tackle were already being addressed by academics in fields like mathematics, electrical engineering, economics, and operations research. The shared mathematical language allowed for more collaboration between diverse fields and the accomplishment of measurable and verifiable results; AI had now become a more serious "scientific" discipline, according to Russell & Norvig (2003).

Probability and decision theory were brought into AI by Judea Pearl's influential 1988 work on probabilistic reasoning. Bayesian networks, hidden Markov models, information theory, stochastic modeling, and classical optimization are just a few of the many techniques that followed. Mathematical descriptions were also developed for "computational intelligence" paradigms such as neural networks and evolutionary algorithms.
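As a toy illustration of the probabilistic style of reasoning this shift introduced (the numbers below are invented for the example and are not from the article), here is a single Bayesian update computed with Bayes' rule:

```python
# Toy illustration of Bayesian updating, the kind of probabilistic reasoning
# that became standard in AI after Pearl's work. All numbers are made up.

def posterior(prior, likelihood, false_alarm_rate):
    """P(hypothesis | positive evidence) via Bayes' rule for a binary hypothesis."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical diagnostic setting: 1% base rate, 90% sensitivity,
# 5% false-positive rate.
print(posterior(prior=0.01, likelihood=0.90, false_alarm_rate=0.05))
# ~0.154: a positive result raises the probability from 1% to about 15%.
```

The point of the example is that conclusions are weighted by evidence rather than asserted as certain, which is exactly the shift away from purely symbolic rules that this period brought about.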

Predictions (or "Where is HAL 9000?")

In 1968, Arthur C. Clarke and Stanley Kubrick predicted that by 2001 a machine would exist with an intellect comparable to, or exceeding, that of human beings. HAL 9000, the AI character they created, reflected the belief of many leading AI experts that such a machine would be built by 2001.

By 2016, the market for AI-related goods, hardware, and software had grown to more than $8 billion, and interest in AI had reached "mania" levels. The applications of big data began to extend beyond statistics; for example, big data was used to train models in ecology and in a variety of economic applications. Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) fueled progress and research in image and video processing, text analysis, and speech recognition.

Big Data

Big data is a term for quantities of data so large that they exceed the capabilities of typical application software. Handling data at this scale requires entirely new processing models to support decision-making, insight, and process optimization. Writing about the era of big data, Viktor Mayer-Schönberger and Kenneth Cukier define it as using all of the available data for analysis rather than random sampling (sample surveys).

Five characteristics are commonly used to describe big data: Volume, Velocity, Variety, Value, and Veracity (a set of "V"s proposed by IBM). The significance of big data technology is not mastering enormous volumes of information but extracting the parts that matter. To put it another way, if big data is likened to the economy, the key to profitability is improving the "process capability" of the data and turning it into "value added."

Artificial general intelligence

The ability to solve any problem, rather than just one specific problem, is known as general intelligence. Artificial general intelligence (or "AGI") refers to software that can apply intelligence to a wide variety of problems in much the same way that humans can.

In the early 2000s, AI researchers argued that the field had largely abandoned its original objective of creating artificial general intelligence. By 2010, AGI research had been established as a separate sub-discipline, with dedicated academic conferences, laboratories, and university courses, as well as private consortia and new companies.

Artificial general intelligence is also known as "strong AI" or "full AI," in contrast to "weak AI" or "narrow AI," which is limited to specific tasks.

AI in 2022

Artificial intelligence (AI) has become a business and organizational reality for numerous sectors. Even if the benefits of AI aren't always readily apparent, it has shown itself capable of improving process efficiency, decreasing errors and labor, and extracting insights from big data.

People are already discussing which AI-powered trend will be the next big thing. Here is a collection of the most intriguing AI trends to anticipate in 2022:

  • ROI-driven AI implementation;
  • Video analytics;
  • The ‘as a service’ business model;
  • Improved cybersecurity;
  • AI in the metaverse;
  • Data fabric;
  • AI and ML with the Internet of Things (IoT);
  • AI-led hyperautomation.

Conclusion

Artificial intelligence is having a huge impact on the future of every sector of science, the economy, and manufacturing, and on every person. From the very beginning it has contributed to the development of innovative technologies such as big data, robotics, and the Internet of Things, and it will continue to do so.
