How artificial intelligence came about


More than 60 years ago, humans decided to harness the logic of machines to aid their own development. See below how artificial intelligence came about, what its first applications were, and how much it has developed over the decades. It originated from the need to speed up well-known processes, freeing humans to focus on more abstract thinking.

How is the area defined?

Artificial intelligence (AI) is a discipline just over sixty years old: a set of sciences, theories, and techniques (including mathematical logic, statistics, probability, computational neurobiology, and computer science) that aims to imitate the cognitive abilities of a human being.

Begun at the height of World War II, its developments are closely linked to those of computing and have led computers to perform increasingly complex tasks that previously could only be done by a human being.

Image credits: almabetter.com

The beginnings (1940 – 1960)

It is impossible to separate the origin of artificial intelligence from the evolution of computing. On that basis, we cannot fail to mention Alan Turing, the great father of computing, who created the machine that helped the Allies win the war faster.

Speaking of this computing genius: Turing proposed the famous Turing test, which strongly inspired the concepts that would later give rise to artificial intelligence. Today, many experts believe the test is not a good measure of artificial intelligence so much as an efficient gauge of chatbot performance.

The period between 1940 and 1960 was marked by technological development — with the Second World War acting as an accelerator — and the desire to understand how to bring the functioning of machines and organic beings closer together.

The term “AI” can be attributed to John McCarthy of MIT (Massachusetts Institute of Technology). He defined it as the construction of computer programs that engage in tasks performed more satisfactorily by human beings because they require high-level mental processes, such as perceptual learning, memory organization, and critical reasoning.

Around 1960, artificial intelligence research cooled due to the technical limitations of the time, such as the lack of computer memory.

Technological limitations held back artificial intelligence in the first Era

The second era (1972 – 1997)

During this period, the technical limitations of computers were partially overcome, with increased memory. Art and cinema helped revive interest in the technology: individuals who had encountered these concepts in their youth later unleashed their creativity on the field.

In the technical arena, it was in fact microprocessors that made the idea viable again. Even so, little tangible, widely shared progress was made; developments remained restricted to researchers.

Microprocessors were the key to the second Era (Image: Vishnu Mohanan/Unsplash)

A first major step was taken at Stanford University in 1972 with MYCIN, a system for diagnosing blood diseases and recommending drugs. It was based on an “inference engine” programmed to be a logical mirror of human reasoning: given input data, the engine provided answers with a high level of expertise.
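The idea behind such an inference engine can be sketched in a few lines: explicit if-then rules are applied repeatedly until no new conclusions can be drawn. The rules and facts below are invented for illustration and do not reflect MYCIN’s actual medical knowledge base.

```python
# A toy forward-chaining "inference engine" in the spirit of expert systems
# like MYCIN. Each rule maps a set of required facts to one conclusion.
# All rule names and facts here are hypothetical examples.
RULES = [
    ({"gram_negative", "rod_shaped"}, "likely_e_coli"),
    ({"likely_e_coli", "urinary_infection"}, "suggest_antibiotic_x"),
]

def infer(facts):
    """Apply the rules repeatedly until no new conclusion is produced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            # Fire the rule if all its conditions are known facts.
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"gram_negative", "rod_shaped", "urinary_infection"}))
```

Note that every rule here was written by a human expert; the program only chains them together, which is exactly the approach the inductive turn of the 2010s would later replace.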

In 1997, IBM’s Deep Blue computer defeated chess champion Garry Kasparov. Even so, the machine was specialized in a limited universe; it did not have the capacity to model and reason about the world at large.

Current artificial intelligence (2010 – present)

Two main factors triggered this new era. The first was access to large volumes of data. The second factor was the discovery of the very high efficiency of computer graphics card processors in accelerating the calculation of learning algorithms.

Outstanding feats

  • In 2012, Google X (Google’s research lab) managed to get an AI to recognize cats in videos — a machine learning to distinguish something on its own;
  • In 2016, AlphaGo (Google’s AI specialized in the game of Go) beat the European champion (Fan Hui) and the world champion (Lee Sedol). Go has vastly more possible positions than the chess played by Deep Blue.

How was this possible? Through a complete paradigm shift away from expert systems. The approach became inductive: instead of hand-coding rules, computers are allowed to discover the rules themselves, through correlation and classification over large amounts of data.
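The contrast between the two paradigms can be sketched with toy data (the numbers and labels below are invented for illustration, and a simple nearest-neighbor classifier stands in for a learning algorithm):

```python
# Deductive / expert-system style: a human writes the rule explicitly.
def rule_based(x):
    return "positive" if x > 5 else "negative"

# Inductive / machine-learning style: the "rule" is never written down.
# Instead, the program infers an answer from labeled examples.
def nearest_neighbor_classify(x, examples):
    # Pick the label of the closest known example (1-nearest-neighbor).
    return min(examples, key=lambda pair: abs(pair[0] - x))[1]

examples = [(1, "negative"), (2, "negative"), (8, "positive"), (9, "positive")]

print(rule_based(7))                           # rule written by hand
print(nearest_neighbor_classify(7, examples))  # rule inferred from data
```

Both calls return the same answer here, but only the second approach scales to problems like recognizing cats in videos, where no human could write the rules by hand.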

Suddenly, the vast majority of research teams turned to this technology, to great benefit.

This type of learning has also enabled considerable advances in text recognition, but there is still a long way to go before true text-understanding systems exist: AI is still unable to fully grasp context or analyze the intentions behind certain forms of writing.

With information: Live Science, COE.
