History of artificial intelligence

Artificial intelligence: birth, applications and future trends


Artificial intelligence, for those who do not use it on a daily basis, seems like a concept typical of great film productions or science fiction books. But the truth is that it is a set of almost century-old concepts that are increasingly present and to which we resort, often without realising it. Find out what artificial intelligence is, what it is for, what its risks and challenges are and what we expect from it in the future.

Artificial intelligence is not about creating new knowledge, but about collecting and processing data to make the best use of it for decision making.

Artificial intelligence has become a broad and revolutionary tool with countless applications in our daily lives. Capable of powering robots that respond in human-like ways and of answering voice requests on mobiles and smart speakers, artificial intelligence has attracted the attention of Information and Communications Technology (ICT) companies around the world and is considered the Fourth Technological Revolution, following the proliferation of mobile and cloud platforms. Despite the innovation it brings to our lives, its history is a long process of technological advancement.

Definition and Origins of Artificial Intelligence

When we speak of "intelligence" in a technological context we often refer to the ability of a system to use available information, learn from it, make decisions and adapt to new situations. It implies an ability to solve problems efficiently, according to existing circumstances and constraints. The term "artificial" means that the intelligence in question is not inherent in living beings, but is created through the programming and design of computer systems.

As a result, the concept of "artificial intelligence" (AI) refers to the simulation of human intelligence processes by machines and software. These systems are developed to perform tasks that, if performed by humans, would require the use of intelligence, such as learning, decision-making, pattern recognition and problem solving. For example, managing huge amounts of statistical data, detecting trends and making recommendations based on them, or even carrying them out.

Today, AI is not about creating new knowledge, but about collecting and processing data to make the most of it for decision making. It rests on three basic pillars:

  • Data. This is the information collected and organised on which we want to automate tasks. It can be numbers, texts, images, etc.

  • Hardware. This is the computing power that allows us to process data fast and accurately enough to make the software possible.

  • Software. It consists of a set of instructions and calculations that allow training systems that receive data, establish patterns and can generate new information.

But what are AI algorithms? This is the name given to the rules that provide the instructions for the machine. The main AI algorithms fall into two families: those based on logic, modelled on the rational principles of human thought, and those based on intuition (deep learning), which mimic the way the human brain works so that the machine learns as a person would.

How was artificial intelligence born?

The idea of creating machines that mimic human intelligence was present even in ancient times, with myths and legends about automatons and thinking machines. However, it was not until the mid-20th century that their true potential was investigated, after the first electronic computers were developed. 

In 1943 Warren McCulloch and Walter Pitts presented their model of artificial neurons, considered the first artificial intelligence, even though the term did not yet exist. Later, in 1950, the British mathematician Alan Turing published an article entitled "Computing Machinery and Intelligence" in the journal Mind, where he asked the question: can machines think? He proposed an experiment that came to be known as the Turing Test, which, according to the author, would make it possible to determine whether a machine could exhibit intelligent behaviour similar to, or indistinguishable from, that of a human being.
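The McCulloch-Pitts neuron is simple enough to sketch in a few lines of code. The Python snippet below is purely illustrative (the function names are ours, not from any library): a binary neuron fires when the weighted sum of its inputs reaches a threshold, and with suitable weights and thresholds a single neuron can reproduce the AND and OR logic gates.

```python
# A minimal sketch of a McCulloch-Pitts neuron: inputs are binary (0 or 1),
# each input has a weight, and the neuron "fires" (outputs 1) only when
# the weighted sum of its inputs reaches a threshold.
def mcculloch_pitts_neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# With suitable weights and thresholds, a single neuron acts as a
# logic gate -- here, AND and OR over two binary inputs.
def AND(a, b):
    return mcculloch_pitts_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mcculloch_pitts_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # -> 1 0
print(OR(0, 1), OR(0, 0))    # -> 1 0
```

Networks of such threshold units, McCulloch and Pitts argued, could in principle compute any logical function, which is why the model is seen as a precursor of today's artificial neural networks.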

John McCarthy coined the term "artificial intelligence" in 1956 and drove the development of the first AI programming language, LISP, in the 1960s. Early AI systems were rule-based, which led to the development of more complex systems in the 1970s and 1980s, along with a boost in funding. Now, AI is experiencing a renaissance thanks to advances in algorithms, hardware and machine learning techniques.

As early as the 1990s, advances in computing power and the availability of large amounts of data enabled researchers to evolve learning algorithms and lay the foundations for today's AI. In recent years, this technology has seen exponential growth, driven in large part by the development of deep learning, which harnesses layered artificial neural networks to process and interpret complex data structures. This development has revolutionised AI applications, including image and speech recognition, natural language processing and autonomous systems.
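The "layered" processing that deep learning relies on can be illustrated with a short sketch. The snippet below (plain NumPy, with random untrained weights; all names are ours and purely illustrative) passes an input vector through three successive layers, each applying a linear transformation followed by a non-linear activation.

```python
import numpy as np

# Illustrative sketch of layered processing in a neural network: data
# flows through successive layers, each applying a weighted linear
# transformation followed by a non-linear activation (here, ReLU).
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def forward(x, layers):
    """Propagate an input vector through a list of (weights, bias) layers."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)
    return x

# A tiny untrained 3-layer network: 4 inputs -> 8 -> 8 -> 2 outputs.
layers = [
    (rng.standard_normal((8, 4)), rng.standard_normal(8)),
    (rng.standard_normal((8, 8)), rng.standard_normal(8)),
    (rng.standard_normal((2, 8)), rng.standard_normal(2)),
]
output = forward(rng.standard_normal(4), layers)
print(output.shape)  # -> (2,)
```

In a real system the weights would be learned from data rather than drawn at random, and modern networks stack many more layers, which is what allows them to interpret complex structures such as images and speech.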

History of artificial intelligence

  • 1943: American researchers Warren McCulloch and Walter Pitts present their model of artificial neurons, considered the first artificial intelligence.

  • 1950: British mathematician Alan Turing proposes a test for machine intelligence: if a machine can fool humans into thinking it is human, then it has intelligence.

  • 1956: American computer scientist John McCarthy coins the term "artificial intelligence" to describe "the science and engineering of making machines intelligent".

  • 1961: General Motors installs the first industrial robot, Unimate, to replace humans in assembly tasks.

  • 1964: Joseph Weizenbaum develops the first natural language processing computer program, ELIZA, which simulates human conversation.

  • 1966: Shakey, the first general-purpose mobile robot capable of reasoning about its own actions, is launched. It is considered a forerunner of autonomous cars.

  • 1997: IBM's Deep Blue supercomputer beats world chess champion Garry Kasparov in a six-game match.

  • 2002: Roomba, the first mass-produced robotic hoover sold by iRobot, is launched, capable of roaming and cleaning the home.

  • 2011: Apple integrates Siri, a virtual assistant with a voice interface, into its iPhone 4S. The same year, IBM's Watson system, capable of answering questions posed in natural language, wins first prize in the popular US TV quiz show Jeopardy!

  • 2014: The Eugene Goostman computer program passes the Turing Test by convincing a third of the judges participating in the experiment that it was a human being.

  • 2016: DeepMind's AlphaGo programme, based on a deep neural network, beats Lee Sedol, the world Go champion, in a five-game match.

  • 2022: OpenAI launches ChatGPT, an artificial intelligence chatbot trained to hold conversations and answer questions, to the public.

 SEE INFOGRAPHIC: History of artificial intelligence [PDF]

Functions and purposes of artificial intelligence 

Artificial intelligence plays a key role in the digital and sustainable transformation of various sectors. Not only does it foster an increasingly advanced digital landscape, it is also one of the most impactful sustainable technologies: it enables organisations and companies to reduce the equipment, resources and materials they need. It delivers higher productivity with less, providing a digital and sustainable foundation for any company.

AI has practical applications in a wide variety of sectors, driving efficiency, innovation and decision making. Some of these areas are:


 Healthcare

There are now chatbots that ask patients about their symptoms to make a pattern-based diagnosis. In addition, AI is used to develop personalised treatments based on genetic and clinical data.


 Finance

Smart technology enables the assessment of risks and opportunities, improving investment and lending decisions as well as providing personalised financial advice through virtual assistants.


 Education

There are educational platforms that use AI to adapt learning content to the individual needs of students. It can also simplify administrative tasks, such as automatically marking exams.


 Agriculture

AI analyses agricultural data to enable precision agriculture, optimising resource use, improving productivity and reducing environmental impact. In addition, AI-based drones and sensors can monitor the state of crops and help in the early detection of diseases.


 Energy

Artificial intelligence applications optimise energy distribution, improving the efficiency and reliability of power grids, and allow equipment failures to be predicted, reducing downtime and maintenance costs.

 Logistics and transport

AI plays a key role in the development of autonomous vehicles, as well as in the optimisation of delivery and transport routes, reducing costs and emissions.


 Retail

Some applications of artificial intelligence make it possible to make sales forecasts and choose the right product to recommend to each customer.

Risks and challenges associated with artificial intelligence 

Advances in artificial intelligence have led to the transformation of various areas and sectors, but have also raised concerns about possible risks or challenges that may arise in its development. Here are some examples: 

 Biases and algorithmic discrimination

AI relies on algorithms and data to make decisions, but these can be biased and perpetuate injustices. They can reflect and amplify existing biases, which could lead to discriminatory decisions.

 Violation of privacy

The collection and analysis of large amounts of data to feed AI algorithms can raise concerns about the privacy of people's information if not handled properly. Data breaches may even encourage the proliferation of potential cyber attacks. 

 Labour displacement

AI and automation may also risk displacing millions of workers from their jobs. Repetitive and routine tasks can easily be taken over by advanced AI systems, which could lead to mass unemployment in some sectors, posing economic and social challenges.


 Malicious use

AI can also be used for malicious purposes, such as developing more sophisticated cyber attacks that are harder for victims to recognise.

 Ethics and legal responsibility

Algorithm-driven decisions can raise ethical questions, especially when dealing with critical situations such as healthcare, justice and security. This is compounded by the difficulty of determining legal liability in case of incorrect decisions or detrimental actions of AI systems. 

 Superintelligence and control

Some experts have raised concerns about a possible risk associated with the development of super-intelligent AI in the long term. The main fear is that if we were to create an AI with higher-than-human intelligence, it could become autonomous and surpass our ability to control it. 

Looking to the future: AI trends and projections

AI trends and projections cover a wide range of areas and have great potential to influence many sectors significantly. Looking to the future of this technology means rethinking how we interact with it and how we address complex problems. One of the central focuses in the coming years will be the development of techniques to understand and explain the decisions made by algorithms. The ethics of AI, currently at the centre of the debate, will also remain of great importance, with the growing adoption of practices that ensure fairness and transparency in the development and deployment of systems.

In addition, AI is expected to become increasingly specialised, with systems designed for ever more specific tasks in sectors such as health, education and even agriculture. All of these developments will be further enhanced by the convergence of AI with emerging technologies such as quantum computing and robotics, which will expand its capabilities and applications across industries.