From the mythical stories of automatons in ancient cultures to the complex artificial intelligence we all know today, humankind's journey to impart a kind of "intelligence" to machines has been a long road full of ups and downs.
Artificial intelligence is one of the most exciting and promising fields of modern technology. But when was AI really born? To understand that, you need to explore the nature of artificial intelligence and the roots of the concept.
The idea of machines that can think and act like humans stretches back long before modern technology made AI possible. One of the first to explore this idea in practice was the Greek engineer and mathematician Heron of Alexandria, who lived in the 1st century AD.
Heron created automata that resembled human and animal figures and were capable of performing simple tasks such as opening doors or playing a flute. Although these automata were far from modern AI, they represent an early step towards building intelligent machines.
OK, but what exactly is artificial intelligence?
Artificial intelligence (AI) is the field of computer science focused on creating systems and machines capable of performing tasks that would require intelligence and reasoning if performed by humans.
Unlike traditional programming, where specific rules are defined for computers to perform tasks, AI is based on the idea that machines can learn and make decisions on their own through data collection and analysis.
One of the fundamental characteristics of artificial intelligence is its ability to learn and improve with experience. This is achieved through algorithms and mathematical models that allow machines to recognize patterns in data and adjust their behavior based on those patterns.
As AI systems are fed more information and data, they become more accurate and effective at performing specific tasks.
AI is divided into many branches and approaches, each with its own applications and challenges. For example, machine learning focuses on developing algorithms that allow machines to improve their performance on a task as they are given more data.
Deep learning is a subset of machine learning based on artificial neural networks, and it has proven especially useful in tasks such as speech recognition and computer vision.
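The "learning from experience" idea described above can be sketched with a toy example. The code below is a minimal illustration, not any particular library's method: a model with a single adjustable weight is repeatedly corrected so its predictions match example data, and the hidden pattern (here, invented numbers where y is roughly three times x) emerges from the data itself rather than from hand-written rules.

```python
# Toy sketch of "learning from data": fit y = w * x to examples by
# nudging the weight w whenever a prediction is wrong.
# The data points are made up purely for this demonstration.

def train(examples, epochs=200, learning_rate=0.01):
    """Learn the weight w in y = w * x via simple gradient descent."""
    w = 0.0  # start with no knowledge of the pattern
    for _ in range(epochs):
        for x, y in examples:
            prediction = w * x
            error = prediction - y          # how wrong was the guess?
            w -= learning_rate * error * x  # nudge w to reduce the error
    return w

# Hidden pattern in the (x, y) pairs: y is roughly 3 times x.
data = [(1, 3.0), (2, 6.1), (3, 8.9), (4, 12.2)]
w = train(data)
print(round(w, 1))  # the learned weight approaches 3.0
```

The program is never told the rule "multiply by three"; it recovers it from examples, which is the essential difference from traditional rule-based programming described earlier.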
These are the true origins of a technology that promises to change the world: artificial intelligence.
The field's formal history began in the 20th century. One of the most influential moments was the publication of the paper "Computing Machinery and Intelligence" by the British mathematician and logician Alan Turing in 1950.
Years earlier, in his 1936 work, Turing had proposed the idea of a universal machine that could simulate any computational process a human could carry out. This machine, now known as the Turing machine, laid the theoretical foundation for modern computing and AI.
In the 1950 paper, Turing introduced the famous Turing Test, which asks whether a machine can imitate the intelligent behavior of a human indistinguishably. This test became a key benchmark in the quest for AI.
Moving beyond Alan Turing's pioneering theories, it was not until 1956 that we can speak of the birth of artificial intelligence as a real, concrete field.
At the Dartmouth conference that year, eminent scientists including John McCarthy, Marvin Minsky, and Claude Shannon officially named the emerging field, which McCarthy defined as "the science and engineering of making intelligent machines, especially intelligent computer programs."
Back then, these scientific visionaries envisioned a future in which society would be surrounded by intelligent machines in less than a decade.
However, reality fell far short of this prediction: artificial intelligence did not approach its full potential until the 1990s, when its golden age truly began, and the field's first steps were rather tentative.
Since the 1990s, large tech companies have started making massive investments in the field of artificial intelligence.
What explains this change of direction? The answer lies in the imminent arrival of the digital world: a scenario in which organizations realized the pressing need to improve their ability to process and analyze the vast amounts of data to come.
One of the tech giants, IBM, took a major step in 1997 by introducing the world to Deep Blue, a supercomputer capable of defeating world chess champion Garry Kasparov.
This historic milestone had such a profound impact that it became the subject of films exploring the technological future, in which artificial intelligence featured prominently. The same decade also saw the emergence of intelligent agents, laying the groundwork for sophisticated chatbots and the virtual assistants we now know.
Later, another IBM supercomputer, Watson, achieved another milestone by winning the popular quiz show Jeopardy!. “The IBM Watson computer has won its battle against the human brain,” exclaimed the host of the American television show.
These were the years when IBM and Microsoft led the way with significant investments in artificial intelligence innovation. Now, in a world where artificial intelligence has become entrenched in many people's lives and has replaced humans in various tasks, there is a need to regulate the ethics of the sector.
All companies recognize the importance of this technology, but it remains to be seen how far artificial intelligence will take humanity in the future.