Artificial Intelligence – A Product Identical to Nature? Part I

Despite the fact that AI and neural networks are mentioned in almost every product around us, from smart light bulbs to large services, there is a deep disdain for the terms AI and, in particular, neural networks. They are called "artificial idiots," people say that they are not real AI yet and the real thing will come sometime later, and neural networks are traditionally compared to just a bunch of "IF" blocks. But how fair is such a remark? Do neural networks and AI really not deserve their names? To answer this question, it is worth going through the history of the term and the tool, starting with the description of the nervous systems of biological creatures and arriving at modern computers, to understand how it all began and what it led to.

The beginning of the story

The history of artificial intelligence and neural networks begins with the study of biological neurons. In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts published a paper in which they proposed a mathematical model of a neuron based on observations of the nervous systems of biological beings. Their model included a mathematical description of the process of excitation of a neuron and the transmission of an impulse to other neurons. This marked the beginning of the study of artificial neural networks.

Warren McCulloch

In the 1950s, John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon built on McCulloch and Pitts's ideas to propose the concept of "artificial intelligence." They sought to create machines capable of performing tasks that require intelligence, such as learning, pattern recognition, decision making, and problem solving. In 1956, at the Dartmouth conference, McCarthy and his colleagues formally coined the term "artificial intelligence," setting the stage for intensive research in the field.

In the 1960s and 1970s, researchers focused on creating expert systems, programs capable of solving highly specialized problems. These systems used knowledge bases and logical inference rules, but their use was limited by the complexity of creating and maintaining them. At the same time, the first machine learning algorithms based on statistical methods and probability theory appeared.

A significant breakthrough occurred in the 1980s with the development of computer vision and natural language processing technologies. The first neural networks capable of learning from large amounts of data were created. One of the key achievements of this period was the backpropagation method proposed by David Rumelhart, Geoffrey Hinton, and Ronald Williams in 1986. This method significantly improved the training of multilayer neural networks and opened up new possibilities for their application.

David Rumelhart

In the 1990s and 2000s, advances in data processing technologies and growing computing power led to the emergence of more complex and powerful neural network architectures. Models such as convolutional neural networks (CNN) and recurrent neural networks (RNN) appeared, which became the basis for modern computer vision, speech recognition, and text processing systems.

The modern stage of development of neural networks and artificial intelligence began in the 2010s with the advent of deep learning. Deep neural networks, which consist of many layers, have achieved unprecedented results in a variety of tasks, such as playing Go (not to be confused with Golang that DevOps and SRE engineers "play" with; here Go is a board game), text translation, and image generation. One of the key events was the creation of AlphaGo by DeepMind, which in 2016 defeated the world champion in Go, demonstrating the power of modern neural network technologies.

Go board.

Today, neural networks and AI are used in a variety of areas, from medical diagnostics to autonomous vehicles. Neural network models are used to analyze big data, forecast, optimize processes, and create new products and services. AI has become an integral part of our lives, and its development continues at a tremendous speed.

Reasoning

Despite the impressive success of neural networks in various tasks, the opinion persists that modern neural networks and AI do not faithfully reflect real biological models and contain simplifications and errors, and therefore cannot be called AI at all, being only a pale parody of the biological original. However, this remark applies not only to AI but to any mathematical, physical, or social model. Let's consider several examples.

  1. Floating-point numbers in computers. The IEEE 754 standard inevitably introduces rounding error into the way floating-point numbers are stored, but that doesn't stop us from using these numbers in millions of applications, from financial calculations to scientific research.

  2. Integrals and area calculations. For most functions we cannot compute the area under a curve exactly, but numerical integration techniques let us approach the true value arbitrarily closely and obtain results accurate enough for practical use.

  3. Astrophysics. We still face the cosmological constant problem, where theoretical predictions diverge from observations by many orders of magnitude. Yet this does not prevent us from using our understanding of gravity and other laws to predict the motion of planets and plan space missions.

  4. And finally, language. Written language, used in this article, evolved to record speech, but it cannot fully capture the emotion, intonation, and pacing conveyed by the voice. Nevertheless, it remains the primary means of communication and knowledge transfer between people.
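The floating-point example above is easy to see firsthand. A minimal sketch in Python (whose `float` is an IEEE 754 double): the decimals 0.1 and 0.2 have no exact binary representation, so their sum is not exactly 0.3, yet the error is tiny and perfectly manageable.

```python
from decimal import Decimal

# IEEE 754 rounding error: 0.1 and 0.2 are stored approximately,
# so their sum differs from 0.3 by a tiny amount.
a = 0.1 + 0.2
print(a)            # not exactly 0.3
print(a == 0.3)     # False
print(abs(a - 0.3)) # on the order of 1e-17, negligible for most uses

# Where exact decimal arithmetic matters (e.g. money), the standard
# library's Decimal type avoids binary rounding entirely.
b = Decimal("0.1") + Decimal("0.2")
print(b == Decimal("0.3"))  # True
```

The point mirrors the argument in the list: the model of real numbers is imperfect, but knowing the size of the error makes it entirely usable.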
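The integration example can likewise be made concrete. A minimal sketch, using the classic trapezoidal rule (my choice for illustration; the article names no specific method) to approximate the integral of x² on [0, 1], whose exact value is 1/3: the more trapezoids we use, the closer we get.

```python
def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] using n trapezoids."""
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))          # endpoints get half weight
    for i in range(1, n):
        total += f(a + i * h)            # interior points get full weight
    return total * h

exact = 1 / 3                            # integral of x^2 from 0 to 1
for n in (10, 100, 1000):
    approx = trapezoid(lambda x: x * x, 0.0, 1.0, n)
    # error shrinks roughly as 1/n^2 for the trapezoidal rule
    print(n, approx, abs(approx - exact))
```

We never reach the exact value, but we can make the error as small as the task requires, which is precisely the article's point about useful imperfect models.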

Thus, the claim that AI and neural networks do not deserve to be called such because of their limitations seems unfounded. All models used in science and technology contain simplifications and errors, but this does not diminish their usefulness and significance.

Conclusion

AI and neural network technologies continue to develop and improve, and they are already widely used in a variety of fields. If we deny neural networks and AI the right to be compared with biological intelligence because of their simplifications and errors, then in a similar way we can deprive many other systems, theories and models used in science and technology of the right to exist.

Of course, there are many memes and common misconceptions about AI and neural networks, but the reality is much more complex and interesting. Artificial intelligence is not just a model that tries to reproduce the properties of the organic nervous system, it is a powerful tool that helps us solve complex problems and open up new horizons.

In the following articles of the series we will delve deeper into neural networks, comparing them in more detail with their biological analogues and examining the mathematical apparatus used to describe them. We will also look at the current progress in precisely replicating the nervous systems of living organisms in silicon, and, conversely, at using living cells as hardware accelerators for computer calculations.
If you are interested in starting independent work with neural networks for your business, whether training or launching them, we at ITGLOBAL.COM can offer the service of our cloud server with GPU – AI Cloud.

This article is supported by the ITGLOBAL.COM team

We are the first cloud provider in Russia, as well as an integrator, a supplier of IT services and products, and a developer of our own software.

• Our website
• Our blog about virtualization and Enterprise IT
• Our YouTube channel
• Success stories of our clients
