How we went from the belief “Artificial intelligence is impossible” to robots we trust with our lives

In the mid-1930s, a teenager from Detroit, a child of the Great Depression, came across a volume of Principia Mathematica, a landmark early-20th-century work on the foundations of mathematics. The 12-year-old read the book in three days, found several points he considered questionable, and soon wrote a letter to one of the authors – the 63-year-old British philosopher Bertrand Russell. Amazed by the boy's abilities, Russell invited him to come study with him in Britain, but because of his young age the boy declined. Three years later, the teenager ran away from home to attend Russell's lectures at the University of Chicago.

Walter Pitts

Hello! My name is Vladimir Manerov; I am the executive director of TEAMLY and head of the platform development department. In my last article I talked about the near future of AI, and today I decided to tell the history of artificial intelligence in broad strokes.

That teenager's name was Walter Pitts, and eight years later he would publish the first paper describing a computational model of the neuron. It would become foundational to the development of artificial intelligence even before the first computers appeared.

The revolutionary idea of Walter Pitts and his colleague Warren McCulloch was to treat the brain as a computer. This sparked enormous interest, which later grew into cybernetics. And while the mathematicians did the science, the science fiction writers of the era enthusiastically wrote their stories. That is how the idea of artificial intelligence spread beyond the universities and was picked up by society. And, as always, there were skeptics, insisting it could never happen that robots would replace humans, think, and propose solutions. Well, 80 years later, robots are doing exactly that. Not yet in every area, but the trend has emerged very clearly.
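To make the idea concrete, here is a minimal sketch of a McCulloch-Pitts style threshold neuron (my own illustration in Python, not the formulas from the 1943 paper): it sums binary inputs and "fires" when the sum reaches a threshold.

```python
# A minimal McCulloch-Pitts style threshold neuron (illustrative sketch).
# Inputs and output are binary; the neuron "fires" (returns 1) when the
# weighted sum of its inputs reaches the threshold.

def mcculloch_pitts_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With unit weights and a threshold of 2, the neuron computes logical AND:
print(mcculloch_pitts_neuron([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=2))  # 0

# With a threshold of 1, the same neuron computes logical OR:
print(mcculloch_pitts_neuron([1, 0], [1, 1], threshold=1))  # 1
```

From such units, McCulloch and Pitts showed, one can assemble any logical circuit – the core of the "brain as computer" idea.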

What has been going on in the science of artificial intelligence all this time?

Birth of science

In 1948, two landmark theories appeared almost simultaneously:

  • Norbert Wiener's “Cybernetics” – the science of the laws of control processes and information transfer in machines, living organisms, and society.

  • Claude Shannon's information theory – the study of how to measure information, its properties, and the limits of data transmission systems.

The key ideas behind these new theories:

  1. Complex systems consist of a hierarchy of self-regulating elements.

  2. Each individual element has a control mechanism that adjusts its output to counteract deviation from a target value (negative feedback); a minimal sketch follows this list.
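As a toy illustration of the second point (my own sketch, not an example from Wiener's book): a controller that corrects its output in the direction opposite to the deviation from the target, so the system settles near the setpoint.

```python
# Toy negative-feedback loop (illustrative sketch): the correction is
# proportional to the deviation from the target and applied with the
# opposite sign, so the value converges toward the setpoint.

def regulate(value, target, gain=0.5, steps=10):
    for step in range(steps):
        error = value - target          # deviation from the target
        value -= gain * error           # correction opposes the deviation
        print(f"step {step}: value = {value:.3f}")
    return value

regulate(value=10.0, target=4.0)  # settles close to 4.0
```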

In 1950, Alan Turing proposed his famous test, intended to determine whether we are talking to a person or a machine.

Over the course of some 15 years, scientists from different disciplines (mathematics, engineering, psychology, economics) made a series of important discoveries that acted as a catalyst for the development of AI. And in 1956, artificial intelligence research became an independent academic discipline.

Until the seventies, the field was in full swing. Research communities sprang up in the USA, the USSR, Great Britain, and other leading countries, producing new ideas and solutions every year. Some of these faded into obscurity, while others were developed further.

The approach of treating complex computations as compositions of simple ones is still relevant, as is the idea that human reasoning in solving a problem can be described by a set of rules. But the idea of evolving AI did not pass the test: development through the accumulation of random errors (mutations) and even their deliberate recombination proceeded very slowly, and after some 40 years of attempts the approach was abandoned as a main direction. It has not been forgotten, though: evolutionary methods are still used today in building neural networks.

Repeated attempts and the search for new techniques led to a thought that now seems simple: the most important property of human intelligence is the ability to learn. But reasoning alone is not enough for learning; facts about the physical world are needed. Knowledge about the world can be represented as a set of concepts and the relationships between them – for example, in the form of a semantic network, as sketched below.
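A semantic network can be stored as a simple graph of "concept – relation – concept" triples. A minimal sketch (the concepts and relations here are invented purely for illustration):

```python
# A tiny semantic network stored as (subject, relation, object) triples.
# The concepts and relations here are invented purely for illustration.

facts = [
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("canary", "has_color", "yellow"),
]

def related(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [obj for s, r, obj in facts if s == subject and r == relation]

print(related("canary", "is_a"))  # ['bird']
print(related("bird", "can"))     # ['fly']
```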

We are getting closer and closer to the idea of the perceptron – the basic building block of a learning neural network, the digital twin of the human brain that Pitts dreamed of.
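A single perceptron is just a weighted sum followed by a threshold, and it learns by nudging its weights after each mistake. Here is a minimal sketch of the classic perceptron learning rule (my own illustration, not historical code):

```python
# A single perceptron trained with the classic perceptron learning rule
# (illustrative sketch). It learns the logical AND function.

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=10, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Truth table of AND as training data.
and_samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(and_samples)
print([predict(weights, bias, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```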

By the 70s, many theories had been worked out, but it was unclear what to do next. Skeptical publications by philosophers and journalists began to appear. The most committed researchers kept exploring, but the community's main interest shifted to symbolic computation, the opposite of the neural network approach.

The AI winter had arrived.

Revival of interest

By the early 80s, the Japanese economic miracle had made the Land of the Rising Sun the world's second-largest economy. Having finally recovered from the damage of World War II, Japan turned its attention to high technology, including the development of a new generation of computers.

In the early 1980s, the Japanese presented the Wabot-2 robot, which could read musical scores, communicate with people, and play an electronic organ. It was a successful project that revived the scientific community's interest in artificial intelligence. The humanoid form of the device was more a marketing gesture than a useful feature, though. Even today, humanoid robots lose out to machines with no human likeness at all, while robot dogs are far more practical thanks to their ergonomic engineering.

Back to the eighties: in the first half of the decade, several designs were proposed that proved decisive for machine learning (a minimal training sketch follows the list):

  • a neural network built as a multilayer stack of connected perceptrons for processing information and producing predictions;

  • reverse-mode automatic differentiation (the basis of backpropagation).
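To give a flavour of how these two ideas fit together, here is a compact sketch of a small two-layer network trained on XOR with hand-written reverse-mode gradients, i.e. backpropagation (my own NumPy illustration, not code from that era):

```python
# A tiny two-layer network trained on XOR with hand-written reverse-mode
# gradients (backpropagation). Purely illustrative; written with NumPy.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: push the output error back through the layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())  # should be close to [0, 1, 1, 0]
```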

Artificial intelligence was no longer just a hero of science fiction books; now ordinary company employees could work with it. Expert systems began to appear as software for technology companies. They ran on bulky Lisp machines, which used magnetic tape, occupied an entire room, and were impossible to operate without special training.

At the same time, Apple and IBM were working on more powerful and cheaper computers. These assumed a completely different way of working at a computer, and Lisp machine technology could not adapt to it quickly. The new kind of PC exposed a large share of its predecessors' shortcomings, so the old machines were consigned to the dustbin of history. Thus ended the first wave of commercial use of AI systems.

Recent history

The nineties were a period of turmoil not only for the countries of the former USSR, but also for the computer revolution. Once it became possible to handle more data and to fill offices with personal computers, the question arose of how to process and store all that data. This required new approaches to algorithms as well as new hardware. Today these are basics taught in the first year of university. But back then it was only by the end of the 90s that AI research came out of its freeze, ever-growing databases came to be called Big Data, and researchers learned to extract more and more knowledge from them. The concept of the intelligent assistant appeared.

In 1997, an epoch-making event occurred – the IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov. Many people couldn't wrap their heads around it: it had previously been believed that this would require calculating every move combination of a 32-move game!
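A rough estimate of my own shows why brute-forcing a whole game was considered hopeless. Assuming around 35 legal moves per position (a commonly cited average), a 32-move game is 64 half-moves, so a full enumeration would mean on the order of

$$35^{64} \approx 10^{99}$$

positions – incomparably more than any computer could ever examine. Deep Blue did nothing of the sort: it searched only a limited number of moves ahead and relied on a handcrafted evaluation function.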

It became clear that silicon minds were the future. The spread of the Internet generated exponential growth in the amount of data. Today machine learning is used everywhere in daily life: search results, smart feeds, product scanning in stores, recommendation systems, and so on. It is, of course, just as actively used in science – medicine, biology, engineering, and more.

What do we have now?

Neuroscience would not be what it is without people's desire to create artificial intelligence, and AI would not be possible without neuroscience. For the last 100 years the two fields have gone hand in hand, and modern neural networks are built on a simplified model of the brain's neuron.

Neural networks work surprisingly well:

  • on many well-studied tasks, accuracy on a held-out test sample exceeds 99% (if the developers have done their job, of course) – a minimal evaluation sketch follows this list;

  • the more data used for training, the better;

  • making the layer architecture and the functions of individual neurons more complex does not stop the network from doing its job;

  • neural networks solve problems in a wide range of application areas.
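To show what the first point means in practice, here is a minimal evaluation sketch with scikit-learn; the dataset (its built-in handwritten digits) and the model settings are just convenient stand-ins, and the exact accuracy depends on the task.

```python
# Minimal sketch of evaluating a small neural network on a held-out test set.
# The dataset (handwritten digits) and model settings are illustrative only.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)

# Keep part of the data aside: the network never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Accuracy on the held-out sample is the honest measure of quality.
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```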

Yes, Walter Pitts would be proud of modern developers!

This, of course, delights me greatly. In much the same way, people in 1948 were fascinated by Norbert Wiener's groundbreaking cybernetics; now it seems simple and even obvious. Most likely, to people of the future the complex architecture of modern neural networks will look just as primitive.

Thank you for reading!


Traditional commercial break

On April 17, at the Moscow Central Distribution Center, we will hold the TEAMLY Conference, dedicated to collaboration and corporate knowledge management. Come hear talks from speakers at GrandMotors, KAMAZ Digital, Splat, and Epiphany. We will talk about how to manage projects and teams based on applied experience and company knowledge.

I will also speak: I will tell you how we at the startup TEAMLY are reshaping task management with the rapid growth of teams.

More information and registration link Here.
