The Biggest Computer Science Discoveries of the Past Year

In 2022, computer scientists learned how to transmit secrets with perfect security, why transformers seem to be good at everything, and how to improve on algorithms more than a decade old (with a little help from artificial intelligence), the kind of tools you will learn to work with in our Data Science courses.

Introduction

As scientists tackle an ever wider range of problems, their work grows more and more interdisciplinary. In 2022, many of the most important computer science results were obtained in collaboration with researchers from other disciplines, especially mathematicians. Perhaps the most useful concerned cryptography, which underpins Internet security and typically rests on hard mathematical problems. One such problem, involving the product of two elliptic curves and their relation to an abelian surface, eventually brought down a promising new cryptographic scheme that was thought strong enough to withstand attack by a quantum computer. And a different set of mathematical relationships, in the form of one-way functions, will tell cryptographers whether truly secure codes are possible in principle.

Computer science, and quantum computing in particular, overlaps heavily with physics. In 2022, researchers published a proof of the NLTS conjecture, which, among other things, asserts that the ghostly connection between particles known as quantum entanglement is not as fragile as physicists once thought. The proof of NLTS has implications not only for our understanding of the physical world but also for the myriad cryptographic possibilities that entanglement brings.

Artificial intelligence has always flirted with biology: the field draws inspiration from the human brain, perhaps the most advanced computer of all. Although understanding how the brain works and creating brain-like artificial intelligence has long seemed like a pipe dream to computer scientists and neuroscientists, a new type of neural network known as the transformer appears to process information in a way similar to the brain.

As we learn more about both, each tells us something new about the other. Perhaps this is why transformers excel at tasks as diverse as language processing and image classification. Artificial intelligence is even helping to build better artificial intelligence: "hypernetworks" help train neural networks faster and at lower cost.

![Red particles with changing spins, entangled with one another]

Entangled Answers

When it comes to quantum entanglement, the property that tightly binds even distant particles, physicists and computer scientists were at an impasse. Everyone agreed that a fully entangled system is impossible to describe completely. Physicists, however, believed that systems merely close to full entanglement would be easier to describe. Computer scientists disagreed, declaring that such systems are just as incomputable, a belief that formed the basis of the no low-energy trivial state (NLTS) conjecture. In June, a group of scientists published a proof of the conjecture. Physicists were surprised, because it means that entanglement is not necessarily so fragile, and computer scientists were glad to be one step closer to proving a fundamental result known as the quantum probabilistically checkable proof (PCP) theorem, which requires NLTS to be true.

The news follows 2021 results showing that quantum entanglement can be used to achieve perfectly secure encrypted messages. And in October 2022, researchers successfully entangled three particles located at considerable distances from one another, expanding the possibilities of quantum encryption.

![An orange-and-blue web of lines converging into a transparent pyramid, turning into white light and passing into a transparent eye]

Transforming How AI Understands

Over the past five years, transformers have revolutionized how artificial intelligence processes information. Originally designed for understanding and generating language, the transformer processes every element of its input simultaneously, giving it a holistic view and greater speed compared with other language networks, which take a piecemeal approach. This also makes it unusually versatile, and researchers in other areas of AI are applying it to their fields. They have found that the same principles can improve tools for classifying images and for processing several kinds of data at once. These benefits, however, come at the cost of more training than pre-transformer models required. In March, researchers studying transformers learned that some of their power comes from their ability to weigh the importance of words, rather than simply memorizing patterns. Transformers are so adaptable that neuroscientists have already begun using transformer-based networks to model functions of the human brain, suggesting a fundamental similarity between artificial and human intelligence.
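
To make "processes every element simultaneously" concrete, here is a minimal sketch of scaled dot-product self-attention, the core transformer operation: every token attends to every other token at once, weighting each by relevance. This is a single head with random projections and no masking, purely illustrative rather than a full transformer layer.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv            # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])      # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                           # weighted mix of all tokens

rng = np.random.default_rng(0)
seq_len, d = 4, 8
X = rng.normal(size=(seq_len, d))                # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)       # (4, 8): one context-aware vector per token
```

Because each output row mixes information from the entire sequence in one matrix product, the whole input is seen at once, which is exactly the contrast with the word-by-word processing of older recurrent language networks.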

The collapse of cryptography

The security of online communication rests on the difficulty of various mathematical problems: the harder a problem is to solve, the more effort a hacker must spend to break the channel. And because today's encryption protocols would be easy work for a quantum computer, researchers are searching for new problems capable of resisting one. But in July, one of the most promising candidates fell after just an hour of computation on a laptop. "That's a bummer," said Christopher Peikert, a cryptographer at the University of Michigan.

The failure highlights how difficult it is to find the right questions. Researchers have shown that the only way to build a secure code that can never be broken is to prove the existence of one-way functions: problems that are easy to compute in one direction but hard to reverse. We still don't know whether they exist (a discovery that would tell us which cryptographic universe we live in), but two researchers found that the question is equivalent to another problem, called Kolmogorov complexity, which involves analyzing strings of numbers: one-way functions, and with them real cryptography, are possible only if a certain version of Kolmogorov complexity is hard to compute.
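
For intuition about "easy to compute, hard to reverse": a cryptographic hash such as SHA-256 behaves like a *candidate* one-way function. Computing it takes microseconds, while inverting it, as far as anyone knows, requires brute-force search; whether any function is *provably* one-way is exactly the open question above. The "secret-N" message space below is an assumption made just for the demo.

```python
# Illustrative sketch: hashing is fast, inverting appears to need brute force.
import hashlib

def f(x: bytes) -> str:
    return hashlib.sha256(x).hexdigest()   # easy direction: one cheap call

digest = f(b"secret-42")

def brute_force_invert(target, max_n=1_000_000):
    # Hard direction: the only known generic attack is exhaustive search
    # over candidate inputs (here, a toy message space for the demo).
    for n in range(max_n):
        guess = f"secret-{n}".encode()
        if f(guess) == target:
            return guess
    return None

print(brute_force_invert(digest))  # b'secret-42', found only by trying keys one by one
```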

Machines help train machines

In recent years, the pattern-recognition skills of artificial neural networks have powered the development of artificial intelligence. But before a network can get to work, researchers must train it, potentially fine-tuning billions of parameters in a process that can take months and require enormous amounts of data. Or they could get a machine to do it for them. With a new kind of hypernetwork, a network that processes and spits out other networks, they soon may. The hypernetwork GHN-2 analyzes any given network and produces a set of parameter values that, a study showed, are generally at least as effective as those of networks trained the traditional way. Even when it did not supply the best possible parameters, GHN-2's suggestions offered a starting point closer to the ideal, cutting the time and data needed for full training.
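
A toy sketch of the hypernetwork idea (loosely inspired by the GHN concept, not the actual GHN-2 model): one small network takes a description of a target architecture and emits a full set of weights for it, instead of the target network being trained directly. The 10-dimensional description vector and the layer sizes below are stand-in assumptions.

```python
# Toy hypernetwork: a net that outputs the parameters of another net.
import torch
import torch.nn as nn

class HyperNet(nn.Module):
    def __init__(self, descr_dim: int, target_in: int, target_out: int):
        super().__init__()
        self.target_in, self.target_out = target_in, target_out
        self.n_params = target_in * target_out + target_out  # W and b
        self.body = nn.Sequential(
            nn.Linear(descr_dim, 64), nn.ReLU(),
            nn.Linear(64, self.n_params),       # predicts every target weight
        )

    def forward(self, descr, x):
        flat = self.body(descr)                 # all target parameters at once
        split = self.target_in * self.target_out
        W = flat[:split].view(self.target_out, self.target_in)
        b = flat[split:]
        return x @ W.T + b                      # run the generated target net

hyper = HyperNet(descr_dim=10, target_in=16, target_out=4)
descr = torch.randn(10)       # stand-in encoding of the target architecture
x = torch.randn(2, 16)        # a data batch for the generated network
print(hyper(descr, x).shape)  # torch.Size([2, 4])
```

The real GHN-2 is far more elaborate (it reads the target network's computation graph), but the division of labor is the same: parameters come out of a forward pass of the hypernetwork rather than out of a long gradient-descent run.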

In addition, last summer Quanta covered another new approach designed to help machine learning: embodied AI, which trains algorithms in responsive three-dimensional environments rather than on static images or abstract data. Whether they are agents exploring simulated worlds or robots learning in the real one, these systems learn in a fundamentally different way, and in many cases a better one, than traditionally trained systems.

Improved Algorithms

This year, with the advent of more sophisticated neural networks, computers advanced as a research tool. One such tool proved particularly well suited to multiplying matrices, the two-dimensional tables of numbers. There is a standard way to do this, but it becomes unwieldy as matrices grow, so researchers are constantly looking for faster algorithms with fewer steps. In October, DeepMind researchers announced that their neural network had discovered faster algorithms for multiplying certain matrices. Experts cautioned, however, that the breakthrough marks the arrival of a new tool for attacking the problem, not a new era in which artificial intelligence solves such problems on its own. As if on cue, a pair of researchers then used traditional tools and methods to build on the new algorithms and improve them.
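
The "standard way" is the schoolbook algorithm, shown below as a minimal sketch in plain Python: multiplying two n-by-n matrices costs n³ scalar multiplications, and that multiplication count is exactly the budget that Strassen-style schemes, including those found by DeepMind's system (AlphaTensor), try to shrink.

```python
# Schoolbook matrix multiplication: n**3 scalar multiplications for n x n.
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = [[0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):            # loop order chosen for memory locality
            for j in range(p):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]; 2**3 = 8 multiplications here,
                     # while Strassen's scheme does a 2x2 product with only 7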

In March, researchers published an accelerated algorithm for maximum flow, one of the oldest problems in computer science. By combining previously known approaches in a new way, the group created an algorithm that determines the maximum possible flow of material through a given network, one that Daniel Spielman of Yale University called absurdly fast. "I was inclined to believe that … there are no such good algorithms for this problem."
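
The new algorithm itself is far too intricate to sketch here, but the classical baseline it dramatically improves on fits in a few lines: Edmonds-Karp, which repeatedly pushes flow along shortest augmenting paths found by breadth-first search. The tiny network and node names below are illustrative.

```python
# Classical Edmonds-Karp max flow: BFS augmenting paths, O(V * E**2).
from collections import deque

def max_flow(cap, s, t):
    """cap: dict of dicts, cap[u][v] = edge capacity; returns max s->t flow."""
    res = {u: dict(vs) for u, vs in cap.items()}     # residual capacities
    for u in cap:
        for v in cap[u]:
            res.setdefault(v, {}).setdefault(u, 0)   # reverse edges start at 0
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:                 # BFS for an augmenting path
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                              # no path left: flow is maximal
        path, v = [], t
        while parent[v] is not None:                 # walk back from sink to source
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:                            # push flow, update residuals
            res[u][v] -= bottleneck
            res[v][u] += bottleneck
        flow += bottleneck

net = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(net, "s", "t"))  # 4
```

The March result reportedly runs in nearly linear time in the size of the network, which is why an algorithm this much faster than such long-studied baselines struck experts as "absurd."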

![Mark Braverman, in an orange shirt, in a tree-lined alley]

New ways to communicate information

Mark Braverman, a theoretical computer scientist at Princeton University, has spent more than a quarter of his life developing a new theory of interactive communication. His work lets researchers quantify terms such as "information" and "knowledge", which has not only deepened the theoretical understanding of interactions but also produced new methods for more efficient and accurate communication. For this and other achievements, the International Mathematical Union awarded Braverman the IMU Abacus Medal in July of this year, one of the highest honors in theoretical computer science.

Useful theory, and even more practice with real immersion in the IT environment, await you in our courses:

Brief catalog of courses

Data Science and Machine Learning

Python, web development

Mobile development

Java and C#

From the basics to deep expertise
