Quantinuum on its progress in creating quantum AI

A team of researchers from Quantinuum has made significant progress toward the practical use of quantum artificial intelligence (AI), reporting the first implementation of scalable quantum natural language processing (QNLP). Their model, called QDisCoCirc, combines quantum computing with AI to solve text-based problems such as question answering, according to a research paper published on the arXiv preprint server.

Ilyas Khan, founder and chief product officer at Quantinuum, noted that while the team hasn't solved the problem at scale yet, the study is an important step toward demonstrating how interpretability and transparency can help create safer and more efficient generative AI.

“Bob has been working in the field of quantum natural language processing (QNLP) for over a decade, and I’ve been proud to be watching this progress for the last six years, preparing for the moment when quantum computers can actually solve real-world problems,” said Khan. “Our work on compositional intelligence, published earlier this summer, laid the foundation for what interpretability means. Now that we have the first experimental implementation of a fully functioning system, it’s incredibly exciting. Together with our research in areas like chemistry, pharmaceuticals, biology, optimization, and cybersecurity, this will help accelerate scientific discovery in the quantum sector in the near future.”

Khan added: “While this is not yet the ChatGPT moment for quantum technologies, we have already mapped out a path to their real practical significance in the world. Progress on this path, I think, will be related to the concept of quantum supercomputers.”

To date, quantum AI, particularly at the intersection of quantum computing and natural language processing (NLP), remains a largely theoretical field with limited experimental evidence. The work aims to shed light on that intersection, according to Khan and Bob Coecke, Quantinuum's chief scientist and head of quantum compositional intelligence.

The study demonstrates how quantum systems can be applied to artificial intelligence problems in a more interpretable and scalable way than traditional methods.

“At Quantinuum, we have been working on NLP using quantum computers for some time. We are excited to have recently conducted experiments that show not only how to train models for quantum computers, but also how to make these models interpretable for users. In addition, our theoretical studies provide promising indications that quantum computers may be useful for interpretable NLP.”

The experiment described in the paper is built around the technique of compositional generalization. The underlying compositional method, borrowed from category theory and adapted for natural language processing, treats language structures as mathematical objects that can be composed and combined to solve AI problems.
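For readers who want to see what that pipeline looks like in code, here is a minimal sketch using lambeq, Quantinuum's open-source QNLP toolkit, which implements this category-theoretic approach. The sentence, qubit counts, and ansatz settings below are illustrative placeholders, not the configuration used in the paper.

```python
# Minimal sketch of the compositional pipeline using lambeq, Quantinuum's
# open-source QNLP toolkit. The sentence and ansatz hyperparameters are
# illustrative choices, not the settings from the QDisCoCirc paper.
from lambeq import AtomicType, BobcatParser, IQPAnsatz

# Parse a sentence into a string diagram: words become boxes, grammatical
# types become wires, and the grammar dictates how everything composes.
parser = BobcatParser()
diagram = parser.sentence2diagram("Alice follows Bob")

# Map the diagram to a parameterized quantum circuit: one qubit per noun
# and sentence wire, with a two-layer IQP-style ansatz for each word.
ansatz = IQPAnsatz({AtomicType.NOUN: 1, AtomicType.SENTENCE: 1}, n_layers=2)
circuit = ansatz(diagram)
circuit.draw()  # the parameters of this circuit are what training adjusts
```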

According to Quantinuum, one of the key challenges in quantum machine learning is scaling up training. To solve this problem, they use an approach called “compositional generalization”: models are trained on small examples using classical computers, and then tested on much larger examples using quantum computers. Since modern quantum computers have already reached a complexity that cannot be simulated classically, the scale and ambition of this work could grow rapidly in the near future.
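The recipe can be illustrated with a deliberately tiny classical toy: each word is a single parameterized gate, short texts are used to train the word parameters, and the same trained parameters are then reused inside a longer text. Everything in the sketch below (the vocabulary, labels, and single-qubit simplification) is invented for illustration and is far simpler than the paper's actual text circuits.

```python
# Toy illustration of the train-small / test-large recipe described above.
# Each word is a single-qubit rotation; a "text" is the composition of its
# word gates applied to |0>, and the answer is P(measuring 1).
import numpy as np

def word_gate(theta):
    """RY(theta): our single-qubit stand-in for a trained word circuit."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def run_text(words, params):
    """Compose word gates in reading order; return P(outcome = 1)."""
    state = np.array([1.0, 0.0])
    for w in words:
        state = word_gate(params[w]) @ state
    return state[1] ** 2

def loss(params, data):
    return sum((run_text(text, params) - label) ** 2 for text, label in data)

# Tiny training set: short two-word texts with yes/no labels.
train = [(["alice", "follows"], 1.0), (["bob", "follows"], 0.0)]
params = {"alice": 0.1, "bob": 0.2, "follows": 0.3}

# Finite-difference gradient descent -- cheap to simulate classically
# at this size, which is the point of the train-small recipe.
eps, lr = 1e-4, 0.5
for _ in range(500):
    for w in params:
        shifted = dict(params, **{w: params[w] + eps})
        grad = (loss(shifted, train) - loss(params, train)) / eps
        params[w] -= lr * grad

# "Compositional generalization": reuse the trained word parameters in a
# longer text than anything seen during training.
print(run_text(["alice", "follows", "follows", "follows"], params))
```

The design point is that the trainable unit is the word, not the text: once word parameters are learned at classically simulable sizes, longer compositions can in principle be delegated to quantum hardware.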

The researchers also addressed one of the main obstacles in quantum machine learning, the so-called “barren plateau” problem, which occurs when training large quantum models becomes ineffective because gradients vanish. The Quantinuum study provides compelling evidence that quantum systems can not only solve certain problems more efficiently, but also provide transparency in the decision-making of AI models, a pressing issue in modern AI.
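The intuition behind barren plateaus can be checked numerically: for random quantum states, the expectation value of a local observable concentrates around zero exponentially fast in the number of qubits, so the cost landscape of a generic deep circuit is exponentially flat. The toy estimate below (a NumPy demo unrelated to the paper's own experiments) shows the variance shrinking roughly as 1/2^n.

```python
# Numerical illustration of why barren plateaus arise: for Haar-random
# states the expectation of a local observable concentrates around zero
# exponentially in the number of qubits, so gradients of a generic deep
# parameterized circuit vanish. (Toy demo, not the paper's experiment.)
import numpy as np

rng = np.random.default_rng(0)

def random_state(n_qubits):
    """Sample a Haar-random pure state on n_qubits."""
    dim = 2 ** n_qubits
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return psi / np.linalg.norm(psi)

def z_expectation(psi):
    """<Z> on the first qubit: P(first qubit is 0) - P(first qubit is 1)."""
    half = len(psi) // 2
    return np.sum(np.abs(psi[:half]) ** 2) - np.sum(np.abs(psi[half:]) ** 2)

for n in range(2, 11, 2):
    samples = [z_expectation(random_state(n)) for _ in range(2000)]
    # Variance scales like 1 / (2**n + 1): an exponentially flat landscape.
    print(f"{n:2d} qubits: Var(<Z>) = {np.var(samples):.2e}")
```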

These results were achieved on Quantinuum's H1-1 ion-trap quantum processor, which provided the computing power to execute the quantum circuits underlying the QDisCoCirc model. QDisCoCirc uses compositional principles borrowed from linguistics and category theory to break complex text data down into simpler, more understandable components.

In their paper, the researchers highlight the importance of the H1-1 quantum processor in conducting the experiments: “We present experimental results for a question answering task using QDisCoCirc. This is the first verified implementation of scalable compositional quantum natural language processing (QNLP). To demonstrate compositional generalization on a real device, beyond the instance sizes that were modeled classically, we used the H1-1 ion trap quantum processor with state-of-the-art two-qubit gate fidelity.”

Practical implications

This research has significant practical implications for the future of AI and quantum computing. One of the most important results is the possibility of using quantum AI to build interpretable models. In modern large language models (LLMs) like GPT-4, the decision-making process is effectively a “black box,” making it difficult for researchers to understand how and why particular outputs are generated. In contrast, the QDisCoCirc model allows one to observe internal quantum states and the relationships between words or sentences, which provides a better understanding of the decision-making process.

In practical terms, this opens up broad application possibilities in areas such as question-answering systems, where it is important not only to get the right answer but also to understand how the machine came to that conclusion. An interpretable quantum AI approach based on compositional methods can be used in the legal, medical, and financial sectors, where transparency and accountability of AI systems are key.

The study also demonstrated successful compositional generalization: the ability of a model trained on small examples to generalize to larger, more complex inputs. This could be a major advantage over traditional models like transformers and LSTMs, which, according to the study’s authors and the data in the paper, failed to generalize as well when tested on longer, more complex texts.

Can it outperform classical models?

Beyond applications to natural language processing (NLP) tasks, the researchers also examined whether quantum circuits could outperform classical models like GPT-4 in some cases. The results showed that classical machine learning models, including GPT-4, performed no better than random guessing on the compositional tasks. This suggests that quantum systems, as they scale, may be uniquely suited to handling more complex forms of language. This could be especially important when working with large datasets, although large language models like GPT are likely to improve over time.

The study showed that classical models failed to generalize effectively to larger text instances, while the quantum circuits handled the task successfully, demonstrating compositional generalization.

Quantum models also proved more resource-efficient. Classical computers struggle to simulate the behavior of quantum systems at scale, which suggests that quantum computers will be essential for solving large-scale NLP problems in the future.

“As the size of text circuits increases, classical simulation becomes impractical, highlighting the need to use quantum systems to solve these problems,” the researchers write.

Methods and experimental setup

As part of the QDisCoCirc proof of concept, the researchers developed datasets of simple binary question-answering tasks. These datasets were designed to test how well quantum circuits perform on basic linguistic tasks, such as identifying relationships between characters in a text. The research team used parameterized quantum circuits to create word embeddings, mathematical representations of words in a vector space. These embeddings were then composed into larger text circuits, which were evaluated on a quantum processor.
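As a rough illustration of that setup, the sketch below builds a two-qubit “text circuit” with pytket, Quantinuum's circuit SDK (rotation angles are given in half-turns): each character gets a small parameterized “word” block, and an entangling gate stands in for a relation between them. The gate layout and parameter values are invented for illustration and are not the ansatz from the paper.

```python
# Hedged sketch of "word embeddings as parameterized circuits" using
# pytket, Quantinuum's circuit SDK (angles are in half-turns). The gate
# layout below is an illustrative ansatz, not the paper's actual one.
from pytket import Circuit

def word_block(circ, qubit, params):
    """Append a tiny parameterized 'word' block to one qubit."""
    circ.Ry(params[0], qubit)
    circ.Rz(params[1], qubit)

# Two "characters" (one qubit each), entangled by a "relation" word.
text = Circuit(2)
word_block(text, 0, [0.12, 0.34])   # embedding for, say, "Alice"
word_block(text, 1, [0.56, 0.78])   # embedding for, say, "Bob"
text.CX(0, 1)                       # interaction word, e.g. "follows"
text.measure_all()                  # readout for a yes/no question
```

Sampling the measured circuit many times and thresholding the outcome distribution then yields the binary answer.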

Coecke and Khan note that this approach allows the model to remain interpretable while still taking advantage of quantum mechanics, and that its importance will grow as quantum computers become more powerful.

“We see the 'compositional interpretability' proposed in the paper as a solution to the problems facing modern AI. Compositional interpretability means giving understandable meaning to the components of a model, for example in natural language, and then understanding how these components interact and fit together.”

Limitations and possibilities

Despite the significant progress, the study has its limitations. One key issue, according to the team, is the current scale of quantum processors. While the QDisCoCirc model shows great potential, the researchers note that larger, more complex problems will require quantum computers with more qubits and higher fidelity. They acknowledge that their results are still at the proof-of-concept stage, but they also point to the rapid advancement of quantum technology: quantum computers have entered a new era, one that Microsoft calls the era of “reliable” quantum computing and IBM calls “utility-scale” quantum computing.

“Scaling these computations to more complex real-world problems remains a significant challenge due to existing hardware limitations,” the researchers write, adding that the situation in this area is changing rapidly.

Additionally, the current research focuses on answering binary questions, a simplified form of natural language processing. In the future, the team plans to study more complex tasks, such as parsing entire paragraphs or working with multiple layers of context. The researchers are already looking at ways to extend the model to more complex text data and different types of linguistic structures.
