A Little About Chain-of-Thought

The main problems of LLMs (Large Language Models):

  • Solving complex logical problems (Searching for implicitly specified information)

  • Security (Anti-hacking and ethical behavior)

  • Hallucinations (Generating new information that is not true)

All of these problems are complex and intertwined. When solving a hard problem, we expect something new from the model, something we do not know and did not write in the request: that is, the model must generate information that is not present in the prompt. But when we ask a model to find something, we expect it to reproduce only information that is present in the prompt. A hard task often involves searching for information within the query, and here the contradiction the model faces becomes visible.

Introduction

OpenAI recently addressed the first of these problems with a new model, which uses the Chain-of-Thought technique to solve problems.

This is what a dialogue with ChatGPT o1-preview looks like. The user's request triggers a whole chain of actions in which the model synthesizes data; it is not known whether all of that data is displayed. At the end, the thoughts are hidden, and the user receives a compiled answer.

At the same time, the company continues to adhere to the principle of a minimalistic interface:

  • The user enters a prompt

  • The model inside performs step-by-step actions with it

  • The user receives a summarized answer, which significantly reduces the user's effort

All of this is accompanied by a polished animation that shows the stages of the model's "thinking", making the process more intuitive. From several of OpenAI's statements, certain conclusions can be drawn:

  • Internal dialogue will be hidden from the user in the future

  • Increased thinking time is seen as a benefit (Thinker says hi), implying deeper and more thorough processing of information.

  • The model does not try to be a polyglot, perhaps to optimize token use or because of dataset specialization. Nevertheless, it handles Russian quite well.

  • The model devours a huge number of tokens compared with existing models, and the price is steep, not to mention the restricted API access

  • The model works best with direct and clear instructions

Sometimes the time it takes to respond can be... long.

Abstractly speaking, the model contains a loop through which the input data is run. At each iteration the data is enriched with synthetic information, in two stages: first a "certain" instruction is generated, then the model's response to it is obtained. The model has some mechanism for exiting the loop, and all or part of the accumulated information passes through summarization. I do not know exactly how this model is implemented, but the logic of the process suggests several design choices:

  1. The loop can be implemented either inside the model or as an external tool

  2. The instruction can be fixed, selectable, or model-generated

  3. The response can be generated by an internal or an external model

  4. The loop can be controlled either by the model or by some external tool

  5. Summarization can be performed either by an internal or an external model

The five points above are unknown variables that affect the quality of the final answer. Questions arise: should all five be handled by the model itself? If not, how much effort is better spent on synthesis? Should non-synthetic information be added at some stage? And should the user see the thinking process fully, partially, or not at all?
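The hypothetical loop described above can be sketched in a few lines. Every name here (generate_instruction, answer, should_stop, summarize) is an assumption for illustration only; how o1 actually implements each of the five points is not public.

```python
# A minimal sketch of the hypothetical two-stage reasoning loop.
# All functions are stand-ins; the real internals of o1 are unknown.

def generate_instruction(state):
    # Stage 1: produce the next "certain" instruction from the current state.
    return f"Refine step {len(state)}"

def answer(instruction, state):
    # Stage 2: obtain the model's response to that instruction.
    return f"Response to: {instruction}"

def should_stop(state, max_steps=3):
    # Some exit mechanism; here, a simple step limit.
    return len(state) >= max_steps

def summarize(state):
    # All or part of the accumulated information passes through summarization.
    return " | ".join(text for _, text in state)

def reasoning_loop(prompt):
    state = [("user", prompt)]
    while not should_stop(state):
        instruction = generate_instruction(state)                # stage 1
        state.append((instruction, answer(instruction, state)))  # stage 2
    return summarize(state)  # the user sees only the compiled answer
```

Whether the loop, the instructions, and the exit condition live inside the model or in an external driver is exactly the set of unknowns listed above.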

Regardless of the model's effectiveness, which is real, this approach will have long-term consequences for the entire industry: how much data will be expected from the user, whether synthetic data will be shown to the user, and so on.

What is a Chain of Thought?

A chain of thought, in some form, can be implemented not only with the expensive o1 but with any existing model.

How?

Let's take a closer look at what this means.

Structurally, it is a set of messages sent to the model sequentially. The key point: the model's responses are added to this set of messages. In the most common scenario, the chain grows each time by the model's response and the user's next request. How is this different from one huge prompt sent to the model as a single sheet?

Chain of questions to create a chain of thought

On the left (A), the numbering of comparison points via a single request to GPT-4o-mini; on the right (B), via dialogue/chain of thought. A response was received in the first case too, but the chain produces a more detailed, better-structured answer.

This differs from a significant piece of information sent simultaneously in the following points:

  • Structure: a structure of answers emerges instead of a single response.

  • Stages: distinct stages of interaction with the LLM appear.

  • Interactivity: each stage can be modified independently, since this is the same kind of dialogue a user conducts with any chat.

Example message chain:

  • I have this problem. How to solve it?

  • Answer 1

  • Write to me the disadvantages of the solution

  • Answer 2

  • Figure out how to overcome shortcomings

  • Answer 3

  • Give me the final solution

  • Answer 4
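The chain above can be represented in the standard chat-message format, where each model response is appended back into the list before the next question is sent. Here stub_model is a stand-in for a real LLM call, an assumption for illustration:

```python
# The example chain above, in the standard chat-message format.
# stub_model replaces a real chat-completion API call.

def stub_model(messages):
    # A real implementation would send `messages` to an LLM here.
    return f"Answer {sum(1 for m in messages if m['role'] == 'user')}"

questions = [
    "I have this problem. How to solve it?",
    "Write to me the disadvantages of the solution",
    "Figure out how to overcome shortcomings",
    "Give me the final solution",
]

messages = []
for q in questions:
    messages.append({"role": "user", "content": q})
    # Key point: each response is appended back into the chain.
    messages.append({"role": "assistant", "content": stub_model(messages)})

# The chain now alternates question / answer, four pairs in total.
```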

Example chain of thoughts

On the left (A), template generation via a single request to GPT-4o-mini; on the right (B), via dialogue/chain of thought. The answer on the right is clearly richer in examples and more specific.

It is worth noting that all the questions are extremely general and abstract. Individual meaning is added by the unique texts that describe the user's problem, their situation, their capabilities, and so on. By responding to these queries with synthesized answers, the model simultaneously adapts to the context of the conversation and deepens both its own and the user's understanding of the problem.

The chain of thought is sent to the model incrementally: first the first question, then the first question-answer pair plus the second question, and so on. The dialogue grows more complex and accumulates data at every stage. Technically, a chain of thought is much more token-hungry than a single sheet: it is more expensive and slower (because of the number of tokens resent with each request). It also parallelizes poorly, since you must wait for the response to the previous messages before sending the next one. That is why I hardly used gpt-4 until gpt-4o-mini came out.
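A rough sketch of why the chain is so token-hungry: every send retransmits the entire history, so the total number of tokens sent grows roughly quadratically with the number of stages. The message sizes below are invented for the example:

```python
# Rough illustration of why a chain costs more tokens than one large prompt.
# Token counts per message are made up for the example.

def chain_cost(message_sizes):
    # At each stage the entire history so far is resent, so costs accumulate.
    total, history = 0, 0
    for size in message_sizes:
        history += size
        total += history  # each send includes everything before it
    return total

sizes = [100, 150, 100, 150, 100, 150]  # alternating questions and answers
single_prompt = sum(sizes)   # 750 tokens sent once as a single sheet
chained = chain_cost(sizes)  # 2550 tokens: the history is resent each step
```

The gap widens quickly: doubling the number of stages roughly quadruples the chained cost while only doubling the single-prompt cost.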

Chain of Thought may also help you understand the model better:

  • I have this problem. How to solve it?

At the second stage, not just the second message will be sent, but the first message, the response to it, and the second message:

  • I have this problem. How to solve it?

  • Model 1 response

  • An instruction to add synthetic model data, i.e. data that will help you understand "how the model thinks"

  • Model 2 response

  • Request final decision

Chain of questions for generating synthetic information

On the left (A), generation via a single request to GPT-4o-mini; on the right (B), via dialogue/chain of thought. In both cases a problem with the output order is noted; however, the answer on the left is considerably richer in detail.

To the model, a Chain of Thought will look like text marked up internally as user requests, its own responses, and system instructions (where supported). And it will consume this text in larger and larger quantities.

In this case, the duration and the end of the loop are fixed: they depend on the number of questions you ask. Additionally, you can independently control the model's work at each stage of the dialogue. Instructions can be either fixed or created with the participation of the LLM.
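A fixed-length loop of this kind might look as follows. Here fake_llm is a placeholder for a real model call, and the stage instructions are invented for the example; because each instruction lives in a plain list, any single stage can be edited without touching the others:

```python
# Sketch of an externally controlled chain: the loop length equals the
# number of fixed instructions, and each stage can be swapped independently.
# fake_llm stands in for a real LLM call.

def fake_llm(history, instruction):
    return f"[model output for: {instruction}]"

def run_chain(problem, instructions):
    history = [("user", problem)]
    for instruction in instructions:  # loop length = number of questions
        reply = fake_llm(history, instruction)
        history.append(("user", instruction))
        history.append(("assistant", reply))
    return history

stages = [
    "List possible solutions",
    "Name the weaknesses of each",
    "Give the final recommendation",
]
# Modifying one stage changes only that step of the dialogue:
stages[1] = "Name the weaknesses and how to mitigate them"
history = run_chain("How do I speed up my test suite?", stages)
```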

Chain of Thought should be treated as a method of breaking down the information provided to the model. Any problem you bring to a model has nuances that matter specifically to you. These nuances must be given to the model so that it can prepare the highest-quality response.

This information consists of the following parts:

  • Your instructions in the Chain of Thought can range from simple queries to complex, multi-layered tasks: requests to produce text, analyze data, formulate arguments, etc. The key is to be clear and precise in your instructions so that the model correctly understands your requirements.

  • Your examples play an important role in the Chain of Thought. They help the model understand what kind of responses you expect. These can be either “good” examples that show the desired result, or “bad” ones that help avoid unwanted responses.

In fact, you can use any model to create chains of thought, as long as the context size allows. It is an interesting and versatile tool for managing models. To use a Chain of Thought successfully, it is important to consider context: previous requests, user information, current circumstances, etc. Finally, Chain of Thought is a dynamic process: you can add new instructions and examples as needed, adjusting the model's responses and improving the results.
