Free open alternative to ChatGPT released


Members of the open community LAION-AI have released the first trained models, OA_SFT_Llama_30B and OA_SFT_Llama_13B, and launched the OpenAssistant AI chatbot based on them. At the moment, models with 13 and 30 billion parameters are available, fine-tuned on multilingual datasets collected by the community. The models are based on the already popular LLaMA.

OpenAssistant is an AI-based conversational assistant that understands tasks, can interact with third-party systems (like plugins in ChatGPT), and can dynamically extract information from them. OpenAssistant is positioned as an open alternative to ChatGPT.

“We want OpenAssistant to be the single, unifying platform that all other systems use to interact with people,” is how members of the LAION community describe their vision.

You can try talking to OpenAssistant now here.
You can also take part in building the dataset in your language here.

Technical details

The models were trained on compute provided by Redmond AI with support from Weights & Biases. Model inference is hosted by Hugging Face and Stability AI. The fine-tuned models build on the concepts of InstructGPT and RLHF (Reinforcement Learning from Human Feedback), with a reward model based on DeBERTa. The context of the 30-billion-parameter model has been doubled to 1024 tokens.
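
To give a feel for the reward-model part, here is a minimal Python sketch of scoring a reply with a DeBERTa-based sequence classifier via the Hugging Face transformers library. The repo id in the snippet is an assumption for illustration and may not match the exact checkpoint used in training.

# A minimal sketch of the reward-model idea: a DeBERTa-based classifier
# scores a (prompt, reply) pair; a higher score means the reply is preferred.
# The repo id below is an assumption for illustration.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "OpenAssistant/reward-model-deberta-v3-large-v2"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

prompt = "Explain what OpenAssistant is in one sentence."
reply = "OpenAssistant is an open, community-built conversational AI assistant."

# The reward model takes the question and a candidate answer as a pair
# and returns a single logit used as the scalar reward.
inputs = tokenizer(prompt, reply, return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits[0].item()
print(f"reward score: {score:.3f}")

During RLHF fine-tuning, scores like this one act as the reward signal that steers the policy model toward answers people actually prefer.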

The community has put considerable effort into building a full-fledged dataset, compiled and verified by a large group of people across different languages and levels of expertise. To collect the dataset, a workflow was implemented in which one group of community members writes questions and answers, while another group validates them at several levels.
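
To make that workflow a bit more concrete, below is a small Python sketch of how such a crowd-sourced conversation tree could be represented. The field names are illustrative assumptions and do not claim to match the exact Open Assistant export schema.

# Illustrative data structure for a crowd-sourced conversation tree:
# one group writes prompts and replies, another group labels and ranks them.
# Field names are assumptions, not the exact Open Assistant schema.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Message:
    role: str                                    # "prompter" or "assistant"
    text: str
    lang: str
    labels: dict = field(default_factory=dict)   # e.g. {"quality": 0.9, "spam": 0.0}
    rank: Optional[int] = None                   # position among sibling replies after ranking
    replies: List["Message"] = field(default_factory=list)

# One contributor writes a question, another answers,
# reviewers then attach labels and rank competing answers.
tree = Message(
    role="prompter",
    text="What is OpenAssistant?",
    lang="en",
    replies=[
        Message(
            role="assistant",
            text="An open, community-built conversational assistant.",
            lang="en",
            labels={"quality": 0.9},
            rank=0,
        ),
    ],
)

Ranked sibling replies like these are exactly the kind of human preference data the reward model sketched above is trained on.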

The dataset is multilingual; the largest shares belong to English (59%) and Spanish (42%), while Russian sits at around 8%. We can influence this by taking part in labeling the dataset.

It is worth noting that responses from other language models such as ChatGPT were deliberately not used when preparing the dataset, in order to exclude synthetic data. All Open Assistant code is licensed under Apache 2.0, which means it is available for a wide range of uses, including commercial ones.

OpenAssistant is:

  • Personalized, customizable conversational AI assistant

  • System for extracting information from external resources and knowledge

  • System for interacting with other systems via APIs

  • Code generation and auto-completion system for developers

OpenAssistant consolidates all knowledge in one place:

  • Uses modern deep learning technologies

  • Capable of running on user hardware

  • Trained on feedback from real people

  • Open and accessible to everyone

Inference

You can run OpenAssistant locally on your computer on the CPU. To do this, you need to:

1. Download and unpack the files from the archive.
2. Download the model and place it in the same directory.
3. Open a terminal (cmd.exe) and run the following command:
main.exe -m D:\LLaMA_cpp\qunt4_0.bin -n -1 --ctx_size 2048 --batch_size 16 --keep 512 --repeat_penalty 1.0 -t 32 --temp 0.4 --top_k 30 --top_p 0.18 --interactive-first -ins --color
where D:\LLaMA_cpp\qunt4_0.bin is the path to the downloaded model.
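
If you would rather drive the same quantized model from Python instead of the interactive console, here is a minimal sketch using the llama-cpp-python bindings. The package, the Alpaca-style prompt template, and the compatibility of this particular GGML file with a given build of the bindings are assumptions; the sampling parameters simply mirror the command above.

# Minimal sketch with the llama-cpp-python bindings (pip install llama-cpp-python).
# Whether your build of the bindings accepts this particular quantized file,
# and the prompt template below, are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path=r"D:\LLaMA_cpp\qunt4_0.bin",  # same file as in the command above
    n_ctx=2048,    # --ctx_size 2048
    n_threads=32,  # -t 32
)

prompt = "### Instruction:\nWhat is OpenAssistant?\n\n### Response:\n"
out = llm(
    prompt,
    max_tokens=256,
    temperature=0.4,     # --temp 0.4
    top_k=30,            # --top_k 30
    top_p=0.18,          # --top_p 0.18
    repeat_penalty=1.0,  # --repeat_penalty 1.0
)
print(out["choices"][0]["text"])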

This is what inference of the 13-billion-parameter model looks like:


Tests

The tests were carried out on the 30-billion-parameter model:

In Russian:

[Screenshots: responses from OpenAssistant and ChatGPT to the same prompts]

It got it right! Or was there a similar example in the dataset?

Kind of yes, but also kind of no?

Well, that’s it.

In English:

A mistake! The correct answer is option D: this is an alternating number series, where first 1 is subtracted, then 2 is added.

The correct answer is D, Book; the rest are all parts of a book.

Logical!

Well, that seems like an appropriate answer.

With code generation on demand, things look better.

Verdict

Overall, it’s great that the community is developing projects like this. I’m sure this project has huge potential and we’ll hear more about it in the future! The power of the community should not be underestimated!

At the moment the model is raw; it is still far from ChatGPT, even from the GPT-3.5 version. Another important nuance is the license of the base LLaMA model: the situation there is still far from clear-cut, since the model was effectively leaked and the authors do not comment on it publicly.

Subscribe to my Zen channel https://dzen.ru/agi (about AI, language models, news and trends) and my Telegram channel https://t.me/hardupgrade (about organizing, structuring and managing information, the second brain).
