What's hidden under the hood of NeuroMendeleev

Hi all!

We can't promise that yet another GPT-based chatbot will show you anything new, but we'd still like to share some potentially useful experience.

We recently launched a bot that embodies Dmitry Mendeleev (a figure of great importance to SIBUR), right down to his appearance. He can do quite a lot: tell facts from chemistry and science, answer work-related questions and suggest solutions, talk about SIBUR and careers at the company, and help new employees settle in – in short, a handy tool for HR purposes.

Let's focus on what's under the hood.

GPT and databases

Paradoxically, the AI part is the most ordinary one: it consists of nothing more than GPT, a vector database with the necessary information, and some prompt-engineering sleight of hand. The information we upload and the texts from users are converted into embeddings.
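As an illustration, here is a minimal sketch of how texts can be turned into embeddings and stored in a vector index for later search. It assumes the OpenAI embeddings API and a FAISS index; the actual model, database, and documents used in the bot are not specified in the post, so treat the names below as placeholders.

```python
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Illustrative documents: company facts, Q&A pairs, internal abbreviations
documents = [
    "SIBUR is a petrochemical company ...",
    "Q: How do I request vacation? A: ...",
    "PP — polypropylene (internal abbreviation)",
]

def embed(texts):
    """Convert a list of texts into embedding vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data], dtype="float32")

# Build a simple FAISS index over the document embeddings
doc_vectors = embed(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

# A user message is embedded the same way and matched against the index
query_vector = embed(["What does PP stand for?"])
_, ids = index.search(query_vector, k=1)
print(documents[ids[0][0]])
```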

The database includes a Q&A set for questions from employees and applicants, information about the company, its products and projects, as well as internal abbreviations (a kind of Easter egg for employees).

The same standard scheme


The embedding that was once a user message is then analyzed by the bot: contextual functions intercept all questions related to SIBUR and search for the relevant information in the vector database. If the user's request has nothing to do with the company, the response is generated by plain GPT. So you can ask Dmitry Mendeleev about the weather, about the prospects of a machine uprising, or about the meaning of life (42).
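A hedged sketch of this routing is shown below. The post mentions contextual functions that intercept company-related questions; the similarity-threshold routing here is a simplified stand-in for that logic, and the threshold, model names, and prompts are illustrative assumptions rather than the bot's actual configuration.

```python
import faiss
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data], dtype="float32")

# The vector store built from company materials (see the previous sketch)
documents = ["SIBUR produces polypropylene ...", "Q: How do I request vacation? A: ..."]
index = faiss.IndexFlatL2(1536)  # text-embedding-3-small returns 1536-dim vectors
index.add(embed(documents))

DISTANCE_THRESHOLD = 0.8  # illustrative value: below it we treat the question as company-related

def answer(user_message: str) -> str:
    query = embed([user_message])
    distances, ids = index.search(query, k=3)

    if distances[0][0] < DISTANCE_THRESHOLD:
        # Company-related question: feed the retrieved passages into the prompt
        context = "\n".join(documents[i] for i in ids[0])
        system = ("You are Dmitry Mendeleev, an assistant for SIBUR employees. "
                  f"Answer using this context:\n{context}")
    else:
        # Anything else: plain GPT, staying in character
        system = "You are Dmitry Mendeleev. Answer the user's question."

    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user_message}],
    )
    return resp.choices[0].message.content

print(answer("What is the meaning of life?"))
```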

There are plenty of guides on the Internet on how to implement such a chatbot scheme, but if you have questions, we will be happy to answer them in the comments. For now, let's move on to the next part of the bot.

Interactive image

What distinguishes NeuroMendeleev from other GPT bots is his voice and facial expressions! Let's start with the voice: we generated it with ElevenLabs. We tried several options for timbre, speed and intonation, and in the end chose the best one collectively.
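For reference, here is a minimal sketch of calling the ElevenLabs text-to-speech API over HTTP. The voice ID, model name, and voice settings below are placeholders and assumptions for illustration, not the ones used for NeuroMendeleev.

```python
import requests

API_KEY = "your-elevenlabs-api-key"   # placeholder
VOICE_ID = "your-voice-id"            # placeholder: ID of the chosen voice

def synthesize(text: str, out_path: str = "mendeleev.mp3") -> None:
    """Send text to ElevenLabs TTS and save the returned MP3 audio."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"
    payload = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # assumed multilingual model
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.75},
    }
    resp = requests.post(url, json=payload, headers={"xi-api-key": API_KEY})
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

synthesize("The periodic table? I saw it in a dream, of course.")
```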

The face was created using the MetaHuman tool within Unreal Engine. Based on a portrait found on the Internet, a 3D cast of the face was created and exported to Unreal Engine 5, where, using the key points of the cast, we built a new 3D model with textures and an animation skeleton (rig).

After that, we transferred this model to Blender, where we added Dmitry Ivanovich's characteristic hairline. The hair asset was then brought back into Unreal Engine, where the finished model was fully animated.

The animation itself works by capturing key points from a video. This requires a calibration video – anyone can act as the prototype. The result is a sequence of frames, which is transferred to Premiere Pro, where the video is stitched together and the voice is combined with the image.

What's the result?

And the final Dmitry Ivanovich looks something like this:

You can assess all of this here, and ask us additional questions in the comments! We are happy to help and to hear your advice 🙂
