Integrating the GPT-4 Omni model into a Telegram bot in Python

On May 13, 2024, OpenAI officially unveiled the new model. According to OpenAI itself, GPT-4 Omni matches GPT-4 Turbo's performance on English text and code, with significant improvements on non-English text, while being much faster and 50% cheaper via the API.

The model's advantages are that it can work with all content types (text, audio and images) and supports about 50 languages.

Today we will integrate this model into a small Python bot and deploy it to the Amvera cloud service.

Why Amvera:

  • This is our blog. It would be strange if we deployed to a competitor :)

  • Free, built-in proxying of projects to OpenAI, so you no longer need to configure your own proxy in the code!

  • Easy project preparation “in two clicks” using a yaml configuration file.

  • Convenient delivery of files and updates via git in just 3 commands.

  • After registering and confirming your phone number, a free balance of 111 rubles is credited!

Project plan (how will everything work?)

In the project we will use the following libraries:

  • aiogram 3.10.0 – an asynchronous library for interacting with the Telegram Bot API

  • openai – the official library for the OpenAI REST API, built on top of httpx

This will be a basic GPT bot that generates a reply to each message. For now we will limit ourselves to text messages; later you can add image and audio support yourself.

Creating a Bot and OpenAI Key

Let's start by creating a bot.

We will use the @BotFather bot for this.

  1. Send the /newbot command.

  2. Come up with a name and a username for the bot.

  3. If everything went well, we receive a message confirming the bot's creation; save the generated token.

Attention! The token gives full control over the bot, so never share it with anyone!

How to get an OpenAI key?

First, you need an OpenAI account with billing enabled. To create one you will need a foreign phone number; any SMS-confirmation service will do.

Then go to the Keys page and create a key with any name using the button at the top:

That's it! Now we can move on to writing code.

Bot code

The project structure will look like this:

  • A bot folder with handlers.py (the aiogram handlers) and gpt.py (the neural-network response generation).

  • The file main.py with the bot initialization.

  • The files required for deployment (not shown in the screenshot).

In main.py, as mentioned above, we initialize and launch the bot with logging enabled:

import os
import logging
import asyncio

from aiogram import Bot, Dispatcher
from aiogram.types import Message
from aiogram.filters import CommandStart

from bot.handlers import router

bot = Bot(token=os.getenv("TOKEN"))
dp = Dispatcher()

logging.basicConfig(level=logging.INFO)

@dp.message(CommandStart())
async def start_cmd(message: Message):
    await message.reply("Welcome to the bot!\nSend your question and the bot will generate an answer using GPT-4 Omni!")

async def main():    
    dp.include_router(router)
    
    await bot.delete_webhook(drop_pending_updates=True)
    await dp.start_polling(bot)

if __name__ == "__main__":
    asyncio.run(main())

It is important to note that the token should be stored in an environment variable, which we will create later on the cloud platform.
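Since the bot cannot start without the token, it can help to fail fast when the variable is missing instead of passing None to Bot(token=...). A minimal sketch (the require_env helper is our own addition, not part of the article's code):

```python
import os

def require_env(name: str) -> str:
    # Read a required environment variable and raise a clear error
    # if it is missing, instead of failing later with a cryptic message.
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"Environment variable {name} is not set")
    return value
```

With this helper, the initialization line becomes `bot = Bot(token=require_env("TOKEN"))`.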

Be sure to import the router from handlers.py in the bot folder. Here is the content of that file:

from aiogram import Router, F
from aiogram.types import Message
from aiogram.filters import CommandStart
from aiogram.fsm.state import State, StatesGroup
from aiogram.fsm.context import FSMContext

from bot.gpt import gpt_request

router = Router()

class StateGpt(StatesGroup):
    text = State()

@router.message(StateGpt.text)
async def state_answer(message: Message):
    await message.reply("Please wait for the answer!")

@router.message(F.text)
async def gpt_work(message: Message, state: FSMContext):
    await state.set_state(StateGpt.text)

    answer = await message.reply("Generating the answer...")
    try:
        response = await gpt_request(message.text)
        await answer.edit_text(response.choices[0].message.content)
    finally:
        # Clear the state even if the request fails,
        # so the user is not locked out of the bot
        await state.clear()

What is interesting here is the use of FSM states. With their help, we block new requests from the same user while the neural network is generating a reply and ask them to wait for the response.

Generating a response (gpt.py):

import os
import httpx

from openai import AsyncOpenAI

gpt = AsyncOpenAI(api_key=os.getenv("AI_KEY"),
                  http_client=httpx.AsyncClient())

async def gpt_request(text):
    response = await gpt.chat.completions.create(
        messages=[{"role": "user",
                   "content": str(text)}],
        model="gpt-4o"
    )
    return response

Since our entire bot is asynchronous, we use the AsyncOpenAI client.

Here, too, AI_KEY is read from the environment variables.

It is important that billing is enabled on your account, otherwise you will get a quota error.

Deploy to the cloud

Let's prepare a dependency file

For deployment to the Amvera cloud we need to create a dependency file – requirements.txt. In our case it will be small, because only 2 libraries need to be installed via pip.

requirements.txt:

aiogram==3.10.0
openai==1.40.6

Register via the link, specifying all the required data.

After registration, we make sure that we receive a free balance and create a new project.

  1. In the window that opens, enter the project name and select a tariff. For a working project it is advisable to choose a tariff no lower than Initial.

  2. Next, the data upload window opens. We can upload the code directly through the interface in this window, or use the git tool. For now, let's skip uploading and click Next.

  3. The configuration window. This is where amvera.yml, the instructions for the project, is created. We select the Python environment and the pip tool; additional sections then open. The most important ones are the Python version (version), the path to the main file (scriptName) and the path to the dependencies file (requirements.txt).
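For reference, the generated amvera.yml looks roughly like this. This is only a sketch based on the fields described above; the exact field names may differ, so prefer the file produced by the configuration wizard:

```yaml
meta:
  environment: python
  toolchain:
    name: pip
    version: 3.11
run:
  scriptName: main.py
  requirementsPath: requirements.txt
```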

After approving the settings, we can finish creating the project.

Open the project page and be sure to add environment variables using the “Create secret” button in the “Variables” tab.

This completes the project setup.

Delivering code via Git

As I already said, you can upload files via the site – this is the Repository tab on the project page.

But it is much more convenient to use git. After a short one-time setup, you can update the repository with just 3 commands.

Install git and run the following commands in the command line (make sure you open the terminal in the project directory):

  1. git init – initialize git (a .git folder should appear).

  2. git remote add amvera https://git.amvera.ru/username/project-name – add the remote repository (the exact link can be found in the “Repository” tab).

  3. git add . – stage all files and folders in the initialized repository.

  4. git commit -m "First commit" – the first commit (a commit message is required).

  5. git push amvera master – finally, push the files to the repository.

The build should start automatically.

If you decide to upload all the files manually through the interface, you will then need to go to the “Configuration” tab and click the “Build” button.

Now, if everything goes well, the bot will work.

Summary

Now we have access to the relatively new GPT-4 Omni model directly in Telegram!
