A Telegram bot working with the OpenAI API without proxying. Python development

Let's create a bot that uses the OpenAI API and deploy it on a server so that we neither have to configure proxying of requests to the OpenAI API (which is blocked for users from Russia) nor rent a foreign VPS.

The bot should help to:

  • Automate routine tasks (writing code, documentation, tests).

  • Provide recommendations and code examples.

  • Analyze code, find errors, and suggest improvements.

  • Reduce development and testing time.

Planning and designing a bot

Functional requirements:

  1. Welcome message on startup.

  2. Generating responses via OpenAI API.

  3. Processing commands, for example, /bot or /start.

  4. Error logging.

  5. Automatic reconnection on failure.

Main use cases

  1. Launching the bot:

    • The user starts the bot and receives a welcome message.

  2. Information request:

    • The user sends the /bot command or any text message.

    • The bot generates a response using the OpenAI API and sends it to the user.

  3. Error processing:

    • On a failure, the bot logs the error and automatically reconnects.

Selection of technologies and tools

We will use the following technologies and tools:

  • Python: simple and easy to learn, with a large number of libraries and tools.

  • TeleBot: Simple interface for interacting with Telegram API.

  • OpenAI API: Using GPT-3.5 model to generate text responses.

Writing the code for a Telegram bot

The bot includes several key components that ensure its functionality and interaction with users:

  1. Telegram Bot API: This component is responsible for receiving and sending messages to users via the Telegram platform.

  2. OpenAI API: Used to generate responses to user queries using the GPT-3.5 model.

  3. Logging: Keeps a record of events and errors for later analysis and debugging.

  4. Main Loop (Event Loop): Ensures continuous operation of the bot and processing of all incoming messages.

These components interact as follows:

  • The user sends a message to the bot in Telegram.

  • The bot receives a message via the Telegram Bot API and sends a request to the OpenAI API to generate a response.

  • The received response is returned to the user via Telegram.

  • All events and errors are recorded in the log for monitoring and debugging.
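This flow can be sketched end-to-end with stub functions (the names here are illustrative, not part of the real bot; the OpenAI call is replaced by an echo):

```python
import logging

def receive_message(update):
    # Telegram Bot API side: extract the text of an incoming update
    return update["message"]["text"]

def generate_response(prompt):
    # OpenAI API side: stubbed here with an echo instead of a real model call
    return f"Model reply to: {prompt}"

def handle_update(update):
    # Main loop body: receive -> generate -> send, logging any failure
    try:
        prompt = receive_message(update)
        return generate_response(prompt)  # in the real bot this is sent via Telegram
    except (KeyError, TypeError) as e:
        logging.error(str(e))
        return None

print(handle_update({"message": {"text": "hello"}}))
```

A malformed update falls into the `except` branch: the error is logged and `None` is returned instead of crashing the loop.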

1. Initializing the Bot and OpenAI API Keys

First, you need to set up API keys for OpenAI and Telegram.

import openai
import telebot
import logging
import os
import time

openai.api_key = 'Your OpenAI API key'
bot = telebot.TeleBot('Your Telegram token')

Here we import the necessary libraries and set the keys to access the OpenAI and Telegram APIs.
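Hardcoding keys in the source is risky (they end up in the git history). A common alternative, sketched here as an addition to the article's code, is to read them from environment variables:

```python
import os

def load_credentials(env=os.environ):
    """Fetch API credentials from the environment instead of hardcoding them."""
    openai_key = env.get("OPENAI_API_KEY")
    telegram_token = env.get("TELEGRAM_TOKEN")
    if not openai_key or not telegram_token:
        raise RuntimeError("Set the OPENAI_API_KEY and TELEGRAM_TOKEN environment variables")
    return openai_key, telegram_token

# Usage (commented out so the sketch runs without real keys):
# openai.api_key, token = load_credentials()
# bot = telebot.TeleBot(token)
```

Failing fast with a clear error when a variable is missing is easier to debug than a cryptic authorization failure at the first API call.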

2. Setting up logging

Logging allows you to track events and errors in the bot's operation.

log_dir = os.path.join(os.path.dirname(__file__), 'ChatGPT_Logs')
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
logging.basicConfig(filename=os.path.join(log_dir, 'error.log'), level=logging.ERROR,
                    format="%(levelname)s: %(asctime)s %(message)s", datefmt="%d/%m/%Y %H:%M:%S")

We create a directory for logs and configure logging parameters for ease of analysis.

3. Processing commands and messages

Let's define functions for processing the /start and /bot commands, as well as any text messages.

@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.reply_to(message, 'Hello!\nI am a ChatGPT 3.5 Telegram Bot\U0001F916\nAsk me any question and I will try to answer it')

def generate_response(prompt):
    # Note: this uses the pre-1.0 openai SDK (openai<1.0),
    # which exposes openai.ChatCompletion
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content

@bot.message_handler(commands=['bot'])
def command_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.reply_to(message, text=response)

@bot.message_handler(func=lambda _: True)
def handle_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.send_message(chat_id=message.from_user.id, text=response)

  • send_welcome: Sends a welcome message in response to the /start command.

  • generate_response: Generates a response using the OpenAI API.

  • command_message and handle_message: Process commands and text messages, generating responses via the OpenAI API.

4. Main loop

Start the main loop to process messages and reconnect on failures.

print('ChatGPT Bot is working')

while True:
    try:
        bot.polling()
    except (telebot.apihelper.ApiException, ConnectionError) as e:
        logging.error(str(e))
        time.sleep(5)
        continue

Here we start the main loop, which constantly checks for new messages and processes them. In case of an error, the bot writes it to the log and tries to restore the connection.
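A fixed five-second pause works, but on repeated failures it keeps hitting the API at the same rate. An exponential backoff variant (a sketch of a possible improvement, not part of the original code) doubles the delay on consecutive errors and resets it after a successful cycle:

```python
import time

def run_with_backoff(poll, base_delay=5, max_delay=300, sleep=time.sleep):
    """Call poll() forever; on failure wait base_delay, then 2x, 4x... capped at max_delay."""
    delay = base_delay
    while True:
        try:
            poll()
            delay = base_delay  # a successful cycle resets the delay
        except ConnectionError as e:
            print(f"error: {e}, retrying in {delay}s")
            sleep(delay)
            delay = min(delay * 2, max_delay)

# In the bot this would be: run_with_backoff(bot.polling)
```

The `sleep` parameter is injected only to make the function testable; in production the default `time.sleep` is used.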

Putting it all together, we get the finished code for our bot:

import openai
import telebot
import logging
import os
import time

openai.api_key = 'Openai_api_key'
bot = telebot.TeleBot('Telegram_token')

log_dir = os.path.join(os.path.dirname(__file__), 'ChatGPT_Logs')

if not os.path.exists(log_dir):
    os.makedirs(log_dir)

logging.basicConfig(filename=os.path.join(log_dir, 'error.log'), level=logging.ERROR,
                    format="%(levelname)s: %(asctime)s %(message)s", datefmt="%d/%m/%Y %H:%M:%S")

@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.reply_to(message, 'Hello!\nI am a ChatGPT 3.5 Telegram Bot\U0001F916\nAsk me any question and I will try to answer it')

def generate_response(prompt):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return completion.choices[0].message.content


@bot.message_handler(commands=['bot'])
def command_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.reply_to(message, text=response)


@bot.message_handler(func=lambda _: True)
def handle_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.send_message(chat_id=message.from_user.id, text=response)


print('ChatGPT Bot is working')

while True:
    try:
        bot.polling()
    except (telebot.apihelper.ApiException, ConnectionError) as e:
        logging.error(str(e))
        time.sleep(5)
        continue

Deploy to a server with access to the OpenAI API

For deployment, we will use the Amvera platform.

Why Amvera?

  • Amvera provides built-in free proxying to the OpenAI API, so you need neither a foreign VM nor a VPN.

  • Deployment is as simple as possible: upload the code through the interface or via git push.

  • A starting balance lets you test the service.

Launching our bot in the cloud

Let's now move on to the most interesting part of this article: how to deploy a bot without using foreign servers and without setting up proxying to the OpenAI API.

Registration in the service

  1. On the Amvera site, click the “Registration” button.

  2. Fill in all the fields.

  3. Confirm that you are not a robot and click the big blue “Registration” button.

  4. All that remains is to confirm the specified email by clicking the link in the letter.

Creating a project and placing a bot

  1. On the page that appears after logging in, click on the “Create” or “Create first!” button.

  2. Select a tariff. Tariff plans may seem to provide too few resources compared to a VPS. However, on a VPS part of the resources is consumed by the operating system, while here the entire allocated resource goes to the deployed application. The Trial tariff will be enough for us, but it is better to perform the first launch on one of the higher tariffs to make sure that everything works.

    New Project Creation Window

  3. Let's create a YAML configuration file. You can write it yourself based on the documentation; however, I recommend using the automatic graphical generation tool or doing it in your personal account on the Configuration tab.

    Graphical tool for generating .yaml files

    1. We use Python, let's specify its version.

    2. requirements.txt – the file with dependencies. It is very important to list all libraries used in the project in this file so that the service can install them via pip. Each library must be listed in the format library==version.

    3. Specify the path to the file containing the program's entry point (the file you pass to the Python interpreter when launching the application), or the launch command.

    4. If your bot uses SQLite during operation, save data to persistent storage /data. Otherwise, when you restart the project, all data will be lost!

    5. Specify the port used in your application code. Don't forget to change localhost to 0.0.0.0.

    6. Click the Generate YAML button, after which the amvera.yml file download begins.

  4. Put the downloaded file in the root of our project.

  5. Let's initialize a Git repository and upload our project.

    • In the root of the project, run: git init (if git is already initialized in your project, skip this step).

    • Link the local git repository to the remote one using the command shown on the project page in Amvera (it has the form git remote add amvera https://git.amvera.ru/your_username/your_project).

    • Run git add . and git commit -m "Initial commit".

    • Push the project with git push amvera master, entering the credentials used when registering with the service.

  6. After the project is pushed to the system, the status on the project page will change to “Building in progress”.

    A successfully deployed project

  7. Once the project is built, it will move to the “Deployment in progress” stage, and then to the “Successfully deployed” status.

    A successfully deployed project

  8. If for some reason the project did not deploy, refer to the build logs and application logs for debugging. If the project is stuck in the “Building” status for a long time and the build logs are not displayed, re-check the amvera.yml file.
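As an example, the requirements.txt mentioned in the configuration step might look like this for our bot (the version numbers below are illustrative; pin the versions you actually tested with, keeping openai below 1.0 since the code uses openai.ChatCompletion):

```
pyTelegramBotAPI==4.14.0
openai==0.28.1
```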

Hooray, it works! Now our Telegram bot is deployed and ready to use. You can follow all the steps in the article and test it by sending commands and messages to see how it works with the OpenAI API.
