Telegram bot interacting with OpenAI API without proxying. Python development
Let's create a bot that uses the OpenAI API and deploy it on a server, so that we neither have to configure request proxying to the OpenAI API (which is blocked for users from Russia) nor rent a foreign VPS.
The bot should help to:
Automate routine tasks (writing code, documentation, tests).
Provide recommendations and code examples.
Analyze code, find errors, and suggest improvements.
Reduce development and testing time.
Planning and designing a bot
Functional requirements:
Welcome message on startup.
Generating responses via OpenAI API.
Processing commands such as /start and /bot.
Error logging.
Automatic reconnection on failure.
Main use cases
Launching the bot: the user starts the bot with /start and receives a welcome message.
Information request: the user sends the /bot command or any text message. The bot generates a response using the OpenAI API and sends it to the user.
Error handling: on a failure, the bot logs the error and automatically reconnects.
Selection of technologies and tools
We will use the following technologies and tools:
Python: simple and easy to learn, with a large ecosystem of libraries and tools.
TeleBot: a simple interface for interacting with the Telegram API.
OpenAI API: uses the GPT-3.5 model to generate text responses.
Writing the code for a Telegram bot
The bot includes several key components that ensure its functionality and interaction with users:
Telegram Bot API: This component is responsible for receiving and sending messages to users via the Telegram platform.
OpenAI API: Used to generate responses to user queries using the GPT-3.5 model.
Logging: Keeps a record of events and errors for later analysis and debugging.
Main Loop (Event Loop): Ensures continuous operation of the bot and processing of all incoming messages.
These components interact as follows:
The user sends a message to the bot in Telegram.
The bot receives a message via the Telegram Bot API and sends a request to the OpenAI API to generate a response.
The received response is returned to the user via Telegram.
All events and errors are recorded in the log for monitoring and debugging.
1. Initializing the Bot and OpenAI API Keys
First, you need to set up API keys for OpenAI and Telegram.
import openai
import telebot
import logging
import os
import time

openai.api_key = 'Your OpenAI API key'
bot = telebot.TeleBot('Your Telegram token')
Here we import the necessary libraries and set the keys to access the OpenAI and Telegram APIs.
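Hardcoding keys in the source file is risky: anyone with access to the code gets access to your accounts. A common alternative is to read them from environment variables. Below is a sketch of my own; the variable names OPENAI_API_KEY and TELEGRAM_TOKEN are conventions I chose, not requirements of either library:

```python
import os

def load_keys():
    """Read API keys from the environment; fail fast if either is missing."""
    openai_key = os.environ.get("OPENAI_API_KEY")
    telegram_token = os.environ.get("TELEGRAM_TOKEN")
    if not openai_key or not telegram_token:
        raise RuntimeError("Set OPENAI_API_KEY and TELEGRAM_TOKEN before starting the bot")
    return openai_key, telegram_token
```

You would then assign `openai.api_key, token = load_keys()` instead of pasting the secrets into the code.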
2. Setting up logging
Logging allows you to track events and errors in the bot's operation.
log_dir = os.path.join(os.path.dirname(__file__), 'ChatGPT_Logs')
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
logging.basicConfig(filename=os.path.join(log_dir, 'error.log'), level=logging.ERROR,
                    format="%(levelname)s: %(asctime)s %(message)s", datefmt="%d/%m/%Y %H:%M:%S")
We create a directory for logs and configure logging parameters for ease of analysis.
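A single error.log file grows without bound on a long-running bot. If you want rotation, Python's standard library provides RotatingFileHandler; the size limit and backup count below are illustrative values of my own, not settings from the article:

```python
import logging
import logging.handlers
import os

# Same log directory convention as in the article.
log_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "ChatGPT_Logs")
os.makedirs(log_dir, exist_ok=True)

# Rotate after ~1 MB, keeping error.log.1 .. error.log.3 as backups.
handler = logging.handlers.RotatingFileHandler(
    os.path.join(log_dir, "error.log"),
    maxBytes=1_000_000,
    backupCount=3,
)
handler.setFormatter(logging.Formatter(
    "%(levelname)s: %(asctime)s %(message)s", datefmt="%d/%m/%Y %H:%M:%S"))

logger = logging.getLogger("bot")
logger.setLevel(logging.ERROR)
logger.addHandler(handler)
```

The handlers below would then call `logger.error(...)` instead of the module-level `logging.error(...)`.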
3. Processing commands and messages
Let's define handlers for the /start and /bot commands, as well as for any text messages.
@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.reply_to(message, 'Hi!\nI am a ChatGPT 3.5 Telegram bot\U0001F916\nAsk me any question and I will try to answer it')

def generate_response(prompt):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    return completion.choices[0].message.content

@bot.message_handler(commands=['bot'])
def command_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.reply_to(message, text=response)

@bot.message_handler(func=lambda _: True)
def handle_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.send_message(chat_id=message.from_user.id, text=response)
send_welcome: sends a welcome message in response to the /start command.
generate_response: generates a response using the OpenAI API.
command_message and handle_message: process the /bot command and plain text messages, replying with responses generated by the OpenAI API.
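Note that generate_response calls the OpenAI API directly, so any network or rate-limit error propagates into the message handler. A defensive wrapper can log the failure and return a friendly message instead of crashing. This is a sketch of my own (the fallback text and the injected `call` parameter are not part of the article's code; injection just makes the wrapper testable without network access):

```python
import logging

def safe_generate_response(prompt, call):
    """Run `call(prompt)`; on any error, log it and return a fallback reply."""
    try:
        return call(prompt)
    except Exception as e:  # in real use, catch openai.error.OpenAIError
        logging.error("OpenAI request failed: %s", e)
        return "Sorry, I could not get an answer right now. Please try again."
```

In the bot you would pass `generate_response` as `call`, so handlers always have something to send back.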
4. Main loop
Start the main loop to process messages and reconnect on failures.
print('ChatGPT Bot is working')
while True:
    try:
        bot.polling()
    except (telebot.apihelper.ApiException, ConnectionError) as e:
        logging.error(str(e))
        time.sleep(5)
        continue
Here we start the main loop, which constantly checks for new messages and processes them. In case of an error, the bot writes it to the log and tries to restore the connection.
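The fixed 5-second pause can be generalized to exponential backoff, so repeated failures put progressively less load on the network. The helper below is a sketch of my own, not part of the article's code; the `poll` function and `sleep` are injected so the retry logic can be tested without a live bot:

```python
import logging
import time

def run_with_backoff(poll, first_delay=5, max_delay=60, sleep=time.sleep, attempts=None):
    """Call poll() repeatedly; after each failure wait `delay` seconds,
    doubling the delay up to `max_delay`. `attempts` limits the number
    of calls (None means run forever, as the bot's main loop does)."""
    delay = first_delay
    tries = 0
    while attempts is None or tries < attempts:
        tries += 1
        try:
            poll()
            delay = first_delay  # reset the pause after a clean run
        except Exception as e:
            logging.error(str(e))
            sleep(delay)
            delay = min(delay * 2, max_delay)
```

In the bot this would be `run_with_backoff(bot.polling)`. Alternatively, pyTelegramBotAPI ships `bot.infinity_polling()`, which handles reconnection internally.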
Putting it all together, here is the complete code of our bot:
import openai
import telebot
import logging
import os
import time

openai.api_key = 'Openai_api_key'
bot = telebot.TeleBot('Telegram_token')

log_dir = os.path.join(os.path.dirname(__file__), 'ChatGPT_Logs')
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
logging.basicConfig(filename=os.path.join(log_dir, 'error.log'), level=logging.ERROR,
                    format="%(levelname)s: %(asctime)s %(message)s", datefmt="%d/%m/%Y %H:%M:%S")

@bot.message_handler(commands=['start'])
def send_welcome(message):
    bot.reply_to(message, 'Hi!\nI am a ChatGPT 3.5 Telegram bot\U0001F916\nAsk me any question and I will try to answer it')

def generate_response(prompt):
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": prompt}
        ]
    )
    return completion.choices[0].message.content

@bot.message_handler(commands=['bot'])
def command_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.reply_to(message, text=response)

@bot.message_handler(func=lambda _: True)
def handle_message(message):
    prompt = message.text
    response = generate_response(prompt)
    bot.send_message(chat_id=message.from_user.id, text=response)

print('ChatGPT Bot is working')
while True:
    try:
        bot.polling()
    except (telebot.apihelper.ApiException, ConnectionError) as e:
        logging.error(str(e))
        time.sleep(5)
        continue
Deploy to a server with access to the OpenAI API
For deployment, we will use the Amvera platform.
Why Amvera?
Amvera provides built-in free proxying to the OpenAI API, so you don't need an overseas VM or a VPN.
Deployment is as simple as possible: upload the code through the web interface or via git push.
A starting balance lets you try out the service.
Launching our bot in the cloud
Let's now move on to the most interesting part of this article: how to deploy a bot without using foreign servers and without setting up proxying to the OpenAI API.
Registration in the service
On the Amvera site, click the “Registration” button.
Fill in all the fields one by one.
Confirm that you are not a robot and click the big blue “Registration” button.
All that remains is to confirm the specified email by clicking the link in the letter.
Creating a project and placing a bot
On the page that appears after logging in, click on the “Create” or “Create first!” button.
Select a tariff. It may seem that the tariff plans provide too few resources compared to a VPS. However, on a VPS part of the resources is consumed by the operating system, while here the entire allocation goes to the deployed application. The Trial tariff will be enough for us, but it is better to perform the first launch on one of the higher tariffs to make sure everything works.
Let's create the amvera.yml configuration file. You can write it yourself based on the documentation, but I recommend using the automatic graphical generation tool or doing this in your personal account on the Configuration tab.
We use Python, so let's specify its version.
requirements.txt is the file with dependencies. It is very important to list all libraries used in the project in this file so that the service can install them via pip. Each library must be specified in the format library==version.
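As an illustration, a minimal requirements.txt for this bot might look like the following. The exact version numbers are assumptions on my part; the important point is to pin openai below 1.0, since the code in this article uses the pre-1.0 ChatCompletion interface:

```
pyTelegramBotAPI==4.14.0
openai==0.28.0
```

(pyTelegramBotAPI is the package that provides the telebot module imported above.)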
Specify the path to the file containing the program's entry point (the file you pass to the Python interpreter when launching the application), or the launch command.
If your bot uses SQLite during operation, save its data to the persistent storage /data. Otherwise, all data will be lost when the project restarts!
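For example, a sketch of pointing SQLite at the persistent volume (the local fallback to the current directory is my own convention so that the same code also runs outside Amvera; the table is purely illustrative):

```python
import os
import sqlite3

# Use the mounted persistent volume /data when it exists (as on Amvera),
# otherwise fall back to the current directory for local runs.
DATA_DIR = "/data" if os.path.isdir("/data") else "."
db_path = os.path.join(DATA_DIR, "bot.sqlite3")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE IF NOT EXISTS chats (chat_id INTEGER PRIMARY KEY)")
conn.commit()
```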
Specify the port that your application code uses. Don't forget to change localhost to 0.0.0.0.
Click the Generate YAML button, after which the amvera.yml file will download.
Put the downloaded file in the root of our project.
Let's initialize a Git repository and upload our project.
In the root of the project, run:
git init
(if git is already initialized in your project, skip this step). Link the local repository to the remote one using the command shown on the project page in Amvera (it has the format git remote add amvera https://git.amvera.ru/your_username/your_project). Then run:
git add .
git commit -m "Initial commit"
Push the project with:
git push amvera master
entering the credentials you used when registering with the service.
After the project is pushed to the system, the status on the project page will change to “Building in progress”.
Once the project is built, it will move to the “Deployment in progress” stage, and then to the “Successfully deployed” status.
If for some reason the project did not deploy, refer to the build logs and application logs for debugging. If the project is stuck in the “Building” status for a long time and the build logs are not displayed, it is worth re-checking the correctness of the amvera.yml file.
Hooray, it works! Now our Telegram bot is deployed and ready to use. You can follow all the steps in the article and test it by sending commands and messages to see how it works with the OpenAI API.