How to quickly write an API using FastAPI, with validation and a database

All web requests are processed on the server side, as everyone knows. But sometimes you need to write a dedicated software interface, a so-called API, through which users can centrally receive data and make changes, for example, to their profile.

In this article, we will develop a simple API using a popular stack built around FastAPI. We'll look at the important concepts of working with this framework, sketch out the basic structure of the project, and deploy the application to a cloud server. Details under the cut!

Use navigation if you don’t want to read the entire text:

Preparing the environment
First sketches
Data Validation with Pydantic
Working with the database
Repository Pattern
Router
Deploying the project to a cloud server
Conclusion

Preparing the environment


The first step is to create a virtual environment for our project, into which we will install the necessary dependencies. Depending on your operating system and how Python was installed, one of the following commands may work:

python -m venv venv
python3 -m venv venv
py -m venv venv

Installing libraries

I suggest installing the necessary libraries right away using the following command:

pip install fastapi uvicorn pydantic aiosqlite sqlalchemy

If you run into version conflicts between libraries, refer to their documentation or to the pinned versions used in this project:

aiosqlite==0.19.0
fastapi==0.109.0
pydantic==2.5.3
SQLAlchemy==2.0.25
uvicorn==0.25.0

Let's take a brief look at their purpose: fastapi is the web framework itself; uvicorn is the ASGI web server that runs the application; pydantic handles data validation through type annotations; aiosqlite is an asynchronous driver for SQLite; and SQLAlchemy is the ORM we will use to work with the database.

First sketches


The main file through which our application will be launched is main.py. Create it in the root of your project directory.

Let's check that everything is working correctly. To do this, let's create a simple FastAPI application with one endpoint (also called a "handle" or "route"):

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def home():
    return {"data": "Hello World"}

And start the Uvicorn web server:

uvicorn main:app --reload

Note that we must point Uvicorn at the app variable (module:variable) to launch the application, and pass the --reload flag so that the web server immediately picks up any changes to the code.

Now, if you open http://127.0.0.1:8000 or http://localhost:8000 in your browser, you will see the web server's response: {"data": "Hello World"}.

Uvicorn works in conjunction with FastAPI as follows:

  1. our request goes to Uvicorn;
  2. Uvicorn passes this request to FastAPI;
  3. FastAPI runs the code we wrote and returns a response to Uvicorn: return {"data": "Hello World"};
  4. Uvicorn returns the response to us.

If you go to http://localhost:8000/docs, you will see a convenient interface for testing our endpoints.

If we run our query here, we'll see the same response: {"data": "Hello World"}.

Data Validation with Pydantic


Pydantic makes it possible to validate data through type annotations in Python. Let's create a simple schema for adding a new task:

from pydantic import BaseModel

class STaskAdd(BaseModel):
    name: str
    description: str | None = None

@app.post("/")
async def add_task(task: STaskAdd):
    return {"data": task}

Later we will need a schema for reading tasks from the database, which additionally has an id field (the primary key of the table). Let's write a schema for reading:

from pydantic import ConfigDict

class STask(STaskAdd):
    id: int

    model_config = ConfigDict(from_attributes=True)

Note that we are not inheriting from BaseModel, but from the STaskAdd schema we just created. This way we inherit the name and description fields, and all we have to do is add id. We also set the model_config attribute, which we'll come back to in the Repository Pattern section.
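To see the validation in action, here is a minimal sketch you can run in a Python shell (the task data is made up for illustration):

from pydantic import ValidationError

# Valid input: description may be omitted because it defaults to None
task = STaskAdd(name="Buy milk")
print(task.model_dump())  # {'name': 'Buy milk', 'description': None}

# Invalid input: name must be a string, so Pydantic raises an error
try:
    STaskAdd(name=None)
except ValidationError as e:
    print(e)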

Save the file and go to the documentation at http://localhost:8000/docs:

Data entry at the endpoint.

If you press Try it out in the upper right corner, you will be asked to edit raw JSON by hand, which is not very convenient. In addition, it is hard to tell which fields are required. To improve the experience with the API documentation, let's change the add-task endpoint as follows:

from fastapi import Depends

@app.post("/")
async def add_task(task: STaskAdd = Depends()):
    return {"data": task}

In this article we will not dig into the details of Depends, since it is an advanced topic that deserves separate study. For now it is enough to see that the endpoint's appearance in the docs has improved significantly.

We can see that required fields now get a clear mark and can be conveniently filled in via the highlighted form:

Working with the database


SQLAlchemy is a powerful library for working with relational databases that accounts for a huge number of features and nuances of various DBMSs. We will use the asynchronous version of SQLAlchemy together with a SQLite database. First, create a database.py file next to main.py and paste in the following code:

from sqlalchemy.ext.asyncio import async_sessionmaker, create_async_engine

engine = create_async_engine("sqlite+aiosqlite:///tasks.db")
new_session = async_sessionmaker(engine, expire_on_commit=False)

Here we create an asynchronous engine that is responsible for sending queries to the database. Note that we tell SQLAlchemy to use the asynchronous aiosqlite driver. On top of the engine we create a session factory, new_session. A session lets you work not with plain lists and dictionaries, but with data models defined as classes. Let's create a task model:

from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Model(DeclarativeBase):
    pass

class TaskOrm(Model):
    __tablename__ = "tasks"

    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str]
    description: Mapped[str | None]

To create models, we always need a base class to inherit from; in our case, the "parent" class is DeclarativeBase. A model corresponds to one table in the database, whose name we set in the __tablename__ attribute. In databases, each table usually has a column of unique values called id.

SQLAlchemy, like Pydantic, uses type annotations to define column types. The TaskOrm model fully describes the table inside the database: primary and foreign keys, indexes, constraints, and so on.
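For illustration only, here is a hedged sketch of what richer column definitions might look like; the UserOrm model below is hypothetical and not part of our project:

from datetime import datetime
from sqlalchemy import String, func
from sqlalchemy.orm import Mapped, mapped_column

class UserOrm(Model):
    __tablename__ = "users"

    id: Mapped[int] = mapped_column(primary_key=True)
    # VARCHAR(100) with a unique index
    email: Mapped[str] = mapped_column(String(100), unique=True, index=True)
    # filled in by the database itself on INSERT
    created_at: Mapped[datetime] = mapped_column(server_default=func.now())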

To create a table inside a SQLite database, you need to add the following functions to the database.py file:

async def create_tables():
    async with engine.begin() as conn:
        await conn.run_sync(Model.metadata.create_all)

async def delete_tables():
    async with engine.begin() as conn:
        await conn.run_sync(Model.metadata.drop_all)

These functions are responsible for creating and deleting tables in the database.
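If you want to try these functions outside of FastAPI, a minimal sketch could run them through asyncio:

import asyncio
from database import create_tables, delete_tables

async def main():
    await create_tables()  # creates every table registered on Model.metadata
    await delete_tables()  # drops them again

asyncio.run(main())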

Let's take a look at the lifecycle of a FastAPI application and learn how to create the table when the application starts and delete it when it shuts down. To do this, we will write a lifespan function in the main.py file and pass it as the lifespan parameter when creating the app variable via FastAPI(...):

from contextlib import asynccontextmanager
from fastapi import FastAPI
from database import create_tables, delete_tables

@asynccontextmanager
async def lifespan(app: FastAPI):
    await create_tables()
    print("The database is ready")
    yield
    await delete_tables()
    print("The database is cleared")

app = FastAPI(lifespan=lifespan)

If you are running Uvicorn with the --reload parameter, then after saving main.py you should see the phrase "The database is ready" in the terminal. This means the function has successfully run all the code up to the yield statement. The code after yield will run when Uvicorn shuts down, for example when you press CTRL + C.

Database queries

To query the database, we will use the SQLAlchemy ORM (Object Relational Mapper), which lets you operate on class instances as if they were actual rows in the database.

Let’s create a repository.py file with a simple function to add a task:

from database import TaskOrm, new_session

async def add_task(data: dict) -> int:
    async with new_session() as session:
        new_task = TaskOrm(**data)
        session.add(new_task)
        await session.flush()
        await session.commit()
        return new_task.id

The function uses the new_session session factory and the TaskOrm model to add a new row to the tasks table. Note that we use an asynchronous context manager, async with new_session() as session, which automatically closes the session when we leave the block, so we don't have to call session.close() manually every time.

The line new_task = TaskOrm(**data) creates a new row, but for now it exists only inside our FastAPI application: the database doesn't know anything about it yet. The line session.add(new_task) registers the new row with the session object, so that SQLAlchemy knows which changes to send to the database, but we still haven't told the database anything about the new task.

The line await session.flush() sends a query like INSERT INTO tasks (name, description) VALUES ('Jack', NULL) RETURNING id to the database, but does not complete the transaction, meaning the changes are still not persisted. Flushing lets us get the value of the new task's id column, which we return at the end of the function.

Since we want the changes to end up in the database, at the end we call await session.commit(), which commits the changes and ends the transaction.

Note that only awaited operations actually send queries to the database; synchronous calls such as session.add() only change state inside our application. Keep this in mind when working with the session object.
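To make the flow concrete, here is a minimal sketch of calling add_task outside of FastAPI (the task data is made up, and we create the tables first so the INSERT has somewhere to go):

import asyncio
from database import create_tables
from repository import add_task

async def main():
    await create_tables()  # make sure the tasks table exists
    task_id = await add_task({"name": "Jack", "description": None})
    print(f"Created task with id={task_id}")

asyncio.run(main())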

After we add a task, we will most likely want to get a list of all the tasks. To do this, let’s create another function:

from sqlalchemy import select

async def get_tasks():
    async with new_session() as session:
        query = select(TaskOrm)
        result = await session.execute(query)
        task_models = result.scalars().all()
        return task_models

Here we write a simple SELECT query that fetches all the rows from the table. Since we ask to select whole objects of the TaskOrm class, SQLAlchemy converts the database response into instances of the TaskOrm model. Note that result is not a plain list but a Result object; to unwrap the model instances from it, we call result.scalars().all().
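Filtering works the same way. For example, a hypothetical helper (our own addition, not part of the article's project) that fetches a single task by its id might look like this:

async def get_task_by_id(task_id: int) -> TaskOrm | None:
    async with new_session() as session:
        query = select(TaskOrm).where(TaskOrm.id == task_id)
        result = await session.execute(query)
        # one_or_none() returns the model instance, or None if no row matched
        return result.scalars().one_or_none()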

You can learn more about working with SQLAlchemy in this playlist.

Repository Pattern


Both functions access the tasks table, so it makes sense to combine them into one class. Such classes, which work with a specific table and are responsible for adding, changing, fetching, and deleting rows, are called repositories, after the pattern of the same name.

Let’s create our first repository class, and at the same time add conversion of the received data into Pydantic schemas:

from sqlalchemy import select
from database import TaskOrm, new_session
from schemas import STaskAdd, STask

class TaskRepository:
    @classmethod
    async def add_task(cls, task: STaskAdd) -> int:
        async with new_session() as session:
            data = task.model_dump()
            new_task = TaskOrm(**data)
            session.add(new_task)
            await session.flush()
            await session.commit()
            return new_task.id

    @classmethod
    async def get_tasks(cls) -> list[STask]:
        async with new_session() as session:
            query = select(TaskOrm)
            result = await session.execute(query)
            task_models = result.scalars().all()
            tasks = [STask.model_validate(task_model) for task_model in task_models]
            return tasks

Now, when adding a task, we accept not an arbitrary dictionary but a Pydantic schema, and then convert it into a dictionary with data = task.model_dump(). Likewise, when returning all tasks, we first convert them into the Pydantic STask schema; this is where the from_attributes=True setting in model_config comes into play, allowing model_validate to read the data from the ORM object's attributes.
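The repository is easy to extend with more operations. As an illustration, here is a hedged sketch of a delete method in the same style; delete_task is our own addition and not part of the article's project:

from sqlalchemy import delete

class TaskRepository:  # continuation: an extra method for the same class
    @classmethod
    async def delete_task(cls, task_id: int) -> bool:
        async with new_session() as session:
            query = delete(TaskOrm).where(TaskOrm.id == task_id)
            result = await session.execute(query)
            await session.commit()
            # rowcount > 0 means a matching row existed and was removed
            return result.rowcount > 0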

Router


The last step is to create a router and add endpoints to it. A router is a FastAPI entity that lets you spread endpoints across many files instead of keeping them all in main.py, which keeps the project structure easy to read.

Let's create a router.py file and declare a router for tasks in it:

from fastapi import APIRouter

router = APIRouter(
    prefix="/tasks",
    tags=["Tasks"],
)

Each endpoint will get the /tasks prefix, and the Tasks tag will be shown in the documentation at /docs. Now let's add two endpoints: one to add a task and one to get them all:

from fastapi import Depends
from repository import TaskRepository
from schemas import STask, STaskAdd, STaskId

@router.post("")
async def add_task(task: STaskAdd = Depends()) -> STaskId:
    new_task_id = await TaskRepository.add_task(task)
    return {"id": new_task_id}

@router.get("")
async def get_tasks() -> list[STask]:
    tasks = await TaskRepository.get_tasks()
    return tasks

Let's also create a separate STaskId schema that describes the response returned by the add_task endpoint:

class STaskId(BaseModel):
    id: int

To include this router in our application, just import the router.py file in the main.py file and add the router using the include_router method:

from router import router as tasks_router

app = FastAPI(lifespan=lifespan)
app.include_router(tasks_router)

In router.py we use the repository we created earlier, type annotations, and function return types. This lets FastAPI validate the data returned to the client and improves the API documentation:

This is what the GET /tasks handle looks like:
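With the server running locally, you can also try the endpoints from the command line; the payload below is made up. Note that, because of Depends(), the STaskAdd fields are passed as query parameters:

curl -X POST "http://localhost:8000/tasks?name=Buy%20milk"
# {"id":1}
curl http://localhost:8000/tasks
# [{"name":"Buy milk","description":null,"id":1}]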

Ready! We have laid out the basic structure of the project, and now we can move on to the next stage: deploying it on a real server.

Deploying the project to a cloud server


Preparation

Before we begin, let's create a requirements.txt file listing all the dependencies used in the project. This is done with the pip freeze > requirements.txt command. I ended up with the following file:

aiosqlite==0.19.0
annotated-types==0.6.0
anyio==4.2.0
click==8.1.7
colorama==0.4.6
fastapi==0.109.0
greenlet==3.0.3
h11==0.14.0
idna==3.6
pydantic==2.5.3
pydantic_core==2.14.6
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.35.1
typing_extensions==4.9.0
uvicorn==0.25.0

Typically, applications and services are deployed on a server using Docker. To create an image of our application, we will need to create a Dockerfile:

FROM python:3.11-slim
COPY . .
RUN pip install -r requirements.txt
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]

To run the application we need Python 3.11. The COPY command moves all files from the current project into the image being built. Next, we install all the dependencies. At the end of the Dockerfile we must specify CMD with the command that will be launched when the container starts. Remember that an image does not run the application; it only stores the files and dependencies, while a container is a running image.
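As a side note, a common refinement (not required for this tutorial) is to copy requirements.txt before the rest of the code, so Docker can reuse the cached dependency layer when only the source changes:

FROM python:3.11-slim
WORKDIR /app
# This layer is rebuilt only when requirements.txt changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# The source code changes often, so it is copied last
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]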

In addition to the Dockerfile, let's add a .dockerignore file so that the image does not include the virtual environment or the Dockerfile itself:

venv
Dockerfile

Next, let's create a .gitignore file:

__pycache__
venv

Then initialize a Git repository and push the project to GitHub:

git init
git add .
git commit -m "initial commit"
git remote add origin REPO_URL
git push -u origin main

Loading a project

Go to the Cloud platform section inside the control panel and press Create a server:

To run our application, one vCPU core with a share of 10% and 512 MB of RAM will be enough.

Great, the server is ready, and now we can transfer the project. First, let's install the necessary dependencies: git and Docker. The instructions are taken from the official website:

sudo apt-get update
sudo apt-get install git
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
 "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
 $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
 sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

After installing git and Docker, you need to clone the previously created repository:

git clone REPO_URL.git

You can also use the ready-made repository with the command:

git clone https://github.com/artemonsh/fastapi_beginner_course.git

After cloning the project, you need to go to the project folder:

cd <project_folder>

And run the command to build the fastapi_app image and start the container on port 80:

docker build . --tag fastapi_app && docker run -p 80:80 fastapi_app

Congratulations! Your application is now accessible via the server's IP address, and other users can visit it and use your API.

Conclusion


We learned how to implement a simple API based on FastAPI, create a database and tables inside it using SQLAlchemy, and describe and validate data schemas with Pydantic. The knowledge gained is a foundation for building more complex applications.

I hope this article inspires you to experiment further and create your own projects. You can expand the functionality, dive deeper into FastAPI's capabilities, and tailor the application to your needs.

Author: Artem Shumeiko, author of a YouTube channel.

Perhaps these texts will also interest you:

→ The basics of search engine optimization for a web developer: boosting SEO with code and common sense
→ Figma has closed Dev Mode: workarounds and their brief overview
→ Diving into Kubernetes: useful materials from Selectel employees
