Preparing an API testing report to keep everyone happy
We write a test script, run it, and get a neat report with the results

Hello. How often do you have to develop hundreds of automated tests and provide stakeholders with reports on the results? Personally, I have to do it very often, and Anna helps me with that.
Why is it needed
I work as a QA engineer in a large IT company. We provide testing as a service: teams that develop their own products come to us to automate their manual tests, usually UI or API tests. Many of them do not understand what test automation is, so every time we have to explain how it works. The most important thing in any testing is the test results, and everything goes smoothly if they are presented in a convenient and understandable way. Historically, we use the Allure Framework for reports, and our clients are used to seeing the results this way.

My team writes test scripts in Python. We are the first team in our company to introduce this language into testing. I have always wanted to do more and spend as little time on it as possible. Python has great built-in and third-party libraries, but they did not fully satisfy our needs: I had to import a bunch of libraries, write a lot of code, and spend a lot of time on it. Development and maintenance time are the main criteria I care about, and that is why Anna was developed.
How to install
To install, run the following command:
python -m pip install anna-api-test-framework
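As with any Python package, it is usually safer to install it into a virtual environment. A minimal sketch using only standard Python tooling (nothing Anna-specific here):
python -m venv .venv
source .venv/bin/activate  # on Windows: .venv\Scripts\activate
python -m pip install anna-api-test-framework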
How to use
Let’s start by importing the library.
from anna import Action, Report, Assert
Three classes are available:
Action – contains methods for making HTTP requests. Under the hood it uses the requests library. All request and response data is automatically added to the test report.
action = Action(url=url)
response = action.request(method=method)
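Since requests does the actual work, the returned response appears to behave like a regular requests.Response (the full script at the end of the article reads response.status_code from it). A minimal sketch under that assumption:
action = Action(url="https://google.com")
response = action.request(method='GET')
print(response.status_code)  # expect 200 for a successful GET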

Report – contains methods for adding the necessary data to the report: steps, test names, test grouping, links, descriptions, and so on.
Assert – contains comparison methods; in the example below, Assert.compare checks two values against a comparison operator and reports the given error message if the check fails.
@Report.epic('Simple tests')
@Report.story('Tests google')
@Report.testcase('https://www.google.com', 'Google')
@Report.link('https://www.google.com', 'Just another link')
class TestExample:
    @Report.title('Simple test google')
    @Report.severity('CRITICAL')
    def test_simple_request(self):
        url = "https://google.com"
        method = 'GET'
        want = 200
        Report.description(url=url, method=method, other="other information")
        action = Action(url=url)
        response = action.request(method=method)
        got = response.status_code
        with Report.step('Checking response'):
            Assert.compare(
                variable_first=want,
                comparison_sign='==',
                variable_second=got,
                text_error="Response status code is not equal to expected"
            )
How to run tests
To start, use the following command:
python -m pytest --alluredir="./results"
We use the pytest library to run our tests. With this command we run all tests from the current directory and tell pytest where to save the test result data; in this case, everything is saved to the results directory.
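If you do not want to type the option every time, the --alluredir flag can also be set once via addopts in pytest.ini; this is a standard pytest feature, not something specific to Anna:
[pytest]
addopts = --alluredir=./results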
How to generate a report
To generate and view a report, you need to install the Allure command-line utility.
Next, we generate a report from the collected test data in the results directory:
allure generate "./results" -c -o "./report"
This command generates a report in the report directory.
How to open a report
We execute the following command:
allure open "./report"
The command starts a local server through which the report from the report directory is opened in your browser.
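For a quick local look at the results, the Allure CLI also has a serve command that generates a temporary report and opens it in the browser in one step, so the separate generate and open calls can be skipped:
allure serve "./results"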

This report is based on the following test script:
from anna import Action, Report, Assert
# add all the necessary information for the test suite and group the tests
@Report.epic('Simple tests')
@Report.story('Tests google')
@Report.testcase('https://www.google.com', 'Google')
@Report.link('https://www.google.com', 'Just another link')
class TestExample:
    # the test method; give it a title and a severity level
    @Report.title('Simple test google')
    @Report.severity('CRITICAL')
    def test_simple_request(self):
        url = "https://google.com"
        method = 'GET'
        want = 200
        # add a description with all the necessary information to the report
        Report.description(url=url, method=method, other="other information")
        # create a new Action object
        action = Action(url=url)
        # perform the request
        response = action.request(method=method)
        # get the response status code
        got = response.status_code
        # add a test step to the report
        with Report.step('Checking response'):
            # check the response status code
            Assert.compare(
                variable_first=want,
                comparison_sign='==',
                variable_second=got,
                text_error="Response status code is not equal to expected"
            )
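While working on a single scenario, the usual pytest selectors still apply; for example, to run only this test (standard pytest behavior, nothing specific to Anna):
python -m pytest -k test_simple_request --alluredir="./results"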
Outcome
This tool speeds up the development of API test scripts and standardizes the look of test reports across all projects. Our tests are wired into CI/CD (Jenkins) and run automatically on events; all that is left is to watch the reports and keep the tests up to date.
PS
Thank you for your attention. I will be glad to hear your opinions, suggestions and comments.