Analytics on government projects – it’s not scary

How often have you heard analyst candidates in interviews turn down your project because they do not want to work in the public sector, believing that bureaucracy will get in the way of their work, their growth, and the release of a quality product?

Today I would like to share our experience with you and show that working as an analyst on government projects is not as scary as it might seem. My name is Georgy Dodeliya; I manage international IT projects at JSC GNIVC and have worked in the IT industry for more than 12 years. I started as a business analyst, then became interested in systems analysis, and after that moved into project management.

A little about JSC GNIVC: we have been on the market for more than 46 years, our staff exceeds 1,700 people, and we are represented in 8 cities of the Russian Federation. You are likely familiar with our products: mobile applications for different categories of taxpayers (personal accounts for individuals and individual entrepreneurs), an application for the self-employed, a platform for verifying receipts, a civil registry office project, and so on. We also maintain a set of systems for the internal needs of the Federal Tax Service of Russia.

I would like to begin with the context – a detailed look at the conditions in which we started.

The first condition was the closure of all offices for pandemic quarantine, which forced us to work remotely. This created a number of problems: interacting through untested tools, working with old processes, and difficulties in forming and equipping a team.

The second was that this is an international project, which involves interaction with a foreign state. We had to establish communication with colleagues from another country's tax authorities, and this was not easy for one simple reason: they don't know you, so you face problems with access, and they won't share information with you, since to them you are outsiders, "Varangians."

Third, we needed a quick start to development, because the State Contract had been concluded before the pandemic.

Fourth, the customer has a strict hierarchy. Not only will they not hand you the requirements – they won't even join you for a smoke break unless permission comes from above.

Fifth, the customer's IT processes are quite outdated. For example, there are no such roles as analyst, tester, or project manager – let alone DevOps. A developer goes "upstairs" to his manager (a representative of the "business"), receives a task, and goes off to implement it. A week later he gets a new task, and so on. Add to this that there are only two environments: development and production. And our colleagues develop only OLTP systems. Within our own project we ran into the difficulty of building an analytical system for detecting violators of tax discipline for one simple reason – the rather low quality of the data (because the business requirements for the system are poorly elaborated, colleagues constantly refine it and generate a lot of "crutches").

The sixth condition concerns an internal problem for Russia that we encountered when hiring – the big boom of interest in IT among "non-IT people." Picture a colleague from the business side who has written two small task statements for developers, attended a couple of trainings, composed a beautiful resume, and is now applying for an analyst role with high salary expectations. We had to spend a huge amount of time filtering out such candidates.

Where did we start? With a series of meetings between the analytics, development, and testing leads and the project manager. We discussed what each role expects from the results of the others' work, and in what form those results should be delivered.

At the end of this series of meetings, we had formed our requirements for the analytics team. Here they are.

First, our analyst must have business analysis skills. We always add to the contract a stage for studying the processes we are about to automate, so as not to automate chaos.

Second, the analyst must have architectural skills. To understand what is "under the hood" of the customer's hardware and software, we conduct a technical audit. Analysts are most often involved in formalizing the processes built within the customer's IT departments, and they help with communication and with organizing audits in all other areas of infrastructure and security.

The analyst must also know systems analysis. A bonus is familiarity with agile methodologies, BPMN notation, data modeling, use cases and user stories (hereinafter UC and US, respectively), and integration design. As you can imagine, this list could go on forever, and we needed to find specialists quickly, so any combination of these additional skills suited us – we were ready to teach the rest.

The next set of lead agreements concerned work entities, which we called "Jira work items." The entire scope of development was divided by business meaning into epics, which were decomposed into "stories." For a clear view of progress on each "story," we decided to create a sub-task for each role. Tasks may also appear during work that are not directly tied to a business feature; for these you simply create a task – for example, preparing a presentation, writing a PMI, or investigating a new library. The testing team additionally prepares test cases and test plans. Finally, we track defects.
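As an illustration, the hierarchy of work entities described above could be sketched like this (the names `Role`, `Story`, `Epic` and the sample titles are hypothetical, not our actual Jira configuration):

```python
from dataclasses import dataclass, field
from enum import Enum

class Role(Enum):
    ANALYST = "analyst"
    DEVELOPER = "developer"
    TESTER = "tester"

@dataclass
class SubTask:
    role: Role          # one sub-task per role shows per-role progress
    title: str
    done: bool = False

@dataclass
class Story:
    title: str
    subtasks: list = field(default_factory=list)

    def add_role_subtasks(self):
        # a sub-task for each role, as the process prescribes
        for role in Role:
            self.subtasks.append(SubTask(role, f"{role.value}: {self.title}"))

@dataclass
class Epic:
    title: str
    stories: list = field(default_factory=list)

# hypothetical sample data
epic = Epic("Tax discipline analytics")
story = Story("Detect late declarations")
story.add_role_subtasks()
epic.stories.append(story)
```

With this shape, progress on a "story" is simply the share of its sub-tasks marked `done`, broken down by role.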

After all this, we developed the following production process.

But first, a word about the role composition of the product team:

· a methodology group – former employees of the Central Office of the Federal Tax Service of Russia who moved to JSC GNIVC and completed product specialist courses;

· analysts, both business-oriented and system-oriented;

· a development team covering the frontend, service layer, and data layer (in our work we use a "data lake" approach);

· testing teams (manual, automation, and load testers).

The process is based on Kanban principles, visualization is carried out through a Kanban board.

Step 1 – filling the backlog. The backlog is generated by the methodology team. Colleagues write "stories" (As a… I want… so that…), supplemented by acceptance criteria. Backlog grooming is held at regular intervals with all team members, or representatives of each role, participating; expectations and acceptance criteria are clarified, and the "story" is refined accordingly.

Step 2 – task planning. The "story" is assigned a priority and moves into the "Scheduled" column.

Step 3. Analysts pick up tasks from the "Scheduled" column. The analyst collects requirements and designs the solution. Once collection and design are complete, he submits his specification for internal review by the analytics team and the methodology team. This is how we support the V&V principle – Validation and Verification.

Step 4. If the task passes review, it goes into the "Analysis ready" column. To do so, it must satisfy the Definition of Ready – a set of criteria that any specification must meet to be considered ready. For example, the specification must include a demo script, as well as a set of acceptance criteria refined to reflect the analyst's work.
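A Definition of Ready like this boils down to a simple checklist. A minimal sketch (the criterion names are hypothetical; a real DoR would list the team's actual artifacts):

```python
# Hypothetical DoR criteria: each key must be present and non-empty
DOR_CRITERIA = ("demo_script", "acceptance_criteria", "use_cases")

def meets_definition_of_ready(spec: dict) -> bool:
    """A specification is ready only when every DoR artifact is filled in."""
    return all(spec.get(criterion) for criterion in DOR_CRITERIA)

ready_spec = {
    "demo_script": "1. Open the form... 2. Check the result...",
    "acceptance_criteria": ["AC-1: ..."],
    "use_cases": ["UC-1 main flow", "UC-1 alternative flow"],
}
meets_definition_of_ready(ready_spec)               # True
meets_definition_of_ready({"use_cases": ["UC-1"]})  # False: no demo script
```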

Step 5 – code development. When the development team announces free capacity, a "story review" is scheduled, at which the analyst defends his specification before the development and testing teams. The task then goes into development, is decomposed, and the magic of writing code happens. To move a task from "Development" to "Development ready," it must meet the development team's Definition of Done.

At the "Development" stage, the testing team joins the work, preparing test data and test design.

Step 6. The task moves to "Development ready," after which the analyst and a product specialist run a demo on it. If everything is satisfactory, the goal of the "story" has been achieved: the problem posed at the start has been solved. The task then moves to "Testing." A "story" may move to "Testing" after the demo even with a minimal set of defects.

Step 7. Now it is the testing team's turn. Before testing begins, the analyst and a product specialist review the test cases to check that all variations are covered and nothing superfluous is included, so as not to waste time. Based on the testing results, the task is moved to "Testing ready" if it meets the test exit criteria – readiness criteria based on testing results. For example, a certain number of defects of a certain severity may be acceptable, and so on.

Step 8 – preparing the release. Once a certain number of "stories" have accumulated in "Testing ready," the team decides to combine them into a release.

Step 9 – the release is installed in the production environment. Once installation is complete, the tasks are moved to the "Deployed" status.
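The nine steps above amount to a linear board with guarded transitions. A minimal sketch (the column names are reconstructed from the steps and may differ from the team's actual board):

```python
# Allowed transitions, reconstructed from steps 1-9 (hypothetical naming)
TRANSITIONS = {
    "Backlog": {"Scheduled"},
    "Scheduled": {"Analysis"},
    "Analysis": {"Analysis ready"},
    "Analysis ready": {"Development"},
    "Development": {"Development ready"},
    "Development ready": {"Testing"},
    "Testing": {"Testing ready"},
    "Testing ready": {"Deployed"},
}

def move(status: str, target: str) -> str:
    """Move a task to the next column, rejecting illegal jumps."""
    if target not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {target}")
    return target

status = "Backlog"
for column in ("Scheduled", "Analysis", "Analysis ready"):
    status = move(status, column)
```

Encoding the transitions explicitly is what lets a board (or a bot over the tracker API) flag a task that tries to skip review or testing.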

What artifacts do the analysts on our project prepare?

Their tools are a collaboration system for writing documentation and a project management tool for tracking tasks.

In the collaboration system, a page called "Story" is created; it is assigned a number according to a specific mask. The "I want… so that…" and acceptance criteria are carried over to the page from the task tracking tool. Let me explain right away that a "Story" is a one-off page. If we need to change the same functionality later, we will not amend this "Story" – we will create a new one, because it will be a new business feature (with a new "I want" or a new goal). The US is then elaborated through UCs (main and alternative flows), and the specification ends with a script for the demo. If new screen forms need to be developed as part of the "Story," the analyst prepares mockups for them and places them in the GUI section of the "Story" page. Mockups are drafted in the draw.io macro and then sent to the designer, who implements them in Pixso and attaches a link to the "Story." All comments and rules governing the behavior of the screen forms are written up by the analyst as text under each mockup, and the UC steps reference them.

A UC, of course, has preconditions and extensions. One example of an extension is an algorithm. Each algorithm is written on a separate, versioned page in the algorithms block. Algorithm pages usually contain a brief description of the algorithm, its purpose, its input and output parameters, a calculation flowchart, and a step-by-step text description of that flowchart (possibly in table form).
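The structure of such an algorithm page maps naturally onto code. A sketch with an invented algorithm (`late_filing_check` and its parameters are hypothetical examples, not a real algorithm from the project):

```python
from datetime import date

# Hypothetical versioned algorithm page, condensed into a dict
ALGORITHM_PAGE = {
    "name": "late_filing_check",
    "version": "1.0",
    "purpose": "flag taxpayers whose declaration was filed after the deadline",
    "inputs": ["due", "filed"],
    "outputs": ["is_late", "days_overdue"],
}

def late_filing_check(due: date, filed: date) -> dict:
    """The flowchart's step-by-step description, expressed as code."""
    days_overdue = max(0, (filed - due).days)
    return {"is_late": days_overdue > 0, "days_overdue": days_overdue}

result = late_filing_check(date(2024, 3, 25), date(2024, 4, 1))
```

Keeping the page's input/output lists next to the implementation makes it easy to check that a new algorithm version still honors the documented interface.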

To make sure everyone on the project speaks the same language, in addition to a glossary we use a logical-level data model, because at the design stage analysts do not yet know exactly which database (relational or not, and where) the data for an algorithm will come from. That will be designed by an architect, for example together with the Hadoop team.

If we need to grant certain rights or, conversely, restrict access, we have a role model; it too is described separately, with versioning and a description of each role.

There is a directory of all alerts and errors: a table with the text of each alert or error and a reference to the US it belongs to.

We also have regulatory and reference information, where each reference book is on a separate page and is also versioned.

If the list of non-functional requirements for the GUI needs to be extended – for example, font type and size, colors, and so on – a separate page is used for this, on which each GUI requirement is maintained atomically. For other subtypes of non-functional requirements there is a separate page with sections per subtype and descriptions of atomic requirements (for example, browser version, screen resolution, and so on).

If a reporting or printed form needs to be described, it should be published on a separate page with a brief description: what kind of form it is, and the algorithms and rules for filling in each of its cells, from font and fill requirements through to the rules for populating the fields. If the form is dynamic (the number of rows or columns can change), the calculation algorithm must also be described. An attached example of the form is an integral artifact.

Another important artifact is the description of integration contracts. Analysts describe the structure of the files (requests and responses) in tabular form and attach diagrams and examples.
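The tabular description of an integration contract can double as machine-checkable metadata. A sketch with invented field names (`taxpayer_id`, `period`, `violations` are placeholders, not the project's real contract):

```python
# Hypothetical contract table: field, type, required, description
CONTRACT = {
    "request": [
        {"field": "taxpayer_id", "type": "string", "required": True,
         "description": "taxpayer identifier"},
        {"field": "period", "type": "string", "required": True,
         "description": "reporting period, YYYY-MM"},
        {"field": "region", "type": "string", "required": False,
         "description": "optional region filter"},
    ],
    "response": [
        {"field": "violations", "type": "array", "required": True,
         "description": "detected violations for the period"},
    ],
}

def missing_fields(message: dict, side: str) -> list:
    """Names of required contract fields absent from the message."""
    return [row["field"] for row in CONTRACT[side]
            if row["required"] and row["field"] not in message]
```

The same rows that render as the analyst's table can then validate example messages attached to the page.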

Now I would like to briefly talk about our plans.

First, we currently have a small problem with documentation templates for system tasks. What is this about? To produce a calculation or a data sample, it is often necessary to set up an ETL process. It would be wrong to cram all of this into one user story: it would take a long time to develop and would entail large overheads (from tracking implementation to lengthy coordination with the customer). For now, such work is launched as a dev task, which is not ideal. We want to create artifact templates for these types of tasks and work from them.

Second, we want to set up a macro that will let us assemble GOST documentation from a space in the collaboration system, requiring minimal further rework from the analyst and technical writer.

Third, we are looking at another macro. It will let us clearly define what a requirement is on our project and assign it an identifier, a set of attributes, and, most importantly, links. This will give us requirements traceability, improving the quality of the requirements when they change in the future.
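A requirement with an identifier, attributes, and links is essentially a node in a small graph, and tracing means following the links. A minimal sketch (the REQ-… mask and the sample requirements are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                                  # e.g. "REQ-001" (hypothetical mask)
    text: str
    attributes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)    # ids of US/UC/other requirements

def impacted_by(req_id: str, requirements: list) -> list:
    """Requirements linking to req_id - candidates for review when it changes."""
    return [r.req_id for r in requirements if req_id in r.links]

reqs = [
    Requirement("REQ-001", "Store declarations for 5 years"),
    Requirement("REQ-002", "Archive view shows past periods", links=["REQ-001"]),
    Requirement("REQ-003", "Export archive to XLSX", links=["REQ-002"]),
]
```

Even this flat model answers the key traceability question: when REQ-001 changes, which downstream requirements need another look?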
