Impact of non-functional requirements on software development

All requirements for a software product can be divided into two groups: functional and non-functional requirements (NFRs). The former describe *what* needs to be done; the latter describe *how* the system should work: the conditions under which the product must operate and the qualities it must have (e.g., performance, reliability, scalability). Although NFRs do not directly describe the main functions of the system, they matter a great deal, because they affect the user experience. Today we will talk about three interesting tasks from our practice in which NFRs played a decisive role. We will be happy to discuss your tasks in the comments. See you under the cut!

What are non-functional requirements

Non-functional requirements are constraints on a system: they define its quality attributes and help ensure that the system meets user needs.

Non-functional requirements can be divided into two categories:

  • Quality Attributes: These are the characteristics of a system that determine its quality. Examples of quality attributes include security, performance, and usability.

  • Constraints: These are limitations imposed on how the system is built or operated. Examples of constraints include time, resources, and environment.

Fast Content Delivery Challenge

The NFR in this task is to display content on the page within a short, user-friendly time. To achieve this, the page is assembled from pieces: one part of the content is rendered immediately, and the rest is loaded later.

Such a task usually arises wherever there is user-generated content: social networks, blogs, news feeds. The page has a main data set, but it also carries data that is not needed in the initial output. For example, when a user comes to view a product, they want to see the product itself first. Then, for promotion and marketing purposes, the site starts "topping up" the page with data: the "recommended" and "recently viewed" sections, information the user did not directly request. The same thing happens in social networks and news feeds: the user requested one piece of content, and along with it they are offered ways to navigate to other content.

The longer a user browses a site or store, the more they see (ads, for example) and the higher the chance that they will buy something.

The bottom line is that the information collected for the user may be stored in different formats and databases. Some queries can be very heavy; others take time to compute. Because of this, if you return all the data to the user in one go, the server response time grows, and with it the time until the user sees the information and can start using it. Roughly speaking, if a user opens a page on a social network or in a store and it takes five seconds to load, they will most likely close it after two seconds with the thought: "It probably doesn't work." The page needs to load in two seconds at most. To achieve this, the content is split up.

It looks like this: in the case of a store, the front end must be ready to display a page filled only with information about the product. There will be a header, a footer, and everything else that appears on every page, but only the basic product information will be shown, and it will arrive very quickly, since you just need to fetch the product by ID and render it. Such a page renders in half a second, and the user can already work with it. Along with the initial render, requests for additional information are sent, either for everything at once or in parts. One request loads the avatar, name, and preferences. Another analyzes the user's activity and discounts and returns the price. A third returns featured products, and so on.

Here the back end and front end work together. The back end can serve each request independently; the front end knows that when it receives the page, it must request certain additional information and then assemble everything together.

The most interesting task on the front end is then to make everything look nice, and to fail gracefully if any of the requests does not arrive: the whole page should not die; instead, a placeholder is drawn or the section simply disappears.
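This pattern can be sketched with `Promise.allSettled`, which fires all secondary requests in parallel and lets each section succeed or fail independently. The section names and loaders below are hypothetical, for illustration only:

```typescript
// Each secondary section has its own loader that resolves to an HTML
// fragment. A failed loader yields a placeholder ("stub") instead of
// breaking the whole page.
type SectionLoader = () => Promise<string>;

export async function assemblePage(
  loaders: Record<string, SectionLoader>,
  placeholder = '<div class="stub">Section unavailable</div>'
): Promise<Record<string, string>> {
  const names = Object.keys(loaders);
  // Promise.allSettled never rejects: each entry is either
  // { status: "fulfilled", value } or { status: "rejected", reason }.
  const settled = await Promise.allSettled(names.map((n) => loaders[n]()));
  const page: Record<string, string> = {};
  settled.forEach((result, i) => {
    page[names[i]] =
      result.status === "fulfilled" ? result.value : placeholder;
  });
  return page;
}
```

In a real application the fulfilled fragments would be injected into their slots in the DOM, while the rejected ones keep the placeholder or collapse the section entirely.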

Quick access to the main content can also be achieved by choosing the right front-end architecture. In particular, static page generation is an approach in which pages are created not at request time but at publication time or on a schedule. This lets you serve them to the user instantly, and the dynamically loaded parts can be embedded inside.

Here are examples of such generators: Next.js, Gatsby, Hugo, Jekyll, and Astro.
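The core idea of static generation can be shown with a toy build script. This is a minimal sketch, not a real generator: the product list and output layout are assumptions, and the empty `div`s are the slots that the client-side requests described above would fill in later:

```typescript
import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

// Hypothetical product data; a real generator would pull this from a
// CMS or database at build time, not at request time.
const products = [
  { id: "1", name: "Laptop" },
  { id: "2", name: "Phone" },
];

function renderProductPage(p: { id: string; name: string }): string {
  // The page ships only the essential product info; the placeholder
  // divs are filled later by client-side requests.
  return `<!doctype html>
<html>
  <body>
    <h1>${p.name}</h1>
    <div id="recommended"></div>
    <div id="recently-viewed"></div>
  </body>
</html>`;
}

// Writes one HTML file per product and returns the file paths,
// so the pages can be served as static assets.
export function buildSite(outDir: string): string[] {
  mkdirSync(outDir, { recursive: true });
  return products.map((p) => {
    const file = join(outDir, `product-${p.id}.html`);
    writeFileSync(file, renderProductPage(p));
    return file;
  });
}
```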

Accessibility Challenge

The NFR in this challenge is to make the web application accessible to people with disabilities. The task is difficult because it imposes many restrictions on the front end.

What if a person has problems with color perception? Then everything on the page should be highly contrasting. There are standard schemes (for example, the WCAG contrast-ratio requirements) that define which color combinations provide sufficient contrast, and the design may have to be changed to meet them.
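As a concrete example, here is a sketch of the WCAG 2.x contrast-ratio calculation: sRGB channels are linearized, a relative luminance is computed for each color, and the ratio of the lighter to the darker luminance is returned. WCAG AA requires at least 4.5:1 for normal text.

```typescript
// Linearize one sRGB channel (0..255) per the WCAG definition.
function channelToLinear(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] color.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// Contrast ratio between two colors, always >= 1.
export function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [lighter, darker] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (lighter + 0.05) / (darker + 0.05);
}

// Black on white gives the maximum ratio:
// contrastRatio([0, 0, 0], [255, 255, 255]) === 21
```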

For visually impaired people, large text is required. This does not mean that everything needs to be set in seventy-point type; the site should simply scale properly. If users want to zoom in, they will scroll the mouse wheel while holding Ctrl, and the site should zoom correctly. After all, there is no need to show a version for the visually impaired to everyone, since most users see normally; visually impaired users just need to be able to use the site. This imposes its own restrictions on the layout and styles.

Blind people use screen readers to navigate the Internet. Screen readers read aloud the texts written on the site. When developing a site for blind users, you additionally need to configure what should and should not be read. One common development problem: the text that logically should be read first may appear later in the markup than other text that should be read after it. On such sites, keyboard navigation with Tab and the arrow keys must work well.

Useful free tools to identify and fix accessibility issues: Lighthouse, axe DevTools, and WAVE.

Data visualization

The NFR in this task is to display data in a way that is useful to the user, within a time acceptable to them. The difficulty is that there is a lot of data, which makes it hard to collect, read, store, transfer, and render.

Design problems arise, for example: there are several billion records, so how do you display them? If you show each record separately, will the user be able to draw any conclusion? You need to understand how the data will be used; you need analysis of what you want to get from this data and how it can be useful.

After the analysis, a design is made for how it will look. This design is then used on two sides, the web and the back end, and ideally without running into any performance limits.

You will have to maneuver iteratively between problems. On the one hand, there are requirements from design and analytics: which data must be visible for it to be of value. On the other hand, there are real limitations in the system. Perhaps, for analytics purposes, three million points need to be displayed, but drawing them takes a long time and the browser may "groan". Three million points is hundreds of megabytes of data sent from the back end to the front end, and it is not a given that the user is ready to wait a minute or two for it to load.

To solve this problem, data aggregation is needed. First, figure out how to aggregate the data so that it does not lose its meaning. Then build aggregated charts, draw aggregates on maps, or display samples if it is a table. In general, do not try to display all the values; show, for example, the most prominent representatives of a category or the average value.
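One simple form of such aggregation is downsampling a series by bucket averaging, sketched below under the assumption that the raw series is a plain array of numbers. Millions of raw points can be reduced to a few thousand before being sent to the chart:

```typescript
// Reduce a series to at most targetPoints values by splitting it into
// equal-sized buckets and averaging each bucket.
export function downsample(values: number[], targetPoints: number): number[] {
  if (values.length <= targetPoints) return values.slice();
  const bucketSize = Math.ceil(values.length / targetPoints);
  const result: number[] = [];
  for (let i = 0; i < values.length; i += bucketSize) {
    const bucket = values.slice(i, i + bucketSize);
    const avg = bucket.reduce((sum, v) => sum + v, 0) / bucket.length;
    result.push(avg);
  }
  return result;
}

// downsample([1, 2, 3, 4], 2) → [1.5, 3.5]
```

Averaging is only one option: depending on what the analysis needs, each bucket could instead keep its minimum and maximum (to preserve spikes) or its most prominent representative.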

Another way to solve the visualization problem: if you need to show a lot of data, you can break it into chunks and show it piece by piece.

Useful libraries for chart visualization: D3.js, Chart.js, and Apache ECharts.
