How We Made a Low-Code Constructor for Back Office. Part 2 (Back-End and Database)

Hello, this is the second article in a series about our path to creating a Low-Code platform-constructor for developing complex Back Office systems. In the previous article, I defined what “complex systems” are and the task that needs to be solved, and also shared a set of “observations” about the principles of building IT production on top of a Low-Code tool. This time, I will describe the approach we chose for building the Back-End and working with the database. The following articles will cover the principles of organizing a thin client.

At university, like everyone in IT, we had a course on databases. I can honestly admit that I hated it. It was not only boring, but full of dull, empty examples about clients and orders, obvious relationships, and data-entry forms. In my circle at the time, databases were always secondary, while algorithms and programming languages were considered interesting and complex and, as a result, cool. Even programming olympiads are, by and large, pure algorithms. Having spent 15 years in fintech, I can say with confidence that database courses start from the wrong end and generally teach the wrong things. Lecturers diligently hide from you the true architectural thrill of design and instead drag you through the unsightly internals and all sorts of specifics. Afterwards it seems that “tables” are easy to make, and everything else is boring reports, nasty SQL query tuning, and assorted administration problems – a cross for admins, who have only themselves to blame for choosing their fate. Yet the Zen of design is right there, once you realize that it is the database that defines the product's main business objects and, through the relationships between them, its main functionality; it also sets the direction of the UI by user roles – who sees what and how they search. Here you need abstract thinking, the ability to generalize and extract what matters, to predict where the product will evolve, and to perform an extrasensory analysis of the requirements, since they are often “not very good”. And all the while you do not write a single line of code – you simply sketch the product's framework in broad strokes and think through the functionality. Isn't that great!

Databases do not change as often as JavaScript engines, but still, the visceral fear of getting locked into one vendor keeps many people from sleeping peacefully. When I participated in developing the core of a large fintech system on Oracle, we had an unspoken rule not only to avoid various hints, but also not to use multi-table queries when describing business logic in PL/SQL. Instead of joins there were nested cursors (to be certain of how a query would execute), and everyone was fighting for the ability to automatically convert Oracle packages into something else. Now, given the rise of non-relational databases (surely they will someday ship with a full-fledged transaction mode), I definitely want to keep a similar approach to interacting with the database and stay as flexible as possible in the choice of storage system. It is convenient to move all internal business logic (which could otherwise be written in PL/pgSQL, PL/SQL, and the like) out of the database, so that the application server can later be scaled linearly, leaving the database only data storage and retrieval.

Database structure

Updating a large database to a new version is always a sore spot. Firstly, everything stops and waits. Secondly, it does not always go smoothly. For fintech this is critical, so we decided to organize data storage in a “packed” form – an almost Schema-less structure, but on top of an honest PostgreSQL database, which was one of the first we supported. Additional tables are created for indexed fields, but the main tables essentially consist of a single packed field. This makes changing the structure and upgrading an order of magnitude easier, and it is also a step closer to NoSQL, which matters too.

The structure of our database is described in json files, one file per table. Tables are divided into:

  • “regular” – created and upgraded automatically in our database

  • “abstract” – the table structure is described just like for a “regular” table, but custom procedures are written in Java/Kotlin to fill it with data from any external (or internal) sources. You can also define custom procedures for creating, updating, and deleting records, after which an abstract table no longer differs from a regular one. Any views are also “abstract” tables, for which Java code is written

It is important that, when describing interaction logic, it makes no difference to the developer which type of table he works with. Moreover, it makes no difference to the web screen designer which type of table screens are built from. This approach lets you construct almost any dashboard on top of any data sources and use the Low-Code platform as a tool for creating “single entry points” or complex workstations.

Let's look at an example with two tables, Client and Loan (loans issued to a client).

[Image: Loans issued to the client]

In this example we will make the Client table abstract and feed it data from, say, a text file. The Loan table will be regular and, accordingly, will be created in the database automatically.

[Image: Description of the Client (client.json) and Loan (loan.json) tables]
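Roughly, such descriptions might look like this (a minimal sketch: apart from “ref” and “refDelete”, which are explained below, the key names and types are my illustrative assumptions rather than the platform's exact format; the technical “@” elements are omitted because they are added automatically):

```json
{
  "table": "client",
  "nameField": "name",
  "fields": [
    { "name": "name",   "type": "string" },
    { "name": "parent", "type": "ref", "ref": "client", "refDelete": "restrict" }
  ]
}
```

```json
{
  "table": "loan",
  "fields": [
    { "name": "client",   "type": "ref", "ref": "client", "refDelete": "cascade" },
    { "name": "amount",   "type": "decimal" },
    { "name": "due_date", "type": "date" },
    { "name": "status",   "type": "shortRef", "values": ["ACTIVE", "CLOSED"] }
  ]
}
```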

Note that the client.parent field references the same table to organize a hierarchy. The loan.status field is a “short reference” that is used to generate constants in Java/Kotlin for conveniently describing behavior logic. The “@” elements are technical: they are created automatically and are used for automatic upgrades of the database structure, so there is no need to write alter scripts by hand, and adding, deleting, and modifying elements stays easy. Besides field types, this structure describes field domains: for example, what exactly a field references (“ref” to which table), what to do with the current row when the record it references is deleted (the “refDelete” element), which field is “responsible” for the display name of the current row in a reference book built on this table, and so on. All of this is also used when generating the Java objects needed for the database API.

Indexes are described in the same json file as the table.

[Image: Index in the Loan table]
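As a sketch (the attribute names are again my assumptions), an index section inside loan.json might look like this:

```json
{
  "indexes": [
    { "name": "loan_client_idx", "unique": false, "fields": ["client", "status"] }
  ]
}
```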

You can rename fields and add or delete fields, indexes, and tables. All editing happens only in the json files, and when a new version is installed, all changes are applied automatically.

For every table description in json, classes are automatically generated whose attributes correspond to the table's fields. These are, in effect, “virtual fields”, because in the database the row is stored in packed form, as an object. All work with the database goes through objects of the generated classes.
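For illustration, a generated class for Client might look roughly like this (TableRow and the exact shape of the generator's output are assumptions; only the field-name constants, such as Client.fParent used later, come from the text):

```kotlin
open class TableRow   // stub standing in for the platform base class (an assumption)

class Client : TableRow() {
    var name: String? = null     // “virtual field”: stored inside the packed DB object
    var parent: Long? = null     // reference to another Client row – the hierarchy

    companion object {
        // Generated field-name constants for building filters, e.g. Client.fParent
        const val fName = "name"
        const val fParent = "parent"
    }
}
```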

Procedures and description of business logic

There is no direct access from code to the database: all select/update/delete operations go through a special API. Firstly, this isolates business logic from the storage mechanism as much as possible; secondly, it allows regular and abstract tables to be handled uniformly; and thirdly, it greatly improves the readability of the code.

As an example, let's write a function that closes all loans of all clients referencing the current client, and so on down the client “tree”.

[Image: closeAllClientLoans function]
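A sketch of what this function might look like (the @ActionName annotation and the filter syntax come from the text below; the exact signatures of iterate, update, the generated classes, and the LoanStatus constants are my assumptions):

```kotlin
@ActionName("closeAllClientLoans")
fun closeAllClientLoans(clientID: Long) {
    // Walk every client whose parent is the current one…
    Client.iterate("parent = ${clientID}") { child ->
        // …close all of that client's loans…
        Loan.iterate("client = ${child.id}") { loan ->
            loan.status = LoanStatus.CLOSED   // constant generated from the “short reference”
            loan.update()                     // assumed persistence call
        }
        // …and recurse further down the tree
        closeAllClientLoans(child.id)
    }
}
```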

The iterate method creates a cursor and iterates over all rows of the specified table, applying the filter parent = ${clientID}, where the value of the function's input variable is substituted for ${clientID}. Since the Client table is abstract, the corresponding class describing how to fill it with data is invoked under the hood. For a stricter way to write the filter parent = ${clientID} that avoids typing field names inside a string, you can replace it with QueryHelper.c().and(Client.fParent, clientID).toString()

Because the function is annotated with @ActionName("closeAllClientLoans"), a REST API is generated for it automatically, and the function can be linked to any button on a screen. You can also call all the standard functions yourself via REST (including all select/update/delete) and build alternative thin clients, or screens of the “first” type (from my previous article) for “people”. Of course, user-rights checks and various audits are applied on top where necessary.
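Calling such an action from outside might then look like this (a sketch using the standard JDK HTTP client; the URL scheme and payload format are my assumptions, only the action name is taken from the annotation):

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Hypothetical endpoint: POST the action's input variables as json
    val request = HttpRequest.newBuilder(
            URI.create("https://backoffice.example.com/api/action/closeAllClientLoans"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString("""{"clientID": 42}"""))
        .build()
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println("${response.statusCode()} ${response.body()}")
}
```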

New objects (rows in the database) are created in a similar way; for example, the createClientLoan function creates a new loan for a client.

[Image: Creating a loan from a client]
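Again as a sketch (the field names and the insert call are assumptions):

```kotlin
import java.math.BigDecimal

@ActionName("createClientLoan")
fun createClientLoan(clientID: Long, amount: BigDecimal) {
    val loan = Loan()                 // object of the generated class
    loan.client = clientID
    loan.amount = amount
    loan.status = LoanStatus.ACTIVE   // constant generated from the “short reference”
    loan.insert()                     // assumed call that creates the packed row
}
```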

It is important to note that it was precisely with this database API and its “sameness” across external sources (abstract tables) that we fought for the “souls” of so-called business developers: simplicity and transparency in describing application logic, separation of that logic from the technical layer of interaction with the database and thin client, and no more headaches with builds and data-structure updates. Moreover, if a function called from outside (via the API) throws an exception, rollback is invoked automatically (when needed) so that important fintech data stays consistent. On successful completion, commit also happens automatically. Of course, commit/rollback can additionally be managed inside the code, but the goal here is to clean the code up as much as possible and leave only the important business logic describing the product's behavior.

It often happens that before updating a record in a table you always need to fill in a few additional fields (an analogue of triggers), for example, for denormalization or simply according to the product's logic. For this we have a number of methods, such as beforeUpdate, which can be overridden to add extra rules inside.

[Image: Filling data before updating the database]
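A sketch of such an override (assuming beforeUpdate is an open method of the generated Loan class; the one-month default and the issueDate field are purely illustrative):

```kotlin
// Inside the generated Loan class
override fun beforeUpdate() {
    if (dueDate == null) {
        // Illustrative rule: default the due date to one month after issue
        dueDate = issueDate?.plusMonths(1)
    }
    super.beforeUpdate()
}
```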

In this case, we automatically fill in the loan's due_date if this field was not set for some reason when updating the record in the database.

Modules and components

A module is the minimal delivery unit; it is versioned independently and (optionally) consists of:

  • Table descriptions (in json)

  • Business logic descriptions (java/kotlin)

  • A set of screens (in json) – more on screens later

  • And a bit of other text-based content, like localization (text), documentation (markdown), and so on.

A module can be installed independently on a deployed platform; it is a single zip file created automatically at the project build stage in the IDE. A module is an independent piece of functionality with its own tables, logic, and screens, but it can just as well contain only screens or only tables, depending on your approach to licensing, delivery, and the architecture of the product itself. Screens can be built on tables from different modules, and business logic can also access tables of other modules (unless this is explicitly prohibited). The Low-Code platform itself is also a set of system modules, some included in the distribution by default and some delivered as needed.
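The internal layout of a module zip might look roughly like this (directory names are illustrative, not the platform's actual convention):

```
crm-module.zip
├── tables/     table descriptions in json (client.json, loan.json, …)
├── screens/    screen descriptions in json
├── classes/    compiled business logic (java/kotlin)
├── i18n/       localization texts
└── docs/       documentation in markdown
```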

We built the application server on Apache Tomcat; it is Stateless, stores no data, and can be scaled linearly if necessary. This is where the Low-Code platform core runs: modules are deployed there, and the core itself handles deploying the schema in the database, exposing the API, serving statics for the thin client (screens are built dynamically from their json descriptions), checking rights, and running various diagnostics.

The core includes the following basic system modules: authentication, authorization, audit, user management, the screen designer, basic notifications, and database diagnostics. When needed, we also plug in a module for working with documents (components for OnlyOffice and P7-Office), Telegram notifications (with feedback), full-text search, payments, and more. The number of modules keeps growing, and various custom ones appear (for example, integrations with SAP, Teamcenter, and others). In general, it is important to keep connecting new modules, the API, installation, and delivery as simple as possible, so you don't go crazy later from a tangle of dependencies.

Rights

For distributing rights to users, it is convenient to operate on whole screens at once – to grant rights to a screen – while also being able to fine-tune what exactly can and cannot be done on a specific screen. Since a screen consists of server actions (selecting and updating tables, calling additional actions), the rights must naturally be checked on the server.

To control access to data inside a table, we divide its rows by a special attribute, “division”, whose reference book is created globally and makes it possible to “color” data throughout the whole system. With this approach, some tables can be left as global reference books, uniform for all users, while others are colored so that their data never appears in “foreign” queries at all.
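In a table description this might appear as just one more field (a sketch; presumably the platform can then filter every query against such a table by the user's allowed divisions):

```json
{ "name": "division", "type": "ref", "ref": "division" }
```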

Diagnostics

We faced the need to support more sophisticated diagnostics. You have to be able to quickly determine the state of server processes and of the server's connections to the database, see the statistics of database queries, evaluate object sizes, and understand the execution statistics of actions. Moreover, you need to be able to tell immediately, from a query, where in the “code” it was called from. This is interesting, because standard database statistics, of course, do not contain that information. We learned to “embed” a marker into each query and to show conveniently where and what is happening. Above all, this is needed to solve a client's problem quickly and understand at once what is going on.
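As an illustration (the article does not reveal the exact mechanism, so this is a guess at one common technique), the marker can be a SQL comment prepended to the query:

```kotlin
// Speculative sketch: tag each query with its call site so it can be
// recognized later in pg_stat_activity or the PostgreSQL query log
fun tagQuery(sql: String, module: String, action: String): String =
    "/* $module.$action */ $sql"

val tagged = tagQuery("SELECT * FROM loan WHERE status = 'CLOSED'", "crm", "closeAllClientLoans")
```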

Housekeeping

A very important module (and one that is usually forgotten) is Housekeeping, which automatically deletes obsolete data to reduce the database size and improve performance. Its scope usually includes:

  • Temporary or intermediate data

  • Old entries created for logging

  • Old data already uploaded to DWH (Data Warehouse) and other systems

You need to be able to define data selection rules. In our case, they are defined directly in the module by the developer who builds it. A rule can look like a row filter (on date or status fields) or like programmatic selection for complex cases. After the module is installed on prod, the system administrator can control (change) the rules: the conditions for applying a rule or the deletion conditions, for example, how the cut-off date before which data is deleted is calculated.
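Purely as a sketch (HousekeepingRule is a hypothetical class invented for illustration; the platform's actual rule API is not shown here), a row-filter rule might be declared like this:

```kotlin
import java.time.LocalDate

// Hypothetical declarative rule: which table, which rows, and how the
// cut-off date is calculated; an administrator can adjust all of this on prod
data class HousekeepingRule(
    val table: String,
    val filter: String,
    val deleteBefore: (LocalDate) -> LocalDate
)

// Delete closed loans older than the retention period (the period is illustrative)
val closedLoansRule = HousekeepingRule(
    table = "loan",
    filter = "status = CLOSED",
    deleteBefore = { today -> today.minusYears(7) }
)
```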

Working with outdated data is very important, and it is worth thinking about right away, while developing your modules, working out the rules by which records will have to be selected later. In general, I strongly recommend not forgetting about Housekeeping.

IDE for development

We used IntelliJ IDEA (Community Edition) as our main development IDE, writing our own plugins for it and adding several buttons to the toolbar.

The first button generates the corresponding access classes from the data model in json, so that you can immediately start developing business logic. Note that if there is no business logic yet (it happens), and you just need to make lots of screens for entering data, you don't have to program anything at all.

The second button assembles the module. In fact, it also regenerates everything first, and then packages the project into a module as a single zip file. The module is completely self-contained: it can be sent out to everyone at once, and even dropped straight into production – but that is for heroes.

The third button restarts the local Tomcat with the new module.

The project stores everything needed to assemble the module: descriptions of tables and screens in json, behavior logic, localization, and so on.

A brief summary

In line with the goals set in the previous article, I need a Low-Code tool for organizing product development that is simple and transparent, yet loses nothing in complex customizations. The entire structure of any module is “visible at a glance”: the database structure is in json, the screens are in json, and the business logic code is “cleaned” of technical routine, consisting mostly of simple functions, elementary language constructs, and a uniform database API. At the same time, no one stops you from writing whatever you want in Java/Kotlin. The system is modular: functionality can be split into modules in any combination and delivered however you like. The server is Stateless and scales linearly. There is no hard binding to a specific database: NoSQL databases can also be supported, and all existing code will keep working as before. Out of the box there is a well-thought-out rights system, various kinds of authentication, Housekeeping, and so on, giving developers the chance to focus on building the business solution and to “forget” about the technical layer. All updates happen automatically, through the UI.

In the next article I will describe how our thin client and screen editor work.

The previous article can be found here: How We Made a Low-Code Constructor for Back Office. Part 1
