Migrating the Directum RX electronic document management system to Linux and PostgreSQL

So, what's done is done: today it is impossible to deploy a Directum installation on MS SQL, since Microsoft stopped selling licenses in Russia. For many this came as a surprise, but not for us. Long before the sanctions restrictions were introduced, we had taken care of the transition to PostgreSQL and Linux. The reason was banal: we wanted to reduce licensing costs.

We also wanted to learn how to migrate Directum systems to PostgreSQL, assuming that the "elephant" would be in demand among our customers for the same financial reason. For us, abandoning Microsoft databases was a planned move. In addition, we migrated the underlying infrastructure, along with Directum, to Linux. Today we consider this cost-cutting "move" only partly successful.

I, Vitaly Volnyansky, Head of the Technology Solutions Practice (CTO) and Sales Director of EAE-Consult LLC, will tell you about the migration process and the problems we ran into, under the cut.

Background: a few more words about motives

As you have probably guessed, the main motive for moving to Postgres was not having to pay for MS SQL licenses. You can buy support for Postgres, but it is optional, and its price is not comparable to the cost of a Microsoft license.

We have been using Directum since 2017. At the time, the vendor's offerings could hardly be called optimal, and our internal infrastructure was incompatible with MS SQL 14. Moreover, the vendor strongly recommended installing the ARR server with the outdated NTLM authentication, which was used by the desktop client and did not work over HTTPS.

The only balancer that properly supported this solution was ARR, but its balancing mechanisms left much to be desired. When the main server went down, the balancer could not effectively redirect new requests to the backup one. This was one of the triggers that pushed us to improve the platform.

We realized that the current stack did not allow us to achieve high availability, while also burdening us with routine administration and debugging of Directum systems under load. With balancing in place (that is, with several redundant components), administration time grows: you have to monitor the health of the components and track where requests go.

In 2018, we decided to get rid of redundancy and installed a single application server and a single database. The system was optimized and evolved, but problems kept arising. The third version of Directum introduced a web client and, with it, a transition to the new OpenID authentication mechanisms, and we realized that ARR could be replaced with a more efficient balancer, namely Nginx.

A better algorithm and a working health check made our lives much easier. After replacing the balancer, we tried to deploy a second server, but not entirely successfully, so before the release of the fourth version of Directum we went back to ARR. Still, as early as version 3.4 we decided to migrate from MS SQL to Postgres as a way to cut license costs.
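For illustration, a passive health check in open-source Nginx can look roughly like this. This is a hedged sketch, not our production config: the upstream names, hosts, ports, and certificate paths are assumptions, and active health checks require Nginx Plus or a third-party module.

```nginx
# Hypothetical upstream for two Directum application servers.
# max_fails / fail_timeout implement a passive health check:
# after 3 failed requests, a server is taken out of rotation
# for 30 seconds and traffic goes to the remaining peer.
upstream directum_backend {
    server app1.example.local:80 max_fails=3 fail_timeout=30s;
    server app2.example.local:80 max_fails=3 fail_timeout=30s backup;
}

server {
    listen 443 ssl;
    server_name directum.example.local;

    # Placeholder certificate paths
    ssl_certificate     /etc/nginx/ssl/directum.crt;
    ssl_certificate_key /etc/nginx/ssl/directum.key;

    location / {
        proxy_pass http://directum_backend;
        proxy_set_header Host $host;
        # Retry the next upstream on errors and 5xx responses
        proxy_next_upstream error timeout http_502 http_503;
    }
}
```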


To carry out the migration, we defined the following sequence of steps:

  1. Explore the system.

  2. Prepare a new system in Yandex Cloud: deploy VMs and servers, install applications.

  3. Convert data from MS SQL to PostgreSQL.

  4. Migrate to the Yandex Cloud provided by EAE-Consult, using Hystax for the migration.

  5. Migrate the default landscape.

  6. Bring up the VMs.

  7. Install Directum.

  8. Set up a communication channel between sites.

  9. Perform document synchronization, database copying, and testing as part of the pre-production migration.

  10. Perform the production migration.

A detailed description of each step makes little sense, so I will focus on what were, in our view, the most difficult and problematic points.

System Survey

The natural preliminary stage was a survey of the system. Here we had to eliminate potential version compatibility problems in advance.

What we did:

● checked database volumes;

● analyzed the hardware requirements for migration, assessed compatibility and, having figured out which Linux distributions Directum would run on, chose Ubuntu 20.04;

● determined which database we could use, defining the requirements and parameters of the future infrastructure and the options for preparing the landscapes (in line with the vendor's recommendations).

With that, the preparatory stage was complete, and we could move on to the database migration tool.

Studying and adapting the MS SQL to Postgres migration tool

Converting the data from MS SQL to Postgres was not solved quickly; we were the first to carry out such a migration. At the time we were running Directum RX 3.8, for which the vendor had introduced a special tool for transferring data from one database to the other. Because of the custom extensions in our system and our use of the CRM module, there was a problem: the tool worked only with the "native" structure, default tables, and default internal interactions.

To make the tool fully functional, we had to spend a long and painful time configuring it and modifying SQL scripts. Part of the CRM module used the .NET Framework and had to be adapted to run correctly on .NET Core and Linux. The vendor provided support and finished the solution for us. It's hard to say now how many trial migrations we ran while debugging. A lot. In 2021 we completed the transition to Postgres and moved on to the next stage; from that moment on, we stopped paying for MS SQL licenses.

Today the vendor ships an improved tool that copies the original MS SQL database into an equivalent Postgres structure; all you need to do is prepare a matching set of tables. In the version we used, the options for quick migration were far from obvious.
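To give a feel for what "preparing a matching set of tables" involves, here is a minimal sketch (not the vendor's tool) that maps common MS SQL column types to PostgreSQL equivalents and emits a CREATE TABLE statement. The type list and the sample table are illustrative assumptions, not Directum's actual schema.

```python
# Rough MS SQL -> PostgreSQL type correspondence (illustrative subset).
TYPE_MAP = {
    "bit": "boolean",
    "tinyint": "smallint",
    "int": "integer",
    "bigint": "bigint",
    "datetime": "timestamp",
    "datetime2": "timestamp",
    "uniqueidentifier": "uuid",
    "nvarchar(max)": "text",
    "varbinary(max)": "bytea",
}

def translate_column(name: str, mssql_type: str) -> str:
    """Return a PostgreSQL column definition for an MS SQL column."""
    t = mssql_type.lower()
    # Preserve length specifiers, e.g. nvarchar(255) -> varchar(255)
    if t.startswith("nvarchar(") and t != "nvarchar(max)":
        return f"{name} varchar{t[len('nvarchar'):]}"
    pg_type = TYPE_MAP.get(t)
    if pg_type is None:
        raise ValueError(f"no mapping for MS SQL type: {mssql_type}")
    return f"{name} {pg_type}"

def translate_table(table: str, columns: list[tuple[str, str]]) -> str:
    """Build a PostgreSQL CREATE TABLE statement from MS SQL column specs."""
    cols = ",\n  ".join(translate_column(n, t) for n, t in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n);"

if __name__ == "__main__":
    # Hypothetical document table, purely for demonstration.
    print(translate_table("document", [
        ("id", "uniqueidentifier"),
        ("name", "nvarchar(255)"),
        ("body", "varbinary(max)"),
        ("created", "datetime2"),
    ]))
```

In practice a real conversion also has to carry over indexes, constraints, identity columns, and collations, which is exactly where most of the painful SQL script tweaking happened.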

Cloud selection criteria and migration results

Optimizing the system involved migrating from the vendor's (Directum's) cloud to Yandex Cloud. Financially, the goal was to replace capital expenditures with operating ones: using a service instead of buying hardware. We chose Yandex because our company is a Yandex partner, and we were fully satisfied with the solution they proposed.

Pre-production Testing

Testing was done manually, without autotests. The problem is that users interact with the system in fairly individual ways, and errors can show up in non-obvious cases that are hard to cover quickly with autotests. Notably, the vendor does not share its autotests with partners, perhaps as part of its technology protection policy. At the testing stage, the company began rolling out the web client, which complicated the task.

Until then, employees had mostly worked with the native desktop application, and the switch to the browser version was not easy for everyone. We began receiving complaints about the lack of convenient features and functionality from the old version. This stretched out the process, because users effectively had to relearn the system they were using.

Significant upgrades, storage, and high availability

After the conversion, we wanted to make the platform work in high availability mode. Starting with Directum 3.8, information appeared about support for Linux servers, which was encouraging. Later we were given a test release and saw that the system showed signs of a microservice architecture.

That is, the application is architecturally structured as a set of individual services. The architecture change brought inter-service interaction and a queuing mechanism, hinting at the possibility of a fault-tolerant solution.

Prior to version 3.8, all data was written directly to the database by default. This caused databases to grow to indecent sizes and created problems with backing up and restoring the system. In addition, the database required a lot of RAM to run.

Right around the time we converted to Postgres, the vendor optimized data storage by moving content to file storage, which greatly simplified backups. Although our database had not yet grown to a monstrous size, its growth rate did not bode well. The updated content storage significantly simplified the task and cut RAM requirements almost fourfold, which in turn reduced the cost of the virtual machine running the DBMS. Where a database backup used to take 3 hours, it now takes no more than 20 minutes, and system recovery is correspondingly faster.

That said, it would be an exaggeration to claim that these updates and the hypothetical ability to build a fault-tolerant system fully delivered high availability. We are still working in that direction.

Implementation problems

After deciding on Yandex Cloud, we opted for Managed PostgreSQL (to offload part of the routine administration responsibility), and also planned to automate deployment and run the system in Kubernetes containers. After completing the installation, we discovered a problem: the installer required superuser privileges at the sysadmin level. Full rights to manage the database were not enough for it; it needed full access to the Postgres server!
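On managed PostgreSQL offerings the role handed out to the application is normally not a superuser, which is easy to verify with a generic catalog query (not specific to the Directum installer):

```sql
-- Check whether the current role has superuser rights;
-- on a managed PostgreSQL service this typically returns false.
SELECT current_user, rolsuper
FROM pg_roles
WHERE rolname = current_user;
```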

It is likely that whoever wrote the installer had such access and built the solution from that template. The decision is controversial and uncommon. Usually in large companies with dedicated database administration teams, a cluster of servers is maintained, and databases are requested there as needed. No such option was provided here, so we decided not to "dig into" the installer and declined to use it.

The thing is, updates also went through the installer, so support would have become a recurring problem, comparable in complexity to maintaining and administering the database ourselves. The installer has changed in the new version, and according to the vendor the problem is solved. We also already have a prototype installation and deployment script for Kubernetes.

The bottom line

Since high availability could not be implemented in the full sense, and we ran into problems with Managed PostgreSQL, automated installation, and containers, it is hard to call this case an unqualified success. Still, we solved the main tasks:

● saved on Microsoft licenses while they were available in Russia;

● reduced expenses by converting capital costs into operating ones through the use of Yandex Cloud and smaller virtual machines;

● safeguarded the system by limiting our dependence on sanctioned vendors;

● learned how to migrate to Linux and Postgres and are ready to offer this to customers as a service, which will apparently be quite relevant for those using Directum RX.

It's hard to cover everything in one longread. You may have questions about our experience or the specific issues we encountered. We welcome your comments and will try to answer them as accurately as we can.
