Warm lamp safety

Recently a very progressive young man called me a retrograde, because the way of storing data I suggested was, in his opinion, far too old-fashioned.

So I expect that all very progressive young people will be upset by this article – after all, there are no links to GitHub, no clouds, not even a line of Python. Just a description of the approach that has kept me from losing any of my work or data for 15 years, despite at least 10 occasions when a hard disk holding that data either failed or was lost.

All fans of LEDs are better off reading no further.

Many years ago – back when the Internet could only be reached over a modem – I, like many other people, got acquainted with the PC. Inevitably, files began to accumulate: projects, documentation, music, entertainment – the usual. Since I work both at home and at the office, the problem of a uniform environment arose very quickly: I wanted to be able to sit down in a chair anywhere, listen to the same music, watch the same films, read the same books and do the same work regardless of the place.

After a while a way to do it crystallized. I set up several folders: work (projects), hard (documentation), distr (software distributions) and relax (entertainment).

These folders differ both in the importance of the data they contain and in how often they are updated: work + hard may change several times a day, while the distributions and entertainment sometimes see no changes for a month. Today work + hard together weigh less than 256 GB.

I have always had at least three workplaces – the office, home, and the workshop. 256 GB is very modest by today's standards, so the working data is stored on the computer at every workplace.

Together with the master SSD, which is always in my bag, that makes 4 copies in total. The working algorithm is simple: when I arrive at any workplace, I plug the USB cable with the master drive into the computer and start a directory synchronization in Total Commander. Work ALWAYS happens on the data on the master disk, so it is the master that always holds the most recent version of the file archive. The direction of synchronization is therefore ALWAYS the same: all changes flow from the master disk to the next backup disk – the local disk of that workstation. This keeps the process from eating up time and attention: about 5 mouse clicks and a couple of minutes.
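For those who prefer a script to mouse clicks, the whole one-way mirroring fits in a few dozen lines. Here is a minimal sketch in Go – the paths and the mirror helper are purely illustrative, in real life I do this with a few clicks in Total Commander:

```go
// mirror.go – a minimal one-way mirror: everything that is new or changed on the
// master drive is copied to the backup copy, never the other way around.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

// mirror walks the master tree and refreshes the backup tree from it.
func mirror(master, backup string) error {
	return filepath.Walk(master, func(src string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(master, src)
		if err != nil {
			return err
		}
		dst := filepath.Join(backup, rel)
		if info.IsDir() {
			return os.MkdirAll(dst, info.Mode().Perm())
		}
		// skip files whose size and modification time already match the master
		if st, err := os.Stat(dst); err == nil &&
			st.Size() == info.Size() && !info.ModTime().After(st.ModTime()) {
			return nil
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		if _, err := io.Copy(out, in); err != nil {
			out.Close()
			return err
		}
		if err := out.Close(); err != nil {
			return err
		}
		// keep the master's timestamp so the next run can skip this file again
		return os.Chtimes(dst, info.ModTime(), info.ModTime())
	})
}

func main() {
	// hypothetical paths: E: is the master SSD, D:\archive is the local workstation copy
	if err := mirror(`E:\work`, `D:\archive\work`); err != nil {
		log.Fatal(err)
	}
}
```

Note that the sketch never deletes anything on the backup side and never copies from backup to master – the whole point of the scheme is that changes flow in one direction only.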

After this synchronization I have two identical copies of the file archive, which is very convenient: everyone makes mistakes, and from time to time I, for example, delete working files that I still need. They can be restored instantly from the second copy. At the end of the day the master disk is synchronized to the local disk of that workplace once more. This is how work goes at every workplace.

Thus, if for any reason the master disk is lost, all the changes made anywhere are still available – I just have to get to the last workplace I visited. And at home or at the office the working archive will be no more than a day or two behind the latest version.

The much more voluminous distributions and entertainment are handled with the same algorithm. The only differences are that, since this data is less valuable, a regular 2.5" laptop hard drive can be used as the master disk, and there are already enough backup points as it is. In principle, you do not have to carry this master disk around with you at all, and only take it on trips or when the next synchronization of that file archive is due.

An important issue is the security of the file archives. It can be solved with any on-the-fly disk encryption utility; I am very fond of TrueCrypt, for example. TrueCrypt is absolutely transparent to the user and very fast: once the volume is mounted you will not feel any difference between working with it and without it, and in your absence the data is simply not accessible.

Using this storage algorithm I have, over 15 years, survived the loss of about 10 hard drives – both master and ordinary backup drives – without losing anything 😊. Unlike people who start running around and panicking in such situations, I calmly threw the disk into the trash, put a new one into the machine and poured the required file archive onto it.

I work under Windows; lovers of Unix systems can hook the synchronization into startup (for example, with rsync) and make the process completely automatic.

Now, about the special cases. This storage scheme turned out to be vulnerable to one problem: if the master disk is damaged in such a way that the CONTENT of files on it starts getting corrupted, we risk multiplying the broken files across the backups. Once (it was a long time ago, when the working master disk was still a regular laptop drive) I ran into exactly that. I managed not to lose anything that time either, although it cost more effort than a usual synchronization.

What saved me is that all work always happens on the master disk. When a regular hard drive starts to die, defects appear all over it, and since a lot of files from the working archive are in constant use, I quickly stumbled upon broken files. Total Commander has different directory comparison modes – with and without comparing the contents of files. In the normal daily syncs I do not enable content comparison; experience shows it is not necessary. But once it became clear that the master disk was corrupting data, the file archive had to be synchronized by content and the diffs reviewed.
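In script form such a content check is nothing more than hashing every file on both sides and reporting mismatches. A rough sketch, again with illustrative paths and a hypothetical helper – Total Commander's compare-by-content mode did this job for me:

```go
// compare.go – compare two copies of the archive by CONTENT, not by date and size,
// to catch files whose bytes were silently corrupted by a dying disk.
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

// hashFile returns the SHA-256 digest of a file's contents.
func hashFile(path string) ([]byte, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return nil, err
	}
	return h.Sum(nil), nil
}

// compareByContent prints every file that differs between the two trees.
func compareByContent(master, backup string) error {
	return filepath.Walk(master, func(src string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		rel, err := filepath.Rel(master, src)
		if err != nil {
			return err
		}
		hm, err := hashFile(src)
		if err != nil {
			return err
		}
		hb, err := hashFile(filepath.Join(backup, rel))
		if err != nil {
			fmt.Println("missing in backup:", rel)
			return nil
		}
		if !bytes.Equal(hm, hb) {
			fmt.Println("content differs:", rel)
		}
		return nil
	})
}

func main() {
	// hypothetical paths: the master SSD versus the local workstation copy
	if err := compareByContent(`E:\work`, `D:\archive\work`); err != nil {
		log.Fatal(err)
	}
}
```

Any file reported as differing then has to be checked by hand to decide which side holds the healthy copy before synchronizing further.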

It turned out that only a few files were damaged, and all of them were intact in the mirror copies.

Another disk went to the trash heap, and the whole file archive was saved without loss at the cost of about 2 hours of work. In 15 years this situation has arisen exactly once, and the SSD that was then chosen as the master disk for the working files has been running flawlessly for 4 years already.

I wish everyone who read to the end warm lamp safety!
