Operating system optimization

Have you ever had to boot an operating system from a mechanical hard drive at the beginning of the 2nd decade of the 21st century? Quite an experience…

And yet SSDs of 1 TB and above are still quite expensive; anything smaller is not always enough to hold all the software you actually use; moving a large chunk of that software to an extra HDD is inconvenient; and hybrid drives carry only a bare-minimum SSD cache, which is not always enough for fast access to the OS files, your regularly used programs and your frequently opened documents all at once. And if those documents are not plain text but graphics or video projects, opening them will again depend on the speed of your cheap and reliable HDD. Although an HDD is nowhere near as fast as its far more expensive SSD competitor, the situation can still be improved somewhat, and then regularly kept in a state that is far from catastrophic. Moreover, an HDD’s service life is still noticeably longer than an SSD’s.

Let’s try to figure out what this takes, using three different families of operating systems as examples – Mac OS, Linux and Windows.

1. HDD as a bottleneck – ways to reduce the load on the file system

1.1. Startup items

Both during OS installation and when installing applications, some programs or their services are added to startup so that they load automatically with the system. Naturally, each of them takes a little time to load – and not all of them are necessary! Useful as autostart is, the list should be shaped by your actual needs. Do you really need Microsoft’s OneDrive cloud client in Windows 10? Or the Welcome screen in Linux Mint? I’m sure you can extend the list yourself – it probably includes Skype, WhatsApp, Telegram and your email client.

Besides programs, startup also covers services (on Windows) or daemons (on Linux and Mac OS). While the startup of most system services should be left alone, services installed along with applications should be monitored and unnecessary ones disabled – either manually or with dedicated tweaker utilities, for example SysInternals Autoruns or Reg Organizer for Windows, CleanMyMac for Mac OS, Ubuntu Cleaner for Linux.
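As a quick hands-on illustration of auditing autostart – a sketch for a typical Linux desktop; the paths follow the XDG autostart convention and may differ on your distribution:

```shell
# List desktop autostart entries (system-wide and per-user XDG locations)
for d in /etc/xdg/autostart "$HOME/.config/autostart"; do
  echo "== $d =="
  ls "$d" 2>/dev/null || echo "(none)"
done
# Boot-time services (systemd): systemctl list-unit-files --state=enabled
# macOS: launchctl list  |  Windows: SysInternals Autoruns, Task Manager > Startup
```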

1.2. Defragmentation and free space

All files on the disk are stored as chains of data, roughly the way the text of a book is stored as strings of characters. The minimum storage cell (page) of such a chain is a file system cluster. When no single free gap on the disk is large enough for the file being saved, the file is split into fragments and stored piecewise in the gaps between existing files. This is fragmentation. If the fragments are numerous and scattered widely across the disk, reading such a file (from an HDD) slows down.

However, if you chose an SSD as system storage after all, fragmentation does not reduce file access speed: an SSD has no moving heads, so reading scattered blocks costs essentially the same as reading sequential ones. Moreover, the drive’s controller deliberately spreads writes across the least-worn blocks – those that have completed the fewest write cycles – to extend its life, so the physical layout never matches the logical one anyway. Thus, defragmentation on an SSD is useless for performance and harmful to the drive itself!

All file systems without exception become fragmented during use. And although modern file systems are fairly good at reducing and partially preventing fragmentation by placing new files in the most convenient spots in advance (in the widest free gaps, so that fewer fragments result), a slowdown in disk reads for this reason is still possible, especially when the disk is more than 80% full.

This is also why (and also for the operating system’s own needs) it is strongly recommended to keep at least 10% (better, 20%) of the system partition free.

All three families of operating systems under consideration include utilities for defragmenting files. In Mac OS and Linux the performance drop due to fragmentation is relatively small (the Mac OS bundle does not even include a standard defragmentation utility; there are only paid third-party maintenance packages), and is largely smoothed out by the file systems themselves. In Windows, however, boot speed can noticeably degrade over time precisely because of fragmentation of boot files the system keeps editing (the registry, the preload data in C:\Windows\Prefetch, some other settings files). For exactly this purpose the standard Windows defragmentation utility can be launched with the /b flag (defrag c: /b /u) (https://www.outsidethebox.ms/10365/), so as not to wait for the whole disk to be defragmented but to limit the pass to boot files only.

And once again: if you chose an SSD for the system and programs, this optimization step is useless and even harmful to the SSD’s service life. As for files stored on an HDD (say you have two drives – an SSD for the system and software and an HDD for data) – then yes, opening large files or groups of files (saved video editing projects, drawings, 3D models, databases, other large files) can be noticeably sped up by reducing their fragmentation.

To sum up: if your HDD has never been more than 50-70% full, defragmentation is most likely unnecessary. If it is already more than 70-80% full, then before defragmenting (and on an SSD – instead of defragmenting) it is better to audit the files and remove what you don’t need; this alone will greatly speed up the procedure! So let’s start cleaning.

1.2.1 Recycle bin, downloads and documents

I think there is no need to spell out the location of these folders in each system. Many users have accumulated piles of unneeded files in at least the second of them. If you have no time to go through them all, sort them by size in descending order and start your review with the largest. Or use programs that will conveniently sort them for you across the ENTIRE disk – for example, OmniDiskSweeper or DaisyDisk for Mac OS, TreeSize or WinDirStat for Windows, Baobab or KDirStat for Linux.
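The “sort by size, start at the top” approach can also be sketched in the terminal; the folder and files below are made-up demo data – point the variable at your real Downloads folder in practice:

```shell
# Demo: list the biggest entries first so the audit starts where it pays off
DIR=/tmp/demo_downloads            # stand-in for ~/Downloads
mkdir -p "$DIR"
head -c 1048576 /dev/zero > "$DIR/big.iso"    # 1 MB dummy file
head -c 1024    /dev/zero > "$DIR/small.txt"  # 1 KB dummy file
du -a "$DIR" | sort -rn | head                # largest entries on top
```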

More than once or twice, on the computers of clients, acquaintances and office colleagues I support, I have seen dozens (or even hundreds) of files in the recycle bin, with a total size of up to tens of gigabytes.

Even if you delete files in batches and double-check each deletion before emptying the Recycle Bin, it is best to empty it immediately after deleting – or after several consecutive deletions if that is more convenient – but promptly (at least the same day, and preferably within hours)!

1.2.2 System and boot caches

Both the system and some programs (especially browsers) keep a lot of temporary data on disk. For example, the Windows boot prefetch files in C:\Windows\Prefetch, which the Superfetch service builds by analyzing each boot and pre-loading what is needed most often. In the short pauses between the sequential loading of drivers and services, this lets the system load in parallel what will be required at the next stages of OS startup, partially parallelizing the boot and speeding it up very significantly.

Also, the installation packages of all applied updates and additional Windows components (for example, the .NET Framework and Visual C++ libraries) are copied to a separate folder, and a database of all these changes is maintained. If Windows works flawlessly after all the necessary components and updates are installed, all of this can be cleared via Disk Properties (My Computer\Disk C\context (right-click) menu\Properties\Disk Cleanup; don’t forget to click the “Clean up system files” button with the administrator-elevation shield icon).

In Mac OS, the dyld cache (/private/var/db/dyld/) is formed in a similar way: it contains precomputed links from launched programs to shared libraries and speeds up application startup.

Some elements of the system caches may become stale (for example, something was deleted or only partially updated by hand), which can also cause pauses when such discrepancies are detected. Imagine that the path to an executable file has changed: the system will have to search for it through every folder intended for such files (on Windows these may be C:\Windows\System32, C:\Program Files, C:\Program Files (x86); on Linux and Mac OS – /usr/bin, /usr/sbin, /usr/lib, /usr/local/lib). Such cache staleness can cause small pauses and freezes in operation.

To avoid such situations, the system caches should be refreshed from time to time – usually with dedicated commands (such as sudo update_dyld_shared_cache -force on Mac OS or sudo prelink -amfR on Linux, and, indirectly, rundll32.exe advapi32.dll,ProcessIdleTasks on Windows, run from a command prompt with administrator rights – the idle tasks it triggers usually take several hours). The Superfetch cache in Windows can be refreshed by simply deleting the files in C:\Windows\Prefetch except Layout.ini. If Layout.ini is deleted too, re-creating it will definitely require a forced run of all scheduled Windows idle-time self-maintenance tasks: rundll32.exe advapi32.dll,ProcessIdleTasks. After the folder is cleaned, the next boot will be slow while SuperFetch re-analyzes it and recreates the files. After several reboots with pauses of 5-10 minutes between them (“training”, https://www.outsidethebox.ms/10365/), Windows boot will noticeably speed up.

These procedures are equally relevant for both HDD and SSD.

1.2.3 Application caches

Applications also store cached data on disk. In the case of browsers, for example, the disk cache saves you from waiting for sites to be re-downloaded from the Internet: pages are loaded straight from disk while the browser checks against the server and updates whatever has changed. Over time such disk caches can grow considerably, and access to them noticeably slows down.

On Windows, applications (including browsers) keep their caches under C:\Users\Username\AppData\Local, each in its own subfolder.

On Mac OS, application caches live in two places: /Users/Username/Library/Caches (~/Library/Caches for short) for each user, and /Library/Caches for all users.

On Linux, system-wide application caches are stored in /var/cache, and per-user caches typically in ~/.cache.

From time to time it is worth clearing application caches. You can do it manually (except on Windows) by deleting the contents of the relevant folder – with caution: it is best to delete only files and preserve the folder structure; from the settings of the application itself (this applies primarily to browsers); from the terminal (for example, for the Linux package manager – sudo apt clean on Debian/Ubuntu or sudo yum clean all on RedHat/CentOS); or with various optimization utilities such as CCleaner and Reg Organizer for Windows, Onyx, CleanMyMac and CCleaner for Mac OS, Ubuntu Cleaner for Linux. The last method is the most convenient.
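Before clearing anything, it helps to see which caches are actually big. A minimal sketch for Linux (the ~/.cache location is the XDG convention; the Mac OS variant is shown as a comment):

```shell
# Report per-application cache sizes, largest first
CACHE="${XDG_CACHE_HOME:-$HOME/.cache}"
mkdir -p "$CACHE"                       # ensure it exists on a fresh account
du -sh "$CACHE"/* 2>/dev/null | sort -rh | head
# Mac OS equivalent: du -sh ~/Library/Caches/* | sort -rh | head
```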

Keep in mind that after the caches are cleared, the first launch of the system and applications may be slower; then the caches are rebuilt and performance improves – usually the more noticeably, the larger the old cache was.

1.2.4 Temporary files

Essentially, these are files an application or the system needs only once, until the application is closed or the machine is rebooted.

On Linux these files live in /tmp, /var/tmp and /home/Username/tmp (per user). /tmp is cleared automatically on every reboot (/var/tmp may persist longer), but if something goes wrong you can safely clear them manually. Currently open files will not be deleted because they are locked.

On Mac OS, the temporary folders are /private/tmp (and /tmp, which is not a folder but a symlink to the same /private/tmp) plus a per-user path unique to each system – for example, /var/folders/g7/2du11t4_b7mm24n184fn1k911300qq/T/

To find out the unique path to your temporary folder, type in the terminal: echo $TMPDIR

Also sometimes applications create a folder ~/Library/Caches/TemporaryItems/ (for more details, see the article https://osxdaily.com/2018/08/17/where-temp-folder-mac-access/).

They can also be cleaned manually, or using system maintenance procedures – sudo periodic daily, sudo periodic weekly, sudo periodic monthly.

On Windows, the location of temporary folders is as follows:

%windir%\Temp and %Temp%, where the environment variable %windir% = “C:\Windows” and %Temp% = “C:\Users\Username\AppData\Local\Temp”, if you didn’t change anything when installing Windows.

They are cleared manually, or you can create a batch file (Filename.bat or Filename.cmd), add it to startup, and put two command chains inside:

CD /D "%windir%\Temp" && RMDIR /S /Q . 2>nul

CD /D "%Temp%" && RMDIR /S /Q . 2>nul

Just like caches, temporary files can be cleaned using optimization utilities – such as CCleaner and Reg Organizer for Windows, Onyx, CleanMyMac and CCleaner for Mac OS, Ubuntu Cleaner for Linux. Moreover, some of them, such as CleanMyMac, allow you to audit large files on your disk.

In general, if the situation on the disk is completely catastrophic, first audit your downloads and documents, empty the recycle bin and the temporary folders, then either install optimization utilities or go after the caches. On Mac and Windows you can also use the built-in cleaning tools (for Mac OS see https://support.apple.com/ru-ru/HT206996, for Windows see https://ichip.ru/sovety/ekspluataciya/kak-ochistit-disk-ot-musora-novye-vstrooennye-sredstva-windows-10-sdelayut-vse-na-avtomate-727022), and on Linux – terminal commands (https://zalinux.ru/?p=3047).

1.3 Compressing files on disk

Many modern file systems can compress stored files transparently (usually with fast LZ-family algorithms). This applies primarily to Linux (the Btrfs file system) and Windows (NTFS). Apple’s APFS has no full-fledged user-facing compression, but it can store files sparsely, omitting runs of zeros, which shrinks them a little and also speeds up SSD operation.

In general, compression not only slightly speeds up access to a file (its size shrinks, less disk space is used to store it, writes and reads get faster, and fragmentation becomes less likely), but also very slightly reduces drive wear, since less data is written. In exchange, the scheme loads the processor (compression and decompression), but for modern CPUs this extra load is even less significant than the difference, for the drive, between writing an uncompressed file and a compressed one.
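On NTFS, compression is enabled per folder with compact /c; on Btrfs, with the compress=zstd mount option. The underlying trade-off – fewer bytes written in exchange for some CPU – can be illustrated portably with gzip (the file names below are made up for the demo):

```shell
# Demo of the space/CPU trade-off behind transparent compression.
# Real mechanisms: compact /c /s:"C:\Projects" (NTFS), mount -o compress=zstd (Btrfs).
head -c 1000000 /dev/zero > /tmp/demo_plain.bin       # highly compressible data
gzip -c /tmp/demo_plain.bin > /tmp/demo_plain.bin.gz  # compress a copy
wc -c < /tmp/demo_plain.bin                           # original size in bytes
wc -c < /tmp/demo_plain.bin.gz                        # far fewer bytes would hit the disk
```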

1.4 Paging

When RAM runs short, swapping begins: unused programs and data are written out to a special file or partition on disk, freeing memory for what is in use right now – and paged back in when needed again. At SSD speeds this is not so significant and often not even noticeable, but on an HDD it is sometimes a complete nightmare! During the procedure, at best some programs – at worst the entire OS – freeze for a while and become unresponsive. In the most severe cases you have time to make tea… How can this be avoided?

  • The best way is to buy and install more RAM than your tasks require. Besides, the surplus RAM will give us room for the second act of the Marlezon ballet – the optimization section below. And 16 GB of RAM costs about half as much as a 1 TB SSD.

  • The second method is to use a small SSD or a fast flash drive for swapping. On Linux, create a swap partition on that drive; Windows will immediately ask whether the drive can be used for ReadyBoost – and if you agree, it will place a fast cache there, which will take the edge off as soon as RAM starts to run out. Even simply moving the swap file to a second HDD that the system is not otherwise using will improve the situation – slightly, but still noticeably. Mac OS provides no such paging-file settings, but a similar result can be achieved either with third-party utilities or by directly editing the system configuration files – for example, as described here.
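A sketch of preparing a Linux swap area. A swap file is used here so the formatting step can run without root; for a dedicated SSD you would run mkswap/swapon on the partition itself (/dev/sdb1 below is an assumed device name):

```shell
# Create a small file and stamp it with the swap signature
dd if=/dev/zero of=/tmp/demo.swap bs=1M count=8 status=none
chmod 600 /tmp/demo.swap        # swap areas must not be world-readable
mkswap /tmp/demo.swap           # writes the SWAPSPACE2 signature
# Enabling it (and persisting in /etc/fstab) requires root:
#   sudo swapon /tmp/demo.swap
#   (for a partition: sudo mkswap /dev/sdb1 && sudo swapon /dev/sdb1)
```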

2. Expanding the boundaries – caching file operations in RAM

2.1 Preload frequently used programs, libraries and documents. Prelink, Prebinding, Preload, Prefetch, Superfetch

Prelink on Linux and Prebinding on Mac OS are utilities that precompute the bindings between applications (and services) and their shared libraries – stored in the dyld cache on Mac OS, or written into the executable files themselves on Linux – which speeds up the launch of those applications.

And Preload (Linux) is a daemon (service) that loads the necessary libraries in advance. In Mac OS, everything, as usual, is already configured by the developers, but in Linux both components must be installed and configured – the setup procedure is described in the article.

And while for Mac OS there is nothing to compare against, since everything works out of the box, on Linux, after all the steps in the article, OS boot (at the services stage) really does speed up noticeably, as does application launch. I did not keep the exact measurements, but they take two commands: sudo systemd-analyze time – total boot time – and sudo systemd-analyze blame – boot time per service (https://unlix.ru/analyzing-download-speed-linux/).

The Superfetch service in Windows works similarly to Preload, but it analyzes absolutely all launched programs and system services, not just shared libraries. The analysis results are saved to disk, and after several reboots (not back to back, but with at least 5-15 minutes of work in between) everything launched most often between those reboots, OS components included, will be pre-loaded into RAM during idle moments between boot stages. The difference in boot and launch speed for the most frequently used programs can reach 1.5-2 times!

After the next cleanup, don’t be lazy – immediately refresh Superfetch and dyld (described in section 1.2.2) and Prelink (https://habr.com/ru/post/108454/) – the result will not keep you waiting!

2.2 Bigger or better? Data compression in RAM. ZRAM, ZSwap

No matter how much RAM you have, using it more economically will not only reduce the need for the paging file, but, in our case, will let us keep the paging file itself directly in RAM in compressed form – which speeds up swapping, when it is needed, by more than an order of magnitude (the measured speed ratio of HDD to RAM).

So, let’s install and configure it according to the instructions.

A small note: unlike the mentioned instructions, I set the vm.swappiness parameter to 90 or 85, which makes the kernel start evicting inactive pages into the compressed swap early, while plenty of RAM is still free (strictly speaking, swappiness is a relative eagerness weighting, not a percentage of free memory). At RAM speeds the overhead of compressing and decompressing the page file is absolutely negligible, while RAM consumption, according to the Linux System Monitor, dropped by roughly 1.5 times both for a freshly booted system and for open Firefox tabs. Of course, this will not save us from overcommitting memory 2 times or more, but up to about 1.5 times the physical RAM we can already afford not to think about performance.
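A way to inspect and adjust this knob on Linux (the 99-zram.conf file name is an assumption; any file under /etc/sysctl.d works):

```shell
# Show the current swappiness (0-200 on recent kernels, default 60).
# With zram/zswap a high value is cheap: "swapping" goes to compressed RAM,
# not to the disk.
cat /proc/sys/vm/swappiness
# Change for the session (root):  sysctl vm.swappiness=90
# Persist across reboots:         echo 'vm.swappiness=90' > /etc/sysctl.d/99-zram.conf
```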

In the case of Windows (from version 10) and Mac OS (from version 10.9 Mavericks), compressed swap storage in RAM is currently shipped with these operating systems and is enabled by default.

2.3 Methods for caching file operations in RAM

Having made sure that RAM consumption is optimized, and having confirmed with monitoring utilities that even in your most resource-hungry scenarios at least 30% of RAM stays free, we can start optimizing disk operation itself. Of course, this will not speed up reading or writing large files such as video, but for small office, temporary and auxiliary files (textures, configuration), as well as cleared file caches (a browser’s, for example), it can not only speed up the work significantly but also spare the hard drive some file operations entirely, reducing its wear. For an HDD this means a very slight extension of service life; for an SSD it is the other way around – the performance gain will be small, but the reduction in wear very noticeable!

2.3.1 TmpFS

So, transferring temporary files to a virtual disk in RAM (RAM disk).

Let’s start with Linux, which already has all the necessary components; all that remains is to move the temporary file folders to TmpFS, as described here.
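A quick way to see which mounts are already RAM-backed, plus a typical fstab line for /tmp (the 2 GB cap is an example value, not a recommendation – leave headroom for your real workload):

```shell
# Which mounts are already RAM-backed? (/proc/mounts always exists on Linux)
grep -w tmpfs /proc/mounts || echo "no tmpfs mounts found"
# A typical /etc/fstab line to keep /tmp in RAM, capped at 2 GB:
#   tmpfs  /tmp  tmpfs  defaults,noatime,size=2g  0  0
# After adding the line, apply it with:  sudo mount /tmp
```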

For Mac OS this method is less relevant. You can create a RAM disk this way, but there does not seem to be any supported way to move /private/tmp (or anything else) onto it. Offhand: create a tmp folder on the RAM disk and replace /private/tmp with a symlink to it (the same way /tmp is a symlink to /private/tmp). You can do the same with your per-user temporary folder. But, firstly, this method is untested, so if your Mac does not boot afterwards, be prepared to put everything back; and secondly, creating the symlink requires deleting the folder, and if files in it are open at that moment, the system will lock the folder along with them. You would then have to do it from single-user mode (Cmd-S at boot) or from recovery mode (Cmd-R at boot, which has a terminal). In both modes your RAM disk settings will not be loaded and files will not be locked – only system defaults are loaded. In short, as a modern author wrote – “Empirically – that is, by touch!” (https://knigger.com/read-book/kniga-delirium-tremens-strasti-po-nikolayu-2005).

Finally, on Windows you can move the temporary folders to a RAM disk, but creating one takes a third-party utility – for example ImDisk, which not only installs itself into startup as a service but also adds its own icon to the Windows Control Panel; instructions at the link.

The remaining settings are simple and boil down to changing the environment variables, as described here.

2.3.2 Disk cache utilities

Unfortunately, this type of software is currently only available for Windows. Here is a review article with file-access speed measurements for one utility of this type. The program has since been renamed PrimoCache, outgrown its beta and freeware status, and costs $30 for home use and $40 for business use. There are also several similar paid utilities at $25-30, as well as a free analogue – HDDTurbo – with a minimum of manual settings and automatic use of all free RAM. Incidentally, that utility already has a Mac OS version!

Note that Mac OS and Linux lack a boot-optimization mechanism as well developed as Superfetch, so, factoring in the preloading of frequently used programs and documents into memory, HDDTurbo and its analogues will be most effective on Windows.

For office files, graphics, software development and web design (work with small files), such disk access optimizers are a great option – even for an SSD! They may suit a DBMS too, but under supervision and with regular database backups. For 3D graphics and video (large and very large files) the method is most likely useless, if not entirely then almost: if the program has built you a 4 GB cache and you open (or save) a 12 GB file, you will wait exactly as long as without the cache. Small files, of course, still have to be written to disk eventually, and the speed of that physical write does not change at all – but logistics appear: requests can be prioritized by urgency and small operations consolidated into batches. And where this logistics concentrates the small file operations in RAM and gives a large file the green light, the overall speed of working with that file will, if not increase (only very slightly), at least let you keep working with small files without pausing for it.

It is important to remember that this method has a significant drawback: RAM is volatile! Therefore, before using it as a cache, you should take care of the uninterrupted power supply – purchase an uninterruptible power supply for the system unit, and make sure that the battery is in order for the laptop. As part of this work, I did not set out to find a way to automate stopping the software cache when switching to a UPS or discharging the battery to a certain value, but you should not optimize performance at the expense of reliability. Perhaps a similar solution (for example, running scripts with commands to start and stop caching) can be found in the OS power management settings.

3. Don’t forget about the CPU – who ate all the resources?!

Actually, this is where I start when diagnosing a computer – the check is quick, and it can yield significant results right away! After all, optimization may be needed not just to improve an acceptably working system but to revive one already in critical condition. Scanning disk caches and temporary files with the utilities listed above also takes time – and the worse the OS’s condition, the longer it takes.

So, let’s start in order.

3.1 CPU (RAM, disk) load

To determine which application is busily loading the CPU to nearly 100% (or has eaten all the RAM and triggered swapping, or is reading/writing the disk too actively), every operating system has a dedicated utility. On Windows it is Task Manager (with an improved free analogue, Process Explorer from SysInternals); on Mac OS – Activity Monitor; on Linux – System Monitor. For terminal work on Linux and Mac OS there is also a text analogue, top. Here we can see which applications and services consume resources, sort them by CPU, memory or disk usage for convenience, force-stop them, and then, if needed, analyze the logs, search specialized forums for similar problems, or simply reinstall – or even remove the application if it is not needed or arrived as a “free bonus” to something else (trojans and adware on Windows love to do this, so it is worth running a free antivirus scanner, for example DrWeb CureIt). If the culprit turns out to be Svchost (the service-hosting process), the problem is with one of its services. Again start with an antivirus scan; if that does not help, look at the services run by that Svchost instance (using SysInternals Process Explorer). First check whether any of them were installed with recent software (it is clear from the service name), then open the Services snap-in (This PC\Manage (from the context menu)\Services and Applications\Services) and analyze each service of that Svchost instance (description, running or not, needed or not, system or third-party). Stop them one by one (starting with third-party services) and check the result.
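The same triage can be done from the terminal; this sketch assumes the procps ps found on most Linux systems (BSD/macOS ps lacks --sort, so use top -o cpu there):

```shell
# Top 5 CPU consumers right now, header included
ps aux --sort=-%cpu | head -n 6
# Sort by memory instead:
ps aux --sort=-%mem | head -n 6
# macOS: top -o cpu   (interactive, sorted by CPU)
```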

On Mac OS a different situation is not uncommon: the processor is loaded by some system service actively working – for example, mdworker, the indexing service of the Spotlight search engine. In the case of mdworker it is most often best simply to wait for indexing to finish; in other cases (or when mdworker has been running for more than 1-2 hours, which is already suspicious), the standard access permissions on the user’s files should be restored – they may have been mangled by some application or its installer, after which the service fruitlessly and endlessly hammers at such a file with all the speed available to it. In earlier versions of Mac OS, permission repair was available in the standard Disk Utility alongside disk checking, but once permission changes to system files became blocked at the OS level, the function was removed. For user caches, application configuration files and other user files, however, the situation still occurs; currently you can restore the original permissions on user files with the CleanMyMac utility.

In the case of Linux there are almost no problems specific to this OS, except infrequent but still fairly typical issues with the video driver, which in some cases (I suspect with newer video card models) should be replaced with the proprietary one. Otherwise, the same standard diagnostics with the System Monitor, as described in the article.

3.2 Throttling. CPU temperature

All modern processors can skip cycles when they overheat, slowing down to cool off. This feature is called throttling, and the temperature at which it starts is set in the processor’s factory parameters – for modern models usually 90-100°. In any case, regular operation at 80° and above can already degrade the soldering of the die to the substrate (microcracks from temperature swings, gradual oxidation of the solder) and shorten the processor’s life. At higher temperatures throttling kicks in and the computer slows down sharply to keep the overheating from getting worse. On Windows I usually check the temperature with Aida64; on Mac OS (and on Windows on a Mac) I immediately install Mac Fan Control; on Linux you can use, for example, Hardinfo (installed from the repositories with apt or yum: https://www.tecmint.com/hardinfo-check-hardware-information-in-linux/).
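On Linux the temperatures can also be read straight from sysfs, without extra tools (the zone layout varies by machine, and a virtual machine may expose none at all):

```shell
# Print each thermal zone's temperature; sysfs reports millidegrees Celsius
for z in /sys/class/thermal/thermal_zone*/temp; do
  [ -r "$z" ] || continue
  echo "$z: $(( $(cat "$z") / 1000 ))°C"
done
# With the lm-sensors package installed, simply:  sensors
```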

A short-term increase in temperature (in games and heavy applications, for example, when editing video) above 70° is a normal situation, especially if the temperature drops quickly when finishing work.

Sometimes (in Apple laptops, for example) the fan only starts picking up speed once the temperature already exceeds 60-70°. Such settings protect your ears and the fan – not the processor at all. That is why on any MacBook I first install Mac Fan Control and tell the owner how useful the utility is: it raises fan speed in step with processor temperature. As a result, even the temperature under maximum load stays below 80°, whereas with the standard settings it approaches 90.

If the processor is above 55° at idle, above 65° at under 30% load (YouTube), easily passes 80° at 80-100% load and, most importantly, is in no hurry to drop when the load stops – then most likely the cooling system is struggling and needs preventive maintenance, which takes us to point 3.3.

3.3 Cooling system. Processor working conditions.

As we said above, regular operation at temperatures above 80° shortens the life of the processor (and of the other components next to it). The hardest case in my memory was a powerful, compact Razer gaming laptop – outwardly a MacBook clone, but beefier and running Windows. With a powerful processor and video chip and a very compact cooling system with almost no headroom, it ran hot to begin with, and after several years without preventive maintenance… the motherboard PCB under the video chip had burned through to holes!! Such a board, of course, is beyond repair. So every year or year and a half we arm ourselves with screwdrivers, thermal paste (the old one has long dried out…), wet wipes (…and it has to be wiped off – if it has hardened, gasoline helps!) and, finally, a vacuum cleaner or at least a stiff brush. If you have a desktop, don’t forget the video card: remove and disassemble its cooling system – IT needs preventive maintenance TOO!

Very often, on the inside of the radiator (the fan blows outward) there is not just a layer of dust, but a piece of pressed felt that completely blocks the air flow. Sometimes this felt is also oily (this is if they smoke at the computer)! If the fan is disassembled, you can also wipe it inside and lubricate it (silicone grease with a density of 200-400 is ideal).

Your workplace needs proper lighting, heating (or cooling in summer) and ventilation; your CPU’s workplace needs just one thing – to be cool. In most cases that means fresh thermal paste, a heatsink and a fan (sometimes a water pump). Don’t forget about it!

Conclusion

Both at home and at work – and all the more so on highly critical computing systems (military, medical, industrial) – the response time of the software, and therefore of the operating system, must be acceptable and stable. And if on highly critical systems computing power (and the cooling system) is provisioned with a reserve from the start (though even that does not always guarantee proper performance), then on home computers, workstations, and servers less critical to uninterrupted operation, the importance of ensuring the highest guaranteed speed possible is undeniable.
