# 4096-bit keys: who should start using them, when, and why

According to preliminary forecasts, today's standard 2048-bit encryption keys, recommended by NIST (the National Institute of Standards and Technology) back in 2015, will remain sufficiently secure until about 2030. Nevertheless, new 4096-bit keys can already be found in a number of security systems. Let's look at the reasons for their "premature" appearance.

As noted above, 2048-bit RSA keys are expected to hold out for roughly another 8-10 years. This forecast is based on statistics about the growth of computing power and the progress of mathematical cryptanalysis. Roughly speaking, the more powerful the computers and the better the attack algorithms, the more vulnerable small keys become. For example, in 2010, when 1024-bit keys (in widespread use since 2007) were still the norm, a group of researchers successfully factored a 768-bit RSA modulus. After that, the likelihood of 1024-bit keys becoming practically vulnerable rose significantly.

In fact, it does not come down to this one empirical observation: cryptologists track a whole set of indicators of potential key vulnerability.

So, in the near future, a transition to new 4096-bit keys is expected. Does this mean that 2048-bit keys are vulnerable right now?

In a nutshell: no, they are fine. Let's turn to NIST Special Publication 800-57, Part 1 (5th revision, dated May 2020). Table 2 of that document lists key strength, measured in "bits of security." Here is how the document defines this metric:

*This is a number related to the amount of work (that is, the number of operations) required to break a cryptographic algorithm or system.*

*In this Recommendation, the security strength is specified in bits and is a specific value from the set {80, 112, 128, 192, 256}. Note that a security strength of 80 bits is no longer considered acceptable.*

And here is the table itself, abridged to the most relevant columns (values from NIST SP 800-57 Part 1 Rev. 5, Table 2):

| Security strength (bits) | Symmetric algorithms | RSA modulus (bits) | ECC key (bits) |
|---|---|---|---|
| 80 | 2TDEA | 1024 | 160–223 |
| 112 | 3TDEA | 2048 | 224–255 |
| 128 | AES-128 | 3072 | 256–383 |
| 192 | AES-192 | 7680 | 384–511 |
| 256 | AES-256 | 15360 | 512+ |

Pay attention to an interesting footnote: the security-strength assessments will have to be significantly revised once quantum computers come into practical use.

Here a reservation is in order. NIST says these figures are based on "currently known methods of breaking encrypted data," but, as was worked out on the Crypto Stack Exchange, NIST derives the complexity of a factoring attack from the running-time estimate for the general number field sieve (GNFS) popularized by the Dutch cryptographer Arjen K. Lenstra. This is convenient, since the NIST recommendations do not list every possible key size. If you want to try a bit of applied cryptology yourself, launch Mathematica: with it you can estimate the security level of a key of any size. For example, here is the formula for a 2048-bit key:

> N[Log2[Exp[(64/9*Log[2^2048])^(1/3)*(Log[Log[2^2048]])^(2/3)]]]

The result is 116.884, so a 2048-bit RSA key is equivalent to 116 "bits of security" when rounded down. Note that NIST further rounds the GNFS complexity down to 112 bits, a standard strength value for symmetric algorithms, so that people can apply the same policies they would apply to symmetric ciphers. The Mathematica code above gives the raw GNFS complexity.
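If you don't have a Mathematica license, the same GNFS estimate is easy to reproduce with the Python standard library. This is a sketch of the formula above; the helper name `gnfs_bits` is our own:

```python
import math

def gnfs_bits(modulus_bits: int) -> float:
    """Estimated security strength (in bits) of an RSA modulus against
    the general number field sieve, using the same L-notation formula
    as the Mathematica snippet above."""
    ln_n = modulus_bits * math.log(2)  # ln(2^modulus_bits)
    # ln of the GNFS work factor: (64/9 * ln n)^(1/3) * (ln ln n)^(2/3)
    ln_work = (64 / 9 * ln_n) ** (1 / 3) * math.log(ln_n) ** (2 / 3)
    return ln_work / math.log(2)       # convert exp(ln_work) to log base 2

print(gnfs_bits(2048))  # ≈ 116.88, matching the Mathematica result
print(gnfs_bits(4096))  # ≈ 156, the raw GNFS strength of a 4096-bit key
```

The same function also shows why the jump matters: doubling the modulus from 2048 to 4096 bits buys roughly 40 extra bits of GNFS security.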

But back to the NIST report. As the table shows, a strength of 80 bits (which corresponds to RSA keys of at most 1024 bits) is no longer considered acceptable, while 2048-bit keys provide a strength of 112 bits and are still allowed.

Another important point: the footnote to the 3TDEA row says that, per SP 800-131A, this algorithm, despite the security strength it formally provides, may only be used through 2023, after which it is disallowed for cryptographic data protection, except in special cases where the risk is justified.

A little lower in the same document (Table 4) there is another table describing the approximate time frames during which each security strength may be relied on:

| Security strength (bits) | Through 2030 | 2031 and beyond |
|---|---|---|
| less than 112 | Legacy use only | Disallowed |
| 112 | Acceptable | Disallowed |
| 128 and above | Acceptable | Acceptable |

As you can see, a strength of 112 bits remains acceptable through 2030.

So it turns out that 2048-bit keys, which provide a security strength of 112 bits, can be used without problems for roughly another eight years. By 2030, production environments should have had plenty of time to adopt a new, more secure standard. In the meantime, 4096-bit keys don't really make sense. Or do they?

Many applications and security systems already let users generate 4096-bit keys.

*4096-bit key is offered as an option when generating a server key on JSCAPE MFT Server v10.2*

There are several reasons for this. First, vendors prefer to work ahead of the curve: they want the transition prepared in advance so that their users are covered by tomorrow's cyber-resilience tools. No one can guarantee that somewhere deep underground a malicious scientist isn't firing up a quantum supercomputer to get at your data. That is, of course, a joke. But it is important to remember that no one, including NIST, knows how accurate the timing predictions are. Accordingly, there is no 100% certainty that no one will manage to crack a 2048-bit key before 2030, so it is better to play it safe and stay one step ahead of the attackers.

It would seem that at this point you could close the article and go generate yourself a brand-new 4096-bit key. But don't rush: besides security, there is also performance. The longer the key, the more CPU time is spent generating it and performing every subsequent encryption and decryption operation. So if your physical server is already near its performance ceiling, is it worth loading it further with keys of (at least for now) excessive size? Let's dig a little deeper.

In the real world, secure file-transfer protocols such as HTTPS, FTPS, and SFTP typically use RSA keys only at the very beginning of a connection, to protect the symmetric session keys. Once the data transfer begins, all further encryption relies on those symmetric keys.
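This handshake pattern, where an expensive asymmetric operation protects only a small session key, can be sketched with textbook RSA and deliberately tiny numbers. Everything below is purely illustrative and completely insecure; real protocols use 2048-bit-plus moduli and padded, authenticated key-exchange schemes:

```python
import secrets

# Textbook RSA with toy primes p = 61, q = 53 -- for illustration only, NOT secure.
p, q = 61, 53
n = p * q    # public modulus (3233)
e = 17       # public exponent
d = 413      # private exponent: (e * d) % lcm(p - 1, q - 1) == 1

# The "handshake": the client wraps a random symmetric session key with the
# server's public RSA key; the server unwraps it with its private key.
session_key = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key, e, n)    # what actually travels over the wire
unwrapped = pow(wrapped, d, n)      # server recovers the session key

assert unwrapped == session_key     # both sides now share a symmetric key
```

Everything after this point, i.e. the bulk of the transfer, would use `session_key` with a symmetric cipher such as AES, which is why the RSA key size affects only connection setup.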

Thus, the performance hit from a 4096-bit key is felt only during a small part of each file-transfer session. Of course, if your server handles many simultaneous transfers, the impact can still be significant. Without concrete numbers and inputs (normal and peak server load, CPU, network bandwidth, and so on), it is hard to predict how critical the switch to 4096-bit keys will be.

Therefore, before rolling 4096-bit keys into a production environment, it makes sense to test everything properly on a staging bench to rule out surprises. For example, IBM states the following:

*If 4096-bit cryptographic operations are calculated entirely in software, all available CPU resources are depleted fairly quickly.*

One technical blogger, an "early adopter" of such keys, published the following measurements back in 2015:

*As the presented results show, signing with 4096-bit RSA keys requires more than 7 times as much CPU time as with 2048-bit ones.*
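That sevenfold gap is easy to sanity-check even without dedicated tools. The dominant cost of an RSA signature is a modular exponentiation with a private-key-sized exponent, which can be timed with nothing but the Python standard library. This is a rough sketch with a helper name of our own; absolute numbers are meaningless (real implementations use CRT and hand-tuned bignum code), only the ratio between key sizes matters:

```python
import random
import time

def avg_modexp_seconds(bits: int, iterations: int = 20) -> float:
    """Average time of one full-width modular exponentiation -- the
    operation that dominates an RSA private-key (signing) step."""
    n = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full-width modulus
    d = random.getrandbits(bits) | (1 << (bits - 1))      # full-width exponent
    m = random.getrandbits(bits - 1)                      # "message" below n
    start = time.perf_counter()
    for _ in range(iterations):
        pow(m, d, n)
    return (time.perf_counter() - start) / iterations

t2048 = avg_modexp_seconds(2048)
t4096 = avg_modexp_seconds(4096)
print(f"4096-bit signing is ~{t4096 / t2048:.1f}x slower than 2048-bit")
```

On typical hardware the ratio lands in the same several-fold range the blogger reported; compare it with measurements on your own machine.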

You can check how your own server copes with 4096-bit keys using OpenSSL, for example with `openssl speed rsa2048 rsa4096`.

One more important point: right now it hardly makes sense to switch to 4096-bit keys unless you have serious reasons to, because not all vendors are ready to support them. For example, Amazon CloudFront, Cisco IOS XE versions below 2.4, and Cisco IOS Release 15.1(1)T still do not support 4096-bit keys.

In conclusion: everything in its own time. It hardly makes sense to worry seriously about 4096-bit keys for the next 2-4 years. It is quite enough to keep a finger on the pulse and migrate your infrastructure to new keys, without haste, when it becomes genuinely important and necessary.