# Collaborative confidential computing at your fingertips

In this short note, I want to touch on the topic of collaborative confidential computing: briefly outline the essence of these approaches and clear up several ambiguities that have crept into how this term is interpreted today. Hope it works 🙂

I will start from a little further back. I am generally interested in privacy-preserving distributed data processing, and in particular I am actively following the development of Federated Learning. In articles and materials on this topic I often see a certain terminological confusion: Federated Learning and Confidential Computing are frequently used as synonyms, but this is not entirely true. Maybe I am not quite right, but the sets of methods for "learning" and for "computing" are actually different, and neither is a subset of the other. So first of all, I want to lay out my understanding of their fundamental difference:

**Federated Learning** is a set of methods whose result is a trained mathematical model (which is then used for inference on new data); these methods are closely tied to, and inseparable from, machine learning. They solve applied problems such as training a scoring model on data from several banks without the banks exchanging any data about their borrowers.
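As a concrete (and heavily simplified) illustration of the "learning" side, here is a sketch of the federated-averaging aggregation step in plain Python. All function names and numbers are my own; a real system would train actual models locally and would add privacy protections on top of the aggregation:

```python
# Toy sketch of federated averaging (FedAvg), the canonical
# federated-learning aggregation step. Illustrative only.

def federated_average(client_weights, client_sizes):
    """Aggregate locally trained model weights without sharing raw data.

    client_weights: list of weight vectors (one per client, same length)
    client_sizes:   number of training samples each client used
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    # Weighted average: clients with more data contribute proportionally more.
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Three "banks" each train locally and send only their weights,
# never the raw borrower data.
global_weights = federated_average(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [10, 20, 30],
)
print(global_weights)
```

The point of the sketch is that only model parameters ever leave a client, which is exactly what distinguishes "learning" methods from the "computing" methods discussed next.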

**Confidential Computing** is a set of methods whose purpose is the joint processing of data: the data is used directly in evaluating some mathematical expression (an average, a sum, etc.) in order to obtain its exact result. These methods solve applied tasks such as producing a sales report for the market as a whole from the data of several retail chains, where no chain wants to disclose its own sales volume.

Sorry for any clumsy wording, but I wanted to stress these fundamental differences, and I hope you feel them. If you dig into the subject in more detail, even more advanced methods and terms appear, such as "differential privacy", but a complete anthology of the area is probably the topic of a separate article; within this material I want to focus on the two classes of methods above.

*The key conclusion here: do not confuse Federated Learning and Confidential Computing. They are different methods for different classes of tasks, even though both address the same underlying problem of collaborating on data without disclosing it.*

There are quite a lot of methods that fall under Confidential Computing, and most of them lie in the field of mathematics and cryptography. But in the last couple of years the topic has been popularized enormously by hardware vendors (Intel, IBM, NVIDIA) implementing data protection at the hardware level. By uploading data to such a server, you get a guarantee that nobody gains access to it without your permission, even with physical access to the server. No method could guarantee this before, since any purely software protection has always been undermined by the threat of compromised physical access to the storage system. At the same time, in order to actually share data on such a server (where data from different owners is loaded), you still need to use either FL, CC, or some other method of joint data processing; but running them within a single server makes the computation process much simpler and faster.

By the way, little attention is paid to this, but there is a very subtle trade-off to find here: all these methods (both FL and CC) are applicable both in a distributed setting (the data sits on different servers, each belonging to a separate owner) and in a centralized one (the data sits on one server that supports hardware protection), with the caveat that in the case of:

**distributed computing** we inherit all the classic problems of distributed systems: an unreliable transmission medium, possible compromise of data sources, slow transfer speeds, asynchronous operation of nodes, and so on.

Marketing from hardware vendors has led to the term Confidential Computing becoming a synonym for hardware data-protection techniques.

Here, for example, the TAdviser website gives the following definition: "Confidential Computing is designed to protect data in use by means of a hardware Trusted Execution Environment (TEE). This is a rapidly growing data-security sector which has recently received increased attention."

But that is not so. Hardware protection is a prerequisite for using CC methods, but it is not itself a data-processing method, only a guarantor of the data's protection. Either way, whether the environment is distributed or centralized, processing algorithms still have to be applied to the data, and those algorithms belong either to the CC class or to the FL class.

*The key takeaway: the hardware protection advertised by vendors is an enabler for applying secure collaborative computing methods, not a method in itself.*

Now let's turn to the mathematical essence of joint confidential computing. We go to our favorite Wikipedia and find a correct and clear definition:

"In cryptography, secure multi-party computation (also known as secret multi-party computation, MPC) is a cryptographic protocol that allows several participants to perform a computation that depends on the secret inputs of each of them, in such a way that no participant is able to obtain any information about anyone else's secret inputs."

The definition is spot-on. But further down the page comes a rather heavy mathematical explanation, and if you turn to scientific papers, where the whole process is additionally wrapped in cryptography, things get gloomy indeed.

But now, attention: a delightful example that reveals the essence of these methods in the simplest possible terms. I actually borrowed it from a talk by Peter Emelyanov of ubic.tech, given in our "Federated Learning" section at the OpenTal.AI 2022 conference.

**Task:**

There are 3 friends, each of whom owns some amount of money. We want to compute the total amount of money the three friends have, under the condition that no friend learns how much any other participant owns.

**Solution:**

**Step 1:** each friend splits his amount into 3 numbers (one per participant in the calculation) in an arbitrary way; the only condition is that these numbers must add up to the amount of money the friend has, otherwise they can be anything.

**Step 2:** each participant keeps one of his numbers and sends one of the remaining numbers to each of the other participants.

**Step 3:** each participant adds the numbers received from the others to his own retained number, obtaining a partial sum.

**Step 4:** each participant publishes his partial sum to all the other participants.

**Step 5:** each participant adds up the published partial sums and obtains the exact total of all the participants' money (dividing by 3 also gives the average).

**Outcome:** each participant has learned the total (and average) amount of money in the group without revealing to anyone the amount he himself owns.

The whole process is clearly shown in the figure:
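The steps above can also be sketched in a few lines of Python. This is a toy illustration: the function names and the share range are my own, and real MPC protocols work over a finite field rather than plain integers:

```python
import random

def make_shares(amount, n):
    """Step 1: split `amount` into n additive shares that sum to `amount`."""
    shares = [random.randint(-1000, 1000) for _ in range(n - 1)]
    shares.append(amount - sum(shares))  # last share makes the total come out right
    return shares

def secure_total(amounts):
    """Compute the total of `amounts` without any party seeing another's value."""
    n = len(amounts)
    shares = [make_shares(a, n) for a in amounts]           # Step 1
    # Steps 2-3: participant i keeps shares[i][i], receives shares[j][i]
    # from every other participant j, and sums everything he holds.
    partial_sums = [sum(shares[j][i] for j in range(n)) for i in range(n)]
    # Steps 4-5: the published partial sums add up to the exact total.
    return sum(partial_sums)

print(secure_total([50, 120, 30]))  # → 200
```

Note that the result is exact no matter how the random shares fall out: every share of every amount is counted exactly once across the partial sums, while any single partial sum reveals nothing about an individual participant's money.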

That, in fact, is all. In my opinion, this scheme reveals the essence of "joint confidential computing" as simply and clearly as possible.

Hope it was helpful 🙂