Privacy-preserving phishing email detection with neural networks

Phishing emails are messages crafted to look like legitimate ones, such as a newsletter from your favorite online store, while enticing people to click on malicious links or open malicious attachments.

Increasingly creative phishing campaigns in recent years call for smarter detection systems. This is where deep learning comes to the rescue, namely natural language processing (NLP) with deep neural networks. Training and testing these networks requires a large corpus of emails.

However, users' daily messages contain personal information and are difficult to collect on a central server for training a neural network due to privacy concerns (you don't want some artificial intelligence reading your messages). Therefore, the paper proposes a phishing detection method called Federated Phish Bowl (hereinafter FPB), which combines federated learning with a long short-term memory (LSTM) recurrent neural network.

Let’s take a closer look at how the proposed method works!

Neural networks, namely LSTM

By now almost everyone has at least heard of neural networks, but far fewer know how they work. To begin with, imagine a neural network as a "black box" that can solve a given problem. Here are the most popular kinds of problems:

  • Classification (for example, distinguish cats from dogs)

  • Regression (from a person's description, estimate their age)

There are countless more specific variants. The main thing is to understand what comes in as input to the network and what we want to get out of it.

More specifically, a neural network is a complex function applied to the input data (any picture or text can be represented as matrices or vectors), and the output of this function is a matrix, a vector, or a number, depending on the task.

To work with text sequences, recurrent neural networks (RNNs) were invented; LSTM is an improvement on them, and a bidirectional LSTM processes the sequence in both directions. These networks are called recurrent because the same blocks repeatedly process the elements of a sequence together with the outputs of previous steps. Their strength is that, unlike convolutional networks, for example, they take the order of sequence elements into account (words in a sentence are processed along the timeline).

First, you need an informative vector representation of words, one that captures the similarity between words and their contexts and is of a reasonable size (this is what word embeddings provide).
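As a sketch, an embedding lookup boils down to indexing rows of a matrix. The vocabulary and the randomly initialized matrix below are toy stand-ins; real systems learn the matrix, for example with Word2Vec or as a trainable layer:

```python
import numpy as np

# Hypothetical toy vocabulary; a real pipeline uses a tokenizer and a
# learned embedding matrix rather than random vectors.
vocab = {"click": 0, "here": 1, "to": 2, "verify": 3, "account": 4}
embedding_dim = 8
rng = np.random.default_rng(0)
# Embedding matrix: one row per word, each row a dense vector.
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    """Map a list of tokens to a (sequence length, embedding_dim) matrix."""
    return np.stack([embeddings[vocab[t]] for t in tokens])

message = ["click", "here", "to", "verify", "account"]
vectors = embed(message)
print(vectors.shape)  # (5, 8)
```

Each message thus becomes a matrix of word vectors, which is exactly the form the LSTM described next can consume.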

Let’s take a closer look at LSTM blocks:

This is what one LSTM cell looks like. The figure shows that a cell receives a word vector (Xt, marked in green) together with the short-term and long-term memory vectors from the previous cell (Ht and Ct, respectively); Ht also serves as the cell's output. We feed the entire incoming message through the network word by word, so the output of the last cell has accumulated information about the whole text. That output can therefore be used to decide whether the message is phishing. By showing the network many different messages (phishing and legitimate), we teach it to classify our messages.
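As a hedged sketch of what happens inside one cell, here is the standard LSTM step in numpy. The dimensions and random weights are toy values, not the paper's trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step: takes the word vector x_t plus the short-term (h_prev)
    and long-term (c_prev) memory, and returns the updated memories."""
    d = h_prev.shape[0]
    z = W @ x_t + U @ h_prev + b      # all four gate pre-activations at once
    i = sigmoid(z[0:d])               # input gate
    f = sigmoid(z[d:2 * d])           # forget gate
    g = np.tanh(z[2 * d:3 * d])       # candidate long-term memory
    o = sigmoid(z[3 * d:4 * d])       # output gate
    c_t = f * c_prev + i * g          # update long-term memory
    h_t = o * np.tanh(c_t)            # short-term memory, also the output
    return h_t, c_t

# Run the cell over a toy "message" of 5 random word vectors.
rng = np.random.default_rng(0)
x_dim, h_dim = 8, 4
W = rng.normal(size=(4 * h_dim, x_dim)) * 0.1
U = rng.normal(size=(4 * h_dim, h_dim)) * 0.1
b = np.zeros(4 * h_dim)
h, c = np.zeros(h_dim), np.zeros(h_dim)
for x in rng.normal(size=(5, x_dim)):
    h, c = lstm_cell(x, h, c, W, U, b)
print(h.shape)  # (4,): the last output summarizes the whole sequence
```

The final `h` after the loop plays the role of the "exit from the last cell" described above.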

Various architectural refinements are applied to improve classification quality. For example, the network can be made bidirectional (Bidirectional LSTM).

First, the network processes the words from the beginning of the message to the end, then from the end to the beginning, and finally combines the results of both passes into a more informative representation of the sequence. This approach helps the network not to "forget" information from the beginning of the sequence (which happens when the message is very long).
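The bidirectional wrapper itself is independent of the cell it wraps. Below is a minimal sketch where `step` is a toy stand-in for a full LSTM cell; the forward and backward states are concatenated at each position:

```python
import numpy as np

def bidirectional(seq, step, h0):
    """Sketch of a bidirectional pass: run a recurrent step function forward
    and backward over the sequence, then concatenate the two hidden states
    at each position. `step` stands in for a full LSTM cell."""
    def run(s):
        h, outs = h0, []
        for x in s:
            h = step(x, h)
            outs.append(h)
        return outs

    fwd = run(seq)
    bwd = run(seq[::-1])[::-1]  # reverse back so positions line up
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

# Toy recurrent step: tanh of a mix of input and previous state.
step = lambda x, h: np.tanh(x + 0.5 * h)
seq = [np.ones(3) * i for i in range(4)]
out = bidirectional(seq, step, np.zeros(3))
print(out[0].shape)  # (6,): forward and backward states concatenated
```

Because each output sees both the left and the right context, the representation of early words no longer depends on surviving a single long forward pass.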

FPB suggests using the approach shown in the following image:

Here, three bidirectional LSTMs are stacked, each processing the output of the previous one, and the result is fed into a fully connected layer (you can think of this as multiplying the output by a matrix and then applying a nonlinearity; where would neural networks be without those). This yields a single number per message, which the sigmoid function maps into the range [0, 1], giving the predicted probability that the message is phishing.
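The final step can be sketched in a few lines. The hidden size and the random weights below are made-up stand-ins for the real trained layers; only the shape of the computation matches the description:

```python
import numpy as np

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
hidden = rng.normal(size=64)   # stand-in for the last BiLSTM's summary vector
w = rng.normal(size=64) * 0.1  # fully connected layer weights (hypothetical)
b = 0.0                        # fully connected layer bias
score = w @ hidden + b         # one raw number per message
p_phishing = sigmoid(score)    # the predicted phishing probability
print(p_phishing)
```

Whatever the raw score is, the sigmoid guarantees the output lands in [0, 1], so it can be read directly as a probability and thresholded (say, at 0.5) for the final phishing/legitimate decision.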

Federated learning

We’ve figured out how to determine if a message is phishing. It remains to understand how to maintain the confidentiality of user data. First, let’s introduce the concept of federated learning.

Federated learning is a machine learning technique that trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging that data. This approach differs from traditional centralized machine learning (where all local datasets are uploaded to a single server), as well as from more classical decentralized approaches, which often assume that local datasets are identically distributed. The general principle is to train local models on local data samples and periodically exchange parameters (for example, the weights of a deep neural network) between the local nodes to build a global model shared by all of them.
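The parameter exchange described above is commonly implemented as weighted averaging of locally trained weights (FedAvg-style aggregation). The numbers below are toy data; the point is that only weights, never raw emails, reach the server:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average the clients' locally trained
    weight vectors, weighting each client by its dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients send locally trained weight vectors; nobody sends raw data.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]

global_w = federated_average(clients, sizes)
print(global_w)  # [3.5 4.5]
```

The third client holds half of the total data, so its weights pull the global model twice as hard as the others.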

The main difference between federated and distributed learning lies in the assumptions about the properties of the local datasets: distributed learning aims to parallelize computing power, while federated learning aims to train on heterogeneous datasets. Distributed training also builds a single model across multiple servers, but under the general assumption that the local datasets are independent and identically distributed (iid) and roughly the same size. Neither assumption holds in federated learning; instead, the datasets are typically heterogeneous and can differ in size by several orders of magnitude.

Moreover, the clients (nodes) participating in federated learning can be unreliable: they are more prone to failures, since they typically rely on slower communication links and battery-powered hardware (for example, smartphones). In distributed learning, the nodes are typically data centers with powerful compute connected by fast networks.

The figure shows the private learning method proposed in this article.

Two models run concurrently on the FPB server: one is used for training and the other for real-time inference. Every few training iterations, the retrained model replaces the current real-time model. Two advantages of this architecture are data confidentiality, since the original email texts are considered impossible to recover from the shared model weights, and higher model performance thanks to aggregation.

This article considers an N-client scenario in which each client has the same amount of well-balanced training data. In other words, the data owned by each client is independent and identically distributed (IID).

To train the FPB model with federated learning (FL), the parameter server (PS) first initializes the global deep learning (DL) model based on the bidirectional LSTM networks described above and sends it, together with the global word-to-vector embedding matrix, to all clients (see the figure). In each subsequent round, the PS randomly selects a subset of clients to train locally, and each selected client updates the local DL model on its own training data.
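A single round of this loop can be sketched as follows. The `local_update` and `aggregate` functions are placeholders for real local training and weight averaging, and the one-number "model" is purely illustrative:

```python
import random

def federated_round(global_model, clients, k, local_update, aggregate):
    """One FL round as described: the parameter server picks a random subset
    of k clients, each trains locally, and the server aggregates the results."""
    chosen = random.sample(clients, k)
    local_models = [local_update(global_model, c) for c in chosen]
    return aggregate(local_models)

# Toy example: a "model" is a single number; local training nudges it toward
# the client's data mean; aggregation is a plain average of local models.
clients_data = [1.0, 2.0, 3.0, 4.0]
local_update = lambda m, data: m + 0.5 * (data - m)
aggregate = lambda ms: sum(ms) / len(ms)

random.seed(0)
model = 0.0
for _ in range(5):
    model = federated_round(model, clients_data, k=2,
                            local_update=local_update, aggregate=aggregate)
print(model)
```

Across rounds the global model drifts into the range of the clients' data, even though the server only ever sees the locally updated models, never the clients' data itself.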

Conclusion

We reviewed an effective method for detecting phishing emails that keeps user data confidential.

Sources

  1. Privacy-Preserving Phishing Email Detection Based on Federated Learning and LSTM

  2. LSTM

  3. Bidirectional recurrent neural networks

  4. Word Embeddings

  5. Federated learning
