How I rewrote an application from Go to Rust


Disclaimer

Please note: I am a beginner both in Rust and in programming in general, and there may be errors in the code.

The article is a compilation of my modest experience and opinions, as well as a small comparison of the characteristics of two spherical horses in a vacuum.

I heard about Rust a few years ago, and everyone either praised it or condemned it, for various reasons. But I somehow never got around to it myself: unprepared for such syntax and not familiar with languages of this kind even at a basic level, at the time I found it completely incomprehensible. After a while, though, I decided to write something resembling a benchmark for testing local HTTP API servers.

I am writing an article about this and my experience – maybe it will be useful to some newcomers.

The first version of this “benchmark” was written in Go. On the whole that version suited me: Go works well for small applications and, unlike Rust, ships an HTTP library in the standard package, and fasthttp works even better. But still, a binary weighing as much as 5 MB (and that is after -ldflags “-s -w”) was a little embarrassing.

It is clear that in a world where some people write small applications in Java weighing under 100 MB in total, my application looks very light, but personally this did not suit me.

At that moment I decided I should try to fix this and rewrite it in Rust, because I have neither the skills nor the patience for C++.

The main disadvantages of the first, Go version of the “benchmark”:

  • The size of the final binary. Even after -ldflags "-s -w" and stripping (which shaves off only about 100-200 KB), it is somehow a lot.

  • RAM consumption is higher than it could be. The difference is especially noticeable at a small number of requests; at 10K requests or more there is almost no difference.

  • Unstable behaviour of the “main” goroutine, which, with a target RPS (requests per second) of 1K, could issue anywhere from 600 to ~800 requests per second.

I’ll talk about the pros and cons of Go and Rust in comparison below.

And so, for an easy implementation of an idiomatic application in Rust we need lightweight threads (a.k.a. goroutines), and fortunately Tokio can provide them for us! This library gives us Go-style functionality in the form of coroutines (tasks) and channels, only in Rust and better.

“Better” in terms of smaller binary weight and, in my opinion, higher performance thanks to the language itself.
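As a rough illustration (my own minimal sketch, not code from the benchmark), spawned Tokio tasks plus an mpsc channel map fairly directly onto goroutines communicating over a Go channel:

use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // A bounded channel - roughly a buffered Go channel
    let (tx, mut rx) = mpsc::channel::<u32>(16);

    // Spawn a few lightweight tasks - the Tokio counterpart of goroutines
    for i in 0..4 {
        let tx = tx.clone();
        tokio::spawn(async move {
            tx.send(i).await.unwrap();
        });
    }
    drop(tx); // close the channel so the receive loop below can finish

    while let Some(v) = rx.recv().await {
        println!("got {v}");
    }
}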

And so, we have found ourselves a “runtime” – Tokio. But Rust does not have a standard library for working with HTTP, so here I decided to use Hyper, because Reqwest is simply huge and performs even worse than the standard library in Go, while ureq is still bigger than Hyper yet hardly differs from it in performance.

We will also use a command line argument parser – argparse – and the lazy_static macro for the “global variables”.

The resulting Cargo.toml:

[package]
name = "akvy"
version = "0.2.0"
edition = "2021"

[dependencies]
tokio = { version = "1.24.2", features = ["full"] }
hyper = { version = "0.14", features = ["full"] }
lazy_static = "1.4.0"
argparse = "0.2.2"

[profile.release]
lto = true
strip = true

The settings in the release profile are there to reduce the size. Strip, because the application is not meant to be debugged outside of debug mode anyway, and I want to shrink the binary as much as possible.

Let’s start parsing the code.

For those who are impatient, here is a link to GitHub with the actual code, and here we will analyze the main points with explanations.

It’s worth starting with the central function of the entire application:

async fn get(uri: Uri) {

    // Record the start time so we can measure the response time
    let start = Instant::now();

    // Create a client object and perform the request to the given URL
    let client = Client::new();
    let resp = client.get(uri).await;

    /*
      The lock is taken inside { .. } so that it is released right away.
      As far as I know, the compiler should release the lock
      within this same scope.
    */
    {
        REQ_TIME
            .lock()
            .unwrap()
            .push(start.elapsed().as_millis() as u32);
    }

    // If the request itself failed - ERRORS += 1 and return
    if resp.is_err() {
        *ERRORS
            .lock()
            .unwrap() += 1;
        return;
    }

    // If the HTTP status is an error - ERRORS += 1
    if !resp.unwrap().status().is_success() {
        *ERRORS
            .lock()
            .unwrap() += 1;
    }
}

Speaking of “global variables” – these are two Arc<Mutex<T>> packed into the lazy_static! { … } macro:

lazy_static! {

    // Stores an array of u32 values in milliseconds;
    // it is used to compute the average, maximum and minimum
    // response time, and its length gives the number of requests.
    static ref REQ_TIME: Arc<Mutex<Vec<u32>>> = Arc::new(Mutex::new(Vec::new()));

    // u128 that stores the number of errors.
    static ref ERRORS: Arc<Mutex<u128>> = Arc::new(Mutex::new(0));

}
A little about Arc<Mutex<T>>

Arc<Mutex<T>> is used to read and modify shared variables safely: only the code that has locked the Mutex can work with the data under it, and once it is done the lock is released so another task can use the variable, and so on.

T – any data type.
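A tiny standalone sketch of that locking behaviour (illustrative only, not part of the benchmark):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc gives shared ownership across threads, Mutex gives exclusive access
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // lock() waits until the mutex is free;
                // the guard releases the lock when it goes out of scope
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("count = {}", *counter.lock().unwrap());
}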

Let’s immediately consider the function of parsing from text to Uri:

fn parse_url(url: String) -> Uri {

    // If the URL does not contain HTTPS, try to parse it;
    // a URL with HTTPS makes the application exit below
    if !url.contains("https://") {
        let uri = url.parse();
        if uri.is_err() {
            println!("URL error!");
            exit(1)
        }
        return uri.unwrap();
    }

    println!("App work only with HTTP!");
    exit(1)
}

Everything here is standard apart from the check for “https://” in the string. The thing is that out of the box Hyper does not support HTTPS – you need to pull in extra dependencies – and, firstly, that would most likely add to the binary size; secondly, the application is meant to test local HTTP servers, not to attack other people’s HTTPS sites; and thirdly, I am too lazy for now.

The function uses the standard .parse() method, and everything else is just a convenient wrapper around it.
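For illustration, the parsing itself boils down to this (a minimal sketch with a made-up URL; hyper::Uri implements FromStr):

use hyper::Uri;

fn main() {
    // .parse() returns a Result, so a bad URL can be handled instead of panicking
    let uri: Uri = "http://localhost:8080/ping".parse().expect("URL error!");
    println!("host: {:?}, path: {}", uri.host(), uri.path());
}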

Now let’s go through main() from top to bottom.

Set the default values for the application:

let mut url_in = String::from("http://localhost:8080");
let mut rps: u16 = 10;

And parse the command line arguments:

{
    // Create the parser object and its description
    let mut ap = ArgumentParser::new();
    ap.set_description("Set app parameters");

    // Parse the URL into the url_in variable
    ap.refer(&mut url_in)
        .add_option(
            &["-u", "--url"], // Flags
            Store, // Store - put the value into the variable
            "Target URL for bench"); // Description for -h

    // Parse the RPS into the rps variable
    ap.refer(&mut rps)
        .add_option(
            &["-r", "--rps"],
            Store,
            "Target number of requests per second"
        );

    // The actual argument parsing
    ap.parse_args_or_exit();
}

Next, we parse our string into a Uri and print the benchmark parameters to the console:

let url = parse_url(url_in);
println!("\n{} | {}", url, rps);

// And record the test start time
let start = Instant::now();

We also need to create our “endless” loop, which will call get(url) at a fixed interval in a separate task (a task here is essentially the same thing as a goroutine).

let mut interval = time::interval(Duration::from_micros(1_000_000 / rps as u64));

// Create the main task,
// which will spawn the other tasks in a loop
tokio::spawn(async move {
    loop {
        // Clone the URL from main into the scope of the loop -
        // the ownership concept at work
        let url = url.clone();

        // Spawn the task in which the request will run
        tokio::spawn(async move {
            get(url).await; // await is required, since the function is async
        });

        // Wait for the given time and reset the interval,
        // then repeat the loop
        interval.tick().await;
    }
});

Here we create an Interval that fires with the required period. It is important to note that you cannot simply use tokio::time::sleep here, because with intervals shorter than ~100 microseconds such a loop would not keep up: sleep guarantees to sleep no less than the specified time, but it may well sleep longer.
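A small sketch of the difference (my own illustration, not code from the project): an interval fires on a fixed schedule and can catch up after a slow iteration, while sleep always waits the full duration after the work in the loop body, so per-iteration overhead accumulates as drift:

use std::time::Duration;
use tokio::time::{self, Instant};

#[tokio::main]
async fn main() {
    // Fires every 10 ms on a fixed schedule (the first tick completes immediately)
    let mut interval = time::interval(Duration::from_millis(10));
    let start = Instant::now();

    for _ in 0..100 {
        interval.tick().await;
        // ... per-request work would go here ...
    }

    // With an interval the total stays close to ~1 second despite small overhead
    // in the loop body; a sleep-based loop would fall further and further behind.
    println!("elapsed: {:?}", start.elapsed());
}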

Because the main loop spins in a separate task, execution continues past it, and we need to terminate the application correctly. IMHO, the best way is to handle Ctrl + C in the console:

// Create the Ctrl + C signal handler
let mut stream = signal(SignalKind::interrupt()).unwrap();

// Wait for the signal; the application does not go any further without it
stream.recv().await;

// Record the elapsed time
let end = start.elapsed();
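As a side note, signal(SignalKind::interrupt()) comes from tokio::signal::unix and is Unix-only; a rough cross-platform equivalent (a sketch, not what the benchmark actually uses) would be Tokio’s built-in Ctrl + C future:

use std::time::Instant;

#[tokio::main]
async fn main() {
    let start = Instant::now();

    // Resolves once the process receives Ctrl + C
    tokio::signal::ctrl_c().await.expect("failed to listen for Ctrl + C");

    println!("Elapsed: {:.2?}", start.elapsed());
}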

And then comes a big block that prints out the gathered information:

{
    // Move the data out of the Mutexes into local variables
    let req = REQ_TIME.lock().unwrap().to_vec();
    let err = *ERRORS.lock().unwrap();

    // Find the minimum response time
    let min: u32 = match req.iter().min() {
        Some(min) => *min,
        None => 0
    };

    // Find the maximum response time
    let max: u32 = match req.iter().max() {
        Some(max) => *max,
        None => 0
    };

    // Compute the average response time,
    // guarding against an empty array (an empty array gives sum == 0)
    let sum = req.iter().sum::<u32>() as u128;
    let average: u32 = {
        if sum != 0 {
            (sum as u32 / req.len() as u32) as u32
        } else {
            0
        }
    };

    // Nicely print all the accumulated information

    print!("\n\n");
    println!("Elapsed:             {:.2?}", end);
    println!("Requests:            {}", &req.len());
    println!("Errors:              {}", err);
    println!("Percent of errors:   {:.2}%", percent_of_errors(req.len(), &err));
    println!("Response time: \
            \n - Min:              {}ms \
            \n - Max:              {}ms \
            \n - Average:          {}ms", min, max, average);
}
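The percent_of_errors helper is not shown here – it lives in the repository. A minimal sketch of what it has to do, matching the call above (my guess, not necessarily the exact code from GitHub):

// Share of failed requests as a percentage of all requests
fn percent_of_errors(requests: usize, errors: &u128) -> f64 {
    if requests == 0 {
        return 0.0;
    }
    (*errors as f64 / requests as f64) * 100.0
}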

Comparing Go and Rust

This comparison is in itself wrong, immoral and deserving of punishment by the vice police, but we will do it anyway. Yes, let’s compare high-level Go with low-level Rust. The comparison is in itself a compliment to Go: nobody would think of comparing, say, Python and Rust on performance, yet Go gets compared to Rust all the time.

Let’s measure with numbers:

All tests were run on my laptop – a MacBook Air M1 with 8 GB of RAM – with HTTP requests to http://httpbin.org/ip

                                                             Rust         Go
Binary weight                                                1.5 MB       5.6 MB
RAM consumption after a minute at 10K RPS                    28.6 MB*     25.7 MB*
Execution time for 100K requests, capped at 10K per second   10.03 sec    12.09 sec

*Result of the one-minute test in Go:

{
  "req_count": 471213,
  "err_count": 441348,
  "average_response_time_ms": 68.38669,
  "max_response_time_ms": 7031,
  "min_response_time_ms": 0,
  "time_of_bench_sec": 61.92429,
  "percent_of_errors": 93.6621
}

*Result of the one-minute test in Rust:

http://httpbin.org/ip | 10000

Elapsed:             60.64s
Requests:            606176
Errors:              603539
Percent of errors:   99.56%
Response time: 
 - Min:              0ms 
 - Max:              36195ms 
 - Average:          17ms

So Go consumes less RAM than Rust? Has the plastic world won?

Well, not really… As you can see from the results of both one-minute tests, Go failed to complete about another 130K of the required requests, hence the lower memory consumption. Still, it pleasantly surprised me – or rather, not Go itself, but fasthttp. If we had used the standard net/http library, the gap in both RAM and the number of requests would have been much larger.

It is clear that these are just numbers and they do not reflect the real state of affairs, but they exist and I have shown them. And yes, the result was expected.

Pros and cons of Rust versus Go

Pros:

  • Performance

  • Binary size

  • No GC (Garbage Collector)

  • No runtime

  • Good OOP (Yes, not standard, but that’s what I like about it, IMHO)

  • Smart compiler with many optimizations.

  • Interoperability. In Rust you can write a library for Go, Python, Ruby, etc., or use it together with C/C++

Cons:

  • Difficulty of learning. Both in mastering the syntax and the concepts of ownership and lifetimes, and in the libraries, which are sometimes much harder to use than in Go.

  • It is harder to make a cross-platform application. For example, from my M1 I cannot simply compile a Rust binary for Linux or Windows, while with Go it is easy.

  • VS Code configured for Rust is just disgusting – again, IMHO. And no, I did not spend three hours setting it up, as some recommend in such situations.

  • I haven’t run into it myself, but many people claim that Rust still has problems with async I/O. I can’t say; I don’t have much experience there.

Actually, that is all the little I managed to learn about Rust in a couple of months of lazy study. If you need a conclusion – use whatever you like best. Go is ideal for API servers and the like, where the main load falls on the network and the disks, while Rust is good for computation. Besides, no one forbids combining them.
