How to build and configure your own CDN

A CDN (Content Delivery Network) is a group of servers hosted in different geographic regions that provides fast content delivery to users in those regions. Most often, content delivery networks are used to speed up the loading of static files: images, videos, scripts, archives. Each CDN server simply stores the same files, and the user receives them from the nearest one.

Content storage in most content delivery networks is organized as follows: the CDN server, having received a request for a file from the user for the first time, downloads it from the original server to itself, caches it and immediately gives it to the user. For all subsequent requests, the file is already served from the cache. Some services allow you to configure the storage duration of cached data, as well as their preloading (precache).

Sometimes you may need to set up your own content delivery network. Let's look at why this is needed and how to do it.



This is our future CDN of 5 servers, which will distribute content to the whole world

Why set up your own CDN

Here are a few cases where it makes sense to build your own content delivery network:

  • the fees of a third-party CDN service significantly exceed the cost of running your own
  • you want a guaranteed channel and fixed disk space for a persistent cache, rather than sharing them with other clients
  • special rules for storing and delivering content are required
  • several production servers are placed in different regions to speed up delivery of dynamic data
  • you do not want third-party services to collect and store data about your users, such as IP addresses and requested URLs
  • other services have no points of presence in the region you need

In most other cases, it is better to use the services of a paid CDN service.

Getting ready to launch

To implement our plan we will need:

  1. Several servers in different regions, possibly virtual (VPS)
  2. A dedicated subdomain; in our example it will be cdn.nashsait.org
  3. A geoDNS service that directs a user accessing the subdomain to the server closest to their region

We rent servers and set up geoDNS

First, let’s determine where the main user audience is located. In our example, this is Kazakhstan, so we definitely need to have a point of presence in this region so that everything works as quickly as possible for most users. The remaining servers will be “scattered” around the world. It is most convenient to rent them from hosting providers that offer several locations for hosting.

We will order 5 virtual servers with 25GB disk, unlimited traffic and the latest Debian or Ubuntu:

  • Kazakhstan — IP: 86.104.73.235
  • Netherlands — IP: 94.232.245.17
  • USA — IP: 45.89.53.214
  • Brazil — IP: 95.164.5.110
  • Japan — IP: 5.253.41.115

To ensure that a user accessing cdn.nashsait.org is directed to the server closest to them, we need working DNS with geoDNS support. You can configure it yourself or use a ready-made service such as ClouDNS.

We will use ClouDNS: register, select a GeoDNS plan, and add a new DNS zone in your account, specifying our main domain nashsait.org. During zone creation you will be asked to select the future NS servers for the domain. Mark all the available ones and copy them into a separate text file for later.

If we are already using the main domain (for example, it hosts a website or runs email), then immediately after creating the zone we need to add existing working DNS records.

Then, for the subdomain cdn.nashsait.org, you need to create several A-records, each of which, depending on the user's region, points to one of our CDN servers. Regions can be continents, countries, or individual states (for the USA and Canada). Let's start with South America and route all requests from there to the server in Brazil:

Let's do the same for the other regions; it is also recommended to mark one entry as the "Default" region. As a result, the list of A-records will look like this:

The bottom entry “Default” means that all other regions not specified in other entries (Europe, Africa, satellite Internet, etc.) will be sent to a server in the Netherlands.
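The original screenshots are not reproduced here; in zone-file notation such a set of records might look like the sketch below. Only the Brazil (South America) and Netherlands (Default) mappings are stated in the text, so the remaining region assignments are illustrative:

```
; GeoDNS A-records for cdn.nashsait.org (region mapping partly illustrative)
cdn   A   95.164.5.110    ; South America  -> Brazil
cdn   A   45.89.53.214    ; North America  -> USA
cdn   A   86.104.73.235   ; Asia           -> Kazakhstan
cdn   A   5.253.41.115    ; Oceania        -> Japan
cdn   A   94.232.245.17   ; Default        -> Netherlands (Europe, Africa, everything else)
```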

This completes the geoDNS setup, all that remains is to go to the website of our domain registrar and change the NS server for nashsait.org to those that we previously copied into a separate text file.

Adding SSL certificates

To make the CDN work over HTTPS, we will install a free SSL certificate from Let's Encrypt. This is conveniently done with the acme.sh shell script, which can validate the domain over DNS via the ClouDNS API.

Just install acme.sh on one of the servers and then copy the resulting certificate to all the others. Let's install it on the server in the Netherlands:

root@cdn:~# wget -O - https://get.acme.sh | bash; source ~/.bashrc

It is worth noting that during installation a separate CRON task is created to automatically update certificates in the future.

When issuing a certificate, domain verification is performed over DNS, and the records required for it are added to ClouDNS automatically through their API. So, in the ClouDNS account, open the "API & Resellers" menu and create a new API user with a password. The resulting auth-id, together with that password, must be entered in the file ~/.acme.sh/dnsapi/dns_cloudns.sh (not to be confused with the similarly named dns_clouddns.sh). These are the lines in that file that need to be uncommented and edited:

CLOUDNS_AUTH_ID=<auth-id>
CLOUDNS_AUTH_PASSWORD="<password>"

Next, let's obtain an SSL certificate for cdn.nashsait.org:

root@cdn:~# acme.sh --issue --dns dns_cloudns -d cdn.nashsait.org --server letsencrypt --reloadcmd "service nginx reload"

The process may take several minutes; if a subdomain verification error occurs, try running the command again. On success, a list of the installed certificate's files will appear on the screen:

Let's remember these paths; they will be needed when copying the certificate to the other CDN locations and when configuring the web server. The "Reload error for" message is harmless and will no longer appear once certificates are renewed on a fully configured server.

Let's log into the other four servers and copy the received certificate to each of them, creating the appropriate directories so that the file paths are identical everywhere:

root@cdn:~# mkdir -p /root/.acme.sh/cdn.nashsait.org_ecc/
root@cdn:~# scp -r root@94.232.245.17:/root/.acme.sh/cdn.nashsait.org_ecc/* /root/.acme.sh/cdn.nashsait.org_ecc/

This copying needs to happen regularly, so on each of those four servers we add a daily run of the command to CRON:

scp -r root@94.232.245.17:/root/.acme.sh/cdn.nashsait.org_ecc/* /root/.acme.sh/cdn.nashsait.org_ecc/ && service nginx reload
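As a sketch, the corresponding crontab entry (edit with crontab -e; the 03:00 schedule is an arbitrary choice) could look like this:

```
# Daily at 03:00: pull the renewed certificate from the Netherlands server and reload Nginx
0 3 * * * scp -r root@94.232.245.17:/root/.acme.sh/cdn.nashsait.org_ecc/* /root/.acme.sh/cdn.nashsait.org_ecc/ && service nginx reload
```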

For this to work, key-based SSH access from each of the other four servers to the Dutch server must be configured so that no password prompt appears. Be sure to set this up.
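A minimal sketch of that setup, assuming root logins and default key paths (the plain scp in the CRON job will only pick up a key at the default location). Run on each of the four mirrors:

```shell
# Create a passphrase-less key at the default path (if one does not exist yet)
# and print its public half, which must be appended to
# /root/.ssh/authorized_keys on the Netherlands server (94.232.245.17).
mkdir -p "$HOME/.ssh"
test -f "$HOME/.ssh/id_ed25519" || ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519" -q
cat "$HOME/.ssh/id_ed25519.pub"
```

While password authentication is still enabled on the Dutch server, `ssh-copy-id root@94.232.245.17` performs the append in one step.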

Setting up Nginx

At all five CDN points, we will install Nginx as a web server for distributing content and configure it as a caching proxy server:

root@cdn:~# apt update
root@cdn:~# apt install nginx

Replace the default config file /etc/nginx/nginx.conf with the one below:

nginx.conf

user www-data;
worker_processes auto;
pid /run/nginx.pid;

events {
    worker_connections 4096;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log off;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_disable "msie6";
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_vary on;
    gzip_types text/plain application/javascript text/javascript text/css application/json application/xml text/xml application/rss+xml;
    gunzip on;            

    proxy_temp_path    /var/cache/tmp;
    proxy_cache_path   /var/cache/cdn levels=1:2 keys_zone=cdn:64m max_size=20g inactive=14d;
    proxy_cache_bypass $http_x_update;

    server {
        listen 443 ssl;
        server_name cdn.nashsait.org;

        ssl_certificate /root/.acme.sh/cdn.nashsait.org_ecc/fullchain.cer;
        ssl_certificate_key /root/.acme.sh/cdn.nashsait.org_ecc/cdn.nashsait.org.key;

        location / {
            proxy_cache cdn;
            proxy_cache_key $uri$is_args$args;
            proxy_cache_valid 90d;
            proxy_pass https://nashsait.org;
        }
    }
}

In this config, adjust:

  • max_size — the cache size; no more than the available disk space
  • inactive — how long cached files that have not been accessed are kept
  • ssl_certificate and ssl_certificate_key — absolute paths to the SSL certificate files
  • proxy_cache_valid — how long cached files are considered fresh
  • proxy_pass — the URL of the site from which the CDN downloads and caches files; in our case, nashsait.org

Note the similarity of the inactive and proxy_cache_valid directives. To avoid confusion, let's look at them with a simple example. This is what happens with inactive=14d and proxy_cache_valid 90d:

  • if a file is not requested within 14 days, it is deleted from the cache
  • if a file is requested at least once every 14 days, it is considered fresh for 90 days; after that period, on the next request Nginx will download it from the origin server again

After setting the required values in nginx.conf, apply them:

root@cdn:~# service nginx reload

Please note that Nginx will not cache responses received from the origin server with cookies (a "Set-Cookie" header). You can override this by adding the following directives to the config:

proxy_ignore_headers "Set-Cookie";
proxy_hide_header "Set-Cookie";

This completes the setup. Additionally, you can create a bash script to clear the cache:

purge.sh

#!/bin/bash
# Purge the Nginx proxy cache: with no argument, wipe everything;
# with a URI argument, delete the single matching cached file.
if [ -z "$1" ]
then
    echo "Purging all cache"
    rm -rf /var/cache/cdn/*
else
    echo "Purging $1"
    # The cache key is $uri$is_args$args, so the file name is the MD5 of the URI;
    # with levels=1:2, the last hex character and the two before it form the subdirectories
    FILE=$(echo -n "$1" | md5sum | awk '{print $1}')
    FULLPATH=/var/cache/cdn/${FILE:31:1}/${FILE:29:2}/${FILE}
    rm -f "${FULLPATH}"
fi

Running the script without arguments deletes the entire cache; a single file can be purged like this:

root@cdn:~# ./purge.sh /test.jpg
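The hashing in the script mirrors how Nginx lays out the cache: with proxy_cache_key $uri$is_args$args and levels=1:2, the cached file is named after the MD5 of the key, with the last hex character as the first-level directory and the two characters before it as the second level. A quick sketch that prints where /test.jpg would be cached:

```shell
# Reproduce the cache-file path Nginx would use for the key "/test.jpg"
KEY="/test.jpg"
HASH=$(printf '%s' "$KEY" | md5sum | awk '{print $1}')   # 32 hex characters
echo "/var/cache/cdn/${HASH:31:1}/${HASH:29:2}/${HASH}"
```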

Testing our CDN

Using online ping services, you can check the latency to our content delivery network from different locations:

The ping is good. Now let's place a test image test.jpg in the root of the main site and use Ping-Admin to check its download speed via the CDN:

Everything works, content is distributed quickly. Now we have our own working CDN with unlimited traffic and points of presence on all continents.
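The same check can be sketched from the command line with curl (assuming test.jpg is already uploaded):

```
# Download the test image through the CDN, reporting HTTP status and total time
curl -o /dev/null -s -w 'code=%{http_code} time=%{time_total}s\n' https://cdn.nashsait.org/test.jpg
```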
