Let someone else take care of your problems

Today we will look at implementing the Sidecar pattern in Go.

Implementing Sidecar in Go

What we will do: create a main microservice and, next to it, a Sidecar responsible for a simple task – logging and proxying requests.

Let's start with a simple HTTP server that listens on port 8080 and returns a short message.

package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from Main Service!")
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Main Service running on port 8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Nothing complicated. This is the main service that accepts HTTP requests and responds to them.

Now let's create the Sidecar service. Its responsibilities will include logging all requests that pass through it and proxying them to the main service.

package main

import (
	"io"
	"log"
	"net/http"
)

func proxyHandler(w http.ResponseWriter, r *http.Request) {
	// Log every request that passes through the sidecar
	log.Printf("Received request: %s %s", r.Method, r.URL.Path)

	// Forward the request to the main service
	resp, err := http.Get("http://localhost:8080" + r.URL.Path)
	if err != nil {
		http.Error(w, "Error in Sidecar", http.StatusInternalServerError)
		return
	}
	defer resp.Body.Close()

	// Copy the main service's response back to the client
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/", proxyHandler)
	log.Println("Sidecar Service running on port 8081")
	log.Fatal(http.ListenAndServe(":8081", nil))
}

Here the Sidecar receives requests on port 8081, logs them, and proxies them to the main service running on port 8080.

Let's start both services:

go run main_service.go

and in another console:

go run sidecar_service.go

Now if we send an HTTP request to localhost:8081, we will see a response from the main service and an entry in the Sidecar logs:

curl localhost:8081
# Output: Hello from Main Service!

Sidecar Application Examples

Traffic logging and monitoring via Sidecar

Let's say there is a microservice that serves HTTP requests, and you need to add logging of all incoming requests, but without interfering with the main service code. We use Sidecar for this task.

Main service:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello from Main Service!")
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Main Service running on port 8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Sidecar for request logging:

package main

import (
	"io"
	"log"
	"net/http"
)

func proxyHandler(w http.ResponseWriter, r *http.Request) {
	// Log the request
	log.Printf("Request: %s %s", r.Method, r.URL.Path)

	// Proxy the request to the main service
	resp, err := http.Get("http://localhost:8080" + r.URL.Path)
	if err != nil {
		http.Error(w, "Error in Sidecar", http.StatusInternalServerError)
		return
	}
	defer resp.Body.Close()

	// Copy the main service's response back to the client
	w.WriteHeader(resp.StatusCode)
	io.Copy(w, resp.Body)
}

func main() {
	http.HandleFunc("/", proxyHandler)
	log.Println("Sidecar running on port 8081")
	log.Fatal(http.ListenAndServe(":8081", nil))
}

In this example, the Sidecar works as a proxy between the client and the main service, logging all requests before forwarding them. Requests are sent to port 8081, where the Sidecar runs, and then proxied to the main service on port 8080.

Adding caching via Sidecar

Challenge: add caching for frequently requested data without changing the logic of the main service. You can use a Sidecar that caches responses from the main service and returns the cached data for repeated requests.

Main service:

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Simulate slow processing
	time.Sleep(2 * time.Second)
	fmt.Fprintf(w, "Data from Main Service!")
}

func main() {
	http.HandleFunc("/", handler)
	fmt.Println("Main Service running on port 8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}

Sidecar for caching:

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"sync"
	"time"
)

// Simple structure for storing the cache
type Cache struct {
	data      map[string]string
	expiry    map[string]time.Time
	cacheLock sync.RWMutex
}

// Initialize the cache
var cache = Cache{
	data:   make(map[string]string),
	expiry: make(map[string]time.Time),
}

// How long entries live in the cache
const cacheDuration = 10 * time.Second

// Check whether the cache holds fresh data for this path
func getFromCache(path string) (string, bool) {
	cache.cacheLock.RLock()
	defer cache.cacheLock.RUnlock()

	data, found := cache.data[path]
	if !found || time.Now().After(cache.expiry[path]) {
		return "", false
	}
	return data, true
}

// Store data in the cache
func saveToCache(path, response string) {
	cache.cacheLock.Lock()
	defer cache.cacheLock.Unlock()

	cache.data[path] = response
	cache.expiry[path] = time.Now().Add(cacheDuration)
}

// Proxy with caching
func proxyHandler(w http.ResponseWriter, r *http.Request) {
	// Check the cache first
	if cachedData, found := getFromCache(r.URL.Path); found {
		fmt.Fprint(w, cachedData)
		log.Printf("Served from cache: %s", r.URL.Path)
		return
	}

	// On a cache miss, forward the request to the main service
	resp, err := http.Get("http://localhost:8080" + r.URL.Path)
	if err != nil {
		http.Error(w, "Error in Sidecar", http.StatusInternalServerError)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		http.Error(w, "Error in Sidecar", http.StatusInternalServerError)
		return
	}

	// Save the response in the cache
	saveToCache(r.URL.Path, string(body))

	// Return the response to the client
	w.WriteHeader(resp.StatusCode)
	w.Write(body)
}

func main() {
	http.HandleFunc("/", proxyHandler)
	log.Println("Sidecar with Caching running on port 8081")
	log.Fatal(http.ListenAndServe(":8081", nil))
}

In this example, the Sidecar caches responses from the main service for 10 seconds. If the same path is requested again within that window, the client receives the data from the cache rather than from the main service.


Conclusion

And remember: the main thing is not to overload the Sidecar, and to understand clearly where the tasks of the main service end and the Sidecar's responsibilities begin.

On October 28, there will be an open lesson, “Ways to separate microservices into components.” Practical examples will show how to properly structure a microservice architecture to improve the scalability and manageability of systems. In particular, we will analyze the most effective approaches to decomposing services based on domain models and data. You can sign up for the lesson on the Software Architect course page.
