Tracing in Spring Boot 3 using Zipkin and Kafka as transport

Preface

The insurance company AlfaStrakhovanie (hereinafter referred to as SK), where I work, sells insurance products not only in its offices, but also through a network of insurance agents. Agents can issue policies using the REST API. This API is developed and maintained by the team I am lucky enough to work with.

The article contains both context and technical material, so if you are only interested in the technical part, feel free to skip ahead to the “Prerequisites” section.

The article does not contain a description of Zipkin and the concept of traces. You can read about Zipkin on the official website.

Context

A little about how an insurance product is sold

The main sales operations are:

  • Insurance product calculation;

  • Saving insurance policy data in insurance company core systems;

  • Payment;

  • Printing documents.

By design, the operations above are performed with time delays between them. For example, having calculated an insurance product, we give the client time to make a decision. If the decision is positive, saving, payment, etc. are performed based on that calculation. We use a microservice architecture, dividing the operations into services responsible for them.

It is worth noting that each such service, in turn, accesses dozens of other core services. The main framework used to write such services is Spring Boot.

When things go wrong

The call chains described above sometimes return HTTP 500, after which the classic investigation begins – figuring out what went wrong. Our company standard requires logs and traces, so getting to the root cause of an error is not difficult. With that understanding, fixes are then made to the systems (or the end user is advised).

There are many services, Zipkin is one

Zipkin is used for application tracing, with Kafka as the transport. This approach is the accepted standard within the organization. Our bright future in terms of traces looks like this: having traces from all services, we can easily follow call chains and understand with minimal effort what went wrong (if something went wrong).

Prerequisites

So, what do we have as input:

  • Java 17;

  • Service written in Spring Boot 3.1.2;

  • Corporate Zipkin, receiving traces via Kafka transport;

  • Corporate Kafka with a topic in which traces should be written;

  • Autoconfiguration is needed (since there are many services, I would not like to write an implementation for each); it will be addressed in the second step.

Step 1. Implementation based on an existing service

In this section (as in the subsequent ones) we will not dive into the implementation details of the existing service. The introductions above are sufficient to define the technical environment we will be working in.

Task of the first step

Add configuration that allows the service to send traces to Zipkin over the Kafka transport, and provide the ability to collect traces within the application – not only between services, but also between components of the same service.

Implementation of the first step

Dependencies

For the implementation we need the following dependencies. Note that the service was built with the parent org.springframework.boot:spring-boot-starter-parent:3.1.2, from which all versions are inherited. The versions are listed below for reference.

<!-- Adds observation support to the application -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <version>3.1.2</version>
</dependency>
<!-- For creating observations using @Observed -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
    <version>3.1.2</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>3.0.9</version>
</dependency>
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
    <version>1.1.3</version>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
    <version>2.16.3</version>
</dependency>
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-sender-kafka</artifactId>
    <version>2.16.3</version>
</dependency>

Properties

These properties will be required to set up the connection to the Kafka topic to which we will send traces.
Let’s move them to a separate configuration branch. We get something like this:

@ConfigurationProperties("custom.tracing")
public record CustomTracingProperties(
        /* Kafka Producer username */
        String username,
        /* Kafka Producer password */
        String password,
        /* Kafka Producer topic */
        String topic,
        /* Kafka Producer bootstrap servers */
        String bootstrapServers
) {
}

They are ultimately populated from this application.yaml fragment:

custom:
    tracing:
        username: kafka login
        password: password for that login
        topic: topic to write traces to
        bootstrap-servers: broker1,broker2
@Test: CustomTracingProperties should collect properties from `application.yaml`
@SpringBootTest(
        properties = {
                "custom.tracing.bootstrap-servers=server1,server2",
                "custom.tracing.password=pass",
                "custom.tracing.username=user",
                "custom.tracing.topic=topic"
        }
)
@EnableConfigurationProperties(CustomTracingProperties.class)
class CustomTracingPropertiesTest {

    @Autowired
    private CustomTracingProperties properties;

    @Test
    void should_FillProps() {
        Assertions.assertThat(properties)
                .isNotNull()
                .satisfies(props -> {
                    Assertions.assertThat(props.username())
                            .isNotBlank()
                            .isEqualTo("user");
                    Assertions.assertThat(props.password())
                            .isNotBlank()
                            .isEqualTo("pass");
                    Assertions.assertThat(props.topic())
                            .isNotBlank()
                            .isEqualTo("topic");
                    Assertions.assertThat(props.bootstrapServers())
                            .isNotEmpty()
                            .contains("server1", "server2");
                });
    }

    @ConfigurationPropertiesScan(basePackageClasses = CustomTracingProperties.class)
    @TestConfiguration
    static class TestConfig {
    }

}

KafkaSender

Now we configure the KafkaSender manually. You will need to supply a valid authentication method for your Kafka; this example uses SASL/SCRAM-SHA-256 over the SASL_PLAINTEXT protocol.

Knowing all the inputs, we assemble the configuration:

@Configuration(proxyBeanMethods = false)
@EnableConfigurationProperties({KafkaProperties.class, CustomTracingProperties.class})
@RequiredArgsConstructor
public class KafkaSenderConfiguration {

    private final CustomTracingProperties customTracingProperties;

    @Bean("zipkinSender")
    public Sender kafkaSender(KafkaProperties config, Environment environment) {

        // Adding properties of Kafka for tracing
        final Map<String, Object> properties = config.buildProducerProperties();

        // Bootstrap servers are taken from CustomTracingProperties, parsing the string into a list
        properties.put("bootstrap.servers", STRING_TO_LIST.apply(customTracingProperties.bootstrapServers(), ","));

        // Key/Value serializers
        properties.put("key.serializer", ByteArraySerializer.class.getName());
        properties.put("value.serializer", ByteArraySerializer.class.getName());

        // SASL properties
        properties.put("sasl.jaas.config",
                String.format("org.apache.kafka.common.security.scram.ScramLoginModule required username=%s password=%s;",
                        customTracingProperties.username(),
                        customTracingProperties.password()));
        properties.put("sasl.mechanism", "SCRAM-SHA-256");

        // Security
        properties.put("security.protocol", "SASL_PLAINTEXT");

        // Client Id
        final String serviceName = environment.getProperty("spring.application.name");
        properties.put("client.id", serviceName);

        // Building sender with properties
        return KafkaSender
                .newBuilder()
                .topic(customTracingProperties.topic())
                .overrides(properties)
                .build();
    }
}

The example above shows how to use KafkaSender.newBuilder(). We fill in the properties required to connect to Kafka, including values from CustomTracingProperties and constants. The constants in the example are written as literals for clarity.
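To make the SASL setup concrete, here is a standalone sketch that mirrors the String.format call from the configuration above (the class and method names are only for illustration):

```java
public class JaasConfigDemo {

    // Mirrors the sasl.jaas.config value assembled in KafkaSenderConfiguration
    static String buildJaas(String username, String password) {
        return String.format(
                "org.apache.kafka.common.security.scram.ScramLoginModule required username=%s password=%s;",
                username, password);
    }

    public static void main(String[] args) {
        // prints: org.apache.kafka.common.security.scram.ScramLoginModule required username=user password=pass;
        System.out.println(buildJaas("user", "pass"));
    }
}
```

Note that in many Kafka setups the username and password values are wrapped in double quotes (username="%s"); the format above simply mirrors the configuration as written, which works for credentials without special characters.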

Note. The broker string is parsed into a list as follows:

BiFunction<String, String, List<String>> STRING_TO_LIST = (sequence, delimiter) ->
                Arrays.stream(sequence.split(delimiter))
                        .map(String::trim)
                        .toList();
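As a quick sanity check, the function can be exercised in isolation (a standalone sketch; the wrapping class name is only for illustration):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.BiFunction;

public class StringToListDemo {

    // The same function as above: split on the delimiter and trim each element
    static final BiFunction<String, String, List<String>> STRING_TO_LIST = (sequence, delimiter) ->
            Arrays.stream(sequence.split(delimiter))
                    .map(String::trim)
                    .toList();

    public static void main(String[] args) {
        // Whitespace around the delimiter is trimmed away
        System.out.println(STRING_TO_LIST.apply("broker1, broker2 , broker3", ","));
        // → [broker1, broker2, broker3]
    }
}
```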
@Test: KafkaSender must be configured correctly
@SpringBootTest(
        classes = KafkaSenderConfiguration.class,
        properties = {
                "custom.tracing.bootstrap-servers=specified-server1,specified-server2",
                "custom.tracing.password=pass",
                "custom.tracing.username=user",
                "custom.tracing.topic=topic-for-traces",
                "spring.application.name=some-app"
        }
)
class KafkaSenderConfigurationTest {

    @Qualifier("zipkinSender")
    @Autowired
    private Sender sender;

    @Test
    void shouldConfigureSender() {
        Assertions.assertThat(sender)
            .isNotNull()
            .isInstanceOf(KafkaSender.class)
            .satisfies(kafkaSender ->
                    then(kafkaSender)
                            .extracting("topic")
                            .isEqualTo("topic-for-traces")
            )
            .extracting("properties")
            .isInstanceOfSatisfying(Properties.class, properties -> {
                then(properties.get("bootstrap.servers"))
                        .asList()
                        .hasSize(2)
                        .contains("specified-server1", "specified-server2");

                then(properties.get("key.serializer"))
                        .isEqualTo("org.apache.kafka.common.serialization.ByteArraySerializer");

                then(properties.get("value.serializer"))
                        .isEqualTo("org.apache.kafka.common.serialization.ByteArraySerializer");

                then(properties.get("security.protocol"))
                        .isEqualTo("SASL_PLAINTEXT");

                then(properties.get("sasl.jaas.config"))
                        .isEqualTo("org.apache.kafka.common.security.scram.ScramLoginModule required username=user password=pass;");

                then(properties.get("client.id"))
                        .isEqualTo("some-app");
            });
    }
}

Observation

In order to be able to view traces of the interaction of components within one service, we will also add the configuration:

@Configuration(proxyBeanMethods = false)
@EnableAspectJAutoProxy
public class ObservedAspectConfiguration {

    @Bean
    public ObservedAspect observedAspect(ObservationRegistry observationRegistry) {
        return new ObservedAspect(observationRegistry);
    }
}

So, by annotating methods (or classes) with @Observed, we get traces for a specific method (or for all methods of a class).

@Component
public class SomeClass {
    // Note the line below
    @Observed(name = "observation-name")
    public void foo() {
        System.out.println("bar");
    }
}

The name parameter of the @Observed annotation determines the name of the observation. We will look at testing this configuration later in the article.

Learn more about Observability in Spring Boot 3.

Management

In Spring Boot 3, tracing moved to Micrometer Tracing – this should not be forgotten. I added these settings to application.yaml:

management:
    tracing:
        enabled: true
        sampling:
            # 1.0 means sample 100% of requests
            probability: 1.0

Let’s check the result and move on to the starter

Having implemented the functionality and verified that the service’s traces appear in Zipkin, we will now assemble everything into an auto-configuration. It may seem odd that no results are shown at this step – they will be shown later, when we test the auto-configuration.

Step 2. Create an auto-configuration

Task of the second step

Create an auto-configuration that can be connected to a ready-made service written in Spring Boot as a separate library (starter).

Implementation of the second step

Let’s create a new Spring Boot project and add a class MyTracingAutoConfiguration. We transfer all the configurations and dependencies described above into this project and make sure the configuration is enabled only when management.tracing.enabled=true.

Let’s keep the dependency below, but add <scope>provided</scope> to it. This indicates that the actuator is already on the classpath of the service to which the starter will be connected.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
    <scope>provided</scope>
</dependency>

The moment when autoconfiguration is enabled

I would like to draw your attention to the fact that our configuration must be applied before ZipkinAutoConfiguration.class to avoid a bean conflict (restTemplateSender vs. kafkaSender). We achieve this with the @AutoConfigureBefore annotation.

Autoconfiguration

@AutoConfiguration
@ConditionalOnProperty(name = "management.tracing.enabled", havingValue = "true")
@AutoConfigureBefore(ZipkinAutoConfiguration.class)
@Import({KafkaSenderConfiguration.class, ObservedAspectConfiguration.class})
public class MyTracingAutoConfiguration {
}

In the example above we defined the following conditions:

  • @ConditionalOnProperty – the configuration is enabled only if the property management.tracing.enabled is present with the value true;

  • @AutoConfigureBefore(ZipkinAutoConfiguration.class) – applies what was discussed in the previous section (“The moment when autoconfiguration is enabled”).

Finally, via @Import we pull in the configurations described earlier.

Registering the auto-configuration

In order for Spring Boot to see the auto-configuration, we need to add the file /src/main/resources/META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports containing the fully qualified class name of MyTracingAutoConfiguration.
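For illustration, assuming the starter’s classes live in a package such as ru.alfastrah.api.tracing (the package name here is hypothetical), the file would consist of a single line:

```
ru.alfastrah.api.tracing.MyTracingAutoConfiguration
```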

You can read more about this here.

Some tests

Below are tests that, in my opinion, are important to include in the article because they clearly demonstrate how the application works.

@SpringBootConfiguration in a test environment

To test the auto-configuration, I add a class TestConfiguration annotated with @SpringBootConfiguration to the test package that contains the auto-configuration tests. More details about this can be found in this video.

@SpringBootConfiguration
public class TestConfiguration {
}

How to test @Observed?

This test will help you understand how we use the @Observed annotation and obtain traces at the level of interaction between application components.

Step 1. You will need to add a dependency:

<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-integration-test</artifactId>
    <scope>test</scope>
</dependency>

Step 2. Create an annotation @EnableTestObservation, which will prepare the context for the test.

Preparing the test context involves adding a configuration with an observationRegistry bean (a TestObservationRegistry from the dependency added for testing). We also import our ObservedAspectConfiguration and add the @AutoConfigureObservability annotation.

We make this a separate annotation so that it can be reused in several tests.

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@AutoConfigureObservability
@Import({
        ObservedAspectConfiguration.class,
        EnableTestObservation.ObservationTestConfiguration.class
})
public @interface EnableTestObservation {

    @TestConfiguration
    class ObservationTestConfiguration {

        @Bean
        TestObservationRegistry observationRegistry() {
            return TestObservationRegistry.create();
        }
    }
}

Step 3. Write a test and annotate it @EnableTestObservation:

@EnableTestObservation
@SpringBootTest(classes = ObservedAspectConfigurationTest.SomeClass.class)
@ImportAutoConfiguration(ZipkinAutoConfiguration.class)
class ObservedAspectConfigurationTest {

    @Autowired
    private SomeClass someClass;

    @Autowired
    private ApplicationContext context;

    @Test
    void shouldObserve() {
        someClass.foo();

        final TestObservationRegistry observationRegistry = context.getBean(TestObservationRegistry.class);
        TestObservationRegistryAssert.assertThat(observationRegistry)
                .hasObservationWithNameEqualTo("observation-name")
                .that()
                .hasBeenStarted()
                .hasBeenStopped();
    }

    @TestComponent
    public static class SomeClass {
        @Observed(name = "observation-name")
        public void foo() {
            System.out.println("bar");
        }
    }
}

In the test we verified that calling the foo() method of the test component someClass starts and stops an observation.

Let’s test the condition for enabling/ignoring the configuration

@Test: The configuration should be enabled when management.tracing.enabled=true is present
@SpringBootTest(
        classes = MyTracingAutoConfiguration.class,
        properties = {
                "custom.tracing.bootstrap-servers=specified-server1,specified-server2",
                "custom.tracing.password=pass",
                "custom.tracing.username=user",
                "custom.tracing.topic=topic-for-traces",
                "spring.application.name=some-app",
                "management.tracing.enabled=true"
        }
)
@EnableTestObservation
class MyTracingAutoConfigurationEnabledTest {

    @Autowired
    private ApplicationContext applicationContext;

    @Test
    void shouldConfigureSender() {
        Assertions.assertThat(applicationContext.getBean(Sender.class))
                .isNotNull()
                .isInstanceOf(KafkaSender.class);
    }

    @Test
    void shouldConfigureObservation() {
        Assertions.assertThat(applicationContext.getBean(ObservedAspect.class))
                .isNotNull();
    }
}
@Test: The configuration should NOT be enabled when management.tracing.enabled=false
@SpringBootTest(properties = "management.tracing.enabled=false")
class MyTracingAutoConfigurationDisabledTest {

    @Autowired
    private ApplicationContext context;

    @Test
    void shouldNotConfigureSender() {
            assertThatThrownBy(() -> context.getBean(KafkaSender.class))
                    .isInstanceOf(NoSuchBeanDefinitionException.class)
                    .hasMessage("No qualifying bean of type 'zipkin2.reporter.kafka.KafkaSender' available");
    }
}

Checking functionality

Let’s create a new project and add the following dependencies to pom.xml:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
<!-- the auto-configuration from the previous step -->
<dependency>
    <groupId>ru.alfastrah.api</groupId>
    <artifactId>spring-boot-tracing-starter</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>

Next, in application.yaml we specify the properties we need:

custom:
  tracing:
    username: login
    password: password
    topic: topic
    bootstrap-servers: broker1,broker2
    
logging:
  level:
    org.apache.kafka.clients.NetworkClient: debug
    root: info
  pattern:
    # Add the following pattern so that traceId and spanId are displayed
    level: "%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]"
management:
  tracing:
    # Enable tracing
    enabled: true
    sampling:
      probability: 1.0

spring:
  application:
    name: demo-tracing-app

Let’s add a FooService service to verify that the @Observed annotation works:

@Service
@Log4j2
public class FooService {

    @Observed(name = "observation-name")
    public void internalFoo() {
        log.info("this is an internalFoo log");
    }
}

Let’s write a simple controller:

@RestController
@RequiredArgsConstructor
public class TracingController {

    private final FooService fooService;

    @GetMapping(path = "/foo")
    public String foo() {
        fooService.internalFoo();
        return "bar";
    }
}

Let’s launch the application and send a request:

$ curl http://localhost:8080/foo
bar

Let’s check the logs. In application.yaml you may have noticed that the log level for org.apache.kafka.clients.NetworkClient is set to debug. This setting is there to confirm that the message was actually sent to Kafka. Let’s make sure the request went out:

2023-09-22T21:31:43.270+03:00 DEBUG [demo-tracing-app,,] 16342 --- [emo-tracing-app] org.apache.kafka.clients.NetworkClient   : [Producer clientId=demo-tracing-app] Sending PRODUCE request with header RequestHeader ... etc.

We continue studying the logs. Let’s find the traceId (650ddd8f171924cbfc5d355d33fb9d9b), which we will use to look up the trace in Zipkin:

2023-09-22T21:31:43.260+03:00  INFO [demo-tracing-app,650ddd8f171924cbfc5d355d33fb9d9b,c2987710f3440cd1] 15011 --- [nio-8080-exec-1] c.e.d.t.application.service.FooService   : this is an internalFoo log
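As a side note, the traceId and spanId can be pulled out of such a log line with a simple regular expression over the [application,traceId,spanId] block produced by the logging pattern above (a standalone sketch; the class and method names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TraceIdExtractor {

    // Matches the "[application,traceId,spanId]" block: 32 hex chars for traceId, 16 for spanId
    private static final Pattern MDC_BLOCK =
            Pattern.compile("\\[([^,\\]]*),([0-9a-f]{32}),([0-9a-f]{16})]");

    static String extractTraceId(String logLine) {
        Matcher m = MDC_BLOCK.matcher(logLine);
        return m.find() ? m.group(2) : null;
    }

    public static void main(String[] args) {
        String line = "2023-09-22T21:31:43.260+03:00  INFO "
                + "[demo-tracing-app,650ddd8f171924cbfc5d355d33fb9d9b,c2987710f3440cd1] "
                + "15011 --- [nio-8080-exec-1] c.e.d.t.application.service.FooService : this is an internalFoo log";
        // prints: 650ddd8f171924cbfc5d355d33fb9d9b
        System.out.println(extractTraceId(line));
    }
}
```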

Let’s find the trace in Zipkin:

Trace in the Zipkin interface

The picture above shows a trace containing 2 spans: a parent and a child (the latter being the one specified via @Observed(name = "observation-name")).

Has our bright future arrived?

Only partially, actually. Yes, we have traces and the ability to access them quickly, but the configuration presented adds them per application. It does not allow stitching them into a single call chain (seeing spans from different applications under one traceId).

In other words: if we spin up two applications with the starter connected and one calls the other, we will see two different traceIds – one per application.

A configuration that ties everything together is our next goal, yet to be realized. Dear reader, if you have any suggestions on this, I would be grateful for them in the comments.

Conclusion

We considered the option of adding traces to a Spring Boot 3 application with the possibility of later reusing this functionality. The traces are sent to Zipkin via Kafka. The functionality also allows you to add traces of inter-component interaction within the application.

Autoconfiguration source code available at this link.

I hope you found this article interesting.
Thank you for your attention.
