Implementing Microkernel Architecture Using Java OSGi

I would like to share my experience implementing a microkernel architecture in Java using OSGi (Open Service Gateway Initiative). This approach is an intermediate option between microservice and monolithic architectures: on the one hand, components are separated within a single JVM at the classloader level; on the other hand, inter-component interaction happens without the network, which speeds up requests.

Introduction

Source: Image by O'Reilly

Microkernel architecture divides application functionality into many plugins, each of which provides extensibility along with isolation and separation of functionality. Components are divided into two types: the core and plugins. The core contains the minimum functionality required for the system to operate, while the application logic is distributed among the plugins. Interaction between plugins is expected to be kept to a minimum; this strengthens the isolation of each component, which in turn improves testability and simplifies maintenance.

In this model, the system kernel needs information about the running modules and how to interact with them. The most common solution to this problem is a plugin registry, which records each plugin's name and the interfaces it exposes.
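As a minimal sketch of this idea (the names `Plugin` and `PluginRegistry` are illustrative, not OSGi API), the kernel only needs a map from plugin names to the interfaces they expose:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of a kernel-side plugin registry: the kernel knows
// only plugin names and the interfaces registered under them.
interface Plugin {
    String name();
}

final class PluginRegistry {
    private final Map<String, Plugin> plugins = new ConcurrentHashMap<>();

    // Register under the plugin's declared name; last registration wins.
    void register(Plugin plugin) {
        plugins.put(plugin.name(), plugin);
    }

    Optional<Plugin> find(String name) {
        return Optional.ofNullable(plugins.get(name));
    }
}
```

OSGi's `BundleContext.registerService` / `getServiceReferences` pair, shown later in this article, plays exactly this role.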

This pattern can be implemented using completely different technologies. For example, we can isolate the core and connect plugins through dynamic loading of jar files without additional isolation.

OSGi approaches plugin isolation by separating each plugin's code at the classloader level: each plugin can be loaded by a separate classloader, providing additional isolation. The disadvantage of this approach is the potential for class conflicts: the same class loaded by different classloaders is treated as two distinct types, so such objects cannot interact directly.
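The class-conflict pitfall can be demonstrated in plain Java, without OSGi. The sketch below (assuming the demo's own compiled class file is reachable as a classpath resource) loads the same bytecode through two non-delegating classloaders and gets two incompatible types:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

// Demonstrates the conflict: identical bytecode loaded by two different
// classloaders produces two distinct, incompatible runtime types.
public class ClassLoaderIsolationDemo {
    public static class Plugin {}

    // A loader that defines Plugin itself instead of delegating to its parent.
    static final class IsolatingLoader extends ClassLoader {
        @Override
        protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
            if (name.equals(Plugin.class.getName())) {
                byte[] bytes = readBytes(name);
                return defineClass(name, bytes, 0, bytes.length);
            }
            return super.loadClass(name, resolve);
        }

        private static byte[] readBytes(String name) {
            String resource = name.replace('.', '/') + ".class";
            try (InputStream in = ClassLoader.getSystemResourceAsStream(resource);
                 ByteArrayOutputStream out = new ByteArrayOutputStream()) {
                in.transferTo(out);
                return out.toByteArray();
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }
    }

    public static Class<?> loadIsolatedPluginClass() {
        try {
            return new IsolatingLoader().loadClass(Plugin.class.getName());
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        Class<?> a = loadIsolatedPluginClass();
        Class<?> b = loadIsolatedPluginClass();
        // Same name and bytecode, but two distinct runtime types:
        System.out.println(a.getName().equals(b.getName())); // true
        System.out.println(a == b);                          // false
    }
}
```

This is why OSGi plugins must communicate only through classes loaded by a shared loader, a point the article returns to below.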

As a higher-level solution, you can consider Apache Karaf, which positions itself as a "Modulith Runtime" and provides integration with mainstream frameworks such as JAX-RS and Spring Boot. This tool simplifies working with OSGi by providing high-level abstractions.

Source: Apache Karaf

Alternatives to consider include direct OSGI implementations: Apache Felix, Eclipse Equinox, and Knopflerfish. Using low-level solutions will give us greater freedom in the design process.

Plugin-based architecture on Apache Felix

Context

To integrate with various customer data sources, we used a solution based on Apache Camel which, driven by a user configuration, connected to an arbitrary data source (from FTP to OPC UA) and applied user-defined transformations to the received data. This solution proved reliable and easy to extend for protocols that already exist in Apache Camel. Its disadvantage was the complexity of connecting new protocols not available in Apache Camel: attempts to add them led to dependency hell in the form of incompatible transitive dependencies.

This was the main driver for exploring other approaches to building the integration service. In addition, I suspected that application initialization could be made more efficient by removing Spring from the project and wiring services manually. This was feasible thanks to the small number of dependencies between components.

The proposed solution was to use Apache Felix, define our own interface for the data processing component, and connect plugins dynamically at application startup. It is worth emphasizing that we needed to implement a data processing pipeline: either receiving data from a remote source, applying several transformation stages, and writing to our data storage system, or reading from our system, transforming, and writing the result to a remote data source.

  • READ FLOW: read from the customer's system, transform, write to our system

  • WRITE FLOW: read from our system, transform, write to the customer's system

It is important to note the context of the task: the interactions between the data processing stages were simple. The value object format was unified, and the pipeline contained no logical branching or one-to-many connections during data transfer. This greatly simplified data processing.
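The pipeline contract described above can be sketched as a pair of hypothetical interfaces (the names `Source`, `Sink`, `Message`, and `Flow` are illustrative, not the project's actual api library):

```java
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of the pipeline contract.
final class Message {                       // the unified value object
    final String key;
    final String payload;
    Message(String key, String payload) { this.key = key; this.payload = payload; }
}

interface Source { Message read(); }        // e.g. read from the customer's system
interface Sink   { void write(Message m); } // e.g. write to our storage

// A flow is a strictly linear chain: source -> transform* -> sink,
// with no branching or one-to-many connections.
final class Flow {
    private final Source source;
    private final List<UnaryOperator<Message>> transforms;
    private final Sink sink;

    Flow(Source source, List<UnaryOperator<Message>> transforms, Sink sink) {
        this.source = source;
        this.transforms = transforms;
        this.sink = sink;
    }

    void runOnce() {
        Message m = source.read();
        for (UnaryOperator<Message> t : transforms) {
            m = t.apply(m);
        }
        sink.write(m);
    }
}
```

Because every stage speaks the same value object, each plugin only has to implement one of these small interfaces.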

Project structure

Launcher. A separate project, launcher, served as the core of the system. Its area of responsibility was limited to starting the OSGi framework, reading the configuration, dynamically connecting the plugins explicitly specified in that configuration, and linking all plugins into a single pipeline based on the user configuration.

While implementing the kernel and connecting the first plugin, it turned out that the documentation was not sufficient to configure the application correctly. GitHub code search proved very useful for comparing my own configuration against other projects that presumably worked.

Shared code. The common code was split into two projects: api, a set of interfaces for the pipeline data processing, and parent, a common parent for all projects that declares api as a dependency and carries the maven plugin configuration that produces the jar file with the plugin code.

Plugins. Each plugin was placed in a separate maven project and packaged into a jar file with a special structure (a bundle in OSGi terms). The maven plugin org.apache.felix:maven-bundle-plugin is responsible for generating the correct structure; it takes the project name, the activator (entry point), and lists of private/export/import/embed dependencies as settings.
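For reference, a maven-bundle-plugin configuration might look roughly like this (the artifact and package names here are illustrative, not the project's actual ones):

```xml
<plugin>
  <groupId>org.apache.felix</groupId>
  <artifactId>maven-bundle-plugin</artifactId>
  <extensions>true</extensions>
  <configuration>
    <instructions>
      <Bundle-SymbolicName>com.example.ftp-connector</Bundle-SymbolicName>
      <Bundle-Activator>com.example.ftp.Activator</Bundle-Activator>
      <!-- Packages visible only inside this bundle -->
      <Private-Package>com.example.ftp.internal</Private-Package>
      <!-- Packages this bundle expects the host or other bundles to provide -->
      <Import-Package>com.example.api,org.osgi.framework</Import-Package>
      <!-- Third-party dependencies embedded into the bundle jar -->
      <Embed-Dependency>*;scope=compile</Embed-Dependency>
    </instructions>
  </configuration>
</plugin>
```

The generated MANIFEST.MF in the resulting jar carries these instructions, which is what makes the jar a valid bundle.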

Plugin structure (bundle)

Each plugin contains an activator: a class that runs when the plugin is connected. At this point the plugin is expected to register its services with the bundle context. Each service can carry meta-information, passed as a Dictionary at registration time.

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

public class Activator implements BundleActivator {
    @Override
    public void start(final BundleContext bundleContext) {
        Dictionary<String, Object> dictionary = new Hashtable<>();
        dictionary.put("CustomField", "API_IMPL_V1");
        bundleContext.registerService(ApiService.class, new ApiServiceImpl(), dictionary);
    }

    @Override
    public void stop(final BundleContext bundleContext) {
        // nothing to clean up: registered services are unregistered automatically
    }
}

The application core (Host in OSGI terms) can make a request to the context to retrieve the registered services, specifying the metadata fields:

var references =
        context.getServiceReferences(ApiService.class, "(CustomField=*)");
Map<String, ApiService> index = new HashMap<>();
for (ServiceReference<ApiService> reference : references) {
    var service = context.getService(reference);
    index.put(reference.getProperty("CustomField").toString(), service);
}

In this case, the plugin can contain dependencies that are not visible to other plugins if they are listed as Private-Package in the bundle configuration.

Non-obvious things I wish I had known about

No. 1. The specification does not allow classes in the default package. This requirement applies not only to your own code but to all of your dependencies. The error shown when the requirement is violated is not informative:

[ERROR] Bundle {groupId}:{artifactId}:bundle:{version} : The default package '.' is not permitted by the Import-Package syntax.
This can be caused by compile errors in Eclipse because Eclipse creates
valid class files regardless of compile errors.
The following package(s) import from the default package null
[ERROR] Error(s) found in bundle configuration

To solve this problem, I had to place a conditional breakpoint in the code of the org.apache.felix:maven-bundle-plugin plugin and find the dependency containing the offending class structure by hand.

I posted a detailed solution to this problem in a separate article: https://medium.com/@mark.andreev/how-to-fix-the-default-package-is-not-permitted-by-the-import-package-syntax-in-osgi-3b59a6c18e71

No. 2. Non-obvious required settings for org.osgi.framework.launch.Framework. You will not be able to run Apache Felix without specifying the storage directory (Constants.FRAMEWORK_STORAGE). If problems arise, the error message is not informative.
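For context, launching the framework in the host looks roughly like this. This is a sketch, not the project's actual launcher: it requires the Felix jar on the classpath, and the storage path is illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.ServiceLoader;

import org.osgi.framework.Constants;
import org.osgi.framework.launch.Framework;
import org.osgi.framework.launch.FrameworkFactory;

public class Host {
    public static void main(String[] args) throws Exception {
        Map<String, String> config = new HashMap<>();
        // Without this setting Felix fails to start, and the error is unhelpful.
        config.put(Constants.FRAMEWORK_STORAGE, "/tmp/osgi-cache"); // illustrative path
        config.put(Constants.FRAMEWORK_STORAGE_CLEAN,
                   Constants.FRAMEWORK_STORAGE_CLEAN_ONFIRSTINIT);

        // Felix publishes its FrameworkFactory via the ServiceLoader mechanism.
        FrameworkFactory factory =
                ServiceLoader.load(FrameworkFactory.class).iterator().next();
        Framework framework = factory.newFramework(config);
        framework.init();
        // install bundles here via framework.getBundleContext().installBundle(...)
        framework.start();
    }
}
```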

No. 3. No error when a bundle fails to load. The only way to detect that a bundle has not loaded is to check whether the bundle's SymbolicName is null.

Bundle addition = bundleContext.installBundle(location);
if (addition.getSymbolicName() == null) {
    // the bundle failed to load: report an error here
}

No. 4. Difficulties passing library classes to a plugin. The solution was to unify the interfaces in the api library and use only those classes for communication between plugins.

Conclusion

On the one hand, the solution based on Apache Felix demonstrated the difficulty of adopting a less popular technology: a lack of knowledge on Stack Overflow and the need to use a debugger to investigate most problems, which complicates incident analysis. On the other hand, thanks to this technology we got low coupling between system components, plugin isolation at the classloader level, a simpler project structure (each pipeline component lives in a separate project), and a significant startup speedup.

It is important to note that this positive experience is directly related to the loose coupling between plugins and the absence of shared dependencies beyond the api library.

If you need closer interaction between plugins, you should still look at Apache Karaf. Most likely, you will be more comfortable not implementing low-level OSGi interaction like that described in this project.

Afterword

Have you had any experience implementing microkernel architecture? How did you solve this problem?

Mark Andreev

Senior Software Engineer
