What will Java developers hear at JPoint 2023?
From REST to GraphQL: A 20 Minute Adventure
Every day we deal with thin or thick REST endpoints and the need to modify them to meet new customer needs. But what if it were enough to have a data model and queries over it, universal for all clients at that? Such an approach exists: GraphQL. Is it a silver bullet or not?
The speaker will try to answer this question. We’ll talk about the problems in the Kinopoisk API, see how they can be solved with various technologies, and weigh the pros and cons of GraphQL. Then we’ll discuss how to organize a large project’s transition from one technology stack to another. Naturally, the transition was not without problems and technical challenges, and those will be covered in the talk as well. At the end, we’ll sum up and discuss the current state of the system and the ways it may develop further.
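To make the core idea concrete, here is what a GraphQL request might look like: every client queries the same endpoint and declares exactly the fields it needs. The schema and field names below are invented for illustration and are not taken from the Kinopoisk API.

```java
// One query, one endpoint; the client chooses the fields.
// Schema and field names are hypothetical, for illustration only.
class GraphQlExample {
    static final String MOBILE_QUERY = """
            query {
              film(id: "301") {
                title
                rating
                actors(limit: 3) { name }
              }
            }
            """;
}
```

A different client (say, a TV app) could ask the same `film` type for a different set of fields without any server-side endpoint changes.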
A rational approach to the decomposition of systems into modules or microservices
For many teams, finding the optimal decomposition is a mixture of art and craft, with poorly predictable effort and results. To make decomposition in his teams more predictable, higher-quality and faster, Alexey developed a special technique: effect-based decomposition. He later found scientific confirmation that this approach yields results comparable to DDD several times faster. In the talk, the speaker will present the methodology and demonstrate its application in a commercial project.
Working with data
Spring, Hibernate, the Value Object pattern and its limits
When developing software, the question of validation and of working with data correctly always comes up. If you perform a business operation on invalid user-supplied input, the consequences can be dire. Semyon will explain what the Value Object pattern is, how to implement it on the Spring/Hibernate stack, and when its use may get in the way of further development.
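A minimal sketch of the pattern in plain Java (the `Email` class and its validation rule are illustrative, not from the talk): a Value Object is immutable, validates itself at construction, and compares by value. In a Spring/Hibernate project it would typically be mapped with `@Embeddable` or an `AttributeConverter`, omitted here for brevity.

```java
// A minimal Value Object: immutable, self-validating, equality by value.
final class Email {
    private final String value;

    Email(String value) {
        // Validate once at construction; an invalid Email cannot exist.
        if (value == null || !value.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$")) {
            throw new IllegalArgumentException("Invalid email: " + value);
        }
        this.value = value;
    }

    String value() { return value; }

    @Override public boolean equals(Object o) {
        return o instanceof Email && value.equals(((Email) o).value);
    }
    @Override public int hashCode() { return value.hashCode(); }
    @Override public String toString() { return value; }
}
```

Business code can then accept `Email` instead of `String` and never re-validate.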
DTO is one of the simplest design patterns. But in the real world, the acronym DTO refers to different kinds of objects. In this talk, we will consider why DTOs are needed, where they can be used, and what tools exist for working with them. We’ll figure out how many transformations data can go through on its way from the database to the API and back, what the specifics of mapping data to different structures are, and how many kinds of projections ORMs offer.
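One of those transformations, sketched by hand (entity and field names are made up; in practice a tool like MapStruct can generate the mapping): the DTO carries only what the client needs, in the shape the client needs, and hides internal fields.

```java
// Illustrative only: a JPA-style entity and a flat DTO for the API layer.
class UserEntity {           // would carry @Entity and @Id in a real project
    Long id;
    String firstName;
    String lastName;
    String passwordHash;     // internal field the API must not expose

    UserEntity(Long id, String firstName, String lastName, String passwordHash) {
        this.id = id; this.firstName = firstName;
        this.lastName = lastName; this.passwordHash = passwordHash;
    }
}

// The DTO exposes a flattened, client-friendly view of the entity.
record UserDto(Long id, String fullName) {
    static UserDto from(UserEntity e) {
        return new UserDto(e.id, e.firstName + " " + e.lastName);
    }
}
```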
Apache Spark for distributed databases: connector internals
Alexey develops the Apache Spark connector for Tarantool. In the talk, he will highlight how connectors for various databases are built, discuss how relevant Spark is for distributed (multi-node) databases, and what the alternatives are. He will also discuss how to balance the settings of the Spark cluster and the database cluster for optimal performance. The talk will interest developers of database drivers and connectors, as well as Scala programmers.
When It All Went According to Kafka 3: Where Apache Kafka Ends and the Consumer Begins
The third talk in the series, covering the internals and operation of the Consumer. We’ll take a closer look at the KIPs that most influenced how the Consumer works, and tune Kafka and Consumer settings along the way.
The first two parts:
1. When everything went according to Kafka
2. When it all went according to Kafka 2: Overclocking the Producers
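For context, a few of the Consumer settings a tuning discussion like this typically touches (the property keys are real Kafka consumer configs; the values here are illustrative defaults, not recommendations from the talk):

```java
import java.util.Properties;

// A handful of Consumer knobs commonly adjusted when tuning throughput/latency.
class ConsumerTuning {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        // Upper bound on records a single poll() may return.
        props.put("max.poll.records", "500");
        // Max time between polls before the consumer is considered dead.
        props.put("max.poll.interval.ms", "300000");
        // Fetch batching: wait for at least this many bytes, or until
        // fetch.max.wait.ms elapses.
        props.put("fetch.min.bytes", "1");
        props.put("fetch.max.wait.ms", "500");
        // Where to start reading when there is no committed offset.
        props.put("auto.offset.reset", "latest");
        return props;
    }
}
```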
Offset and keyset: what does pagination cost in production?
“Pagination in Spring sucks! Never use it! It costs your DBMS too much!” Surely you have heard this, or even said it yourself. So what is the problem with pagination? And if everything is so bad, why couldn’t the coolest framework of our time split SQL query results into pages properly?
The speakers will try to find answers to all these questions. You will learn what the difficulties with pagination are, what is wrong with the OFFSET construct, why it is so hard to give up, what designers have to do with it, and how to design an API so as to minimize the damage from future changes.
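A minimal in-memory sketch of the two strategies (the SQL in the comments is illustrative): OFFSET forces the database to read and discard all preceding rows, while keyset pagination jumps straight to “id greater than the last seen id”, which an index can serve directly.

```java
import java.util.List;
import java.util.stream.Collectors;

// The two pagination strategies, simulated over ids sorted ascending.
class Pagination {
    // SELECT id FROM t ORDER BY id LIMIT :size OFFSET :page * :size
    static List<Integer> offsetPage(List<Integer> ids, int page, int size) {
        return ids.stream().sorted()
                  .skip((long) page * size)   // the DB must still read these rows
                  .limit(size)
                  .collect(Collectors.toList());
    }

    // SELECT id FROM t WHERE id > :lastSeenId ORDER BY id LIMIT :size
    static List<Integer> keysetPage(List<Integer> ids, int lastSeenId, int size) {
        return ids.stream().sorted()
                  .filter(id -> id > lastSeenId)  // an index seek, not a skip
                  .limit(size)
                  .collect(Collectors.toList());
    }
}
```

Both return the same page here, but the keyset variant’s cost does not grow with the page number.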
B-Tree indexes in databases, using the example of Spring Boot applications, PostgreSQL and JPA
How does an index speed up search, in what order should columns be listed in an index, and how do you handle multiple search criteria? Vladimir will explain how regular™ indexes work in PostgreSQL. If the talk makes it into the golden “must-see developer onboarding” collections, the goal will have been achieved. The speaker will cover whether foreign keys and columns in WHERE and ORDER BY clauses need indexing, as well as cases when an index slows things down and how to reduce an index’s impact on an application.
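Why column order matters can be sketched with a sorted structure standing in for a B-Tree (this analogy is mine, not the speaker’s): in an index on `(last_name, first_name)`, all entries with the same leading value are adjacent, so a range scan by the leading column is cheap, while a search by `first_name` alone gets no help from the ordering.

```java
import java.util.NavigableSet;
import java.util.TreeSet;

// A TreeSet of "last|first" keys as a stand-in for a B-Tree index
// on (last_name, first_name).
class CompositeIndexDemo {
    static final NavigableSet<String> index = new TreeSet<>();

    // All "Ivanov|..." keys are contiguous: a half-open range scan
    // [ "Ivanov|", "Ivanov|\uffff" ) finds them without touching the rest.
    static NavigableSet<String> byLastName(String lastName) {
        return index.subSet(lastName + "|", true, lastName + "|\uffff", false);
    }
}
```

A lookup by first name only would have to walk the entire set, which is exactly why the leading index column should match your most selective filter.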
Asynchronous Data Acquisition System: DIY!
Data acquisition and equipment control systems (SCADA) are no longer exotic: any large production facility uses them, not to mention all sorts of “smart” homes. An interesting fact, though, is that most of these systems (at least the open ones) were developed about 20 years ago and are by now “conceptually outdated”.
The speaker will analyze the architecture of data collection systems of varying degrees of obsolescence, discuss how his team built a fully asynchronous data collection system (Controls-kt) on reactive streams (coroutines), and cover the pros and cons of this approach.
Hibernate: the Cartesian product problem in paginated queries
We can run into the Cartesian product problem when using Hibernate without even noticing it. It shows up especially clearly in queries with pagination. Artyom will cover:
When you may encounter this problem.
Why the seemingly obvious solution is not always the most performant.
What bug he found when migrating to Hibernate 6.
How to solve the Cartesian product problem.
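The essence of the clash, sketched without Hibernate (the data is made up): join-fetching a collection turns one parent with N children into N result rows, so a row-level `LIMIT` no longer limits parents. This is why Hibernate, given `JOIN FETCH` plus `setMaxResults`, falls back to fetching everything and paginating in memory (the well-known HHH000104 warning).

```java
import java.util.List;

// A joined result set: one "order" row per child item.
class CartesianDemo {
    record Row(String parent, String child) {}

    static final List<Row> joined = List.of(
        new Row("order-1", "item-a"),
        new Row("order-1", "item-b"),
        new Row("order-1", "item-c"),
        new Row("order-2", "item-d"));

    // Taking the first `limit` *rows* does not yield `limit` parents.
    static long distinctParentsInFirstRows(int limit) {
        return joined.stream().limit(limit).map(Row::parent).distinct().count();
    }
}
```

Here “page of 2 rows” contains only one distinct order, which is why row-level pagination and collection fetching cannot be combined naively.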
We ship to Kafka from the database: with and without CDC
Andrey is the Platform Owner of a streaming data processing platform at Raiffeisen Bank and a data engineer. He will share the nuances of building an internal PaaS solution for large data volumes and talk about using Kafka Connect to fetch data from a database both with and without CDC available. We’ll discuss how to work with Kafka Connect at the enterprise level: how to unify metadata, how to deploy and roll back from CI, how to provide this as a PaaS service, and how to manage access.
Eugene will show how to apply the reactive side of the Akka framework, the Akka Streams module, in practice. In particular, he will show how his team solved a streaming data processing problem using Java and Akka Streams: what problems they encountered and how they fixed them.
Project Reactor is not just a hype trend but a way to build a scalable, high-load-tolerant application. MDC is a key diagnostic and monitoring tool that lets you easily enrich blocks of code with metadata defined elsewhere. Unfortunately, the two don’t get along well. The official solution in the Project Reactor README lets you use MDC to log your own events between reactive operators. However, it does not affect calls to third-party libraries inside reactive operators, which may also log their work.
The speaker’s department developed an alternative MDC implementation that works fully in a reactive application, without the limitation above. A nice side effect of this implementation was lifting the restriction on the type of MDC values: they are no longer limited to strings. In the talk, the speaker will cover the implementation details and, briefly, the path that led to it.
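Why MDC and Reactor clash in the first place can be shown with a plain `ThreadLocal` (SLF4J’s MDC is backed by one; `org.slf4j.MDC` is replaced here by a bare `ThreadLocal` to keep the sketch dependency-free): context set on one thread is simply invisible on the worker thread a reactive operator may hop to.

```java
import java.util.concurrent.CompletableFuture;

// MDC context lives in a ThreadLocal, so it does not survive a thread hop.
class MdcHopDemo {
    static final ThreadLocal<String> MDC = new ThreadLocal<>();

    static String readOnSameThread() {
        MDC.set("request-42");
        return MDC.get();                         // visible: same thread
    }

    static String readOnOtherThread() {
        MDC.set("request-42");
        // Simulates a reactive operator running on another scheduler thread.
        return CompletableFuture.supplyAsync(MDC::get).join();  // lost: null
    }
}
```

This is the gap the Reactor `Context` (and solutions built on it) exists to close: reactive context travels with the subscription, not with the thread.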
Calling a blocking API in Spring WebFlux: why it’s bad and what to do about it
World of Plat.Form
Reactive Spring WebFlux applications may end up using blocking APIs for a variety of reasons. Using an example, we’ll see what blocking calls can lead to, and figure out which tools can be used to find and fix the problem.
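The usual remedy, sketched with stdlib types only (the method names and the JDBC stand-in are invented): never run blocking work on the event-loop threads; wrap it and hand it to a dedicated pool. In WebFlux the same idea is `Mono.fromCallable(...).subscribeOn(Schedulers.boundedElastic())`.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Offloading a blocking call to a dedicated pool so event-loop threads stay free.
class BlockingOffload {
    // Daemon threads so the pool does not keep the JVM alive in this sketch.
    static final ExecutorService BLOCKING_POOL =
        Executors.newFixedThreadPool(4, r -> {
            Thread t = new Thread(r);
            t.setDaemon(true);
            return t;
        });

    static CompletableFuture<String> fetchUser(long id) {
        return CompletableFuture.supplyAsync(() -> blockingJdbcCall(id), BLOCKING_POOL);
    }

    // Stand-in for a JDBC/HTTP call that blocks its thread.
    static String blockingJdbcCall(long id) {
        try { Thread.sleep(50); } catch (InterruptedException ignored) {}
        return "user-" + id;
    }
}
```

Tools like BlockHound take the detection side: they fail fast when a blocking call sneaks onto a non-blocking thread.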
Reactive with Apache Thrift + Project Armeria: how they built WebFlux without REST
Victor’s team runs about 200 microservices on Apache Thrift. Initially, they were synchronous. As the product evolved, the load grew, and they hit a problem: some services could not cope with it. Victor will explain why scaling doesn’t help in this situation, how Project Armeria can help, and when to use a reactive approach, describing his team’s experience and results.
OpenJDK 19 introduced support for the RISC-V architecture. We’ll take a look at RISC-V and see how well OpenJDK supports it. Vladimir will evaluate the performance of the JDK port on RISC-V and answer the question “What can it be compared to?” You will see how JVM performance issues on RISC-V are hunted down, with examples of issues found and resolved.
How neural networks will deprive you of work (or not)
Jan will look at how to work with ChatGPT, Copilot and DALL·E 2 in a JVM environment and how they can be used as a tool or framework. In the demo, we will walk through integrating AI into Telegram bots and building AI-based projects, and consider, using real cases, whether AI can take over developers’ routine tasks.
Competency Matrix and Assessment of Java Developers
At X5 Tech, we actively invest in people’s development and set goals for them using a competency matrix. Employee goals let the team understand which competencies are most in demand and cover them with in-house training materials. Alexander will tell how they compiled a competency matrix for Java developers and how employees interact with it.
The Art of System Design. How to build a distributed system and pass an interview
System Design interviews have long been standard practice at popular Western companies and startups. Now large Russian companies are starting to actively use this type of interview too; here it is called the architecture section, or systems design. A System Design interview evaluates candidates at Senior level and above in terms of practical experience, general knowledge and technological outlook, as well as the ability to design services and work with requirements.
In his talk, Vladimir will reveal the principles behind System Design interviews at Big Tech companies and give recommendations to help candidates pass this round successfully. This knowledge will also be useful to developers in their day-to-day work, especially on high-load projects.
The conference is not only talks. For example, there will be a series of themed discussions on the Java Technology Radar: let’s talk about the current state of affairs in Spring, the JVM, performance tools and more. There will also be partner-company activities and much more.
This information will keep being updated; the most complete and current version can always be found on the JPoint website (tickets are there too).
But in many ways, the conference is about communication. Those who meet in person at the Moscow venue on April 18–19 will certainly start talking. Those who join remotely are unlikely to chat with each other as much, but they too will be able to properly question the speakers: either over a video link after the talk, or by sending questions to the chat, whichever is more convenient.
And we cannot describe this most important part in advance, either in this habrapost or on the website, because what dialogues will start depends not on us but on you. We can only say that, in our experience, two days offline are always a good occasion to talk properly. So see you in April, and let’s talk!