DDD Tactical Patterns
Using DDD in practice
If you're faced with a large, unwieldy system, follow this plan:
Conduct Event Storming
Invite business stakeholders to a session to clarify requirements and put the system into context. I recommend becoming thoroughly familiar with the Event Storming process, as I won't cover it in detail in this article.
The result of the session is a shared understanding of the system and its contexts (you can read more about contexts here).
For our fictitious domain, contexts might look like this:
Warehouse – warehouse context inside the marketplace
Accounting – accounting context within the marketplace
Delivery – delivery context within the marketplace
Looking for subdomains
Now that contexts have been defined, it is important to understand that they can be quite broad. For example, the context of warehouse operations may involve many internal systems, each of which may have its own complex structure.
I propose identifying the subdomains in each context and determining their types; these types will guide the choice of the tactical patterns we will discuss in this article.
For the Warehouse context:
OrderManagement (Core) – management of orders in the warehouse
Location (Supporting) – managing the location of goods in the warehouse
The Accounting context includes:
Reports (Core) – generation of financial reports
Verification (Supporting) – checking orders and issuing invoices
The Delivery context is represented by the following subdomains:
Core.Board (Core) – order board
Core.Couriers (Core) – courier management
Supporting.Tracking (Supporting) – tracking delivery status
Embedding tactical patterns into subdomains
Each subdomain within a context has its own importance and level of complexity, which calls for appropriate patterns. Some patterns suit simple subdomains, others more complex ones. It is important to use each for its intended purpose rather than clinging to a single template, so you avoid applying it in inappropriate situations.
Basic tactical patterns
Transaction Script
Imagine that you are developing an authorization service. How complex can its business logic become? Is adding architecturally complex solutions to this service justified? Consider the following code:
import { Request, Response } from 'express';
import bcrypt from 'bcrypt';
import jwt from 'jsonwebtoken';
import { User } from './user.model'; // ORM model (path assumed)

const JWT_SECRET = process.env.JWT_SECRET ?? 'change-me';

export const register = async (req: Request, res: Response) => {
  const { email, password } = req.body;
  try {
    const existingUser = await User.findOne({ email });
    if (existingUser) {
      return res.status(400).json({ message: 'User already exists' });
    }
    const hashedPassword = await bcrypt.hash(password, 10);
    const newUser = new User({ email, password: hashedPassword });
    await newUser.save();
    res.status(201).json({ message: 'User registered successfully' });
  } catch (error) {
    res.status(500).json({ message: 'Server error', error });
  }
};

export const login = async (req: Request, res: Response) => {
  const { email, password } = req.body;
  try {
    const user = await User.findOne({ email });
    if (!user) {
      return res.status(400).json({ message: 'Invalid credentials' });
    }
    const isMatch = await bcrypt.compare(password, user.password);
    if (!isMatch) {
      return res.status(400).json({ message: 'Invalid credentials' });
    }
    const token = jwt.sign({ id: user._id }, JWT_SECRET, { expiresIn: '1h' });
    res.json({ token });
  } catch (error) {
    res.status(500).json({ message: 'Server error', error });
  }
};
This is an example of the Transaction Script pattern. Its essence is that business logic is organized as procedures, each of which handles a single request from the presentation layer. Simply put, Transaction Script means all business logic is concentrated in the application (service) layer. Although the procedural style may seem outdated (domain model advocates may criticize it as anemic), it works well for simple tasks such as authorization.
The authorization service is a great example of a Generic subdomain, where using Transaction Script is entirely justified. Feel free to apply this approach in Supporting subdomains as well, where the complexity of the tasks does not warrant extra architectural overhead.
Active Record
The next pattern, in order of complexity, is Active Record. As with Transaction Script, the business logic lives in the service layer, but much of it can be moved into the ORM models. Only logic that has no infrastructure dependencies belongs in the models. Let's look at an example:
import { BadRequestException } from '@nestjs/common';
import { Repository } from 'typeorm';

export class VerificationService {
  constructor(
    private readonly verificationRepository: Repository<Verification>,
  ) {}

  async update(
    updateVerificationDto: UpdateVerificationDto,
  ): Promise<Verification> {
    const verification = await this.verificationRepository.findOne({
      where: {
        id: updateVerificationDto.id,
      },
    });
    if (verification === null) {
      throw new BadRequestException(
        `Verification with id ${updateVerificationDto.id} not found`,
      );
    }
    if (updateVerificationDto.signed) {
      verification.signReport();
    }
    if (updateVerificationDto.completed) {
      verification.completeVerification();
    }
    return this.verificationRepository.save(verification);
  }
}

export class Verification {
  @PrimaryGeneratedColumn('uuid')
  id: string;

  /// ... other columns (signed, completed, reportNumber, etc.)

  signReport() {
    if (this.completed) {
      throw new Error('Cannot sign a report that has already been completed.');
    }
    this.signed = true;
  }

  completeVerification() {
    if (!this.signed) {
      throw new Error(
        'Cannot complete verification without signing the report.',
      );
    }
    if (this.reportNumber < 0) {
      throw new Error('Report number cannot be negative.');
    }
    this.completed = true;
  }
}
In this example, the ORM model sheds its anemia, and the code becomes more structured and expressive. Unfortunately, Active Record attracts a lot of undeserved criticism. Some consider it an anti-pattern, but note that only pure business logic should live in model methods. Avoid accessing the database from these methods, and your Active Record will never turn into an anti-pattern.
Active Record is a great compromise between the domain model (which we'll discuss later) and Transaction Script. The pattern works well for both Supporting and Generic subdomains. Don't neglect it!
Domain model
Domain Model is a key aspect of tactical DDD. This pattern works well for many Core subdomains, where the quality and speed of change are critical.
Entity
The core of the domain model pattern is the use of Entities with pure business logic. Unlike Active Record, business logic is not placed in ORM models but encapsulated in separate pure classes (entities). By adding behavior to an entity, we turn the model from anemic into a full-fledged one, and moving away from the ORM layer removes unnecessary infrastructure dependencies. Let's look at a code example:
export class CurierEntity {
  id: string;
  name: string;
  isActive: boolean;
  rating: number;
  orders: OrderEntity[];

  addOrder(newOrder: OrderEntity) {
    if (this.isActive === true) {
      if (this.rating > 4) {
        this.orders.push(newOrder);
        const totalRating = this.rating * this.orders.length;
        const updatedRating = (totalRating + 0.1) / (this.orders.length + 1);
        this.rating = updatedRating;
      }
    }
  }
}

export class OrderEntity {
  id: string;
  name: string;
  curier: CurierEntity;

  create(newOrder) {
    ///
  }
}
The service layer remains thin, since most of the logic is concentrated in the entities. We retrieve entities from the database and store them as a whole:
export class CurierService {
  async addOrder(id: string, order) {
    const curier = await this.repository.findById(id);
    curier.addOrder(new OrderEntity({ ...order }));
    await this.repository.save(curier);
  }
}
The repository is presented below. As you can see, all the “magic” with ORM and entity mapping happens here:
export class CurierRepository {
  async findById(curierId: string): Promise<CurierEntity> {
    const curierOrm = await this.prisma.curier.findById(curierId);
    return CurierMapper.mapToDomain(curierOrm);
  }

  async save(curier: CurierEntity): Promise<CurierEntity> {
    const curierOrm = CurierMapper.mapToORM(curier);
    const updatedCurier = await this.prisma.curier.save(curierOrm);
    return CurierMapper.mapToDomain(updatedCurier);
  }
}
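The CurierMapper used by the repository is not shown in the article; a minimal sketch might look like the following (the `CurierOrm` record shape is an assumption for illustration, the real one depends on your schema):

```typescript
// Hypothetical persistence shape returned by the ORM.
interface CurierOrm {
  id: string;
  name: string;
  rating: number;
  orders: { id: string; name: string }[];
}

class OrderEntity {
  constructor(public id: string, public name: string) {}
}

class CurierEntity {
  constructor(
    public id: string,
    public name: string,
    public rating: number,
    public orders: OrderEntity[],
  ) {}
}

class CurierMapper {
  // ORM record -> pure domain entity
  static mapToDomain(orm: CurierOrm): CurierEntity {
    return new CurierEntity(
      orm.id,
      orm.name,
      orm.rating,
      orm.orders.map((o) => new OrderEntity(o.id, o.name)),
    );
  }

  // Domain entity -> plain record the ORM can persist
  static mapToORM(entity: CurierEntity): CurierOrm {
    return {
      id: entity.id,
      name: entity.name,
      rating: entity.rating,
      orders: entity.orders.map((o) => ({ id: o.id, name: o.name })),
    };
  }
}
```

The mapper is the only place that knows both shapes, which is exactly what keeps the domain entity free of infrastructure concerns.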
Using domain entities we get many benefits:
Storage independent: We don’t care how the data is stored in the database.
Clear responsibility: Responsibility for information management lies with those who have all the necessary information.
Relational view: We can construct our entities according to relational principles.
Simplified testing: Less need for mocks, making tests easier and more reliable.
Moving away from database-driven development: We shift toward smarter modeling focused on business logic.
Neat service layer: We get a clear and understandable service layer, which simplifies code maintenance.
Aggregate
Entities alone are not enough. Although entities are great at encapsulating business logic, questions arise: how do we connect them together? How do we establish clear module boundaries and ensure transactional consistency?
If entities are not combined correctly, business rules can get lost. Let's look at an illustrative example. Imagine that, due to a limit, we cannot add a new order to the warehouse.
export class WarehouseEntity {
  addOrder(order: OrderEntity) {
    if (this.orders.length > 500) {
      throw new Error('Limit 500');
    }
    this.orders.push(order);
  }
}

export class WarehouseService {
  addOrder(warehouseId, newOrder) {
    const warehouse = warehouseRepository.findById(warehouseId);
    warehouse.addOrder(new OrderEntity({ ...newOrder }));
    return warehouseRepository.save(warehouse);
  }
}
This code should work. But what if, in another part of the system, someone decides to add an order bypassing WarehouseEntity?
export class OrdersService {
  reorder(warehouseId, oldOrder) {
    const order = new OrderEntity({ ...oldOrder, warehouseId });
    return ordersRepository.save(order);
  }
}
Voila! Such a bug is difficult to catch – you will have to rely on tests or, worse, on the excellent memory of your colleagues. In the worst case, functionality in one part of the system can break another. People would have to constantly keep all the checks inside entities in mind and account for them when developing new features. To avoid such inconsistency, we need an abstraction that manages entity boundaries. Fortunately, such an abstraction exists.
An Aggregate is a hierarchy of entities that helps preserve business rules and ensure transactional consistency. If Courier is chosen as the aggregate root, change the aggregate only through that root. No direct manipulation of Order – only through the parent.
Advantages of aggregates:
Ease of testing: An aggregate contains pure business logic, which simplifies testing.
Simple interface: Well-designed aggregates provide a crisp, clear interface that hides the complexity under the hood.
Integrity: An aggregate is a single whole that we can remove, modify, and store. One could say an aggregate is the basis of your future module.
Thus, using aggregates helps you avoid many of the problems associated with consistency and business rule management, while keeping your code clear and structured.
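As an illustrative sketch (the class names here are simplified and not taken from the article's repository), an aggregate root that guards its own invariants looks like this:

```typescript
class Order {
  constructor(public readonly id: string) {}
}

// Aggregate root: the only entry point for changing a courier's orders.
class Courier {
  // The collection is private, so no one outside can push into it directly.
  private orders: Order[] = [];

  constructor(
    public readonly id: string,
    private active: boolean,
    private rating: number,
  ) {}

  // Every invariant lives here, next to the data it protects.
  addOrder(order: Order): void {
    if (!this.active) {
      throw new Error('Inactive courier cannot take orders');
    }
    if (this.rating <= 4) {
      throw new Error('Rating too low to take orders');
    }
    this.orders.push(order);
  }

  get orderCount(): number {
    return this.orders.length;
  }
}
```

Because the logic is pure, it can be unit-tested without a single mock: construct a Courier, call addOrder, and assert on the result.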
Aggregates and modularity
Quite often you'll hear advice to group entities into aggregates one-to-one (one entity – one aggregate), on the grounds that correctly dividing a system into aggregates is hard, so it's easier to go for maximum granularity right away. This is harmful and wrong; never do that.
It is important to recognize the value of aggregates. As I mentioned earlier, an aggregate can serve as the basis of modularity, and the very concept of an aggregate overlaps with modularity in many ways. An aggregate is an independent unit with depth and relative independence from external components. An aggregate "owns" and controls the data under its control. Nothing external has the right to change this data in any way; only the aggregate is responsible for it.
A module, in turn, must also encapsulate its functionality in such a way that it can later be easily separated into an independent deployment unit. Good encapsulation within the module is what makes it truly high-quality and useful. I see many similarities between the concepts of an aggregate and a module!
Please design aggregates with depth in mind and provide a narrow, simple interface for interacting with them. This will not only improve the structure of your code but also make it easier to understand and maintain.
Excessively large aggregates
Imagine couriers that have orders, orders that contain goods, and positions that contain something else. The chain can go on forever.
Retrieving huge amounts of data from the database to perform small updates is extremely inefficient. But splitting aggregates down to one entity each, as is sometimes advised, is more like an engineering disaster. It's important to find a balance.
To do this, analyze business processes, ask experts questions, and look for places where eventual consistency between entities is acceptable – this may indicate weak coupling. If strong (ACID) consistency is not critical for operations between entities and such interactions are rare, that also suggests weak coupling.
For example, how often do you need to update order documentation when working with couriers? Probably not very often. So why drag documents into the courier aggregate?
For these rare interactions, messaging works best. When performing a business transaction, simply add a new message to the messages array inside the aggregate:
crashOrder(orderId: string) {
  const order = this.orders.find((el) => el.Id === orderId);
  order.changeStatus(false);
  this.messages.push(
    new OrderCrashedEvent({
      aggregateId: this.id,
      payload: {
        orderId: order.Id,
      },
    }),
  );
}
On the repository side, you can pull the accumulated messages from the entity and send them to a message broker (or first to the database and then to the broker, if you use the transactional outbox pattern):
async saveCurier(curier: CurierEntity): Promise<CurierEntity> {
  const curierORM = CurierMapper.mapToORM(curier);
  const outboxORM = curier.pullMessages();
  const crOrm = await this.dataSource.transaction(
    async (transactionalEntityManager) => {
      await transactionalEntityManager.save(outboxORM);
      return await transactionalEntityManager.save(curierORM);
    },
  );
  return CurierMapper.mapToDomain(crOrm);
}
However, nothing prevents you from implementing messaging differently. There are no universal solutions; the main thing is to understand the principles and motives behind the decisions.
Try to avoid situations where you need to update multiple aggregates in a single ACID transaction. If this happens frequently, reconsider your aggregate boundaries – perhaps you drew them incorrectly.
Trilemma
When using the domain model pattern, we inevitably face a trilemma: it is impossible to simultaneously satisfy three key attributes – domain model completeness, domain model purity, and performance. You have to choose two out of three.
Let's consider the problem of changing a user's email address with a preliminary check of its uniqueness. How do we solve it properly?
Keeping the domain model complete and pure while sacrificing performance. In this case, we implement the email uniqueness check directly in the domain model. However, this approach requires loading all users for the check, which will negatively affect system performance.
export class UserService {
  async changeEmail(id: string, email: string) {
    const user = await this.repository.findById(id);
    const allUsers = await this.repository.findAll();
    user.changeEmail(email, allUsers);
    await this.repository.save(user);
  }
}

export class User {
  email: string;

  changeEmail(email: string, allUsers: User[]) {
    const userWithEmail = allUsers.find((u) => u.email === email);
    if (userWithEmail) {
      throw new Error('Email is already in use');
    }
    this.email = email;
  }
}
Maintaining the performance and completeness of the model at the expense of purity. You can inject an infrastructure dependency (the repository) directly into the model to check email uniqueness. While this option provides good performance, it breaks the purity of the model by mixing business logic with infrastructure details. The code may look attractive, but it becomes harder to maintain and evolve.
export class UserService {
  async changeEmail(id: string, email: string) {
    const user = await this.repository.findById(id);
    await user.changeEmail(email, this.repository);
    await this.repository.save(user);
  }
}

export class User {
  email: string;

  async changeEmail(email: string, repository) {
    const userWithEmail = await repository.findByEmail(email);
    if (userWithEmail) {
      throw new Error('Email is already in use');
    }
    this.email = email;
  }
}
Maintaining model performance and purity while sacrificing completeness. Here we split decision-making between the domain and service layers: the uniqueness check is implemented at the service level, which preserves model purity and good performance. This approach requires clearly defined decision points and interactions between layers, but it is generally the most practical option for most applications.
export class UserService {
  async changeEmail(id: string, email: string) {
    const userWithEmail = await this.repository.findByEmail(email);
    if (userWithEmail) {
      throw new Error('Email is already in use');
    }
    const user = await this.repository.findById(id);
    user.changeEmail(email);
    await this.repository.save(user);
  }
}

export class User {
  email: string;

  changeEmail(email: string) {
    this.email = email;
  }
}
Thus, in real projects one has to find a balance between these three attributes. The choice of approach depends on system priorities and architectural requirements.
Value Objects
In Domain-Driven Design (DDD), Value Objects are a concept that adds value by emphasizing the essential characteristics of an object rather than its unique identity. These objects have no identifiers, but can encapsulate data and behavior associated with it. Value Objects are immutable and defined solely by their attributes, making them ideal for modeling concepts such as money, dates, or addresses.
export class AmountValueObject {
  public readonly amount: number;
  public readonly rate: number;

  constructor(attributes: Attributes) {
    this.amount = attributes.amount;
    this.rate = attributes.rate;
  }

  applyDiscount(discount: number): number {
    return this.amount * discount;
  }

  getAmountWithoutTax(): number {
    return (this.amount * (100 - this.rate)) / 100;
  }

  differenceAfterTax(): number {
    return this.amount - this.getAmountWithoutTax();
  }
}
Use Value Objects more often, especially when they can hide significant business logic, provide necessary encapsulation, and reduce unnecessary complexity in the Entity.
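For instance, a minimal Money value object (an illustrative sketch, not code from the article's repository) is immutable and compared by its attributes rather than by identity:

```typescript
class Money {
  constructor(
    public readonly amount: number,
    public readonly currency: string,
  ) {}

  // Operations return new instances; the object itself never mutates.
  add(other: Money): Money {
    if (other.currency !== this.currency) {
      throw new Error('Cannot add amounts in different currencies');
    }
    return new Money(this.amount + other.amount, this.currency);
  }

  // Value objects are equal when all their attributes are equal.
  equals(other: Money): boolean {
    return this.amount === other.amount && this.currency === other.currency;
  }
}
```

Two Money instances with the same amount and currency are interchangeable, which is precisely what distinguishes a value object from an entity with an identifier.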
Read Model
When designing aggregates, keep in mind that they are only needed for data modification operations. If you only need to read without changes, it is better to use the Read Model pattern.
export class ReportReadModel {
  readonly id: string;
  readonly isValid: boolean;
  readonly orderId: string;
  readonly reportNumber: number;
  readonly positions: ReportPositionReadModel[];

  constructor(attributes) {
    this.id = attributes.id;
    this.isValid = attributes.isValid;
    this.orderId = attributes.orderId;
    this.reportNumber = attributes.reportNumber;
    this.positions = attributes.positions;
  }
}
You may have many read models for different scenarios – don't be afraid to create them as needed. The main rule: never use them to make changes, since aggregates are responsible for all changes.
A read model will likely contain data from different modules. That is not a problem, since it is used solely for reading: when we talk about modularity and module boundaries, the key focus is on data-modification operations.
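As a sketch of how such a read model can be populated (the `runQuery` helper and the table layout are assumptions for illustration), a query service may join data across modules and map the rows straight into read models, bypassing aggregates entirely:

```typescript
class ReportReadModel {
  constructor(
    readonly id: string,
    readonly orderId: string,
    readonly courierName: string, // joined in from another module
  ) {}
}

// Hypothetical row shape returned by the database driver.
interface ReportRow {
  id: string;
  order_id: string;
  courier_name: string;
}

class ReportQueryService {
  // `runQuery` stands in for whatever raw-query helper your driver provides.
  constructor(private runQuery: (sql: string) => Promise<ReportRow[]>) {}

  async findReports(): Promise<ReportReadModel[]> {
    const rows = await this.runQuery(
      `SELECT r.id, r.order_id, c.name AS courier_name
       FROM reports r JOIN couriers c ON c.id = r.courier_id`,
    );
    return rows.map(
      (r) => new ReportReadModel(r.id, r.order_id, r.courier_name),
    );
  }
}
```

Since nothing here mutates state, no invariants need protecting, and the join across module boundaries is harmless.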
A brief summary
So, having walked through the main patterns and applied the tactical ones, the target picture of our system looks like this.
For the Warehouse context:
OrderManagement (Core) – Domain Model
Location (Supporting) – Active Record
The Accounting context consists of:
Reports (Core) – Domain Model
Verification (Supporting) – Active Record
The Delivery context is represented by three subdomains:
Core.Board (Core) – Domain Model
Core.Couriers (Core) – Domain Model
Supporting.Tracking (Supporting) – Transaction Script
You can look at all the code for our fictitious domain on GitHub (TypeScript, Golang). Don't forget about strategic patterns – give them your attention first of all. Use tactical patterns where they will actually help rather than add unnecessary headaches.