There are many cases where starting a new project as a monolith makes sense, especially in very lean projects where the requirements and the product are not entirely clear from the beginning. In such a project, the domain and domain models shift and change a lot as the application pivots and the requirements evolve. As the project and product mature, the domain hopefully takes shape and settles, becoming more stable. By then, some parts of the domain will be more active than others.
This is the stage where microservices bring advantages, allowing the development teams to focus on smaller areas that are more manageable and easier to handle, test, and deploy. But this also adds a bit of overhead, as all these different domains require effort to integrate and automate. At the beginning of a project, almost all areas of the application evolve, requiring continuous integration and deployment, and at that moment there is no real benefit in splitting them. Considering that the requirements might be fragile and, as a result, the domain model of the application quite volatile, microservices only add to the overhead, as some may disappear or shift considerably.
However, once the application settles, how do you move it from a monolith to a microservice-based architecture? How do you partition the application? How do you ensure that those partitions are clean and do not end up in a dependency nightmare? Where do you start to carve out functionality, and how do you ensure that, at each step, you still have a product that you can ship and that your users can use?
Some of the concepts and patterns introduced by Domain Driven Design (DDD) can be very helpful to answer these questions.
Eric Evans introduced Domain Driven Design (DDD) in his book of the same name, where he presented patterns and principles to address the complexity of developing business applications. Although DDD predates microservices, it appeared in the context of service-oriented architecture (SOA), where the clean partitioning of services is a major factor. From this perspective, microservices are just a different implementation of the core concepts of SOA, one that successfully addresses some of the pitfalls of the over-regulated and over-engineered ESB and WS-* world.
A microservice requires a clean model with little or no dependencies on other services, whereas the models in a monolith are already established. Good modeling practices exist to minimize coupling between modules or packages, yet the constraints of a monolith are not as drastic as those required by a microservice.
For example, a common problem occurs when a physical entity in the domain is modeled as a single concept in the application and is used in different business contexts. Classic examples are the concepts of patient or product. Take the concept of product: often, a supermodel is constructed that tries to capture all aspects of the physical product, from stock, which is a concern of the inventory context, to pricing, which is the responsibility of the sales context, to purchase orders, which belong to the procurement department, and so on.
Figure 1 Product is a giant concept being used in multiple business contexts
As a result, we end up with a supermodel that is shared by many contexts. If we want to transform any context (completely or partly) into a microservice, this will have an impact on the other business contexts because of the shared model.
The best approach to decoupling would be a separate model for each business context, containing a representation of the product that satisfies only the needs of its particular context. For somebody in sales, the value of the product lies in its category, characteristics, and image, while a colleague in procurement defines the product by delivery times, the number of products in a batch, the price per supplier, etc.
Each such different concept within the business context has its own meaning, even though it refers to the same physical product.
Therefore, one way to split the monolith into autonomous services is to partition the services along business contexts, as these are often self-contained and will have the fewest dependencies.
Figure 2 Each business context has its own, specific model of the product
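As a minimal sketch of such a split (the class and field names below are invented for illustration, not taken from any real codebase), each context keeps only the slice of the product it cares about:

```python
from dataclasses import dataclass

# Sales context: the product as the sales team sees it.
@dataclass(frozen=True)
class SalesProduct:
    sku: str
    category: str
    characteristics: dict
    image_url: str

# Procurement context: the same physical product, described only by
# what matters to the procurement department.
@dataclass(frozen=True)
class ProcurementProduct:
    sku: str
    delivery_time_days: int
    units_per_batch: int
    price_per_supplier: dict  # supplier id -> unit price
```

Both classes refer to the same physical product through its SKU, but they share no code, so a change in one context cannot ripple into the other.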
The resulting models will be decoupled and smaller, making them easier to maintain.
This is precisely the concept of the bounded context, a central pattern in Domain Driven Design.
A bounded context has, as its name suggests, a boundary: each model has a clear and unequivocal meaning. In a bounded context, you will have models that only make sense in that context, such as the concept of a "lead" (a potential sales contact), which only makes sense in the marketing context.
You can also have models that have entirely different meanings in different contexts. For example, the "price" associated with a product in the sales context has a different meaning than the "price" in the procurement context.
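To make this concrete, here is a short, hypothetical sketch in which the two namespace classes stand in for separate modules or services; the fields and methods are invented. The same word, "price", names two structurally different concepts:

```python
from dataclasses import dataclass

class sales:
    # In the sales context, "price" is what the customer is charged.
    @dataclass(frozen=True)
    class Price:
        list_amount: float
        discount_pct: float

        def charged(self) -> float:
            return self.list_amount * (1 - self.discount_pct / 100)

class procurement:
    # In the procurement context, "price" is what a supplier charges us.
    @dataclass(frozen=True)
    class Price:
        supplier_id: str
        unit_cost: float
        minimum_order: int
```

Because each context owns its own `Price`, neither has to carry fields that only matter to the other.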
In the Domain Driven Design definition, a bounded context owns the vertical slice of functionality that runs from the presentation layer, through the business logic layer, down to the data storage of the business context.
Figure 3 Each business context becomes a microservice
Here, we can see the architectural similarities with microservices. The maintenance benefits that the microservice architecture provides now apply to the models inside a bounded context, models that have a high probability of staying contained within that boundary. Moreover, there is no restriction on the architecture inside a bounded context. In Figure 3 we used a layered architecture inside the bounded contexts, but you can use whatever architecture fits each bounded context best: layered, CQRS, microkernel, and so on.
Even if, in our figure, it seems that there are no relationships between bounded contexts, most of the time such relationships exist. You cannot have a sales process for a product without an inventory check. Therefore, there are dependencies among microservices, and these dependencies are quite important. Two other concepts, well defined in Domain Driven Design, support us here: the anti-corruption layer and the context map.
The anti-corruption layer is a translation layer that acts as border patrol for the bounded context. It ensures that, when there are integration needs with legacy or external systems, the context's models are not influenced or distorted by the external models. This becomes extremely important when starting from a monolith, because the models in the monolith are already established and will rarely fit the new models of the microservice-based bounded contexts.
However, there can be many types of integration and collaboration among systems, and DDD defines several patterns that deal with these:
Shared kernel - a shared model is developed for use in multiple contexts, like a library of classes agreed upon by the teams working on the different systems or contexts, used when consuming or producing data from/to other contexts.
Open host service - an external system is used by many contexts, and writing an anti-corruption translation layer for each of them would cause overhead and duplication. Instead, you define a clear contract and expose it as an open service, hence the name open host service.
Upstream/downstream - there is a clear relationship between two contexts: one consumes services (downstream) and the other provides them (upstream).
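The last two patterns can be sketched together (all names below are illustrative): the upstream inventory context publishes one well-defined contract as an open host service, and the downstream sales context translates it into its own model through a small anti-corruption layer.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Part of the published contract of the upstream inventory context.
@dataclass(frozen=True)
class StockLevel:
    sku: str
    units_on_hand: int

# Open host service: one contract for all consumers, instead of a
# separate translation layer per consumer.
class InventoryHostService(ABC):
    @abstractmethod
    def stock_level(self, sku: str) -> StockLevel: ...

# Downstream sales-context model.
@dataclass(frozen=True)
class Availability:
    sku: str
    can_sell: bool

# Tiny anti-corruption translator: the sales context never lets the
# upstream model leak past this function.
def to_sales_model(level: StockLevel) -> Availability:
    return Availability(sku=level.sku, can_sell=level.units_on_hand > 0)
```

The downstream context depends only on the published contract, so the upstream team can evolve its internals freely as long as the contract holds.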
As different microservices / bounded contexts work together, it is crucial to clearly document the relationships among them. A context map in DDD is responsible for capturing the technical and organizational relationships between bounded contexts.
Figure 4 Application context map
It is important to know how the DDD practices and patterns can help, but how do we start? Given the difference between the clean models required by the bounded contexts and those provided by the monolith, a rewrite would bring high risks and destabilize the current project (by introducing many bugs and problems). In the context of legacy application migration, which is very similar to our case, Eric Evans, the initiator of DDD, recommends not performing a substantial rewrite of the monolith but proceeding in small steps: an incremental approach in which we form small context bubbles that we can then expand.
The bubble context strategy, as described by Evans, suggests creating a small bounded context that is established through an anti-corruption layer. To get started, we need to choose an important but small business problem and a small team that is experienced in DDD practices and intimately familiar with the codebase and the business domain. The bubble isolates the work, so that the team can evolve the model unconstrained by the models already present in the monolith.
The new functionality can be modeled as desired and integrated into the monolith using the existing database. Pursuing this idea further, we do not create a new database for the new context; instead, we query the monolith's database and rearrange the data into the new models. DDD practices use the repository pattern to abstract the actual data access operations and, as such, we can implement an anti-corruption layer behind the data repository.
Any time an object is needed in the bubble, it is requested through the repository. The repository implementation, in the anti-corruption layer, coordinates the steps needed to produce the reply. First, it invokes legacy functions and/or queries the legacy database(s). It then passes the returned data to the translators, which rearrange the information into the objects used in the bubble. Finally, the anti-corruption layer returns those objects through the repository interface. The repository's contract is fulfilled without the repository holding any data itself.
Figure 5 Establishing a bubble context
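The repository-based anti-corruption layer can be sketched as follows (a hypothetical Python example; the legacy rows, column names, and classes are invented to illustrate the flow: repository call, legacy query, translation, bubble model):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Bubble-context model: only the fields this context cares about.
@dataclass(frozen=True)
class Product:
    sku: str
    name: str
    unit_price: float

# Repository contract as seen from inside the bubble.
class ProductRepository(ABC):
    @abstractmethod
    def by_sku(self, sku: str) -> Product: ...

# Stand-in for the monolith's database: wide, denormalized rows.
LEGACY_ROWS = {
    "p-42": {"SKU": "p-42", "PROD_NAME": "Widget", "PRICE_CENTS": 1999,
             "STOCK": 7, "SUPPLIER": "ACME"},  # extra columns the bubble ignores
}

def translate(row: dict) -> Product:
    # Translator: rearranges legacy data into the bubble's model.
    return Product(sku=row["SKU"], name=row["PROD_NAME"],
                   unit_price=row["PRICE_CENTS"] / 100)

class AntiCorruptionProductRepository(ProductRepository):
    # Fulfills the repository contract without holding any data itself:
    # it queries the legacy store and passes the rows through the translator.
    def by_sku(self, sku: str) -> Product:
        return translate(LEGACY_ROWS[sku])
```

The bubble only ever sees `Product` and `ProductRepository`; the legacy column names and extra fields never cross the boundary.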
Even though the bubble context will be clean, it does not have all the benefits of a proper microservice, as it still depends on the monolith for data and services. The deployment and scalability of the resulting bubble context are not yet where they should be. To take it to the next level, we must turn the bubble context into an autonomous context with its own storage and proper isolation. The approach depends very much on the specifics of the monolith's models: either we manage to rip all the new context's functionality out of the monolith and move it into the new context, or we refactor the monolith to consume that functionality from the newly established bubble context, so that everything covered by the bubble context resides in the bubble context.
Figure 6 Autonomous bubble context
In this refactoring process, the Mikado method can help a great deal.
The Mikado method (which takes its name from a European pick-up sticks game) is a structured way to make significant changes to complex code in a safe manner. How many times do we start to refactor, only to find ourselves many hours later with so many changes that we no longer feel confident the system will work (because, let's face it, who has 100% unit test coverage on a monolith that might be legacy code as well)?
The Mikado method is based on four basic steps:
Set a goal. Think about what you want to achieve, such as the starting point and endpoint of a change, or the success criteria. In our product-based example, the goal could be to replace a couple of fields of the super product model in the monolith with fields from the new bubble context model.
Experiment. Start making the changes you desire to see which parts of the system break. Whatever breaks gives you feedback on what your next goal should be.
Visualize. Write down the goal and the prerequisites necessary to achieve it.
Undo. Revert the breaking changes, so that the system stays in a working state, and tackle the prerequisites first. At some point, a change will no longer break the system, and then you can unwind the visualized chain of changes in reverse order.
Figure 7 Mikado-based refactoring visualization
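The bookkeeping behind these steps can be sketched in a few lines (an illustrative toy, not a real Mikado tool; the goal names echo our product example): each goal records the prerequisites discovered while experimenting, and the unwind walks the chain in reverse, implementing prerequisites before the goals that need them.

```python
# Mikado graph: goal -> prerequisites discovered while experimenting.
mikado_graph = {
    "replace product fields with bubble model": [
        "move pricing to bubble",
        "move stock check to bubble",
    ],
    "move pricing to bubble": ["add bubble price repository"],
    "move stock check to bubble": [],
    "add bubble price repository": [],
}

def unwind(goal, graph, done=None):
    """Return goals in the order they can safely be implemented:
    prerequisites first, the original goal last."""
    if done is None:
        done = []
    for prereq in graph[goal]:
        unwind(prereq, graph, done)
    if goal not in done:
        done.append(goal)
    return done

order = unwind("replace product fields with bubble model", mikado_graph)
```

The leaves of the graph are the changes that no longer break anything, so they are applied first; the original goal comes last.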
By using this approach, any major refactoring becomes more manageable and, thus, safer.
Many patterns and practices in Domain Driven Design can have a great impact on the quality of the application and on the way we communicate. The importance of these patterns and practices grows as software products are expected to adapt more frequently to a greater diversity of demands. Combining DDD with newer architectures that have evolved from web-scale projects, and with Agile practices like refactoring, provides the toolset to propel lean projects that can meet the demands and challenges of today and tomorrow.
For more insight, I can highly recommend the following books:
"Patterns, Principles, and Practices of Domain-Driven Design" by Scott Millett and Nick Tune