Dealing with complexity

Before writing about complexity, I tried to find some fancy, striking definition of the word itself, but that turned out to be a complex task on its own. Merriam-Webster defines complexity as "the quality or state of being complex", a definition that is rather obvious and might even sound silly. Therefore, we need to dive a bit deeper into this subject and understand more about complexity.

In software, the idea of complexity is not much different from that in the real world. Most software is written to deal with real-world problems. Those problems might sound simple but be intrinsically complex, or even wicked. Without a doubt, the complexity of the problem space will be reflected in any software that tries to solve such a problem. Realizing what kind of complexity we are dealing with when creating software thus becomes very important.

Types of complexity

In 1986, the Turing Award winner Fred Brooks wrote a paper called No Silver Bullet – Essence and Accidents of Software Engineering, in which he made a distinction between two types of complexity: essential and accidental. Essential complexity comes from the domain, from the problem itself, and it cannot be removed without decreasing the scope of the problem. In contrast, accidental complexity is brought to the solution by the solution itself: a framework, a database, or some other piece of infrastructure, with its various kinds of optimization and integration.

Brooks argued that the level of accidental complexity decreased substantially as the software industry matured. High-level programming languages and efficient tooling give programmers more time to work on business problems. However, as we can see today, more than 30 years later, the industry still struggles with accidental complexity. Indeed, we have power tools in our hands, but most of them come at the cost of spending time learning the tool itself. New JavaScript frameworks appear every year, and each of them is different, so before writing anything useful we need to learn how to be efficient with the framework of choice. I wrote some JavaScript code many years ago, and I saw Angular as a blessing until I realized that I spent more time fighting with it than writing anything meaningful. Or take containers, which promised us an easy way to host our applications in isolation, without all the hassle of physical or virtual machines. But then we needed an orchestrator; we got quite a few and spent time learning to work with them, until Kubernetes came to rule them all, and now we spend more time writing YAML files than actual code. We will discuss some possible reasons for this phenomenon in the next section.

You have probably noticed that essential complexity has a strong relation to the problem space, while accidental complexity leans towards the solution space. However, we often seem to get problem statements that are more complex than the problems themselves. Usually, this happens because problems get mixed with solutions, as we discussed earlier, or due to a lack of understanding.

Gojko Adžić, a software delivery consultant and the author of several influential books, such as Specification by Example and Impact Mapping, gives this example in his workshop:

"A software-as-a-service company got a feature request to provide a particular report in real time, which previously was executed once a month on schedule. After a few months of development, salespeople tried to get an estimated delivery date. The development department then reported that the feature would take at least six more months to deliver and the total cost would be around £1 million. It was because the data source for this report is in a transactional database and running it in real time would mean significant performance degradation, so additional measures such as data replication, geographical distribution, and sharding were required.

The company then decided to analyze the actual need of the customer who had requested the feature. It turned out that the customer wanted to perform the same operations as before, but weekly instead of monthly. When asked about the desired outcome of the whole feature, the customer said that running the same report batched once a week would solve the problem. Rescheduling the database job was by far an easier operation than redesigning the whole system, while the impact for the end customer was the same."

This example clearly shows that not understanding the problem can lead to severe consequences. We as developers love principles like DRY. We seek abstractions that will make our code more elegant and concise. However, often that is entirely unnecessary. Sometimes we fall into the trap of using some tool or framework that promises to solve all the issues in the world, easily. Again, that never comes without a cost. As a .NET developer, I can clearly see this when I look at the community's current obsession with dependency injection.

True enough, Microsoft finally made a DI container that makes sense, but when I see it being used in a small console app just to initialize the logger, I get upset. Sometimes, more code is written just to satisfy the tool, the framework, or the environment than code that delivers actual value. What seemed to be essential complexity in this example turned out to be waste:

Complexity growth over time

The preceding graph shows that, as the complexity of the system keeps growing, the essential complexity is pushed down and the accidental complexity takes over. You might doubt that accidental complexity keeps growing over time while the desired functionality almost flattens out. How can this happen? Surely we can't spend our time only creating more accidental complexity? When systems grow larger, a lot of effort is required to make the system work as a whole and to manage the large data models that such systems tend to have. Supportive code grows, and a lot of effort is spent keeping the system running. We bring in caches, optimize queries, split and merge databases; the list goes on. In the end, we might actually decide to reduce the scope of the desired functionality just to keep the system running without too many glitches.
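Coming back to the small console app example above, the contrast is easy to see in code. What follows is a minimal sketch of my own, not code from the book; it assumes the standard Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Logging.Console packages, and the names are purely illustrative. The first half wires up a full service container just to obtain a logger; the second half gets an equivalent logger with no container at all.

    // Hypothetical small tool that only needs to log a message.
    using Microsoft.Extensions.DependencyInjection;
    using Microsoft.Extensions.Logging;

    public static class Program
    {
        public static void Main()
        {
            // Accidental complexity: a service collection is built, logging is
            // registered, and a provider is constructed just to resolve one logger.
            using var provider = new ServiceCollection()
                .AddLogging(logging => logging.AddConsole())
                .BuildServiceProvider();
            var logger = provider.GetRequiredService<ILogger<object>>();
            logger.LogInformation("Doing the actual work...");

            // The same outcome without a container: create the logger directly.
            using var loggerFactory = LoggerFactory.Create(logging => logging.AddConsole());
            var directLogger = loggerFactory.CreateLogger("Program");
            directLogger.LogInformation("Doing the actual work...");
        }
    }

Neither variant is wrong in a large application, but in a tiny tool the container is pure ceremony: code written to satisfy the framework rather than to deliver value.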

DDD helps you focus on solving complex domain problems and concentrate on the essential complexity. For sure, playing with a fancy new front-end tool or a cloud document database is fun. But without understanding what problem we are trying to solve, it all might be just waste. It is much more valuable to any business to get something useful first and try it out than to get a perfect piece of state-of-the-art software that misses the point entirely. To do this, DDD offers several useful techniques for managing complexity by splitting the system into smaller parts and making these parts focus on solving a set of related problems. These techniques are described later in this book.

The rule of thumb when dealing with complexity is this: embrace the essential, or as we might call it, domain complexity, and eliminate or decrease the accidental complexity. Very often, accidental complexity is caused by over-engineering, so your goal as a developer is not to create too much of it.

Categorizing complexity

When dealing with problems, we don't always know whether these problems are complex, and if they are, how complex. Is there a tool for measuring complexity? If there is, it would be beneficial to measure, or at least categorize, a problem's complexity before starting to solve it. Such a measurement would help to regulate the solution's complexity as well, since complex problems demand complex solutions, with rare exceptions to this rule. If you disagree, bear with me; we will get deeper into this topic in the following section.

In 2007, Dave Snowden and Mary Boone published a paper called A Leader's Framework for Decision Making in the Harvard Business Review. This paper won the Outstanding Practitioner-Oriented Publication in OB award from the Academy of Management's Organizational Behavior division. What is so unique about it, and which framework does it describe?

The framework is Cynefin. This Welsh word means something like habitat, accustomed, or familiar. Snowden started working on the framework back in 1999, when he was at IBM. The work was so valuable that IBM established the Cynefin Center for Organizational Complexity, with Dave Snowden as its founder and director.

Cynefin divides all problems into five categories, or complexity domains. By describing the properties of the problems that fall into each domain, it provides a sense of place for any given problem. Once a problem is categorized into one of the domains, Cynefin also offers some practical approaches for dealing with that kind of problem:

Cynefin framework, image by Dave Snowden

These five realms have specific characteristics, and the framework provides attributes both for identifying which domain your problem belongs to and for deciding how the problem needs to be addressed.

The first domain is Simple, or Obvious. Here, you have problems that can be described as known knowns, where best practices and an established set of rules are available, and there is a direct link between a cause and its consequence. The sequence of actions for this domain is sense-categorize-respond: establish facts (sense), identify processes and rules (categorize), and execute them (respond).

Snowden, however, warns about the tendency for people to wrongly classify problems as simple. He identifies three cases for this:

  • Oversimplification: This correlates with some of the cognitive biases described in the following section.
  • Entrained thinking: When people blindly use the skills and experiences they have obtained in the past and therefore become blinded to new ways of thinking.
  • Complacency: When things go well, people tend to relax and overestimate their ability to react to a changing world. The danger here is that a problem classified as simple can quickly escalate to the chaotic domain due to a failure to adequately assess the risks. Notice the shortcut from the Simple to the Chaotic domain at the bottom of the diagram, which is often missed by those who study the framework.

For this book, it is important to remember two main things:

  • If you identify the problem as obvious, you probably don't want to set up a complex solution and perhaps would even consider buying some off-the-shelf software to solve the problem, if any software is required at all.
  • Beware, however, of wrongly classifying more complex problems in this domain to avoid applying the wrong best practices instead of doing more thorough exploration and research.

The second domain is Complicated. Here, you find problems that require expertise and skill to find the relation between cause and effect, since there is no single answer to these problems. These are known unknowns. The sequence of actions in this realm is sense-analyze-respond. As we can see, analyze replaces categorize because there is no clear categorization of facts in this domain. Proper analysis needs to be done to identify which good practice to apply. Categorization can be done here too, but you need to go through more choices and analyze the consequences as well. That is where previous experience is necessary. Engineering problems are typically in this category, where a clearly understood problem requires a sophisticated technical solution.

In this realm, it makes perfect sense to assign qualified people to do some design up front and then perform the implementation. When a thorough analysis is done, the risk of implementation failure is low. Here, it makes sense to apply DDD patterns for both strategic and tactical design, and for the implementation, but you could probably avoid more advanced exploratory techniques such as EventStorming. You might also spend less time on knowledge crunching if the problem is thoroughly understood.

Complex is the third complexity domain in Cynefin. Here, we encounter something that no one has done before; making even a rough estimate is impossible. It is hard or impossible to predict the reaction to our action, and we can only find out about the impact we have made in retrospect. The sequence of actions in this domain is probe-sense-respond. There are no right answers here, and no practices to rely upon. Previous experience won't help either. These are unknown unknowns, and this is the place where all innovation happens. Here, we find our core domain, a concept we will get to later in the book.

The course of action for the complex realm is led by experiments and prototypes. There is very little sense in creating a big design up front since we have no idea how it will work and how the world will react to what we are doing. Work here needs to be done in small iterations with continuous and intensive feedback.

Advanced modeling and implementation techniques that are lean enough to respond to changes quickly are the perfect fit in this context. In particular, modeling using EventStorming and implementation using event-sourcing are very much at home in the complex domain. A thorough strategic design is necessary, but some tactical patterns of DDD can be safely ignored when doing spikes and prototypes, to save time. However, again, event-sourcing could be your best friend. Both EventStorming and event-sourcing are described later in the book.
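To give a rough sense of the event-sourcing idea right away, here is a minimal, hypothetical sketch of my own; the aggregate and event names are illustrative and are not the implementation developed later in the book. The point is simply that every state change is captured as an event, and the current state can be rebuilt by replaying those events.

    // Hypothetical example: an event-sourced aggregate in its simplest form.
    using System;
    using System.Collections.Generic;

    public record OrderPlaced(Guid OrderId, decimal Total);
    public record OrderPaid(Guid OrderId);

    public class Order
    {
        private readonly List<object> _changes = new();

        public Guid Id { get; private set; }
        public bool IsPaid { get; private set; }

        public static Order Place(Guid id, decimal total)
        {
            var order = new Order();
            order.Apply(new OrderPlaced(id, total));
            return order;
        }

        public void Pay() => Apply(new OrderPaid(Id));

        // Every decision is recorded as an event before it mutates state,
        // so the full history can be stored and replayed later.
        private void Apply(object @event)
        {
            When(@event);
            _changes.Add(@event);
        }

        private void When(object @event)
        {
            switch (@event)
            {
                case OrderPlaced e:
                    Id = e.OrderId;
                    break;
                case OrderPaid _:
                    IsPaid = true;
                    break;
            }
        }

        // The accumulated events are what gets persisted, not the object itself.
        public IReadOnlyList<object> Changes => _changes;
    }

How such events are persisted, and how read models are projected from them, is exactly what the later chapters explore.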

The fourth domain is Chaotic. This is where hellfire burns and the Earth spins faster than it should. No one wants to be here. The appropriate sequence of actions here is act-sense-respond, since there is no time for spikes. This is probably not the best place for DDD, since no time or budget for any sort of design is available at this stage.

Disorder is the fifth and final realm, right in the middle. It is there because, when you are in this realm, it is unclear which complexity context applies to the situation. The only way out of disorder is to try breaking the problem into smaller pieces that can then be categorized into the other four complexity contexts and dealt with accordingly.

This is only a brief overview of the categorization of complexity. There is more to it, and I hope it has made you curious enough to seek out examples, videos, and articles on the topic. That was the exact reason for bringing it in, so please feel free to stop reading now and explore the complexity topic some more. For this book, the most important outcome is that DDD can be applied almost everywhere, but it is of virtually no use in the obvious and chaotic domains. EventStorming as a design technique is useful for both complicated and complex domains, along with event-sourcing, which suits complex domains best.

Decision making and biases

The human brain processes a tremendous amount of information every single second. We do many things on autopilot, driven by instinct and habit; most of our daily routines are like this. Other areas of brain activity are thinking, learning, and decision-making. Such activities are performed significantly more slowly and require much more energy than the automatic operations.

Dual process theory in psychology suggests that these types of brain activity are indeed entirely different, and that there are two different processes for the two kinds of thinking. One is the implicit, automatic, unconscious process; the other is an explicit, conscious process. Unconscious processes are formed over a long time and are also very hard to change, because changing such a process would require developing a new habit, which is not an easy task. Conscious processes, however, can be altered through logical reasoning and education.

These processes, or systems, happily coexist in one brain but are rather different in the way they operate. Keith Stanovich and Richard West coined the names implicit system, or System 1, and explicit system, or System 2 (Individual differences in reasoning: implications for the rationality debate?, Behavioral and Brain Sciences, 2000). Daniel Kahneman, in his award-winning book Thinking, Fast and Slow (New York: Farrar, Straus and Giroux, 2011), assigned several attributes to each system:

System 1 and System 2

What does all this have to do with DDD? Well, the point here is more about how we make decisions. It is scientifically proven that all humans are biased, one way or another. As developers, we have our own ways of solving technical problems, and of course we're ready to pick a fight when the business challenges the way we write software to solve their problems. On the other hand, our customers are also biased towards their own ways. They were probably already earning money without our software, or they might have some other system, created twenty years ago by ancient COBOL programmers, that somehow works, so they just want a modern or even cloud-based version of the same thing. The point I am trying to make here is that we should strive to mitigate our own biases and be more open to what other people say, while still not falling into the trap of their biases. It was not without reason that Google's People Operations team created the Unconscious Bias @ Work workshop to help their colleagues become aware of their biases and to show them some methods for dealing with them.

The Cynefin complexity model requires us to at least categorize the complexity we are dealing with in our problem space (and sometimes in the solution space as well). But to assign the right category, we need to make a lot of decisions, and here it is often System 1 that responds, making assumptions based on our biases and past experiences, rather than System 2 being engaged to start reasoning and thinking. Of course, every one of us is familiar with a colleague exclaiming "yeah, that's easy!" before you can even finish describing the problem. We also often see people organizing endless meetings and conference calls to discuss something that we assume to be a straightforward decision.

Cognitive biases play a crucial role here. Some biases can profoundly influence decision-making, and this is definitely System 1 speaking. Here are some of the biases and heuristics that can affect your thinking about system design:

  • Choice-supportive bias: If you have chosen something, you will be positive about this choice even though it might have been proven to contain significant flaws. Typically, this happens when we come up with the first model and try to stick to it at all costs, even if it becomes evident that the model is not optimal and needs to be changed. Also, this bias can be observed when you choose a technology to use, such as a database or a framework.
  • Confirmation bias: Very similar to the previous one, confirmation bias makes you hear only the arguments that support your choice or position and ignore the arguments that contradict it, even when they show that your opinion is wrong.
  • Bandwagon effect: When the majority of people in the room agree on something, that something begins to make more sense to the minority that previously disagreed. Without engaging System 2, the opinion of the majority gets more credit without any objective reason. Remember that what the majority decides is not the best choice by default!
  • Overconfidence: Too often, people tend to be too optimistic about their abilities. This bias might cause them to take bigger risks and make wrong decisions that have no objective grounds but are based exclusively on their opinion. The most obvious example of this is the estimation process: people invariably underestimate rather than overestimate the time and effort they are going to spend on a problem.
  • Availability heuristic: The information we have is not always all the information we could get about a particular problem. People tend to base their decisions only on the information at hand, without even trying to get more details. This often leads to an oversimplification of the domain problem and an underestimation of its essential complexity. The heuristic can also trick us when we make technological decisions and choose something that has always worked for us, without analyzing the operational requirements, which might be much higher than our technology of choice can handle.

The importance of knowing how our decision-making process works is hard to overestimate. The books referenced in this section contain much more information about human behavior and the different factors that can negatively impact our cognitive abilities. We need to remember to turn on System 2 in order to make better decisions that are not based on emotions and biases.
