Object-Oriented Enterprise Architecting

Object-Orientation

Object-orientation, a.k.a. object-oriented thinking / modelling / programming, is a way to explore real-world complexity by turning real-world elements into interacting “objects”. A restaurant, as an example, can be modelled as a set of interacting objects representing concepts such as guests, tables, waiters, cooks, orders, bills, dishes, payments and so on. Object-oriented modelling can be conducted with rudimentary tools, as illustrated in figure 1.

Figure 1: Object Model

Object-orientation goes back to the late 1950s, but was first made available for broader usage with the Simula programming language in the mid 1960s. Simula inspired new object-oriented languages such as Smalltalk, C++, Java and many more.

Object-oriented programming led to the development of object-oriented analysis and design, and to more formal and powerful modelling techniques and languages such as UML (Unified Modelling Language), SysML, ArchiMate and AKM (Active Knowledge Modelling), to mention a few. It can also be argued that EventStorming, a collaborative workshop technique, is object-oriented.

On the dark side, object-orientation does not protect against poor practice. Poor design practice typically leads to tightly coupled systems, systems that become expensive or even impossible to adapt and enhance as technology and business change.

Design Patterns

Design patterns are one way to enhance design practice by providing tangible abstractions and concepts that help practitioners create healthier structures. Patterns originate from civil architecture and are attributed to Christopher Alexander and his work on pattern languages for buildings and cities.

The Gang of Four (GoF) 1995 book Design Patterns – Elements of Reusable Object-Oriented Software was the first book that introduced patterns for object-oriented software development. The book is still relevant and highly recommended as an introduction to patterns.

Another seminal book that embraces patterns is Eric Evans' 2003 book Domain-Driven Design: Tackling Complexity in the Heart of Software. Eric argues that business problems are inherently complex, ambiguous and often wicked, and that the development team therefore must spend more time exploring domain concepts, expressing and testing them in running code as fast as possible in order to learn.

Eric also argues for the importance of a common language in the team, the ubiquitous language, a language that embraces both domain and technical concepts. OrderRepository is an example of such language that makes equal sense to subject matter experts and developers alike. It enables conversations like: orders are stored in the order repository, and the order repository provides functions for creating and finding orders.
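As a minimal sketch of how the ubiquitous language can show up directly in code (the Order, OrderId and CustomerId types below are illustrative, not taken from any particular codebase):

    import java.util.List;
    import java.util.Optional;

    // A repository named in the ubiquitous language; all types are illustrative.
    record OrderId(String value) {}
    record CustomerId(String value) {}
    record Order(OrderId id, CustomerId customer) {}

    interface OrderRepository {
        void add(Order order);                        // "orders are stored in the order repository"
        Optional<Order> findById(OrderId id);         // "... and it provides functions for finding orders"
        List<Order> findByCustomer(CustomerId customer);
    }

The point is not the code itself but that a subject matter expert can read the interface aloud and recognise the conversation.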

EventStorming is framed around the discovery and capture of three design patterns: Commands, DomainEvents and Aggregates, as illustrated in figure 2.

Figure 2: Event Storming Artefacts

The model reads that the doctor diagnoses the patient and adds the diagnosis to the patient record. Then a treatment is prescribed, before the effects are checked at a later stage. EventStorming begins with capturing the events, and from them the commands and aggregates are derived.
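A minimal code sketch of the three artefacts for the consultation story could look like the following; the names mirror figure 2 and are illustrative rather than taken from any real system:

    import java.util.ArrayList;
    import java.util.List;

    // Command: the intent expressed by the doctor.
    record DiagnosePatient(String patientId, String diagnosis) {}

    // DomainEvent: the fact recorded once the command has been handled.
    record PatientDiagnosed(String patientId, String diagnosis) {}

    // Aggregate: the patient record that enforces consistency and emits the event.
    class PatientRecord {
        private final String patientId;
        private final List<String> diagnoses = new ArrayList<>();

        PatientRecord(String patientId) { this.patientId = patientId; }

        PatientDiagnosed handle(DiagnosePatient command) {
            diagnoses.add(command.diagnosis());   // the doctor adds the diagnosis to the record
            return new PatientDiagnosed(patientId, command.diagnosis());
        }
    }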

There are three software design patterns in particular that I think should be part of every enterprise architect's toolbox.

  • Command Query Responsibility Segregation (CQRS) separates read operations from write operations, enabling clearer thinking about what those operations mean in our architecture. Data Mesh is not possible without CQRS.
  • Data Mesh is an architectural approach to managing operational and analytical data as products. It is enabled by CQRS and comes with its own suppleness.
  • Event Sourcing is based on storing changes as independent events. Bank accounts work this way: each deposit and withdrawal is stored in a sequence of events, enabling the account to answer questions such as what the balance was on a particular date back in time (a minimal sketch follows this list). Event Sourcing should not be conflated with EventStorming, which is a workshop methodology.
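The sketch below illustrates the Event Sourcing idea using the bank account example; the types and method names are illustrative, not a production design:

    import java.time.LocalDate;
    import java.util.List;

    // Every change is stored as an immutable event.
    sealed interface AccountEvent permits Deposited, Withdrawn {
        LocalDate date();
        long amount();
    }
    record Deposited(LocalDate date, long amount) implements AccountEvent {}
    record Withdrawn(LocalDate date, long amount) implements AccountEvent {}

    class Account {
        private final List<AccountEvent> events;

        Account(List<AccountEvent> events) { this.events = events; }

        // The state at any past date is derived by replaying the stored events.
        long balanceAsOf(LocalDate date) {
            return events.stream()
                    .filter(e -> !e.date().isAfter(date))
                    .mapToLong(e -> e instanceof Deposited ? e.amount() : -e.amount())
                    .sum();
        }
    }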

The motivation behind the claim is that these patterns shape the architecture and the architect's thinking. By knowing them, the architect can make rational judgments with respect to their relevance in a given context.

Enterprise Architecting

Enterprise architects have used object-oriented concepts for years, taking advantage of modelling languages such as UML, ArchiMate and others. Architects with an interest in and understanding of agile methodologies might also have explored EventStorming workshop techniques.

Independent of technical tooling, enterprise architects face a growing challenge as enterprises digitalise their operating and business models. Take our healthcare model and scale it to a hospital, or include multiple hospitals, the primary health services and elder care. In such an environment you will have solutions from multiple vendors, solutions from different technical generations, and solutions that are outside your own control. Add to that that the sector is politically sensitive and full of conflicting interests.

The catch is that this is not unique to healthcare; it is the nature of the real world. Real-world business problems are most often wicked or complex. Scaling up forces architects to slice the problem space into useful modules that can be managed independently. Such slicing must be done with care, as coupling and lack of cohesion will haunt the chosen architecture.

The architectural crux is to get the slicing right. Sometimes this is easy, as system boundaries follow natural boundaries in the domain. But this is not always the case, and many enterprises have ended up with dysfunctional structures that lead to fragile and error-prone handovers. Handover of patient information in the healthcare sector is a good example. The catch is that handovers are everywhere, and their effects are loss of critical information and rework.

Strategic Design

Domain-Driven Design offers techniques that help us model the slicing of a large model: Bounded Contexts and Context Mapping. Applied to our healthcare example, we can create something like the diagram in figure 3.

Figure 3: Context map

Bounded Contexts are organised into a Context Map that captures the relationships between the various bounded contexts. Adding to the complexity, each bounded context might need access to different aspects of the patient record. For example, pharmacies have no need for the individual dietary constraints that are relevant for nursing homes and hospitals.
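As a small sketch of what "different aspects of the patient record" could mean in code, each bounded context can be given its own projection; the field names below are hypothetical:

    import java.util.List;

    // The full record lives in its own bounded context (fields are illustrative).
    record FullPatientRecord(String patientId, List<String> prescriptions,
                             List<String> dietaryConstraints, List<String> diagnoses) {}

    // The pharmacy context only needs prescriptions.
    record PharmacyView(String patientId, List<String> prescriptions) {}

    // The nursing home context needs the dietary constraints as well.
    record NursingHomeView(String patientId, List<String> prescriptions,
                           List<String> dietaryConstraints) {}

    class PatientRecordProjections {
        static PharmacyView forPharmacy(FullPatientRecord r) {
            return new PharmacyView(r.patientId(), r.prescriptions());
        }
        static NursingHomeView forNursingHome(FullPatientRecord r) {
            return new NursingHomeView(r.patientId(), r.prescriptions(), r.dietaryConstraints());
        }
    }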

Making this even more complex is the fact that all these contexts can be further decomposed, as in figure 4. Add to this that, in the case of an individual patient, specialists from different contexts need to collaborate to decide upon the path forward. It's normal for surgeons, radiologists and oncologists to discuss an x-ray image of a tumor. Such cross-discipline collaboration is critical for problem solving, and it is in these interactions that balanced solutions to hard problems are shaped.

Figure 4: Domain decomposition

Figure 5 presents two architectural alternatives: distributed and centralised. In the distributed architecture each bounded context is free to use whatever system it finds useful, as long as it is able to send and receive patient record update messages (events).

In the centralised architecture a new shared bounded context that manages the patient record has been introduced, and the “operational” contexts access the shared and centralised record management system. Which of these alternatives is “best” boils down to tradeoffs. Both come with strengths and weaknesses.
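In the distributed alternative, everything hinges on the shape of the patient record update events the contexts exchange. A minimal sketch of what such a standardised message could look like; the fields are hypothetical, and real healthcare standards such as HL7 FHIR are far richer:

    import java.time.Instant;

    // A standardised patient record update event exchanged between bounded contexts.
    record PatientRecordUpdated(
            String eventId,        // unique id so consumers can de-duplicate
            String patientId,      // which patient record changed
            String sourceContext,  // the bounded context that published the change
            String changeType,     // e.g. "DiagnosisAdded" or "TreatmentPrescribed"
            Instant occurredAt) {}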

Figure 5: Architectural Styles

What matters is how we choose to pursue implementation. The crux in distributed architectures boils down to message standardisation and the establishment of a transport mechanism. A centralised architecture can be realised in two principal ways:

  • By using an old-school integrated application with user interfaces and a shared database, and then forcing everybody to use the same solution. Integrated means tight coupling of the user interfaces and the underpinning data / domain model into something that is deployed as one chunk.
  • By developing a loosely coupled application or platform based on APIs that can be adapted to changing needs. Loose coupling means that the data management part, the record keeping, is separated from the end-user tools along the lines described here.

Making the wrong choice here is most likely catastrophic, but be aware that all alternatives come with strengths and weaknesses. To understand the alternatives' feasibility, a bottom-up tactical architecting endeavour is needed. Such an endeavour should take advantage of battle-proven patterns and design heuristics. In the end, a claims-based SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis might prove worth its weight in gold.

Tactical Design

Tactical design means digging into what users do and what information they need to do it, and developing the information backbone that shapes the sector's body of knowledge. Evaluation of implementation alternatives requires tactical-level design models, as the devil is in the details.

Tactical design is best explained using a practical example that takes advantage of the capabilities provided by the AKM (Active Knowledge Modelling) approach. What makes AKM different from other methods and tools is its dynamic meta-models. The example model in figure 6 exploits the IRTV (Information, Roles, Tasks and Views) meta-model. A deep dive into AKM modelling and meta-modelling will be addressed later.

The example builds on our healthcare case, and the purpose is to highlight the main modelling constructs, their usage and their contribution to the model as a tool for enlightened discussions, and future tradeoff analysis.

Healthcare can be thought of as stories about patients, diseases and treatments, and that is what we will try to demonstrate with our toy model. Take note of the fact that some Information datatypes and Views have suggested types that can be used to create a richer and more domain-specific language.

Be aware that diseases might have multiple treatments, and that a treatment can be applicable to more than one disease. This is, by the way, a good example of the “muddiness” of the real world, where everything is one way or the other entangled.

Figure 6: AKM Enterprise Architecture Model

The model in figure 6 captures two stories. The first story presents a patient consultation session where the GP diagnoses a patient and updates the patient's medical record. The second story shows how a researcher updates the treatment protocol.

At this point the sharp-eyed should be able to discover a pattern; for those who don't, please read Reinventing the Library before you start studying figure 7 below. Here the model from figure 6 is restructured and simplified so that the key points can be highlighted.

Figure 7: Simplified Enterprise Architecture with Bounded Contexts

Firstly, the sector's body of knowledge is structured around three concepts that are managed in a “library”. Such a library could of course be extended to include infrastructure components such as hospitals, care homes, and even staffing. It all depends on what questions the enterprise wants to answer and accumulate knowledge about.

Secondly, functional domains are designed by grouping related tasks into what DDD defines as bounded contexts. This design task should be guided by the key design heuristic: maximise cohesion, minimise coupling, while reflecting on what can be turned into independent deployables if the architecture is to take physical form as software applications.

Lastly, views are the key to loose coupling and are artefacts that need to be rigorously designed. Views are the providers of what AKM calls workspaces. The model above contains two types of views. The first type is used to separate roles from tasks within a bounded context. This view is typically visual and interactive in nature, as it is designed to support humans. For those familiar with multi-agent design, such a view could be seen as an agent's environment, as explained here.

The second type of view bridges between cohesive functional domains and the underpinning library. These views can also be used to create interaction between operational bounded contexts, as can be seen in the case of the Diseases View in figure 7.

Views would benefit from being designed according to the CQRS pattern, basically separating commands from queries as shown in figure 8. In addition to queries and commands, views can be the home for transformation, event processing and communication. In a software context, views expose domain-specific APIs, they represent bounded contexts, and they can be deployed independently as architectural quanta. Again, the sharp-eyed should see that views might be the key for those who want to think in terms of data mesh and data products. A data mesh boils down to transforming data so that it can be served to fit the consumers' needs.
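As a sketch of what a CQRS-shaped view could look like as a domain-specific API, loosely modelled on the Diseases View in figure 7 (all names are illustrative):

    import java.util.List;

    // Query side: read-only access, free to use denormalised projections.
    interface DiseasesQueries {
        List<String> treatmentsFor(String diseaseCode);
        List<String> diseasesTreatedBy(String treatmentCode);
    }

    // Command side: state-changing operations routed to the underpinning library.
    interface DiseasesCommands {
        void registerDisease(String diseaseCode, String description);
        void linkTreatment(String diseaseCode, String treatmentCode);
    }

    // The view as one architectural quantum: a domain-specific API that separates
    // commands from queries and can be deployed independently.
    interface DiseasesView extends DiseasesQueries, DiseasesCommands {}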

Figure 8: View Architecture

For those of you who are still here, a couple of words about knowledge and the theoretical framework that motivated this post: constructor theory.

Constructor theory

Constructor theory, or the science of can and can't, is a rather new theory in theoretical physics developed by David Deutsch and later Chiara Marletto at the University of Oxford. The essence of constructor theory is that physical laws can be expressed in terms of which transformations are possible and which are not. This implies that physics can be used to define concepts such as information and knowledge.

A constructor is a “machine” that can perform transformations repeatedly, and to do so it needs a “recipe”. Factories that create airplanes or cars are examples of constructors. It is the institutionalised knowledge in those entities that makes it possible to mass-produce samples over time with consistent quality.

If we now revisit figures 7 and 8, it should be obvious that what we have architected could be understood as constructors. And that should not come as a surprise, since constructor theory defines information using two counterfactuals: the possibility of copying and of flipping (changing state). Knowledge is defined as self-preserving information.

My advice to enterprise architects is to read The Science of Can and Can't and enjoy it as the perfect vacation companion.

Reinventing the library

Humans have collected, classified, copied, translated, and shared information about transactions and our environment since we first saw the light of day. We even invented a function to perform this important task, the library, with the Library of Alexandria as one of the most prominent examples from ancient times.

The implementation of the library has changed as a function of technological development while maintaining a stable architecture. The library is orthogonal to the society or enterprise it serves, as illustrated in the figure below.

The architectural stability can most likely be explained by the laws of physics. In 2012 David Deutsch published what is now called constructor theory, which uses counterfactuals to define which transformations are possible and which are not. According to the constructor theory of information, a physical system can carry information if the system can be set to any of at least two states (a flip operation) and each of those states can be copied.

This is exactly what the ancient libraries did. The library's state changed when new information arrived, allowing the information to be copied and shared. The library works equally well for clay tablets, parchments, papyrus rolls, paper, and computer storage. The only thing that changes as a function of technology is how fast a given transformation can be performed.
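A toy illustration of the two counterfactuals, independent of the storage technology; this is a sketch of the idea, not a claim about how constructor theory is formalised:

    // A medium can carry information if it can be set to at least two states (flip)
    // and any state it holds can be reproduced elsewhere (copy). The medium could be
    // a clay tablet, a papyrus roll or a disk block; only the speed differs.
    interface InformationMedium<S> {
        void set(S state);                        // flip: change the state
        S read();
        void copyTo(InformationMedium<S> other);  // copy: reproduce the state
    }

    class InMemoryMedium<S> implements InformationMedium<S> {
        private S state;
        public void set(S state) { this.state = state; }
        public S read() { return state; }
        public void copyTo(InformationMedium<S> other) { other.set(state); }
    }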

With the introduction of computers, the role of the library function changed, as many of its functions migrated into what we can call sector-specific applications and databases. In many ways we used computers to optimise sectors at the cost of cross-sector interoperability. I think there was a strong belief that technology would make the library redundant.

The effect is that cross-sector interaction becomes difficult. The situation has in reality worsened, as each sector has fragmented into specialised applications and databases. What was once an enterprise with five lines of business (sectors) might now be 200 specialised applications with very limited interoperability. This is what we can call reductionism on steroids, as illustrated in the figure below.

The only companies that have benefited from this development are those that provide application integration technology and services. The fragmentation was countered by what I like to call the integrated mastodons, which grew out of what once was a simple database that has been extended to cover new needs. Those might deserve their own blogpost, so we leave them for now.

Data platforms

In the mid-1990s the Internet business boom began: Amazon.com changed retail and Google changed search, as two examples. A decade later AWS provided data center services on demand, Facebook and social media were born, and in 2007 Apple launched the iPhone, changing computing and telephony forever.

Another decade down the road, around 2015, the digitalisation wave reached the heavy-industry enterprise space. One of the early insights was the importance of data and the value of making data available outside existing application silos, silos that had haunted the enterprise IT landscape for decades. By taking advantage of the Internet technologies serving big data and social media applications, the industrial data platform was born.

The data platform made it easier to create new applications by liberating data traditionally stored in existing application silos, as illustrated below. The sharp-minded should now see that what really took place was the reinvention of the library as a first-order citizen in the digital cityscape.

The OSDU™ Data Platform initiative was born out of this development, where one key driver was the understanding that a data platform for an industry must be standardised and that its development requires industry-wide collaboration.

Data platform generations

We tend to look at technology evolution as a linear process, but that is seldom the case. Most often the result of evolution can be seen as technological generations, where new generations come into being while the older generations still exist. This is also the case when it comes to data platforms.

Applied to data platforms, the following story can be told:

  • First-generation data platforms followed the data lake pattern. Here application data was denormalised and stored in an immutable data lake, enabling mining and big data operations.
  • Second-generation data platforms follow the data mesh pattern, managing data as products by adding governance.
  • Third-generation data platforms take advantage of both data lake and data mesh mechanisms, but what makes them different is their support for master-data-enabled product lifecycle management.

Master data is defined by the DAMA Data Management Body of Knowledge as the entities that provide context for business transactions. The best-known examples include customers, products and the various elements that define a business or domain.

Product lifecycle management models

Master data lifecycle management implies capturing how master data entities evolve over time as their counterparts in the real world change. To do so, a product model is required. The difference between a master data catalogue and a product model is subtle but essential.

A master data catalogue contextualises data with the help of metadata. A product model can also do that, but in addition it captures the critical relationships in the product structure as a whole and tracks how that structure evolves over time. Using the upstream oil and gas model below, the following tale can be told.

When a target (a pocket of hydrocarbons) is to be realised, a new wellbore must be made. When there are no constraints there can be a thousand possible realisations. As the constraints are tightened the number of options is reduced, and in the end the team lands on a preferred one, while keeping the best alternatives in stock in case something unforeseen happens. Let's say the selected well slot breaks and can't be used before it is repaired, a task that takes six months. Then the team can go back to the product model and look for alternatives.

Another product model property is that we can go back in time and look at how the world looked on a given day. In the early days of a field it is possible to see that there was an area where the seismic data looked so promising that exploration wells were drilled, leading to the reservoir that was developed, and so on. The product model is a time machine.
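A minimal sketch of the time machine idea: every relationship in the product structure is recorded together with the date it became known, so the model can be queried as of any day. The types are hypothetical and much simpler than a real product model:

    import java.time.LocalDate;
    import java.util.ArrayList;
    import java.util.List;

    // One timestamped assertion in the product structure,
    // e.g. "wellbore W-1 targets hydrocarbon pocket T-1".
    record Relation(String from, String relation, String to, LocalDate recordedOn) {}

    class ProductModel {
        private final List<Relation> relations = new ArrayList<>();

        void record(Relation relation) { relations.add(relation); }

        // The time machine: the product structure as it was known on a given day.
        List<Relation> asOf(LocalDate date) {
            return relations.stream()
                    .filter(r -> !r.recordedOn().isAfter(date))
                    .toList();
        }
    }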

Our example product model above is based on master data entities from upstream oil and gas, entities that are partly addressed by the OSDU™ Data Platform. There are two reasons for using the OSDU™ Data Platform as an example.

Firstly, I work on its development and have a reasonably good understanding of the upstream oil and gas industry. Secondly, the OSDU™ Data Platform is the closest thing I have seen to something that can evolve into a product-lifecycle-centric system. The required changes are more about how we think, as we have the Lego bricks in place.

Think of the OSDU™ Data Platform as a library of evolutionarily managed product models, not only as a data catalogue. Adapt the DDMS (Domain Data Management Services) to become workspaces that operate on selected aspects of the product models, not only the data. The resulting architecture is illustrated below.

Moving to other sectors, the same approach is applicable. A product model could be organised around patients, diseases and treatments, or retail stores and assortments for that matter. The crux is to make the defining master data of your industry the backbone of the evolutionary product model.

This story will be continued in a follow-up where the more subtle aspects will be explored. One thing that stands out is that this makes it easier to apply Domain-Driven Design patterns, as the library is a living model, not only static data items.

Hopefully, if you have reached this sentence, you have some new ideas to pursue.

Toward data-less (micro) services

Data gravity and the laws of scale

The motivation for data-less (micro) services is found in data gravity, the laws of scale and a dose of thermodynamics. Each of them is a mouthful in its own way, so let's begin.

Data gravity describes how data attracts data in the same way as celestial bodies attract each other. The more data there is, the stronger the pull. Data gravity has the power to transform the best-architected software systems into unmanageable balls of mud (data & code). What is less understood is data gravity's underpinning cause, which, I claim, can be traced to the universal laws of scale as outlined by Professor Geoffrey West in his book Scale: The Universal Laws of Life and Death in Organisms, Cities and Companies. For those who do not have the time to read the book, watch one of his many online talks.

In biology, each doubling of size requires only about 75% more energy, a roughly 25% efficiency gain. The effect is that an elephant burns fewer calories per kg than a human, who in turn burns fewer than a mouse, as illustrated below.

Metabolic rate as a function of body mass (plotted logarithmically)
Source: Scale

Cities follow the same pattern, but with an exponent of about 0.85 for infrastructure, a saving of roughly 15% per doubling: a big city has fewer gas stations per inhabitant than a small one. Another aspect is the super-linear scaling of innovation, wages, crime and the like that comes from social networks; these grow by an extra 15% per doubling in size.

According to Professor West, the scaling effects are caused by the fractal nature of the infrastructure that provides energy and removes waste. Think of the human circulatory system and the water, sewage, and gas pipes in cities. I am convinced the same laws apply to software system development, though with different and as yet unknown exponents.
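A back-of-the-envelope sketch of the two regimes; the exponents 0.75, 0.85 and 1.15 are the ones quoted above, and how they would translate to software is, as stated, unknown:

    // Power-law scaling: y = c * n^b. An exponent below 1 is sub-linear (economies
    // of scale), an exponent above 1 is super-linear (increasing returns).
    class ScalingLaws {
        static double scale(double c, double n, double b) {
            return c * Math.pow(n, b);
        }

        public static void main(String[] args) {
            // Doubling the size with a sub-linear exponent needs less than twice the resources...
            System.out.printf("Metabolism,     2x mass: %.2fx%n", scale(1, 2, 0.75)); // ~1.68x
            System.out.printf("Infrastructure, 2x size: %.2fx%n", scale(1, 2, 0.85)); // ~1.80x
            // ...while socio-economic outputs more than double.
            System.out.printf("Innovation,     2x size: %.2fx%n", scale(1, 2, 1.15)); // ~2.22x
        }
    }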

Last but not least, software development, like any activity that uses energy to create order, will cause disorder somewhere else. This is due to the second law of thermodynamics. This means that we need to carefully decide where we want order and how we can direct disorder to places where the harm is minimised.

Data-less services

Data-less services are the natural result of acknowledging the wisdom of the old saying that data ages like wine while software ages like fish. The essence is that software, meaning the code, the logic and the technology, deteriorates with time, while data can be curated into something more valuable over time.

Therefore it makes sense to keep code and data separated: basically, to separate fast-moving code from slow-moving data as a general design strategy. Another way of viewing this is to regard data as the infrastructure that scales sub-linearly, while the code follows the super-linear growth of innovation.

At first glance this might look like a contradiction in the context of the micro-service architectural style, which advocates small, independent, autonomous services. But when we acknowledge that one of the problems with micro-services is sacrificed data management, it might make sense.

It is also worth mentioning that the separation of code from data is at the heart of the rational agent model of artificial intelligence as outlined in Russell & Norvig's seminal book Artificial Intelligence: A Modern Approach, where an agent is anything that can perceive and act upon its environment using sensors and actuators.

A human agent has eyes, ears and other organs for sensing, and hands, legs and voice as actuators. A robotic agent uses cameras, radar or lidar for sensing and actuates tools using motors. A software agent receives files, network packets and keyboard input as its sensory inputs, and acts upon its environment by creating files, displaying information, sending information to other agents and so on.

The environment could be anything, from the universe to the stock market in Sydney, or a patient's prostate undergoing surgery. It can be a physical reality or a digital representation of it. The figure below shows an agent and its environment. An agent consists of logic and rules, and the environment consists of data.

Agent and Environment

The internal function of the agent is known as its perception-action cycle. This can be dumb, as in a thermostat, or highly sophisticated, as in a self-driving car. While agent research is about the implementation of the perception-action cycle, we choose to look at the environment and the tasks the agent needs to perform to produce its intended outcomes in that environment.

If the agent is a bank clerk and the environment a customer account, the agent needs to be able to make deposits and withdrawals and to produce account statements showing the balance. The environment needs to contain the customer and the account. Since the protocol between agent and environment is standardised by an API, agent instances can be replaced by something more sophisticated that can take advantage of a richer environment. The customer account represents a long-lived asset for the bank, and it can be extended to cover loans as well as funds.
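A minimal sketch of the clerk-and-account example, with the environment holding the data and the agent holding only logic; the names are illustrative:

    // Environment: the long-lived, data-holding customer account, exposed through a standardised API.
    interface AccountEnvironment {
        void deposit(long amount);
        void withdraw(long amount);
        long balance();
    }

    class CustomerAccount implements AccountEnvironment {
        private long balance;
        public void deposit(long amount) { balance += amount; }
        public void withdraw(long amount) { balance -= amount; }
        public long balance() { return balance; }
    }

    // Agent: a data-less service that carries only logic; it can be swapped for a more
    // sophisticated agent without touching the environment.
    class ClerkAgent {
        private final AccountEnvironment account;
        ClerkAgent(AccountEnvironment account) { this.account = account; }

        String produceStatement() {
            return "Balance: " + account.balance();
        }
    }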

This approach is also known as the Blackboard Pattern.

Conclusion

Since information systems are governed by data gravity and the laws of scale, they are hard to conquer: at any crossroads it is so much easier to extend something that exists than to build something new from scratch. Hard enforcement of physical boundaries using micro-services comes with caveats. One is distributed data management; another is that each micro-service will begin to grow in size as new features are needed, and will therefore require continuous shepherding and fire extinguishing as the entropy materialises.

The proposed approach is to address this using a data-less service architecture supported by a shared data foundation, in the same way as agents and environments use the blackboard pattern. This means implementing an architectural style that builds on the old saying that code ages like fish and data ages like wine.

This is, by the way, the pattern of the OSDU Data Platform, and it will be addressed in a later post.

Resurrection

Welcome back to what I hope will be a living and more regular blog on software architecture and design challenges. A lot has happened since my last blogpost in 2016, and I find that the importance of holistic design and architectural thinking has increased over the years…

Topics that will be covered moving forward include, but are not restricted to, micro-services, knowledge modelling, designing collaborative workspaces and, last but not least, the lessons learned from building the OSDU Data Platform.

My objective is to bring some of the lessons learnt from spending decades implementing software solutions to business problems to a new generation of software practitioners and thinkers. A lot has happened since I started studying computer science in 1980; at the same time we circle around the same problems, though with much more powerful tools, and at times I wonder if this has been for the good or the bad. Not that it's not good to solve demanding problems, but our methodologies and approaches do not develop fast enough to take advantage of the technology.

The first post will come in a week's time and address what's wrong with micro-services. It's a big mouthful, but as I see it an important one to address. The question on the table is: what are the benefits of making a distributed, networked solution to what in most cases are homogeneous problems, and more importantly, what kind of architecting can be applied to make it better?

Looking forward to seeing you again.

Microservices and the role of Domain-Driven Design

In our #SATURN15 talk From Monolith to Microservices we addressed the challenge of data-centric development, particularly when behaviour-rich domain models are needed.

One of our main points in the talk was that too many developers continued with their script-like programming style when they moved to object-oriented programming languages. Objects were treated as records, and developers seemed to have forgotten, or never learned, that object-oriented programming is all about capturing domain behaviour and knowledge.

After reading the introductory chapter of @VaughnVernon's book Implementing Domain-Driven Design this weekend, another aspect became evident: the negative influence of properties and property sheets, originally introduced by Microsoft's Visual Basic in 1991 and later copied by the JavaBeans specification. These innovations dumbed objects down to records and, even worse, trained developers to think this was the right way to design software using objects.
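The difference can be shown in a few lines of code; the Order example below is illustrative and not taken from the talk or the book:

    // Dumbed down to a record: the object only carries data, and the rules live somewhere else.
    class AnemicOrder {
        public String status;
        public long totalAmount;
    }

    // Behaviour-rich: the object captures the domain rule and keeps itself consistent.
    class Order {
        private String status = "OPEN";
        private long totalAmount;

        void addLine(long amount) {
            if (!status.equals("OPEN")) {
                throw new IllegalStateException("Lines can only be added to an open order");
            }
            totalAmount += amount;
        }

        void close() { status = "CLOSED"; }
        long totalAmount() { return totalAmount; }
    }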

For Microservices to survive, it is time to take object-oriented modelling back. Developers must learn that objects and object-oriented programming, supported by domain-driven design, provide the tooling and techniques required to build behaviour-rich software: software that not only captures data, but also captures domain behaviour and knowledge and makes it executable.

The claim is that Microservices without sufficient capture of rich domain behaviour and knowledge will not add sufficient business value. They will just end up as distributed balls of mud.