How have integration approaches evolved?

An iPaaS is the latest generation in a long line of integration solutions that have evolved over the years. The need to integrate applications has existed practically since the second computer system was developed and architects realized the first system offered data and functionality that would add value to the second. Let’s take a quick look at how integration approaches have evolved across the major innovations leading to this newest generation, the iPaaS. To help visualize the different approaches, consider the following simplified use case.

J&J Music Store Use Case

J&J Music is a business founded in 1970 that sells records direct from the publishers. It regularly receives large shipments from the publishers, and these records must be added to the inventory available for sale. The company developed an inventory system to keep track of the number of records carried in the store. Using this system, staff could increase the quantity on hand when new shipments of a record arrived and, as records left the shelves, update the system to reflect the new quantity. This was all done manually by the inventory management team; a different team handled sales.

The sales clerks in the store eventually realized they also needed a system to help keep up with all the orders being placed, so they built a Sales System to track orders and invoices. As the store grew, the clerks found they were often unsure whether a product was available for purchase. They had to log in to the Inventory system and check the quantity before completing a sales transaction. However, because they were not part of the inventory management team, they did not update the inventory quantity.

This worked for a while, but with multiple clerks serving multiple customers, the quantity in the inventory system became unreliable. The business then decided to integrate the two systems so that each clerk would see the same inventory totals. Moreover, the sales system would automatically decrease the inventory total for a product whenever a purchase was made. This would allow the inventory management team to reduce staff, as they would now only need to update the system when new product shipments arrived.

Point-to-Point

We learned in our geometry classes that the shortest distance between two points is a straight line. In the early days of systems development, the architectural straight line between systems was the most direct approach to integrating them. And despite advances in technology, many, if not most, organizations still have systems connected with point-to-point integration.

Thinking about the use case described previously, an architect designed a point-to-point solution to integrate the inventory system and the sales system. The design diagram, seen in Figure 1.1, shows the relationship between the Inventory System and the Sales System.

Figure 1.1 - Inventory and Sales Systems with example tables

As you can see in this figure, the connection from the inventory system to the sales system is a direct one. Developers for J&J Music wrote new code in the Sales system to connect directly to the Inventory system database. The developers needed the exact details of the inventory system’s connection requirements and protocols, as well as the specifics of how and where the product information was stored. You can see in the diagram that the table and fields were captured as well.
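
To make that coupling concrete, here is a minimal sketch of what such code might look like, assuming the Inventory system stores counts in a SQLite table named product with product_id and qty columns; the database path, table, and column names are all hypothetical, not taken from the book:

    import sqlite3

    # The Sales system must know the Inventory system's storage details:
    # the database location, the table name, and the column names.
    INVENTORY_DB = "inventory.db"  # hypothetical path to the Inventory database

    def record_sale(product_id: str, quantity_sold: int) -> None:
        """Complete a sale by writing directly into the Inventory database."""
        conn = sqlite3.connect(INVENTORY_DB)
        try:
            # Hard-coded table and column names: if the inventory team renames
            # or restructures them, this code breaks.
            conn.execute(
                "UPDATE product SET qty = qty - ? WHERE product_id = ?",
                (quantity_sold, product_id),
            )
            conn.commit()
        finally:
            conn.close()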

This integration approach begins simply enough but breaks down as more and more systems are added to the landscape, with each system requiring data from the others. Even without additional systems, the approach is fragile because it requires each system to know the details of every other system it connects with. If the inventory system changed any of those details, the integration would cease to work.

Let’s say the inventory team decided to normalize their database so that they now have a product master table and a product inventory table. With the quantity moved to the product inventory table, every system connecting to the inventory data must update its code for this one integration point to continue working. In Figure 1.1 this doesn’t seem like a big problem to the J&J Music architect because only one other system is affected. However, in a point-to-point integration approach, we must confront the N(N-1) rule. This formula, where N is the number of systems being integrated, gives the total number of connections needed. In Figure 1.1 this number is 2(2-1) = 2. Now let’s refer to Figure 1.2, which introduces a third system, the Accounting System.

Figure 1.2 - Addition of Accounting System and Connections Formula

As you can see in this new architecture diagram, adding an accounting system and integrating it with both the Inventory and Sales systems means we have six connections to consider: 3(3-1) = 6. Adding a fourth system would take the total to 12, and so on. The complexity of making changes to systems integrated with a point-to-point strategy grows quadratically with each new system added to the landscape. If we consider an organization with hundreds of applications, all integrated using this pattern, it’s easy to understand why the architecture term “Big Ball of Mud” was coined. Figure 1.3 shows a not-unreasonable 14 systems connected point to point.

Figure 1.3 - Big Ball of Mud Architecture

With 14 systems in Figure 1.3, the number of connections to manage is 182!
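
The arithmetic is easy to verify with a few lines of code; this is just the N(N-1) formula from above:

    def point_to_point_connections(n: int) -> int:
        """Total connections among n fully integrated systems: N(N-1)."""
        return n * (n - 1)

    for n in (2, 3, 4, 14):
        print(f"{n} systems -> {point_to_point_connections(n)} connections")
    # 2 systems -> 2 connections
    # 3 systems -> 6 connections
    # 4 systems -> 12 connections
    # 14 systems -> 182 connections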

The limitations to point-to-point integration include:

  • Tight Coupling: Each system must be aware of the other systems and any change made to one potentially impacts all the other systems it communicates with.
  • Scalability: Adding new systems and components to this kind of integration causes the management and maintenance of these systems to increase in complexity. At a certain point this architecture becomes known as the big ball of mud.
  • Interoperability: Each system will likely have its own unique technology footprint with unique protocols and connectivity requirements. These differences make it increasingly difficult to make point-to-point connections and integrations.

Middleware and Remote Procedure Calls

The limitations and issues with point-to-point integration became more pronounced as large enterprises and businesses continued to expand their software footprints. Systems began moving off mainframes onto midrange systems such as IBM’s AS/400, and even onto desktop applications developed using a client-server architecture.

During this time, the Remote Procedure Call (RPC) was developed to reduce the communication-detail dependency involved in calling a remote computer system over the network. For the client initiating the call, an RPC appears to be a local function call; middleware on the network handles the requests coming from the client. The RPC framework provides standards for protocols and data encoding and handles the network communication.
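
Python’s standard-library XML-RPC modules offer a minimal illustration of this transparency (our sketch, unrelated to any product discussed here; the inventory function is hypothetical): the server registers a procedure, and the client invokes it as though it were local, with the framework handling encoding and transport.

    # --- Server process: exposes a procedure over the network ---
    from xmlrpc.server import SimpleXMLRPCServer

    def get_quantity(product_id):
        # Hypothetical lookup; a real server would query the inventory store.
        return 42

    server = SimpleXMLRPCServer(("localhost", 8000))
    server.register_function(get_quantity)
    # server.serve_forever()  # uncomment to run the server

    # --- Client process: the remote call looks like a local call ---
    import xmlrpc.client

    inventory = xmlrpc.client.ServerProxy("http://localhost:8000/")
    qty = inventory.get_quantity("SKU-1001")  # encoding and transport are hidden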

Standards were developed to handle the different data-encoding requirements of different systems. Common Object Request Broker Architecture (CORBA) and the more modern gRPC are two examples of these standards. CORBA (as the name implies) used an Object Request Broker (ORB) to manage and standardize calls between systems and was released by the Object Management Group in the early 1990s.

Around the same time frame, Microsoft and the Java community released similar protocols. Java RMI allows objects in one Java Virtual Machine (JVM) to invoke methods on objects in another JVM. It is specific to the Java programming language and is primarily used for building distributed applications in Java. DCOM is a proprietary technology developed by Microsoft for building distributed applications on Windows platforms. It is an extension of the Component Object Model (COM) and allows objects to communicate across a network. DCOM is specific to the Windows operating system.

gRPC is a modern RPC framework released by Google in 2016. It uses HTTP/2 as the communication protocol and is language agnostic, a trait it shares with CORBA.

Enterprise Service Bus

While RPC is a communication method, the Enterprise Service Bus (ESB) is an architectural pattern and software infrastructure enabling and facilitating the communication of applications across a common middleware layer. The concept, and the first ESB products, began to show up in the early 2000s. Unlike the RPC approach, which relied on request-response interaction, the ESB introduced a broad range of integration functionality based on messaging patterns, supporting many different communication approaches including request-response, publish-subscribe, and, yes, even point-to-point. In many cases, products included message transformation, service orchestration, and message routing.

The ESB also enabled enterprises to begin considering ways to connect many of their legacy systems, which had previously stood as data silos, along with external partners.

Gregor Hohpe and Bobby Woolf’s seminal book Enterprise Integration Patterns was published in 2003 and described these message-based integration design patterns. It was a major influence on many products and ESBs, but perhaps none so much as MuleSoft. In 2007, Ross Mason and Dave Rosenberg introduced MuleSoft as a lightweight ESB platform. From very early on, this platform included a framework implementation of the patterns described by Hohpe and Woolf.

Let’s go back now to the J&J Music store. Some 35 years after opening, the store has become a global success. Ownership successfully transitioned to cassette tapes and then CDs, and has been investigating a joint venture with a device manufacturer whose product lets users carry digital music in their pockets. The integration requirements have grown significantly over the years, and the CIO has decided to purchase an ESB platform to support an aggressive plan to integrate a new accounting system. Refer to Figure 1.4, which shows an ESB architecture for the J&J Music Store.

Figure 1.4 - ESB Hub & Spoke Architecture

In Figure 1.4 we can see the ESB performing as a message broker. Each system can produce messages, which are routed to the message broker. The broker then determines which system to forward each message to.
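
A toy sketch of the broker idea (our illustration, not MuleSoft code): producers hand messages to the broker, and routing rules, here a simple mapping from message type to subscriber callbacks, decide where each message is forwarded.

    from collections import defaultdict

    class MessageBroker:
        """Toy hub-and-spoke broker: routes messages by type to subscribers."""

        def __init__(self):
            self.routes = defaultdict(list)  # message type -> handlers

        def subscribe(self, message_type, handler):
            self.routes[message_type].append(handler)

        def publish(self, message_type, payload):
            for handler in self.routes[message_type]:
                handler(payload)  # the broker, not the producer, picks the destination

    broker = MessageBroker()
    # Hypothetical systems register interest in a message type:
    broker.subscribe("sale.completed", lambda m: print("Inventory decrements", m))
    broker.subscribe("sale.completed", lambda m: print("Accounting books revenue", m))
    broker.publish("sale.completed", {"product_id": "SKU-1001", "qty": 1})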

At one point, ESBs were widely adopted in enterprise IT for their potential to simplify integration and communication between diverse systems. However, several factors contributed to the decline in popularity of traditional ESBs:

  • Complex Configurations: Setting up and configuring an ESB could be complex. The configuration and maintenance of ESBs often required specialized skills, making it challenging for some organizations to manage.
  • Performance Overhead: ESBs introduced additional layers and processing steps, potentially leading to performance overhead. In situations where high-performance, low-latency communication was crucial, the overhead of an ESB became a concern.
  • Monolithic Architecture: Traditional ESBs often followed a monolithic architecture, making them less suitable for the modern trend toward microservices and more lightweight, modular architectures. Microservices and containerization became popular for their flexibility and scalability, and traditional ESBs struggled to adapt to these trends.
  • Need for Agility: Modern businesses require agility to quickly adapt to changing market conditions. Traditional ESBs, with their heavyweight and centralized nature, could hinder the agility of development and deployment processes.
  • Service Mesh and API Gateways: Newer approaches, like service meshes and API gateways, emerged as alternatives that were more focused on specific aspects of communication and often provided a more flexible and decentralized architecture.
  • Evolution of Integration Patterns: Event-driven architectures and messaging systems gained popularity as alternative approaches to integration. These architectures often focused on lightweight communication between services and embraced more decentralized and scalable patterns.
  • Rise of Cloud-Native Technologies: The rise of cloud-native technologies, containerization, and serverless computing shifted the focus toward more modular and scalable solutions. ESBs, designed in an era before these technologies, faced challenges in adapting to the new paradigm.
  • API-Centric Approaches: Organizations increasingly adopted API-centric approaches to integration. Technologies like RESTful APIs and lightweight messaging became more prevalent for connecting applications and services.

While traditional ESBs have lost some of their popularity, the concept of integration remains crucial. Organizations have simply transitioned to more modern and agile integration approaches that align with the evolving landscape of technology and business requirements.

Service-Oriented Architecture

About the same time ESB platforms were being introduced into enterprise IT discussions, Service-Oriented Architecture (SOA) began to gain popularity. In the late 1990s and early 2000s, enterprise applications were being built as web applications using n-tier designs and leveraging patterns such as Model-View-Controller. Browser-, Java-, and Microsoft-based front ends handled thin-client UX functionality, while business logic was developed and run in Java and Microsoft applications on a web server, using database connectors to run SQL queries and stored procedures against a normalized relational database management system (RDBMS) running on a different server.

Enterprises had just made it past the Y2K crisis, and businesses began a serious movement away from monolithic mainframe-based systems. SOA was a brand-new approach, requiring a major paradigm shift, that focused on developing applications from multiple distinct services.

Like the ESB, SOA was presented as an architectural pattern and did not come with generally agreed-upon protocols or industry standards. SOA is often mistakenly associated exclusively with SOAP Web Services. However, SOAP Web Services is a messaging protocol defining how to describe interacting with an object in another system. SOA can be implemented using SOAP-based services and well-structured XML, using RESTful web services, or both. SOAP uses the Web Services Description Language (WSDL) as the standard for describing the functionality offered by a web service.

SOA is generally defined as having four main components:

  • Service Provider
  • Service Broker
  • Service Registry
  • Service Consumer

Given that an Enterprise Service Bus (ESB) is implemented as a platform enabling and facilitating services across a common middleware layer, SOA is often implemented using an ESB platform. It is also worth mentioning that Universal Description, Discovery, and Integration (UDDI), a realization of the Service Registry mentioned above, is in one sense the grandfather of API portals. Its adoption, however, was limited in comparison to the directories that followed.
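
To see how the four roles fit together, consider this deliberately simplified sketch (ours, not a UDDI implementation): a provider registers its endpoint in a registry, and a consumer discovers the service by name before calling it. All names and endpoints are hypothetical.

    # Toy service registry: providers register, consumers discover.
    registry = {}

    def register_service(name, endpoint):
        """The Service Provider publishes its location to the Service Registry."""
        registry[name] = endpoint

    def discover_service(name):
        """The Service Consumer looks up a provider instead of hard-coding it."""
        return registry[name]

    register_service("InventoryService", "http://inventory.jjmusic.example/api")
    endpoint = discover_service("InventoryService")
    print("Call the service at:", endpoint)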

Representational State Transfer (REST Services)

Representational State Transfer (REST) is an architectural approach to building network-based systems, defined by Dr. Roy Fielding primarily in Chapter 5 of his dissertation, “Architectural Styles and the Design of Network-based Software Architectures,” published in 2000. It is often misrepresented as simply a protocol based on a JSON message structure and HTTP verbs and, in fact, in many cases seems to have taken on a life of its own.

The most common imagery used to describe Fielding’s vision of RESTful web services is, appropriately enough, a website. Each page represents a particular state of the system, responding to parameters and requests, and each page contains knowledge of where the consumer may wish to navigate next. For example, an invoice page may be requested for a specific customer, and the resulting page may be a list of the products on an invoice. Navigation from there may lead to a specific product on the invoice, then to the quantity of that product in the inventory, and from the quantity to a backorder page where additional product could be purchased. The message payload should include hyperlinks indicating the valid navigations the API consumer may wish to take.
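
As a hedged sketch of what such a payload might look like for the invoice example, here is a Python dictionary mirroring a JSON response body; the field names and URLs are illustrative only:

    invoice_response = {
        "invoice_id": "INV-2024-001",
        "customer_id": "CUST-42",
        "line_items": [
            {"product_id": "SKU-1001", "description": "LP record", "qty": 2},
        ],
        # Hypermedia links tell the consumer where it can navigate next.
        "links": [
            {"rel": "self",      "href": "/invoices/INV-2024-001"},
            {"rel": "product",   "href": "/products/SKU-1001"},
            {"rel": "inventory", "href": "/products/SKU-1001/inventory"},
            {"rel": "backorder", "href": "/products/SKU-1001/backorder"},
        ],
    }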

This network-based approach provided the foundation for an integration approach that MuleSoft refers to as the Application Network. REST provides a common mechanism, and indeed a language, with which to interact and integrate with the data and processes of an organization’s systems and applications, and to do so in a reusable way. The application network is discussed in more detail in Chapter 4 of this book.

Whereas SOAP can be used over any transport layer, RESTful web services operate over HTTP and typically use the verbs defined by the protocol: GET, POST, PUT, PATCH, and DELETE. Like SOAP, RESTful web services initially used Swagger as the standard for describing the functionality of these services. Swagger eventually became the OpenAPI Specification (OAS); you can think of OAS as Swagger 2.0. RAML was also introduced as a standard for describing REST functionality.
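
As a quick illustration of those verbs against a hypothetical inventory API (the base URL and paths are invented for this sketch), using only the Python standard library:

    import json
    import urllib.request

    BASE = "http://inventory.jjmusic.example/api"  # hypothetical endpoint

    def call(method, path, body=None):
        data = json.dumps(body).encode() if body is not None else None
        req = urllib.request.Request(BASE + path, data=data, method=method,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            raw = resp.read()
            return json.loads(raw) if raw else None

    # GET reads a resource, POST creates one, PUT/PATCH update it, DELETE removes it.
    product = call("GET", "/products/SKU-1001")
    call("POST", "/products", {"product_id": "SKU-2002", "qty": 10})
    call("PATCH", "/products/SKU-2002", {"qty": 9})
    call("DELETE", "/products/SKU-2002")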

iPaaS

Whether an organization has point-to-point integrations, the latest ESB technology, or some combination, one thing they all have in common is the need to acquire compute resources, deploy servers to the data center, install and maintain software, and upgrade to newer versions of that software while trying to avoid as much downtime as possible. Enter the iPaaS. With an iPaaS, organizations can fast-track the implementation of RESTful APIs and create reusable building blocks. These building blocks can operate on different compute resources running on-premises as well as in the cloud.

In 2002, Jeff Bezos issued his now-famous “API Mandate” memo, and the race to the cloud was on. As described earlier in the section on SOA, enterprises were moving away from monolithic systems. They were also beginning to move away from their data centers and into the cloud.

This move to the cloud was gradual for some organizations. One major energy company, for example, began by moving away from allocating specific, individual servers to individual projects and toward virtual computing, to cut down on dock-to-rack time. The virtual machines still ran on hardware inside the company’s data center and still required operations and maintenance teams to manage the compute, memory, and storage allocations. This also meant the operations team had to stay ahead of new hardware requests and production system growth, and when development or testing environments were idle, the resources couldn’t easily be reallocated for other purposes.

For other companies, the ability to acquire compute services without the cost of building out a data center provided a huge economic advantage. A major solar energy company was born in the cloud on Salesforce and never had to develop its own on-premises data center. If additional databases, file space, or web apps were needed, the company simply added those services in the cloud space it occupied. About the only on-premises computing it did took the form of spreadsheets on company laptops.

Eventually, cloud computing platforms such as Amazon’s AWS meant that every aspect of the data center could be allocated and provisioned as a service. Many companies have now moved their data centers entirely to the cloud, and other, newer companies were born in the cloud and never had their own. Need a CPU? It’s a service. More memory? A service. A private network? Also a service. A message queue? A web server? An IP address? A database? Service, service, service, and service. Need a system to help manage your customers (CRM)? A service. Eventually, every aspect of the on-premises data center was made available as a service on a cloud computing platform.

MuleSoft leveraged cloud computing early on, and the platform’s architectural approach to integration enabled companies all along the spectrum of cloud adoption to develop solutions that advanced their digital transformations. The iPaaS offers companies the ability to buy a subscription for compute, storage, memory, runtime, monitoring, security, messaging, and logging, as well as scheduling, alerting, cataloging, and management of integration components. All these services are provided almost immediately upon request and, for the most part, without any data center acquisition requirements.

Earlier, we outlined some of the reasons ESBs began to lose popularity as an integration solution. Let’s take a quick look at some of the reasons the iPaaS absorbed much of that popularity and why it is considered a more modern and flexible solution compared to a plain ESB:

  • Cloud-native architecture: iPaaS solutions are typically well suited for cloud-based environments, whereas traditional ESBs require adaptation to work with cloud services.
  • Low-code/no-code: iPaaS platforms such as MuleSoft provide ways to deliver integrations with clicks, not code. Most ESBs require specialized skills and complex configuration.
  • Scalability: Being born in the cloud and cloud-native, iPaaS solutions have been built with scalability in mind whereas ESBs were not initially designed to dynamically scale across a distributed platform and cloud environment.
  • Modularity and Microservices: iPaaS solutions are designed with a great deal of flexibility when it comes to integration design patterns and even protocols.
  • Focused on an API-centric approach: iPaaS platforms place an emphasis on an API-based integration approach, enabling the “network-based” or application network concept mentioned earlier. ESBs used a more traditional service-based, often pub/sub, approach to integration that was not as easily aligned to the API approach.

With an understanding of the historical context and the integration approaches we’ve just looked at, we can begin to look at the MuleSoft Platform through the architecture lens. This lens will help us identify where gaps exist in the current organization, the problems they may be causing, and how to fill them with new capabilities. But before we take a high-level look at the services and capabilities that make up the MuleSoft Anypoint Platform, let’s consider the challenges organizations continue to face when it comes to integration.
