MuleSoft Platform Architect's Guide: A practical guide to using Anypoint Platform's capabilities to architect, deliver, and operate APIs

By Jitendra Bafna and Jim Andrews

eBook, Jul 2024, 498 pages, 1st Edition

What is the MuleSoft Platform?

In this chapter, we start by taking a high-level view of the MuleSoft Anypoint Platform. We will then look back at the evolution of different integration approaches and how the current modern integration approach fits within the MuleSoft platform. We will look at the different components that form the core building blocks of MuleSoft and how they relate to each other. These components form the basis of the architectural capabilities available in this platform. Next, the chapter will describe where this technology fits within any organization and in particular those looking to seize the future through modernization, digital innovation, and business transformation. We will also look at what MuleSoft is capable of as an integration Platform as a Service (iPaaS) and why MuleSoft is important in the modern integration approach.

In this chapter we’re going to cover the following main topics:

  • What is MuleSoft and iPaaS?
  • How have integration approaches evolved?
  • The architectural capabilities of MuleSoft
  • Solving the modern challenge to integration
  • How the MuleSoft architecture delivers modern integrations

Technical requirements

Many of the MuleSoft components and services discussed in this chapter are available by signing up for a free trial MuleSoft account and downloading Anypoint Studio. You will also find a GitHub repository for the chapter here.

What is MuleSoft and iPaaS?

Trying to define the MuleSoft platform requires us to look at it through several different lenses. There is much ground to cover when examining the Anypoint Platform because it addresses so many aspects of API integration, but at its heart is a Mule carrying the load and doing a great deal of heavy lifting.

Through a developer lens, MuleSoft is:

  • A comprehensive directory of services,
  • A library of pre-built connectors,
  • A set of reusable building blocks, and
  • A powerful developer portal.

It boasts a customizable, searchable public and private API directory called Anypoint Exchange. The integrated tooling in Anypoint Design Center supports designing, developing, and versioning API specifications using the industry-standard languages, testing them with mocking services, and publishing the specifications through Exchange so other developers can find and use these building blocks.

Through an architect lens, the MuleSoft runtime engine is a platform providing deployment solutions capable of:

  • microservice-style API and application isolation,
  • horizontal and vertical scaling,
  • zero-downtime deployment,
  • container-based runtimes, and
  • on-premises and managed cloud-based runtimes.

These capabilities are augmented with Anypoint API Monitoring and analytics features which share an operations lens.

Through an operations lens, MuleSoft can be seen as:

  • an API Security and
  • API Management platform.

The platform has comprehensive management tools and universal API management capabilities to manage Service Level Agreements (SLAs), versioning, and security, and to apply policies to MuleSoft developed APIs as well as non-Mule APIs developed with other tooling running in remote environments.

The Anypoint Platform is all of these things. Its performance in these areas is one of the reasons it regularly lands as a Leader in Gartner's Magic Quadrant for Enterprise iPaaS as well as for Full Life Cycle API Management.

Gartner was first to describe the term iPaaS, defining it as "a suite of cloud services enabling development, execution and governance of integration flows connecting any combination of on-premises and cloud-based processes, services, applications, and data within individual or across multiple organizations" (Gartner glossary). As this definition suggests (cloud services), the MuleSoft Anypoint Platform has been developed using an API-first design approach, making all of the services highlighted above (and detailed throughout the rest of this book) available as APIs themselves.

MuleSoft is a sophisticated, powerful, dynamic, and feature rich integration platform solution providing technical architects with the tools and capabilities needed to design and deliver solutions for complex integration requirements.

This book is intended for those who need to see this platform through the architects’ lens. The MuleSoft Platform Architect’s job is to keep all these viewpoints of the platform in mind and understand how the combination and interaction of these different platform building blocks work with each other. Doing this will enable the organization to create flexible, scalable, and reusable solutions capable of driving the business vision forward through innovation and digital transformation.

How have integration approaches evolved?

An iPaaS is the latest generation in a long line of integration solutions which have evolved over the years. The need to integrate applications has been around practically since the second computer system was developed and architects realized the first system offered data and functionality that would add value to the second. Let's take a quick look at how integration approaches have evolved across major innovations leading to this newest generation, the iPaaS. To help us visualize the different approaches, consider the following simplified use case.

J&J Music Store Use Case

J&J Music is a business founded in 1970 that sells records direct from the publishers. They regularly receive large shipments from the publishers, and these records must be added to the inventory available for sale. The company developed an inventory system to keep track of the number of records carried in the store. Using this system, they could increase the quantity on hand when new shipments of a record arrived, and as records left the shelves of the store, they could update the system to reflect the new quantity. This was all done manually by the inventory management team. A different team handled sales.

The sales clerks in the store eventually realized they also needed a system to keep up with all the orders being placed, so they built a Sales System to track the orders and invoices. As the store grew, the clerks realized they were often unsure whether a product was available for purchase. They needed to log in to the Inventory system and check the quantity before completing a sales transaction. However, because they were not part of the inventory management team, they did not update the inventory quantity.

This worked for a while but with multiple clerks serving multiple customers, the quantity in the inventory system became unreliable. The business then decided to integrate these two systems so each of the clerks would see the same inventory totals. Moreover, the sales system would automatically decrease the inventory system total for a product whenever a purchase was made. This would allow the inventory management team to reduce staff as they would now only need to update the system when new product shipments arrived.
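The integrated behavior can be sketched in a few lines. The product SKUs and function names below are illustrative, not from any real J&J system; the point is that completing a sale records the order and decrements the shared inventory total in one step:

```python
# Sketch of the integrated Sales/Inventory behavior (illustrative names).
inventory = {"REC-001": 10}  # shared inventory totals, keyed by product SKU
orders = []                  # the Sales system's order log

def complete_sale(sku, qty):
    """Record a sale and automatically decrement the shared inventory."""
    if inventory.get(sku, 0) < qty:
        raise ValueError(f"insufficient stock for {sku}")
    orders.append({"sku": sku, "qty": qty})
    inventory[sku] -= qty  # no manual update by the inventory team needed

complete_sale("REC-001", 2)
print(inventory["REC-001"])  # 8
```

Every clerk now reads the same total, and the inventory team only touches the system when new shipments arrive.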

Point to Point

We learned in our geometry classes the shortest distance between two points is a straight line. In the early days of systems development, the architectural straight line between systems was the most direct approach to integrating them. And despite advances in technology, many if not most organizations still have systems connected with point-to-point integration.

Thinking about the use case described previously, an architect designed a point-to-point solution to integrate the inventory system and the sales system. The design diagram in Figure 1.1 shows the relationship between the Inventory System and the Sales System.

Figure 1.1 - Inventory and Sales Systems with example tables


As you can see in this figure, the connection from the inventory system to the sales system is a direct connection. Developers for J&J Music wrote new code in the Sales system to connect directly to the Inventory system database. The developers needed the exact details of the inventory system’s connection requirements and protocols as well as the specifics of how and where the product information was stored. You can see in the diagram the table and fields were captured as well.

This integration approach starts out simple enough but begins to break down as more and more systems are added to the landscape, with each system requiring data from the others. Even without additional systems, this approach is fragile because it requires each system to know the details of every other system it connects with. If the inventory system changed any of those details, the integration would cease to work.

Let's say the Inventory system team decided to normalize their database, so they now have a product master table and a product inventory table. With the quantity moved to the product inventory table, every system reading it directly must update its code for this one integration point to continue working. In Figure 1.1 this doesn't seem like a big problem to the J&J Music architect because only one other system is affected. However, in a point-to-point integration approach, we must confront the N(N-1) rule. This formula, where N is the number of systems being integrated, gives the total number of connections in the integration. In Figure 1.1 this number is 2(2-1) = 2. Now let's refer to Figure 1.2, which introduces a third system, the Accounting System.
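A small sqlite3 sketch illustrates this fragility; the table and column names are hypothetical. The Sales system's hard-coded query works right up until the Inventory team normalizes the schema:

```python
# Illustration of point-to-point fragility using an in-memory SQLite database.
# Table and column names are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE product (sku TEXT PRIMARY KEY, name TEXT, quantity INTEGER)")
db.execute("INSERT INTO product VALUES ('REC-001', 'Abbey Road', 12)")

# The Sales system's integration: a query tightly coupled to the schema.
qty = db.execute("SELECT quantity FROM product WHERE sku = 'REC-001'").fetchone()[0]
print(qty)  # 12

# The Inventory team normalizes: quantity moves to a product_inventory table.
db.execute("CREATE TABLE product_inventory (sku TEXT, quantity INTEGER)")
db.execute("INSERT INTO product_inventory SELECT sku, quantity FROM product")
db.execute("ALTER TABLE product RENAME TO product_old")
db.execute("CREATE TABLE product (sku TEXT PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO product SELECT sku, name FROM product_old")

# The original query now fails; the Sales system must change its code too.
try:
    db.execute("SELECT quantity FROM product WHERE sku = 'REC-001'")
except sqlite3.OperationalError as e:
    print("integration broken:", e)
```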

Figure 1.2 - Addition of Accounting System and Connections Formula


As you can see in this new architecture diagram, adding an accounting system and integrating it with both the Inventory and Sales systems means we have six connections to consider: 3(3-1) = 6. Adding a fourth system would take the total to 12, and so on. The complexity associated with making changes to systems integrated with a point-to-point strategy now grows quadratically with each new system added to the landscape. If we consider an organization with hundreds of applications, all integrated using this pattern, it's easy to understand why the architecture term "Big Ball of Mud" was coined. Figure 1.3 shows a not-unreasonable 14 systems connected point to point.

Figure 1.3 - Big Ball of Mud Architecture


With 14 systems in Figure 1.3, the number of connections to manage is 182!
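The N(N-1) counts above are easy to verify with a couple of lines:

```python
# Total directed connections when each of n systems integrates with every other.
def connections(n: int) -> int:
    return n * (n - 1)

for n in (2, 3, 4, 14):
    print(f"{n} systems -> {connections(n)} connections")
# 2 systems -> 2 connections
# 3 systems -> 6 connections
# 4 systems -> 12 connections
# 14 systems -> 182 connections
```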

The limitations of point-to-point integration include:

  • Tight Coupling: Each system must be aware of the other systems and any change made to one potentially impacts all the other systems it communicates with.
  • Scalability: Adding new systems and components to this kind of integration causes the management and maintenance of these systems to increase in complexity. At a certain point this architecture becomes known as the big ball of mud.
  • Interoperability: Each system will likely have its own unique technology footprint with unique protocols and connectivity requirements. These differences make it increasingly difficult to make point-to-point connections and integrations.

Middleware and Remote Procedure Calls

The limitations and issues with point-to-point integration became more pronounced as large enterprises and businesses continued to expand their software footprint. Systems began moving off mainframes and onto midrange systems such as IBM's AS/400, and even onto desktop applications developed using a client-server architecture.

During this time, the Remote Procedure Call (RPC) was developed to reduce the communication-detail dependency involved in calling a remote computer system over the network. For the client initiating the call, an RPC appears to be a local function call. Middleware on the network handles the requests coming from a client, and the RPC framework provides standards for protocols and data encoding and handles the network communication.
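As a rough sketch of the idea, Python's built-in XML-RPC modules can stand in for the heavier frameworks of the era; the inventory data and function names here are illustrative. To the client, the call looks local, while the framework handles the transport and data encoding:

```python
# Minimal RPC sketch using Python's built-in XML-RPC modules.
# Data and function names are illustrative.
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

inventory = {"SKU-001": 42}

def get_quantity(sku):
    return inventory.get(sku, 0)

# The "remote" system: serve the function on an OS-assigned local port.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(get_quantity)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client: the call below travels over the network but reads like a
# local function call -- the framework does the encoding and transport.
client = ServerProxy(f"http://localhost:{port}")
qty_remote = client.get_quantity("SKU-001")
print(qty_remote)  # 42
```

Production RPC frameworks such as CORBA or gRPC add interface definitions, language bindings, and far more robust transports, but the shape of the interaction is the same.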

Standards were developed to handle the different data encoding requirements of different systems. Common Object Request Broker Architecture (CORBA) and the more modern protocol gRPC are two examples of these standards. CORBA (as the name implies) used an Object Request Broker (ORB) to manage and standardize calls between systems and was released by the Object Management Group in the early 1990s.

Around the same time frame, Microsoft and the Java community released similar protocols. Java Remote Method Invocation (RMI) allows objects in one Java virtual machine (VM) to invoke methods on objects in another VM. It is specific to the Java programming language and is primarily used for building distributed applications in Java. Distributed COM (DCOM) is a proprietary technology developed by Microsoft for building distributed applications on Windows platforms. It is an extension of the Component Object Model (COM) and allows objects to communicate across a network. DCOM is specific to the Windows operating system.

gRPC is a modern RPC framework released by Google in 2016. It uses HTTP/2 as the communication protocol and is language agnostic, a trait it shares with CORBA.

Enterprise Service Bus

While RPC is a communication method, the Enterprise Service Bus (ESB) is an architectural pattern and software infrastructure enabling and facilitating the communication of applications across a common middleware layer. The concept and the first ESB products began to show up in the early 2000s. Unlike the RPC approach, which relied on request-response interaction, the ESB introduced a broad range of integration functionality based on messaging patterns, supporting many different communication approaches including request-response, publish-subscribe, and, yes, even point-to-point. Many products also included message transformation, service orchestration, and message routing.

The ESB also enabled enterprises to begin considering ways to connect many of their legacy systems, which had previously stood as data silos, along with external partners.

Gregor Hohpe and Bobby Woolf's seminal book Enterprise Integration Patterns was published in 2003 and described these message-based integration design patterns. It was a major influence on many products and ESBs, but perhaps none so much as MuleSoft. In 2007, Ross Mason and Dave Rosenberg introduced MuleSoft as a lightweight ESB platform. From very early on, this platform included a framework implementation of the patterns described by Hohpe and Woolf.

Let's go back now to the J&J Music store. Some 35 years after opening, the store has become a global success. Ownership successfully transitioned to cassette tapes and CDs and has been investigating a joint venture with a device manufacturer whose product lets users carry digital music in their pocket. The integration requirements have grown significantly over the years. Now the CIO has decided to purchase an ESB platform to support an aggressive plan to integrate a new accounting system. Refer to Figure 1.4, which shows an ESB architecture for the J&J Music Store.

Figure 1.4 - ESB Hub & Spoke Architecture


In Figure 1.4 we can see the ESB performing as a message broker. Each system can produce messages which are routed to the message broker. The broker then determines which system to forward the message to.
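The broker's routing role can be sketched with a toy publish-subscribe hub. The topic names and message shapes are illustrative, and a real ESB layers message transformation, orchestration, and routing rules on top of this core idea:

```python
# A toy message broker in the hub-and-spoke spirit of Figure 1.4.
# System names and message shapes are illustrative, not MuleSoft APIs.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # The broker, not the producer, decides which systems get the message.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
broker.subscribe("sale.completed", lambda m: received.append(("inventory", m)))
broker.subscribe("sale.completed", lambda m: received.append(("accounting", m)))

# The Sales system publishes once; it knows nothing about the consumers.
broker.publish("sale.completed", {"sku": "REC-001", "qty": 1})
print(received)
```

Note how the producer and consumers are decoupled: adding the Accounting system is one new subscription, not a new point-to-point connection in every producer.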

At one point, ESBs were widely adopted in enterprise IT for their potential to simplify integration and communication between diverse systems. However, several factors contributed to the decline in popularity of traditional ESBs:

  • Complex Configurations: Setting up and configuring an ESB could be complex. The configuration and maintenance of ESBs often required specialized skills, making it challenging for some organizations to manage.
  • Performance Overhead: ESBs introduced additional layers and processing steps, potentially leading to performance overhead. In situations where high-performance, low-latency communication was crucial, the overhead of an ESB became a concern.
  • Monolithic Architecture: Traditional ESBs often followed a monolithic architecture, making them less suitable for the modern trend toward microservices and more lightweight, modular architectures. Microservices and containerization became popular for their flexibility and scalability, and traditional ESBs struggled to adapt to these trends.
  • Need for Agility: Modern businesses require agility to quickly adapt to changing market conditions. Traditional ESBs, with their heavyweight and centralized nature, could hinder the agility of development and deployment processes.
  • Service Mesh and API Gateways: Newer approaches, like service meshes and API gateways, emerged as alternatives that were more focused on specific aspects of communication and often provided a more flexible and decentralized architecture.
  • Evolution of Integration Patterns: Event-driven architectures and messaging systems gained popularity as alternative approaches to integration. These architectures often focused on lightweight communication between services and embraced more decentralized and scalable patterns.
  • Rise of Cloud-Native Technologies: The rise of cloud-native technologies, containerization, and serverless computing shifted the focus toward more modular and scalable solutions. ESBs, designed in an era before these technologies, faced challenges in adapting to the new paradigm.
  • API-Centric Approaches: Organizations increasingly adopted API-centric approaches to integration. Technologies like RESTful APIs and lightweight messaging became more prevalent for connecting applications and services.

While traditional ESBs have lost some of their popularity, the concept of integration remains crucial. Organizations have just transitioned to a more modern and agile integration approach allowing them to align with the evolving landscape of technology and business requirements.

Service Oriented Architecture

About the same time ESB platforms were being introduced to enterprise IT discussions, Service-Oriented Architecture (SOA) began to gain popularity. In the late 1990s and early 2000s, enterprise applications were being built as web applications using n-tier designs and leveraging patterns such as Model-View-Controller. Browser-, Java-, and Microsoft-based front ends handled thin-client UX functionality, while business logic ran in Java and Microsoft applications on a web server, using database connectors to run SQL queries and stored procedures against a normalized relational database management system (RDBMS) running on a different server.

Enterprises had just made it past the Y2K crisis, and businesses began a serious movement away from monolithic mainframe-based systems. SOA was a brand-new approach requiring a major paradigm shift, one which focused on composing applications from multiple distinct services.

Like the ESB, SOA was presented as an architectural pattern and did not come with generally agreed-upon protocols or industry standards. SOA is often mistakenly associated exclusively with SOAP web services. However, SOAP is a messaging protocol defining how to describe interaction with an object in another system. SOA can be implemented using SOAP-based services and well-structured XML, or using RESTful web services, or both. SOAP uses the Web Services Description Language (WSDL) as the standard for describing the functionality offered by a web service.

SOA is generally defined as having four main components:

  • Service Provider
  • Service Broker
  • Service Registry
  • Service Consumer

Given that an ESB is implemented as a platform enabling and facilitating services across a common middleware layer, SOA is often implemented using an ESB platform. It is also worth mentioning that Universal Description, Discovery, and Integration (UDDI), the service registry mentioned above, is in one sense the grandfather of API portals. Its adoption, however, was limited in comparison to the directories which followed.

Representational State Transfer (REST Services)

Representational State Transfer (REST) is an architectural approach to building network-based systems defined by Dr. Roy Fielding, primarily in Chapter 5 of his dissertation "Architectural Styles and the Design of Network-based Software Architectures", published in 2000. It is often reduced to simply a protocol based on JSON message structures and HTTP verbs and, in fact, in many cases it seems to have taken on a life of its own.

The most common imagery used to describe Fielding's vision of RESTful web services is, appropriately enough, a website. Each page represents a particular state of the system, responding to parameters and requests, and each page contains knowledge of where the consumer may wish to navigate next. For example, an invoice page may be requested for a specific customer. The resulting page may be a list of products on an invoice. Navigation here may be to a specific product on the invoice. From there, the consumer may be able to navigate to the quantity of the product in the inventory, and from the quantity, to a backorder page where additional product could be purchased. The message payload should include hyperlinks indicating the valid navigations the API consumer may wish to take.
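A hypothetical response payload for the invoice example might carry those navigation links alongside the data; the URIs and field names below are invented for illustration:

```python
# Hypothetical HATEOAS-style response for the invoice navigation described
# above. URIs and field names are illustrative, not from any real API.
invoice = {
    "invoiceId": "INV-1001",
    "customerId": "CUST-42",
    "lines": [{"sku": "REC-001", "title": "Abbey Road", "qty": 2}],
    "_links": {
        "self": {"href": "/invoices/INV-1001"},
        "product": {"href": "/products/REC-001"},
        "inventory": {"href": "/products/REC-001/inventory"},
    },
}

# A generic client discovers where it may navigate next from the payload
# itself, rather than from hard-coded knowledge of the API's URI layout.
next_steps = sorted(invoice["_links"])
print(next_steps)  # ['inventory', 'product', 'self']
```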

This network-based approach provided the foundation for an integration approach which MuleSoft refers to as the application network. REST provides a common mechanism, and indeed a language, with which to interact and integrate with the data and processes of an organization's systems and applications, and to do so in a reusable way. The application network is discussed in more detail in Chapter 4 of this book.

Whereas SOAP can be used over any transport layer, RESTful web services operate over HTTP and typically use the verbs defined by the protocol: GET, POST, PUT, PATCH, and DELETE. As SOAP has WSDL, RESTful web services initially used Swagger as the standard for describing their functionality. Swagger eventually became the OpenAPI Specification (OAS); you can think of OAS as Swagger 2.0. RAML was also introduced as a standard for describing REST functionality.

iPaaS

Whether an organization had point-to-point integration, the latest ESB technology, or some combination, one thing they all had in common was the need to acquire compute resources, deploy servers to the data center, install and maintain software, and upgrade to newer versions while trying to avoid as much downtime as possible. Enter the iPaaS. With an iPaaS, organizations can fast-track the implementation of RESTful APIs and create reusable building blocks. These building blocks can operate on different compute resources running on-premises as well as in the cloud.

In 2002 Jeff Bezos issued his now famous “API Mandate” memo and the race to the cloud was on. As described earlier in the section on SOA, enterprises were moving away from the monolithic system. They were also beginning to move away from their data centers and into the cloud.

This move to the cloud was a gradual move for some organizations. One major energy organization, for example, began by moving away from allocating specific, individual servers to individual projects and towards virtual computing to cut down on dock-to-rack time. The virtual computing was still running on servers and hardware, operating inside the company’s data center and still required operations and maintenance teams to manage the compute, memory, and storage allocations. This also meant the operations team had to stay ahead of new hardware requests and production system growth. And when development or testing environments were idle, they couldn’t easily reallocate the resources for other purposes.

For other companies, the ability to acquire compute services without the costs of building out a data center provided a huge economic advantage. A major solar energy company was born in the cloud on Salesforce and never had to develop its own on-premises data center. If additional databases, file space, or web apps were needed, the company simply added those services in the cloud space it occupied. About the only on-premises computing took the form of spreadsheets on company laptops.

Eventually Cloud Computing, such as Amazon’s AWS, meant that every aspect of the data center could be allocated and provisioned as a service. Many companies have now moved their data center entirely to the cloud and other newer companies were born in the cloud and never had their own data center. Need a CPU? It’s a service. More memory? A service. Private Network? Also a service. Message queue? Web Server? IP address? Database? Service, service, service, and service. Need a system to help manage your customers (CRM)? A service. Eventually, every aspect of the on-premises data center was made available as a service, on a Cloud Computing platform.

MuleSoft leveraged cloud computing early on, and the platform's architectural approach to integration enabled companies all along the spectrum of cloud computing to develop solutions to help enable their digital transformation. The iPaaS offers companies the ability to buy a subscription for compute, storage, memory, runtime, monitoring, security, messaging, and logging, as well as scheduling, alerting, cataloging, and management of integration components. All these services are provided almost immediately upon request and, for the most part, outside of any data center acquisition requirements.

Earlier, we outlined some of the reasons ESBs began to lose popularity as an integration solution. Let’s take a quick look at some of the reasons iPaaS took over some of that popularity and why it is considered a more modern and flexible solution compared to plain ESBs.

  • Cloud-native architecture: iPaaS solutions are typically well suited for cloud-based environments, whereas traditional ESBs require adaptation to work with cloud services.
  • Low-code/no-code: iPaaS platforms such as MuleSoft provide ways to deliver integrations with clicks, not code. Most ESBs require specialized skills and complex configuration.
  • Scalability: Being born in the cloud and cloud-native, iPaaS solutions have been built with scalability in mind, whereas ESBs were not initially designed to dynamically scale across a distributed platform and cloud environment.
  • Modularity and Microservices: iPaaS solutions are designed with a great deal of flexibility when it comes to integration design patterns and even protocols.
  • Focus on an API-centric approach: The iPaaS platform places an emphasis on an API integration approach, enabling the "network-based", or application network, concept mentioned earlier. ESBs used a more traditional service-based, often pub/sub, approach to integration and were not as easily aligned to the API approach.

Understanding the historical context and the approach to integration taken over the years we’ve just looked at, we can begin to look at the MuleSoft Platform through the architecture lens. This lens will help us identify where gaps exist in the current organization, the problems they may be causing, and how to replace them with new capabilities. But before we take a high-level look at the services and capabilities that make up the MuleSoft Anypoint platform, let’s consider the challenges organizations continue to face when it comes to integration.

What is the modern challenge to integration?

Organizations have experienced challenges to integration for decades. Integration of applications across an enterprise has always been a complex task, often made more difficult by early architectural design decisions. For example, the point-to-point integration of two systems described earlier in this chapter seemed like a good idea with lots of pros, until the number of systems increased beyond two.

Truthfully though, increasing beyond two applications wasn’t the real problem. In many organizations, the real issue is either not prioritizing architecture as a discipline or employing “it’s just” architecture. As in, “it’s just one extra system, go ahead and have it connect to the database and read this table”. Because integrating three systems doesn’t seem like that much more than integrating two. And four systems don’t seem like that much more than three. Of course, the problem here is before long there are hundreds of critical systems in your organization, and you have ended up with the “big ball of mud” architecture shown earlier, unable to maneuver as quickly as the business wants to. IT is then left holding the bag trying to just keep the wheels on with no capacity to take on new projects or engage with the business to understand their ever-growing list of value-added ideas and innovation and desired business outcomes.

Fortunately, volumes have been written about integration across the enterprise because of all the challenges inherent to the effort. With all this history and with all these patterns, solutions, technologies, dissertations, architecture, methodologies, and platforms, we can now assert that the challenge of integration is resolved! Right?

Sadly, this is not the case. Let’s look at two primary reasons the industry has yet to find total enlightenment when it comes to integration.

Breaking the law is harder than you think

For many organizations, the law Melvin Conway introduced in 1967 has proven as difficult to overcome as Newton's law of gravity from 1687. Conway's law states that the design of a system reflects the communication structure of the organization that builds it. In many of the organizations I have spoken with about integration, the chair-to-keyboard or swivel-chair approach is the pattern used to connect applications, because the applications exist in the same silos as the teams operating them. The best they can hope for is an email, ideally with a spreadsheet attached, so they can capture the parts they need in their own system. This is one of the same challenges observed by Hohpe and Woolf in Enterprise Integration Patterns, published in 2003. What they suggested was that the departments and the IT teams in an organization must change how they interact with each other if enterprise integration is to be successful.

Changing these policies takes an extraordinary effort. At one company, the CIO dictated a three-word strategy: best of breed. This was repeated all the way down through the ranks of the business and the IT organizations. The strategy described a policy which encouraged bringing in the very best system available for the corporate function needed. And if it couldn’t be found off the shelf, then build the best system. The strategy also established a policy to develop a central architecture team and an enterprise application integration (EAI) information bus platform capable of integrating these best of breed applications using asynchronous messaging with an enterprise canonical model.

The result was impressive. The mission-critical applications in the organization were able to publish and subscribe to broadly agreed-upon data structures, and the data from one system was made available to other systems in near real time. However, this was not without significant costs and drawbacks. Maintenance and upgrade costs of the EAI platform, in addition to cost overruns on bespoke applications built when the marketplace couldn’t yield the called-for best-of-breed application, eventually forced the organization to abandon the strategy. A new CIO called for a new strategy, fit for purpose, recommending the business look for commercial off-the-shelf (COTS) solutions bundling more functions together, at the expense of any one of them being perhaps not quite the best you could build or buy.

While each CIO had their own reasons and their own strategy, the key to success for each one was how well they were able to promote the new policy and gain buy-in from all of the relevant stakeholders in the business and in IT.

Business innovation at the speed of technical debt

As if the silos of business weren’t challenging enough, the speed at which businesses must innovate just to keep up may give an integration architect whiplash. A load of technical debt, however, can significantly slow the speed of innovation.

In his seminal work, The Innovator’s Dilemma, Clayton Christensen described how leading companies in businesses as diverse as computer manufacturing, hard drive manufacturing, and excavators were disrupted by innovative changes in technology. None of these great businesses were prepared for, or able to move quickly when, new companies with new technology began to shift from idea to market leader. As Christensen describes it, the legacy firms are victims of their own success. And, in many of these cases, the new company or division also doesn’t carry the technical debt held by the incumbent.

Today, innovation is being driven across most industries through digital transformation. Digital transformation of the company’s culture. Digital transformation of the supply chain. Digital transformation of sales and marketing. Digital transformation of the relationship with the customer. Now more than ever, having the right data at the right time from the right place is important, perhaps even critical, to the operations and long-term success of a business.

At the same time, many companies are asking their IT departments to do more with less. IT is asked to work faster and harder (often stated as “work hard, play hard” to soften the blow). They’re asked to set up better automated deployment or change the project methodology to something leaner. But IT is very often just trying to keep the basic operations of all the systems under their remit running. So, the business finds its own way, which may involve spreadsheets, loading files, extracting files, or even standing up a simple web portal. Almost all of this is technical debt, which only adds to IT’s workload.

I observed a payroll process involving a spreadsheet of pay data, extracted from a mainframe system and manually modified before being loaded into PeopleSoft. And the source for the paychecks going out? The spreadsheet itself, after being loaded into PeopleSoft, was sent to the payroll company, possibly after a few more “adjustments”. This makes digital transformation of payroll difficult, if not impossible, and is a daunting challenge for integration to tackle.

In some cases, IT has tried to get ahead of the curve by starting a cloud migration journey. Shifting assets and resources to a cloud provider has helped eliminate some of the overhead associated with maintaining all the servers and databases in the data center. The cloud can also help IT projects provision, start up, and shut down servers in minutes or hours. This sounds and works great until the request comes in to integrate this new system in the cloud with the systems still in the data center.

What can be done if complex, siloed organizations, siloed data, and technical debt have stymied innovation in the IT department, and business leadership has just shown up with a new idea to transform lagging sales and underperforming quarterly reports through digital innovation? Let’s look next at the capabilities and services in the MuleSoft platform that help address these issues, and then at why APIs are so important in the modern approach to integrating systems across the business.

The Architectural capabilities of MuleSoft

The MuleSoft Anypoint Platform can be used to deliver integration solutions architected on premises, in the cloud, or in a combination of the two. The latter is referred to as hybrid iPaaS. Note that the individual services and capabilities available in the Anypoint Platform depend on the subscription purchased.

In this section, we will look at how the Anypoint Platform is organized. We will look at the options available for where to run integration applications and where to manage the platform, and briefly consider some of the reasons why organizations choose one option over another.

Planes of operations

There are two logical “planes”, the control plane and the runtime plane, within which the components and services of the MuleSoft Anypoint Platform exist and operate. In the high-level diagram shown in Figure 1.5, we can see the services included in the control plane in the top half and the runtime plane in the lower half.

Figure 1.5 - Anypoint Platform High Level view

The control plane refers to all the components used to:

  • Manage the platform
  • Develop code
  • Design and publish API specifications
  • Collaborate with other developers
  • Configure and manage runtime settings
  • View logs
  • Manage APIs

Essentially, the control plane is where the platform itself is managed, along with every API or Mule application you have developed and are running somewhere. The “running somewhere” is the domain of the runtime plane.

The runtime plane is made up of the many components and services responsible for running MuleSoft applications, but at its heart is a Java Virtual Machine (JVM). This includes the Mule runtime engine, but it also includes the services where the runtime engine can be hosted, including Runtime Fabric and CloudHub. The runtime plane also includes services used by a MuleSoft application while it is running, including:

  • Virtual Private Cloud (VPC)
  • Virtual Private Network (VPN)
  • Object Store V2
  • Anypoint MQ
  • Connectors and any associated drivers
  • DataGraph
  • Dedicated Load Balancers (DLB)
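To make the split between the two planes concrete, here is a minimal sketch of a Mule 4 flow that uses one of these runtime plane services, Object Store v2. This is an illustration, not a definitive configuration: the flow name, path, key expression, and store name are invented, and the namespace declarations and global connector configurations are omitted for brevity.

```xml
<!-- Illustrative Mule 4 flow: caches the incoming payload in Object Store v2.
     Namespace declarations and global configs omitted; names are hypothetical. -->
<flow name="cache-customer-flow">
    <http:listener path="/customers" config-ref="HTTP_Listener_config"/>
    <!-- Object Store v2 is a runtime plane service: this operation executes
         wherever the Mule runtime engine is hosted, while the application
         itself is deployed and monitored from the control plane. -->
    <os:store key="#[attributes.queryParams.id]" objectStore="customer-store">
        <os:value>#[payload]</os:value>
    </os:store>
</flow>
```

Everything inside this flow executes in the runtime plane; designing, deploying, and monitoring it happens in the control plane.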

When you open the Design Center in Anypoint, you are operating within the control plane. When you deploy and start your MuleSoft application, either through the Runtime Manager interface or using an Anypoint CLI command, you are operating components in the control plane. The running applications and all the services they can use, along with the runtime engine, are part of the runtime plane.

The screen shown in Figure 1.6 is running in the control plane and hosts all the services seen here. When you log in to your trial Anypoint Platform, you may see that a few components are missing. Components such as Anypoint MQ and Partner Manager require additional licensing.

Figure 1.6 - Anypoint Control Plane and available components and services

The figure shows many of the services available in MuleSoft including:

  • Anypoint Code Builder (ACB) – A new developer IDE based on Visual Studio Code, able to operate fully in the cloud without any installation required
  • Design Center – A cloud-based development tool for building API specifications in RAML, OAS, or AsyncAPI
  • Exchange – A catalog of artifacts, including APIs, connectors, templates, examples, snippets, and DataWeave libraries, all of which can be shared with other consumers in order to support reuse
  • Management Center (and all the controls therein) – All of the controls and management capabilities of the platform, including:
    • Access management
    • API Management
    • Runtime Management
    • Governance
    • Anypoint MQ management
    • Visualizer for creating views of APIs
    • Monitoring
    • Secrets management

We will learn more about these services, and about the runtime plane options, in the next chapter. But first, let’s consider the architectural advantages of having these two planes.

  • Fault tolerance: When the control plane is down, whether because of an outage or an upgrade, the runtime plane and all the running applications can continue to operate. Likewise, when the control plane identifies that an application is not running for some reason, it can restart the application or attempt to run it in a different availability zone.
  • Hosting flexibility and regulation compliance: Organizations can choose to run the control plane in the cloud, with everything installed, hosted, and managed by MuleSoft. Some EU organizations may require all their assets, data, and metadata to be operated strictly within the EU; this is supported with an EU-hosted control plane. Organizations requiring FedRAMP compliance can run the control plane in the MuleSoft-hosted AWS GovCloud. Some organizations may have requirements or regulations preventing them from having any assets, data, or metadata in the cloud at all. For these organizations, the Anypoint Platform supports running the control plane in your own data center.
  • Runtime flexibility: The runtime plane can be fully managed by MuleSoft on the AWS public cloud, on an AWS VPC, or on the AWS GovCloud. There are also options to run on customer-hosted infrastructure, with the Mule runtime engine installed either directly on servers specially provisioned for it or as an appliance in container deployments. Depending on the deployment selection for the control plane and runtime plane, the availability of Anypoint service configurations may change.

In the next section, we will explore where the control plane can be hosted, where the runtime plane can be hosted, and how the hosting decision for one plane impacts the options for the other.

Platform deployment options

One of the jobs of the architect is to examine and understand the current state of the organization’s systems landscape. Has the organization embraced a cloud managed services approach? Does the organization have critical systems still in its own data center? The next sections will describe the options available for hosting the planes and how they relate to each other.

Control plane hosting

Currently, there are two deployment options available from MuleSoft for running the control plane, the first of which has three deployment locations:

  • Hosted and managed by MuleSoft
    • US AWS cloud
    • EU AWS cloud
    • AWS GovCloud
  • Running control plane functionality locally on the organization’s own hardware, hosted and managed by the organization

Running the control plane on the organization’s own infrastructure uses a product called Anypoint Platform Private Cloud Edition (PCE). Installing this product requires working with MuleSoft professional services.

Runtime plane

The runtime plane, as described earlier, is where your programs run. This is where the HTTP connector you added to a flow listens on the port you told it to listen to. It’s where the DataWeave transformation logic sits waiting for a payload to reshape. It is also the domain of Anypoint Object Store v2 and Anypoint MQ.

In order to run these components, we must have compute, memory, storage, and network resources. MuleSoft provides four options where the Mule runtime engine can execute:

  • CloudHub
  • CloudHub 2.0
  • Runtime Fabric
  • Standalone

Later chapters will take a deeper look at each of these runtime hosting options. In Chapters 9 and 10, we look at the MuleSoft-hosted options, CloudHub and CloudHub 2.0. In Chapter 11, we look at the nuances of containerizing the runtime plane with Runtime Fabric. And finally, in Chapter 12, we will look at the self-hosted standalone option of deploying the Mule runtime engine on infrastructure obtained and managed by the organization.
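Whichever hosting option is chosen, deployment is typically automated rather than done by hand. As a rough sketch of what a CloudHub deployment looks like in practice, here is an illustrative Mule Maven plugin configuration. The application name, environment, credentials, and version properties are placeholders, not a definitive setup:

```xml
<!-- Illustrative mule-maven-plugin configuration for deploying to CloudHub.
     Property names, versions, and credentials below are placeholders. -->
<plugin>
    <groupId>org.mule.tools.maven</groupId>
    <artifactId>mule-maven-plugin</artifactId>
    <version>${mule.maven.plugin.version}</version>
    <extensions>true</extensions>
    <configuration>
        <cloudHubDeployment>
            <uri>https://anypoint.mulesoft.com</uri>
            <muleVersion>4.6.0</muleVersion>
            <username>${anypoint.username}</username>
            <password>${anypoint.password}</password>
            <applicationName>customer-api-dev</applicationName>
            <environment>Sandbox</environment>
            <workers>1</workers>
            <workerType>MICRO</workerType>
        </cloudHubDeployment>
    </configuration>
</plugin>
```

Running `mvn deploy -DmuleDeploy` with a configuration like this packages the application and pushes it to the target environment, with the control plane tracking the deployment.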

But just because each of these can host the runtime engine does not mean all the other runtime plane components are supported in each of these environments. Table 1.1 is a matrix showing the runtime components available for each hosting option.

Table 1.1 - Hosting Options for Runtime Plane components

In this table, we can see that DataGraph is not available in CloudHub 2.0, Runtime Fabric, or standalone servers. We can also see that Object Store v2 is not available in Runtime Fabric or standalone servers. The most flexible runtime hosting options are CloudHub and CloudHub 2.0.

In the next section we will see what combinations of these runtime hosting options can be used with the different control plane options.

Combining Control Plane hosts and Runtime Hosts

The control plane, as we have said, is responsible for managing all the things we have running in the runtime plane. Therefore, the deployment choice we make for the control plane will impact where we can host the runtime plane.

Table 1.2 is a matrix showing the relationship between the Control Plane hosting option and the Runtime hosting option.

Table 1.2 - Control Plane deployments and runtime hosting options matrix

We can see in the table that if we are hosting our control plane in the US or EU, we can host the runtime plane in any of the four options identified earlier. However, if we are hosting the control plane in the GovCloud, the only options are CloudHub and standalone servers; CloudHub 2.0 and Runtime Fabric are not available in GovCloud. If you must host the control plane on your own servers and infrastructure (PCE), then you can only host the runtime plane on your own servers and infrastructure.

Sometimes an organization has to make difficult choices in deciding where to deploy both the control plane and the runtime plane. Federal and state governments often need to follow regulations and may require software solutions to be FedRAMP compliant. MuleSoft has a solution for this, called Government Cloud, but it comes with other limiting factors and can be more expensive. When using Government Cloud, the runtime plane must be CloudHub, standalone Mule runtimes, or a combination of the two (hybrid).

Likewise, European organizations may need to keep all software entirely within EU data centers. MuleSoft provides an option to use an EU-hosted control plane. This control plane will also limit CloudHub deployments to the EU region and EU availability zones.

For organizations with a mandate to retain all IT infrastructure and systems under their direct control and within their data center, there is the option to install Private Cloud Edition and run both the control plane and runtime plane on your own hardware.

The architect’s job in these situations is to understand how to navigate the differences. If the runtime host does not support Anypoint DataGraph or Object Store v2, what options do we have instead? Do we even need DataGraph or Object Store v2? Moreover, if an organization is operating from the US or EU control plane and has every option open to it, what business criteria, IT policies, organizational resourcing constraints, or other environmental variables would prompt an architect to choose CloudHub over Runtime Fabric? What would drive the architect to recommend combining CloudHub 2.0 with a standalone instance of the runtime engine?

To answer these questions, we need to understand each of these MuleSoft delivery approaches in a little more detail and see what they can and cannot do. Where our applications will execute, and what runtime plane capabilities are available, is important. Part 3 of this book will examine these details so we, as architects, can be equipped to answer these questions and make our recommendations. In the next section, we will take a first look at the specific capabilities and components available in the platform.

MuleSoft capabilities and components

As we have seen already, the Anypoint Platform provides the components to design, deploy, and manage the APIs we build. The platform also provides the components, and in many cases the infrastructure, to run these applications. Let’s review the different components that support each of these phases in the lifecycle of API-first development.

Discover Capability

Anypoint Exchange can be thought of as a catalog, or registry, of all the different reusable components, or assets, available to use in your solution. This catalog can be searched for specific phrases or filtered based on predefined asset types or asset categories. You can list assets from your own organization or assets developed by MuleSoft. You can even filter based on the lifecycle stage an asset is in.

In Figure 1.7, we can see Exchange searching for all the assets provided by MuleSoft.

Figure 1.7 - Anypoint Exchange assets from MuleSoft

Looking at the different assets in this figure, we can see that the types of assets that can be registered or published to Anypoint Exchange have grown dramatically over the past four years. Initially, Exchange was limited to connectors and APIs. As of this writing, Anypoint Exchange is home to 11 different asset types:

  • Connectors: Components, developed with the Mule SDK, that developers can use in Mule flows to connect and interact natively with different systems
  • DataWeave libraries: Reusable DataWeave transformations that developers can share across integrations
  • Examples: Assets that provide an example application, such as “How to Batch ETL with Snowflake”
  • Policies: Policies that govern the security of APIs, typically running in a service mesh
  • API spec fragments: Reusable segments of RAML or OAS to be reused in the development of new API specifications
  • REST APIs: API specifications for implementations that can be used or consumed by other approved applications
  • RPA activity templates: Templates for the MuleSoft Robotic Process Automation (RPA) capability
  • Rulesets: Rules for API governance
  • SOAP APIs: APIs that can be used or consumed by other approved applications
  • Templates: Templates that can be copied and configured for your environment
  • Custom (e.g., accelerators)
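To give a feel for one of these asset types, here is a minimal sketch of what a reusable DataWeave library might contain. The module and function names are hypothetical, invented for illustration:

```dataweave
%dw 2.0
// Illustrative reusable DataWeave module, the kind of asset that could be
// published to Exchange as a DataWeave library. Names are hypothetical.

// Joins optional name fields into a single display name.
fun toFullName(c) = trim((c.firstName default "") ++ " " ++ (c.lastName default ""))

// Masks the local part of an email address, keeping the domain visible.
fun maskEmail(email: String): String = email replace /^[^@]+/ with "***"
```

A Mule application that declares such a library as a dependency can then import and reuse these functions across its transformations instead of duplicating the logic.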

Design capability

API design begins with the API specification. Whether the organization has standardized on the OpenAPI Specification (OAS, formerly known as Swagger) or the RESTful API Modeling Language (RAML), the specification can be created in the Anypoint Platform Design Center. Design Center allows you to design specifications, fragments, and AsyncAPI specs, as shown in Figure 1.8.

Figure 1.8 - Design Center API specification

In this screen, MuleSoft can provide a guide, or “scaffolding”, making the development of the specification low-code/no-code. This capability is also in the cloud, so designers can get started without needing to install anything on their laptop or desktop.
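A specification drafted in Design Center might look like the following minimal RAML sketch. The resource, type, and base URI are invented for illustration, not taken from a real API:

```yaml
#%RAML 1.0
# Minimal illustrative API specification; names and URIs are hypothetical.
title: Customer API
version: v1
baseUri: https://example.com/api/{version}

types:
  Customer:
    properties:
      id: string
      name: string

/customers:
  get:
    description: Retrieve the list of customers
    responses:
      200:
        body:
          application/json:
            type: Customer[]
  /{customerId}:
    get:
      responses:
        200:
          body:
            application/json:
              type: Customer
```

From a specification like this, the platform can scaffold a mocked implementation, publish the contract to Exchange, and generate interactive documentation for consumers.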

API design specifications can also be created inside the two primary development tools, Anypoint Studio and ACB. Both tools allow you to synchronize your API specification with Design Center. Likewise, any API specification started in Design Center can be opened and worked on in Studio or ACB.

Management capability

The Anypoint Management Center provides several capabilities focused on the operations and administration of the platform, as well as the API applications running on it.

The screen in Figure 1.9 shows the options in the Management Center.

Figure 1.9 - Anypoint Management Center

The management capabilities shown in Figure 1.9 are defined here.

  • Access Management: Provides the capability to create Anypoint Platform users, or to set up an external identity provider (IdP), to manage the platform’s users and the permissions they have been granted. It also exposes the audit logs, which capture anything done on the platform, from deploying an application to publishing an API.
  • API Manager: Provides the capabilities of applying policies, setting up service-level agreements (SLAs), and securing APIs. API Manager is also used to manage two types of API alerts: alerts for API requests/responses/policies and alerts for contract changes or violations. API Manager also manages and deploys any proxy API runtimes you need to associate with an API in order to manage it.
  • Runtime Manager: Provides capabilities for managing Anypoint VPCs, Anypoint VPNs or private spaces, MuleSoft applications, load balancers, standalone servers, and Flex Gateways across different environments. Additionally, application deployment, setup, and management can all be controlled from Runtime Manager.
  • API Governance: Gives architects the capability to define rulesets for API conformance, monitor compliance, and send notifications to developers to resolve compliance issues.
  • Visualizer: Provides a graphical view of the APIs deployed to a runtime plane, as observed through three lenses: Architecture, Troubleshooting, and Policies. The architecture view shows the relationships of the applications, the nodes, in your application network. The troubleshooting view includes metrics for all the nodes in the application network. The policy view allows you to see an overview of the policies applied to the MuleSoft applications in your network.
  • Monitoring: Provides the capability to view performance dashboards, review logs and CPU/memory/disk consumption, and look for trends across deployed APIs. Monitoring also allows operations to look for stability issues in any API based on performance and usage findings.
  • Secrets Manager: Provides a secure vault for keeping certificates, keystores and truststores, and username/password key-value pairs safe. Secrets Manager is part of a security capability that supports edge policies and tokenization in the Runtime Fabric runtime plane.

We have now looked at some of the primary capabilities of the Anypoint Platform, many of which we will cover in more detail later. With a historical backdrop of the different approaches to integration, and now with all these iPaaS platform capabilities available, let’s take a look at some of the difficulties, new and old, that organizations are facing today.

Why are APIs so important in delivering modern integrations?

APIs fundamentally change the architecture discussion from asynchronous messaging to synchronous, conversational integration. They shift the paradigm from a production-of-data model to a consumption model.

Many newer commercial off-the-shelf (COTS) systems already ship with this in mind, offering customers some form of integration interface for accessing the processes and data within the system. Cloud-based SaaS systems such as Salesforce, Fiserv, and Workday also expose their processes and data through robust APIs. Other SaaS systems built a SOAP web services interface to access their data.

EDI transactions continue to simplify B2B partner integration. And many businesses are enriching the partner experience by offering a dedicated API, allowing their partners to develop their own solutions rather than forcing them to use a web portal.

Unfortunately, this list does not account for the hundreds of legacy systems across the enterprise. These were often developed as islands, with no consideration of integrating with other systems. Being able to build safe, secure, and effective APIs to unlock the data in these legacy systems is an important first step in delivering a modern integration architecture to an enterprise.

The MuleSoft platform supports this approach to integration by providing the services and components needed to build a consistent way of accessing system data across all sources of truth, be it cloud-based SaaS, B2B, COTS n-tier software, or even legacy mainframe software. Publication of these APIs in turn increases their reusability across new and future development efforts. Fine-grained control of the deployment of APIs, including firewalls, port security, certificates, SSL, and policies, allows architects to share access to a system’s data in a safe and secure way that complies with governance, protocols, policies, and, in some cases, even laws.

Being able to design and develop these APIs using low-code/no-code tooling also helps IT be more efficient in delivering integration solutions to the business, particularly if the business users and IT team lack the skills required to develop complex, fault-tolerant, performant, scalable integration solutions.

The late, great Tina Turner recorded a song entitled “We Don’t Need Another Hero”, and it comes to mind when I ask other architects: do we need another integration platform architecture? We are going to answer this question throughout the rest of this book. We will look in more detail at the platform architecture of MuleSoft. We will dig into the MuleSoft resources, accelerators, design approach, and strategies for deployment. We will examine how operations and management of the platform are as much the responsibility of the architect as design diagrams and code reviews.

In short, the rest of this book will serve as your personal guide to MuleSoft platform architecture and an API approach to addressing the modern challenges of enterprise application integration.

Summary

The chapter started by providing an overview of the MuleSoft platform and looking at what makes an iPaaS.

The chapter continued by looking back in some detail at the history of integration and how architectural approaches have evolved from point-to-point to SOA and REST.

The chapter then took a look at the building blocks, capabilities, and services available in MuleSoft. A brief examination was made of the control plane and the runtime plane and the options that support different configurations and hosting options for each one.

The chapter then looked at the current challenges facing integration. It examined the issues businesses have with digital transformation and the constraints IT often faces when trying to keep systems operational while at the same time, evolve or transform systems to meet the growing demands of the business.

The chapter then ended by looking at how APIs in general, and the MuleSoft platform in particular, are important to delivering modern integration solutions, unlocking system data in secure ways while supporting reusability.

In the next chapter, we will examine the foundational components and the underlying architecture in closer detail.

Questions

  1. What are the two planes of operation on which the Anypoint Platform runs?
  2. What are the four hosting options available in the runtime plane?
  3. Can you name the two capabilities that are unavailable when using the Runtime Fabric for hosting the runtime plane?
  4. How does iPaaS differ from other approaches to integration? From Point-to-Point? From RPC? From SOA?
  5. What issues have you or your organization faced that makes integration a challenge?

Answers

  1. The control plane and the runtime plane are the two planes on which the Anypoint Platform operates.
  2. CloudHub, CloudHub 2.0, Runtime Fabric (RTF), and standalone servers are the current options for hosting the MuleSoft runtime plane.
  3. Object Store v2 and Anypoint DataGraph are not available when running in RTF.
  4. iPaaS is a comprehensive suite of cloud services enabling integration development and governance, connecting on-premises applications and data with cloud-based processes, services, apps, and data. Point-to-point involves one application reaching directly into another to collect data. RPC is a kind of point-to-point architecture that involves invoking remote calls in another application and getting data back. SOA is a service-oriented architecture, without any globally agreed-upon standards, which provides an approach to object interaction through predefined services.
  5. Your answer will vary here depending on the experiences of your organization. I have seen organizations that had older mainframe systems with complex overnight batch processing, which of course impacts the ability to have a real-time view of data.

Further Reading


Key benefits

  • Discover Anypoint Platform's capabilities for creating high-availability, high-performance APIs
  • Learn about Anypoint architecture and platform attributes for Mule app deployment
  • Explore best practices, tips, and tricks that will help you tackle challenging exam topics and achieve MuleSoft certification
  • Purchase of the print or Kindle book includes a free PDF eBook

Description

We’re living in the era of digital transformation, where organizations rely on APIs to enable innovation within the business and IT teams are asked to continue doing more with less. Written by Jim Andrews, a MuleSoft evangelist, and Jitendra Bafna, a senior solution architect with expertise in setting up MuleSoft, this book will help you deliver a robust, secure, and flexible enterprise API platform, supporting any required business outcome. You’ll start by exploring Anypoint Platform’s architecture and its capabilities for modern integration before learning how to align business outcomes with functional requirements and how non-functional requirements shape the architecture. You'll also find out how to leverage Catalyst and Accelerators for efficient development. You'll get to grips with hassle-free API deployment and hosting in CloudHub 1.0/2.0, Runtime Fabric Manager, and hybrid environments and familiarize yourself with advanced operating and monitoring techniques with API Manager and Anypoint Monitoring. The final chapters will equip you with best practices for tackling complex topics and preparing for the MuleSoft Certified Platform Architect exam. By the end of this book, you’ll understand Anypoint Platform’s capabilities and be able to architect solutions that deliver the desired business outcomes.

Who is this book for?

This book is for technical and infrastructure architects with knowledge of integration and APIs who are looking to implement these solutions with MuleSoft’s Anypoint Platform. Architects enrolled in the platform architect course who want to understand the platform's capabilities will also find this book helpful. The book is also a great resource for MuleSoft senior developers transitioning to platform architect roles and planning to take the MuleSoft Platform Architect exam. A solid understanding of MuleSoft API development, ideally 3 to 5 years of experience with the platform, is necessary.

What you will learn

  • Understand Anypoint Platform's integration architecture with core components
  • Discover how to architect a solution using Catalyst principles
  • Explore best practices to design an application network
  • Align microservices, application networks, and event architectures with Anypoint Platform's capabilities
  • Identify non-functional requirements that shape the architecture
  • Perform hassle-free application deployment to CloudHub using the Mule Maven plugin, CLI, and Platform API
  • Understand how to manage the API life cycle for MuleSoft and non-MuleSoft APIs

Product Details

Publication date: Jul 31, 2024
Length: 498 pages
Edition: 1st
Language: English
ISBN-13: 9781805129622
Vendor: MuleSoft




Table of Contents

Chapter 1: What is the MuleSoft Platform?
Chapter 2: Platform Foundation Components and the Underlying Architecture
Chapter 3: Leveraging Catalyst and the MuleSoft Knowledge Hub
Chapter 4: An Introduction to Application Networks
Chapter 5: Speeding with Accelerators
Chapter 6: Aligning Desired Business Outcomes to Functional Requirements
Chapter 7: Microservices, Application Networks, EDA, and API-led Design
Chapter 8: Non-Functional Requirements Influence in Shaping the API Architecture
Chapter 9: Hassle-free Deployment with Anypoint iPaaS (CloudHub 1.0)
Chapter 10: Hassle-Free Deployment with Anypoint iPaaS (CloudHub 2.0)
Chapter 11: Containerizing the Runtime Plane with Runtime Fabric
Chapter 12: Deploying to Your Own Data Center
Chapter 13: Government Cloud and the EU Control Plane – Special Considerations
Chapter 14: Functional Monitoring, Alerts, and Operation Monitors
Chapter 15: Controlling API Sprawl with Universal API Management
Chapter 16: Addressing Non-Functional Requirements – from a Thought to an Operation
Chapter 17: Prepare for Success
Chapter 18: Tackling Tricky Topics
Index
Other Books You May Enjoy

Customer reviews

Rating: 5.0 out of 5 (7 ratings)
5 star: 100% · 4 star: 0% · 3 star: 0% · 2 star: 0% · 1 star: 0%

nahomibj, Oct 21, 2024 (5/5)
This is an amazing resource to start or improve your architecture journey. The authors are very well known in the MuleSoft community, and the fact that they wrote a book with their insights has great value! I can hear Jim and Jacky's voices in my head while reading the book, and I love it 😄 Thank you, guys, for taking the time and energy to write this and share your knowledge. I can for sure say I'm an architect after reading this 😁
(Amazon Verified review)

Ashish Pardhi, Sep 17, 2024 (5/5)
From the moment I cracked open the cover, I was impressed by the depth and clarity of the content. Whether you're an architect shaping MuleSoft solutions, a developer diving into integration challenges, or simply curious about the platform, this guide has something valuable to offer. 🚀 The authors take us on a journey through the intricacies of MuleSoft, covering topics like API design, integration patterns, security, and scalability. What sets this book apart is its practical approach: each concept is illustrated with real-world examples and best practices. Whether you're setting up Anypoint Platform from scratch or optimizing existing solutions, you'll find actionable guidance here. 🙏 I want to extend a heartfelt thank you to Jitendra Bafna for sharing his expertise and knowledge with the MuleSoft community. His passion shines through in every chapter. And Jim Andrews, your contributions add immense value to this essential guide. In summary, the "MuleSoft Platform Architect's Guide" is a must-read for anyone serious about mastering MuleSoft. It's not just a book; it's a roadmap to success in the MuleSoft ecosystem. Highly recommended! 👏📖 Thanks, Ashish Pardhi
(Amazon Verified review)

Juan Cruz Basso, Sep 16, 2024 (5/5)
I really like the way the book introduces different points of view on the concepts. It starts from the very basic concepts and drills down into the technical details. The examples are very clear, and the book helps you, as a guideline, to prepare yourself for the MuleSoft certification.
(Amazon Verified review)

Ravi Tamada, Aug 31, 2024 (5/5)
This is one of the masterpiece books out of Jitendra's and Jim's knowledge shelf. Last week I received a hard copy of this book, and I have read most parts of it. Every section of the book (18 sections) is well crafted, which helps an architect understand the concepts very well (from basic to advanced). Both Jacky and Jim have poured their vast architectural/integration experience into this great book. Jacky is trying to uplift aspiring MuleSoft Platform Architects by offering this book as a mechanical 'JACK'. I strongly recommend this book to those who want to quickly start their career as a MuleSoft Platform Architect.
(Amazon Verified review)

Boris, Sep 05, 2024 (5/5)
The MuleSoft Platform is a complex and constantly evolving set of technologies, best practices, and patterns. It has become a leading API management and integration platform, and the amount of knowledge and information it has generated is immense. It is a very hard undertaking to describe in one book all the information that has accumulated in the community. One approach would be to cover this platform in depth, and this would require writing multiple volumes. Another approach is to gather all the major foundational concepts and principles in one volume, covering the breadth of the platform and guiding the reader to explore various topics in more depth on their own, through the official documentation and multiple informational sources. I believe the authors chose the second approach. The book covers all the major foundational topics integration architects will need to familiarize themselves with the platform. It describes the major capabilities of the platform, deployment topologies, best practices, and patterns. The book also covers MuleSoft Catalyst, MuleSoft's outcome-based methodology that is designed to help customers drive their API enablement initiatives. I think it is best used as a reference point, familiarizing readers with the main concepts and principles and allowing them to explore the topics of interest further. It is a tremendous resource for integration and API architects, with lots of information and a very high quality of presentation.
(Amazon Verified review)

FAQs

How do I buy and download an eBook?

Where there is an eBook version of a title available, you can buy it from the book details for that title. Add either the standalone eBook or the eBook and print book bundle to your shopping cart. Your eBook will show in your cart as a product on its own. After completing checkout and payment in the normal way, you will receive your receipt on the screen containing a link to a personalised PDF download file. This link will remain active for 30 days. You can download backup copies of the file by logging in to your account at any time.

If you already have Adobe Reader installed, clicking on the link will download and open the PDF file directly. If you don't, save the PDF file to your machine and download the Reader to view it.

Please Note: Packt eBooks are non-returnable and non-refundable.

Packt eBook and Licensing: When you buy an eBook from Packt Publishing, completing your purchase means you accept the terms of our licence agreement. Please read the full text of the agreement. In it, we have tried to balance the need for the eBook to be usable for you, the reader, with our need to protect our rights as publishers and the rights of our authors. In summary, the agreement says:

  • You may make copies of your eBook for your own use onto any machine
  • You may not pass copies of the eBook on to anyone else
How can I make a purchase on your website?

If you want to purchase a video course, eBook, or bundle (Print+eBook), please follow the steps below:

  1. Register on our website using your email address and a password.
  2. Search for the title by name or ISBN using the search option.
  3. Select the title you want to purchase.
  4. Choose the format you wish to purchase the title in; if you order the print book, you get a free eBook copy of the same title.
  5. Proceed with the checkout process (payment can be made using credit card, debit card, or PayPal).
Where can I access support around an eBook?
  • If you experience a problem using or installing Adobe Reader, contact Adobe directly.
  • To view the errata for the book, see www.packtpub.com/support and view the pages for the title you have.
  • To view your account details or to download a new copy of the book go to www.packtpub.com/account
  • To contact us directly if a problem is not resolved, use www.packtpub.com/contact-us
What eBook formats does Packt support?

Our eBooks are currently available in a variety of formats, such as PDF and ePub. This may change in the future with trends and developments in technology, but please note that our PDFs are not in the Adobe eBook Reader format, which has greater security restrictions.

You will need to use Adobe Reader v9 or later in order to read Packt's PDF eBooks.

What are the benefits of eBooks?
  • You can get the information you need immediately
  • You can easily take them with you on a laptop
  • You can download them an unlimited number of times
  • You can print them out
  • They are copy-paste enabled
  • They are searchable
  • There is no password protection
  • They are lower in price than print
  • They save resources and space
What is an eBook?

Packt eBooks are a complete electronic version of the print edition, available in PDF and ePub formats. Every piece of content down to the page numbering is the same. Because we save the costs of printing and shipping the book to you, we are able to offer eBooks at a lower cost than print editions.

When you have purchased an eBook, simply log in to your account and click on the link in Your Download Area. We recommend saving the file to your hard drive before opening it.

For optimal viewing of our eBooks, we recommend downloading and installing the free Adobe Reader, version 9 or later.