The journey of API platforms - from proxies to microgateways

As organizations continue to embrace cloud computing as the means to realize business benefits such as TCO reduction, business agility, and digital transformation, an inevitable side effect takes place: information becomes more and more federated.

The rationale is simple. Take a typical on-premise system: Enterprise Resource Planning (ERP). A typical ERP system encompasses not one but several business capabilities (often referred to as modules, for example, finance, HR, and SCM), all supported by a single infrastructure (a monolith).

Figure 2.1: Monolithic systems

Now, because of this, all modules within the same monolith are integrated out of the box, mainly because they all share a single database, which simplifies (at least a bit) the integration landscape. It also means, though, that if the common infrastructure is affected, all business capabilities will be too (all eggs in one basket). Customizing, extending, patching, and scaling a monolith therefore has to be done extremely carefully, as the entire system could be affected, impacting business operations heavily.

When it comes to the cloud (whether Software as a Service (SaaS), Platform as a Service (PaaS), or Infrastructure as a Service (IaaS)), however, business and technical capabilities don't have to reside in a single cloud application. In fact, in most cases they don't. Instead, capabilities are scattered across distinct (smaller) cloud "services," each of which can be implemented individually.

Please refer to the following document for the official National Institute of Standards and Technology (NIST) definition of cloud computing (SaaS, PaaS, and IaaS):
http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-145.pdf
Figure 2.2: Capabilities scattered among cloud services

Given the practical and granular nature of cloud services, hundreds, if not thousands, of cloud vendors (especially in SaaS) have emerged, giving organizations several options to choose from. This has (fortunately or unfortunately, depending on the angle you look at it from) led to many organizations adopting multi-vendor cloud strategies, in many cases without even realizing it, as the adoption is driven at a departmental/business unit level rather than as a corporate-wide IT initiative.

Unavoidably, information assets also become scattered (federated) across different cloud services. The more diverse and distinct an organization's cloud adoption is, the more federated the information becomes.

Moreover, in a highly competitive market, arguably dominated by digital disruptors such as Amazon, eBay, and Netflix (as mentioned in Chapter 1, The Business Value of APIs), more traditional organizations are forced to come up with innovative digital, customer-centric, and multichannel strategies in order to remain relevant and competitive. Needless to say, access to information (now federated) in a standard, consistent, and secure way, across all digital channels, is a key requirement of any digital strategy.

Organizations that rush into creating multichannel strategies, without first defining a solution to provide access to key information assets, will most likely end up with lots of ad hoc solutions that will not only complicate the architectural landscape, but ultimately prevent the company from realizing the promised goals of the digital strategy.

Figure 2.4: Accidental cloud architecture (cloud spaghetti)

In order to address this, a generally accepted approach is to implement a hybrid Integration Platform as a Service (iPaaS) solution capable of providing access to information assets regardless of where they are. The iPaaS platform should be capable of connecting to any cloud service and/or on-premise system and delivering access to that information via APIs.

The use of APIs as the means to deliver standard, secured, and real-time access to information enables multichannel applications to consume the assets as and when they need them.
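As an illustration only, the following TypeScript sketch shows what such API-mediated access might look like from a consuming application's point of view: a single call returns a customer profile assembled from information federated across a SaaS CRM and an on-premise ERP. All endpoints and field names here are hypothetical placeholders for whatever the iPaaS layer would actually expose.

```typescript
// Minimal sketch: one API call hides the fact that the underlying data is
// federated across a SaaS application and an on-premise system.
// All URLs and field names are hypothetical.

interface CustomerProfile {
  id: string;
  name: string;        // held in a SaaS CRM
  openOrders: number;  // held in an on-premise ERP, reached via the iPaaS layer
}

async function getCustomerProfile(id: string): Promise<CustomerProfile> {
  const [crm, erp] = await Promise.all([
    fetch(`https://crm.example-saas.com/customers/${id}`).then(r => r.json()),
    fetch(`https://ipaas.example.com/on-prem/erp/orders?customer=${id}`).then(r => r.json()),
  ]);
  return { id, name: crm.name, openOrders: erp.openOrders };
}
```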

Figure 2.5: iPaaS solution with API management capabilities
Recommended reading: iPaaS, what is it exactly?
http://www.soa4u.co.uk/2017/03/ipaas-what-is-it-exactly-is-it-on.html

Although this may seem like the obvious answer, the truth is that unless the hybrid iPaaS solution delivers robust API management capabilities, it will struggle to address the aforementioned needs. An API has to be as close as possible to the source of information. When this is not the case, unforeseen issues can arise, such as latency, higher exposure to network problems, and even security threats, such as man-in-the-middle attacks. If information is federated among many different clouds and on-premise applications, so must the APIs be.

To put things into perspective, it is important to understand the main motivations that led (integration) middleware technologies to evolve into what this book refers to as the third generation.

Generation zero

Remember when the first Enterprise Service Bus (ESB) came out? Although the term was first used in 2002, it wasn't until a few years later that ESB adoption and popularity took off, eventually overtaking proprietary Enterprise Application Integration (EAI) solutions.

Recommended reading: Enterprise service bus history. https://en.wikipedia.org/wiki/Enterprise_service_bus#History

One of the key reasons ESBs became so popular was their relation to service-oriented architectures (SOAs) and the view that implementing an ESB was fundamental to realizing SOA.

The ability of ESBs to adopt open standards, such as web service standards (WS-*), and act as integration hubs capable of connecting to multiple systems and exposing Simple Object Access Protocol (SOAP) web services differentiated them from traditional Enterprise Application Integration (EAI) solutions.

For a full list of WS-* you may refer to the following link:
http://servicetechspecs.com/ws
Additionally, for a simple definition of SOAP web services, refer to:
http://servicetechspecs.com/xml/soap2

During this period, if a web service had to be accessed from outside the internal network, web proxies would typically be implemented in Demilitarized Zones (DMZs) to proxy the HTTP traffic to the ESB and to implement transport security (HTTPS). Web proxies, however, offered only very basic capabilities.

Figure 2.6: Generation zero – it all starts with ESBs

ESBs offered many capabilities, of which it is worth highlighting basic security, message routing, data transformation, and protocol translation, along with adapters to connect to multiple backend systems using different protocols (for example, SQLNET, HTTP, FTP, and SMTP). ESBs also allowed functionality and access to information to be exposed as standard SOAP web services. ESBs were able to receive HTTP/SOAP requests, transform the message payloads, perform message validations, and then route the calls to a given backend in the required protocol.
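As a rough illustration of this mediation pattern (receive a request, validate it, transform the payload, and route it to a backend), the following TypeScript sketch mimics, at a conceptual level, what an ESB flow does. It is not tied to any ESB product; the message shapes and endpoint are hypothetical, and a real ESB would typically deal with XML/SOAP payloads and vendor-specific adapters rather than plain JSON over HTTP.

```typescript
// Hypothetical canonical message received by the "bus".
interface OrderRequest {
  customerId: string;
  items: { sku: string; quantity: number }[];
}

// Message validation: reject payloads that do not meet the contract.
function validate(message: OrderRequest): void {
  if (!message.customerId || message.items.length === 0) {
    throw new Error("Invalid message: missing customer or items");
  }
}

// Data transformation: map the canonical message to the backend's format.
function transform(message: OrderRequest): Record<string, unknown> {
  return {
    CUST_ID: message.customerId,
    LINES: message.items.map(i => ({ ITEM_CODE: i.sku, QTY: i.quantity })),
  };
}

// Protocol translation / routing: hand the transformed message to the
// backend over whatever transport it requires (plain HTTP here for brevity).
async function route(payload: Record<string, unknown>): Promise<void> {
  await fetch("http://legacy-order-system.internal/orders", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
}

// The mediation flow simply chains the steps together.
export async function mediate(message: OrderRequest): Promise<void> {
  validate(message);
  await route(transform(message));
}
```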

During this period, most ESB implementations were pretty straightforward. As the following diagram suggests, the amount of business logic implemented in an ESB was limited and constrained by the previously mentioned capabilities. The majority (if not all) of the business logic (for example, orchestration, content validation, business rules, and so on) resided on the client side or in the backend systems that the ESB connected to.

Figure 2.7: Logic distribution in generation zero

At a time when the industry lacked open standards for integration and the majority of products implemented proprietary protocols, ESBs were widely used.

First generation

As SOAs became more prevalent in enterprises and ESB technologies continued to mature, several new capabilities were added to ESBs, mainly in support of SOAs. Examples include the adoption of Service Component Architecture (SCA) as a standard and the introduction of gateways as a more robust means to securely expose web services to external networks.

For details on SCA refer to: http://www.oasis-opencsa.org/sca

Gateways manifested themselves as either XML accelerators (running as black-box appliances) or add-ons to existing SOA/ESB infrastructure (commonly known as service gateways).

XML accelerators were ideal for DMZs because of their robust capabilities to secure SOAP web services and protect against external threats, such as the ones listed in the OWASP Top Ten Project. This made these appliances perfect as a first line of defense.

Recommended reading on the OWASP Top Ten Project: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

Service gateways, on the other hand, were well-suited to securing services internally (second-line defense) and supported the implementation of standards such as WS-Security, WS-Trust, and WS-Policy.

Figure 2.8: First-generation XML appliances

SCA introduced the concept of composite applications. In a nutshell, a composite application is a piece of software that assembles multiple web services into a single deployment unit in order to deliver a specific business solution, which is itself also exposed as a web service. Composite applications ran in specific integration middleware that extended the ESB. Although SCA is a standard, in practice each software vendor implemented its own flavor of it.

Although adopting composite applications seemed like a sound idea as a concept, in practice the implementation of business logic within the middleware, whether as complex Business Process Execution Language (BPEL) orchestrations, human workflows, or business rules, became common. It goes without saying that the footprint of the underlying integration middleware platforms increased exponentially in order to cope with the amount of processing logic introduced in the composite applications.

Figure 2.9: Logic distribution in first generation

As the number of web services and the underlying technology footprint increased, so did the number of people required to keep the initiative going. As the complexity and cost of SOA solutions continued to increase, SOA governance emerged as "the" discipline that could bring back control and maximize the chances of success, by aligning people, process, and tools toward common goals, typically centered around key business benefits. Backed by specialized software, SOA governance became a big trend and thousands of organizations worldwide attempted its adoption.

For further information on SOA governance, concepts, and principles please refer to:
http://www.soa4u.co.uk/2013/11/oracle-soa-governance-for-business.html

Towards the end of this period, an inflection point occurred that changed the ball game for the entire IT industry. The first wave of cloud computing, combined with the launch of smart devices such as the iPod and, soon after, the iPhone, completely disrupted the market. Thousands of mobile apps could now be easily found and installed via built-in app stores, creating a huge marketplace worth billions of dollars.

Now, because mobile apps ran natively inside the mobile device, a new form of remote access to information and/or functionality available in backend systems was required. It had to be simple, lightweight, and secure, given the compute-capacity constraints of such devices.

Broadly based on the Representational State Transfer (REST) architectural style, a new flavor of APIs emerged.

Recommended reading: History of APIs.
http://history.apievangelist.com/

These APIs (referred to as either REST or web APIs) offered a much simpler and more lightweight alternative to SOAP-based web services, especially when combined with JSON for the data payloads.

JavaScript Object Notation (JSON) is a lightweight data-interchange format. It is easy for humans to read and write, and easy for machines to parse and generate. It is based on a subset of the JavaScript programming language. For further information, refer to: http://www.json.org/
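To illustrate why REST/JSON felt so much lighter than SOAP, here is a small TypeScript sketch of a client retrieving a resource with a single HTTP GET and parsing the JSON response directly; there is no envelope, WSDL contract, or XML parsing involved. The endpoint and response shape are hypothetical.

```typescript
// Hypothetical resource returned by a REST/JSON API.
interface Customer {
  id: string;
  name: string;
  email: string;
}

async function getCustomer(id: string): Promise<Customer> {
  // One HTTP GET, one JSON payload: no SOAP envelope or XML parsing needed.
  const response = await fetch(`https://api.example.com/customers/${id}`, {
    headers: { Accept: "application/json" },
  });
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return (await response.json()) as Customer;
}
```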

In no time, REST APIs became the main and most popular mechanism for implementing backends and delivering remote access to information and functionality.

Second generation

As the number of smartphones rocketed, so did the number of mobile apps. Organizations of all sizes – many of which, at this point, were rushing to have a mobile presence – had to quickly come up with solutions to deliver the so-called "APIs" and therefore give mobile apps access to information that was locked in backend systems and/or only accessible via SOAP web services.

It is worth noting that most organizations, having already made considerable investments in the adoption of traditional SOA solutions, understandably were (and many still are) keen to leverage their existing capabilities (not just in terms of technology but also in terms of people) in order to also satisfy these emerging requirements.

The reality was that, at the time, the vast majority of traditional SOA middleware, although very rich in capabilities to handle XML/SOAP-based payloads, lacked basic capabilities to handle REST/JSON services.

Figure 2.10: Second-generation API management is born

Another important difference that started to emerge was around governance. For mobile app developers, speed was the main factor. Their approach to governance (if any) was lightweight. Emphasis was given to adopting techniques to produce code quickly and encouraging developers to collaborate among themselves, as opposed to introducing heavy processes requiring a lot of policing in order to ensure that standards were adhered to.

SOA governance, on the other hand, backed by specialized (and expensive) software that was difficult to implement, didn't really live up to its promise, inevitably leading to industry-wide criticism. At this point, SOA governance, both as a discipline and as specialized software, was deemed a failure.

As the industry's interest in API-related capabilities increased, software vendors in response rapidly adapted/enhanced their traditional SOA stacks (for example, ESBs) to add RESTful/JSON processing capabilities.

Furthermore, as API management started to develop as a discipline for managing APIs across their full life cycle, new technical capabilities were required to make this task a lot simpler. To this end, many SOA governance vendors also quickly adapted and/or heavily simplified their offerings to make them suitable for managing APIs.

Some vendors went beyond adapting their SOA stacks, even changing their brand names to reflect this change of direction:
https://sdtimes.com/akana/soa-software-changes-name-akana/

First-generation ESBs, XML gateway appliances, and SOA governance tooling, adapted to support API-specific capabilities (including API management), are what this book refers to as second-generation API platforms. Because of this, second-generation API platforms can be easily identified: API capabilities tend to be just add-ons on top of the vendor's existing ESB and/or XML gateway offering.

Figure 2.11: First- and second-generation API platforms' architectures compared

Another point worth highlighting is the definition of services as semi-decoupled. According to the diagram, a service is where business-logic-related capabilities, such as orchestration and business rules, are implemented. APIs, on the other hand, are the interface of such a service (which could use any protocol, for example, SOAP or REST) and where policies, such as authentication and authorization, are applied. In the first and second generations, APIs and services are coupled together and deployed into the same stack; an API and its service tend to form a single deployment unit.

For further information on the semi-decoupled service definition, refer to the Open Modern Software Architecture Project (OMESA):
omesa.io
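A minimal sketch of that coupled style, assuming an Express application, is shown below: the API concern (an illustrative API key check) and the service concern (the business logic) are implemented and shipped as one deployment unit. The route, header name, and environment variable are hypothetical.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// "API" concern: a policy (API key authentication) applied at the interface.
app.use((req, res, next) => {
  if (req.header("x-api-key") !== process.env.API_KEY) {
    res.status(401).json({ error: "unauthorized" });
    return;
  }
  next();
});

// "Service" concern: the business logic, living in the same deployment unit.
app.post("/orders", (req, res) => {
  const order = { id: Date.now().toString(), ...req.body, status: "CREATED" };
  res.status(201).json(order);
});

app.listen(8080);
```

In the third generation, described later in this chapter, the policy layer moves out into an independently deployable gateway, leaving the service to focus on business logic alone.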

At this point, the tendency to implement business logic across the different layers of the integration middleware continued. Multiple reasons can be blamed for this: sometimes it was simpler to just use the integration layer as a sort of "hammer for all nails," and other times it was a lack of best practice and architecture governance. Note that during this period, the word "governance" was more or less prohibited, given the bad reputation that SOA governance had earned.

Figure 2.12: Logic distribution in second generation

This tendency of integration stacks becoming thicker and thicker was heavily criticized by the then rapidly emerging communities of developers promoting microservice architectures. Challenges in horizontally scaling integration platforms, complex inter-dependencies when releasing software into production, a lack of flexibility when selecting technologies, and, last but not least, the common practice of making the middleware fat by implementing business logic in the integrations were some of the most notable criticisms raised.

Recommended reading: Microservices, a definition of this new architectural term, with emphasis on the section Smart endpoints and dumb pipes:
https://martinfowler.com/articles/microservices.html

Application Services Governance

Towards the end of this period, Gartner came up with the concept of Application Services Governance. Gartner's view was that API management would eventually become part of SOA governance. The combination of the two is what Gartner named Application Services Governance.

Further details on Application Services Governance are available at the following link:
https://www.akana.com/solutions/application-services-governance
Figure 2.13: Gartner's Application Services Governance

In practice, instead of API management and SOA governance combining, the traditional way of realizing SOA was, as mentioned earlier, heavily challenged by the emerging communities promoting the microservices architectural style. These communities did not just question the use of traditional (monolithic) SOA stacks (for example, ESBs) but, broadly speaking, also regarded their use as a bad practice.

Third generation

What we see today, in the majority of organizations worldwide, is a big push to adopt cloud, achieve digital transformation, and also become customer-centric. Businesses are taking serious steps in order to achieve these goals. At this point, it starts to become evident that (monolithic) second-generation API platforms aren't suitable for, or capable of, delivering the capabilities required to achieve such goals.

To elaborate further, the following is an explanation of what these goals actually mean to IT and why/how they relate to API platforms.

Cloud adoption

Cloud adoption means moving on-premise applications into the cloud (IaaS, PaaS, or SaaS) or simply building applications directly in and for the cloud (an approach known as cloud-native). As mentioned earlier, some of the drivers are lowering operating costs, as well as gaining more flexibility and agility. To this end, most organizations have taken (or are taking) a best-of-breed (multi-vendor) approach to cloud, as opposed to moving all of their applications to a single cloud provider.

Cloud adoption manifests itself in three ways:

  • Workload migration: Lifting and shifting an on-premise workload (for example, databases, Java applications, or packaged applications) into an IaaS or PaaS cloud.
  • Cloud transformation: Adopting one or many SaaS applications as a replacement for on-premise ones. It also means using cloud-native capabilities (typically PaaS) to extend the SaaS application when applicable. In this case, there is no lift and shift, but rather data migration and integration.
  • Cloud reengineering: When a monolithic system is rewritten from scratch in the cloud using cloud-native capabilities (typically in PaaS).

Digital transformation

In plain English, digital transformation means enabling the business to offer its products and services through as many digital channels as applicable (web, mobile apps, kiosks, partner online stores, bots, and so on). However, in order to do so, access to up-to-date information in real time (information which now happens to be federated across multiple cloud data centers and/or on-premise systems) becomes absolutely critical. Without this access, a digital strategy will simply not succeed.

For example, a basic requirement for organizations undergoing digital transformation is mobility. Mobility means many things, but for some organizations it could be just giving co-workers the ability to execute business processes while on the move – via multiple devices. For this to work, access to multiple systems of record via APIs must be in place.

Another key requirement that arises in digital transformations is the need to give businesses the agility and speed to take new products and services to market more quickly and cheaply, but also the ability to fail fast and fail cheaply, so new concepts can be tried without spending millions.

As Adrian Cockcroft said during his time at Netflix, "Speed wins in the marketplace."
https://www.nginx.com/blog/adopting-microservices-at-netflix-lessons-for-team-and-process-design/

However, the majority of systems (especially monolithic ones) aren't suited to handling the load (or unpredictable peaks) that accessing information in real time demands. Also, changing these systems is complex, time-consuming, and risky.

This is where microservice architectures become so compelling, and it is one of the reasons why they have become so popular. In short, they offer an approach to both software engineering and software architecture that enables the (end-to-end) implementation of business capabilities in a fully decoupled manner. This is not only in terms of the software development life cycle, but also in terms of technology, as each microservice is completely independent at runtime and implements mechanisms to decouple itself from other systems and/or microservices that it may need to interact with. However, the proliferation of microservices also means that information becomes even more federated and even more granular.

Recommended reading: Microservices and SOA.
https://www.slideshare.net/capgemini/microservices-and-soa

Customer-centricity

This means collecting, consolidating, and analyzing information about a customer's brand interactions/behavior, interests, purchasing patterns and history, personal details, and others, in order to create personalized, rich, and positive experiences for them. In turn, the expectation is that by delivering better and more tailored experiences, customer loyalty will increase and so will sales.

Although the concept is easy to understand, achieving it is a different story. This is because in the majority of organizations, customer information doesn't reside in one system but is scattered among several systems (many of which are legacy), which can be internal, external, or, quite often, belong to business partners.

Common denominators

In order to provide the capabilities needed to achieve the aforementioned goals, a more sophisticated API platform (a third generation) is required. It must be one that:

  • Allows the implementation of APIs everywhere (any cloud and/or on-premise), yet without introducing an operations nightmare and huge costs.
  • Empowers communities of developers by letting them discover and subscribe to APIs via a self-service developer portal.
  • Gives developers the tools they need to rapidly design and create APIs that are well-documented and easy to consume – API first.
  • Gives information owners full visibility and control over their information, by letting them decide by whom and how their information assets, exposed via APIs, are accessed.
  • Delivers strong security to protect information assets against all major threats (for example, OWASP Top Ten).
  • Is lightweight, appliance-less/ESB-less and suitable for microservice architectures.
  • Is elastic, meaning that gateways can:
    • Scale in or out without manual intervention.
    • Integrate with registries to dynamically determine active service endpoints.
  • Is centrally managed, regardless of the number of gateways, APIs, and their location.
  • Enables meaningful collection and use of statistics, so operations data can be used to gain business insight and not just for monitoring and troubleshooting purposes.
  • Is consumption-based, typically with no CPU-based licensing.
Figure 2.14: Third-generation APIs are everywhere

As the monoliths are broken down into smaller pieces and reimplemented as discrete cloud applications, either in SaaS or PaaS, the business logic and information contained in such monoliths also get distributed. The tendency of integration middleware to become bigger and bigger seems to be reversing, almost like a big bubble that bursts into many smaller ones.

Recommended reading: An Ode to Middleware.
http://www.openlegacy.com/blog/an-ode-to-middleware/
Figure 2.15: Logic distribution in third generation

Third-generation API platforms truly mark an inflection point for software architecture. Unlike their predecessors, because of the federated nature of such platforms, it is difficult to depict them in architectural layers. This is better appreciated by looking at the following diagram.

Figure 2.16: Second- and third-generation API platforms' architectures compared

Although further architectural details will be covered in subsequent chapters, there are some fundamental characteristics that set the third generation apart from previous generations:

  • The management of APIs is fully decoupled from the service implementation itself; hence, by design, there is a separation of concerns.
  • At this point, it is very important to understand how to distinguish an API from a service, as although they complement each other, they are also distinct:
    A service is an independently deployable software unit (an application) that encapsulates business logic and can be accessed via a standard interface, for example, via a REST or SOAP endpoint. Services can be fully decoupled (a microservice) or semi-decoupled (implemented in a common integration stack).
    A service endpoint that is managed through an API platform is referred to as a managed API.
    Therefore, a service endpoint that is not managed through an API platform is an unmanaged API.
    To avoid confusion, this book refers to managed APIs as simply APIs.
  • APIs, just like the information itself, are also federated. Therefore, APIs are implemented as close as possible to the source of information, regardless of where it resides (cloud and/or on-premise). Not doing so means that APIs could be exposed to latency and other network problems, including increased exposure to security threats.
  • Even with APIs being federated, it is possible to retain full control and visibility over who can access the APIs and when.
  • APIs are discoverable via a developer portal. Through the portal, developers can search for and subscribe to the APIs that their role allows them to access.
  • The entire API platform's operations are centralized and cloud-based, allowing administrators to deploy APIs to multiple locations from a central point with little effort.
  • APIs are managed via lightweight (meaning small-footprint), independently deployable, and scalable API gateways that can run anywhere. These are therefore not ordinary gateways: they can handle extremely large volumes, as they run on highly scalable platforms that support asynchronous, non-blocking I/O threading models, for example, NGINX, Vert.x, Netty, Grizzly, and Node.js/Express, to name a few.
  • A term that is becoming increasingly popular when referring to this specific type of gateway is API microgateway.

In summary, API microgateways (see the sketch after this list) are:

  • Non-monolithic, appliance-less, and ESB-less. They should be lightweight and very easy to implement (anywhere) – ideally using containers.
  • Self-sufficient and should be responsible for retrieving APIs, policies, and even system patches from a central management unit.
  • Stateless: no state should be managed for any transaction.
  • Capable of rapidly scaling in and out dynamically, without manual intervention.
  • Limited in their functionality by only delivering capabilities expected of a gateway, therefore preventing them from becoming fat, as experienced in the second generation.
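To make these characteristics more concrete, the following is a minimal sketch of what an API microgateway could look like, assuming Node.js (18+) with Express and a hypothetical service registry reachable at REGISTRY_URL; the API key policy, routes, and registry response shape are illustrative assumptions, not any vendor's actual product. The gateway applies a policy at the edge, resolves the active backend endpoint dynamically from the registry, forwards the request, and holds no state of its own.

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical service registry used for dynamic endpoint resolution.
const REGISTRY_URL = process.env.REGISTRY_URL ?? "http://registry.internal";

// Policy enforcement at the edge: reject callers without a valid API key.
app.use((req, res, next) => {
  if (req.header("x-api-key") !== process.env.API_KEY) {
    res.status(401).json({ error: "unauthorized" });
    return;
  }
  next();
});

// Resolve the currently active endpoint for a service from the registry on
// every call, so the gateway keeps no routing state of its own.
async function resolveEndpoint(service: string): Promise<string> {
  const response = await fetch(`${REGISTRY_URL}/services/${service}`);
  if (!response.ok) {
    throw new Error(`unknown service: ${service}`);
  }
  const { endpoint } = (await response.json()) as { endpoint: string };
  return endpoint;
}

// Forward /api/<service>/<rest-of-path> to the resolved backend endpoint.
app.all("/api/:service/*", async (req, res) => {
  try {
    const endpoint = await resolveEndpoint(req.params.service);
    const rest = (req.params as Record<string, string>)["0"] ?? "";
    const upstream = await fetch(`${endpoint}/${rest}`, {
      method: req.method,
      headers: { "Content-Type": "application/json" },
      body: ["GET", "HEAD"].includes(req.method)
        ? undefined
        : JSON.stringify(req.body),
    });
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    res.status(502).json({ error: (err as Error).message });
  }
});

app.listen(Number(process.env.PORT ?? 8080));
```

Because the gateway is stateless and resolves endpoints per request, additional instances can be added or removed behind a load balancer without any coordination, which is what makes the elastic scaling described above possible.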