We have looked at various aspects of Apache Ignite. In this section, we are going to explore different system architectures and how Apache Ignite can be integrated into an existing system to help us build a scalable architecture.
Refactoring the architecture
Achieving High Performance
In a traditional web application architecture, we deploy our application on multiple nodes, and each node connects to a relational database to store data. The following diagram depicts the traditional system architecture; different clients (desktops, mobiles, tablets, laptops, smart devices, and so on) communicate with the system. There are multiple JVMs/nodes to handle the traffic (the load balancer is omitted for brevity), but there is only one database instance to store data. DB operations are relatively slow as they involve file I/O, so this architecture can become a bottleneck if client requests arrive faster than the DB can process them. Because the database ensures data atomicity, consistency, transaction isolation, and durability, we cannot simply run multiple DB instances or replace it:
Adding a new Apache Ignite in-memory data grid layer to the existing N-tier architecture can improve the performance of the system many times over. The in-memory cluster sits between the JVMs and the database. The JVMs/nodes interact with the Ignite in-memory grid instead of the database; since the CRUD operations are performed in memory, they are far faster than direct database CRUD operations. Data consistency, atomicity, isolation, and durability, as well as the transactional nature of operations, are maintained by the Ignite cluster.
This new architectural style reduces the transaction time and system response time by moving the data closer to the application:
In Chapter 2, Understanding the Topologies and Caching Strategies, we will explore how to write code to interact with an in-memory data grid and then sync up data with a relational database.
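To give a flavor of what this looks like in code, the following is a minimal sketch of performing CRUD against an Ignite cache instead of the database. The cache name customers and the key/value types are made up for illustration, and the snippet assumes ignite-core is on the classpath; wiring the cache to a relational database through read-through/write-through is what Chapter 2 covers:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class InMemoryCrudExample {
    public static void main(String[] args) {
        // Start an Ignite node with the default configuration.
        try (Ignite ignite = Ignition.start()) {
            // Create (or get) a distributed cache that sits between the
            // application and the database.
            IgniteCache<Long, String> customers = ignite.getOrCreateCache("customers");

            // CRUD operations run against memory instead of database file I/O.
            customers.put(1L, "Alice");          // create
            String name = customers.get(1L);     // read
            customers.put(1L, "Alice Smith");    // update
            customers.remove(1L);                // delete

            System.out.println("Read from the in-memory grid: " + name);
        }
    }
}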
Addressing High Availability and Resiliency
Load balancers are used to distribute user load across the JVMs/nodes of an enterprise application. Load balancers use sticky sessions to route all the requests for a user to a particular server, which reduces session replication overhead. Session data is kept on the server; if the server fails, that data is lost, which impacts the availability of the system. Web session clustering is a mechanism for moving session data out of the application servers and into the Apache Ignite data grid. It increases system scalability and availability: if we add more servers, the system can handle more users, and even if a server goes down, the session data will still be intact.
The following diagram depicts web session clustering with the Apache Ignite in-memory data grid:
The load balancer can route user requests to any server based on the load on that server; it does not have to remember the server-session affinity mapping, as the user sessions are kept in the Ignite grid. Suppose a user's requests were being processed by App server 3, and the session is kept in the Apache Ignite session grid (Session 3 in the preceding diagram). Now, if App server 3 is busy or down, the load balancer can route the user's request to App Server N. App Server N can still process the request, as the user session is present in the Ignite grid.
You don't have to change code to share user sessions between servers through the Apache Ignite grid. We will configure web session clustering in Chapter 3, Working with Data Grids.
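Although the real web session clustering setup is configuration only (as we will see in Chapter 3), the idea behind it can be sketched in plain Java: session state lives in a cache that every application server can reach. The following illustrative snippet, with assumed cache and key names, stores session attributes in a REPLICATED Ignite cache keyed by session ID so that any server can recover them:

import java.util.HashMap;
import java.util.Map;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.CacheMode;
import org.apache.ignite.configuration.CacheConfiguration;

public class SessionGridSketch {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // A replicated cache keeps a full copy of every session on each node,
            // so any application server can serve any user.
            CacheConfiguration<String, Map<String, Object>> cfg =
                    new CacheConfiguration<>("webSessions");
            cfg.setCacheMode(CacheMode.REPLICATED);

            IgniteCache<String, Map<String, Object>> sessions = ignite.getOrCreateCache(cfg);

            // App server 3 writes the user's session state to the grid...
            Map<String, Object> session = new HashMap<>();
            session.put("userId", 42L);
            session.put("cartItems", 3);
            sessions.put("session-3", session);

            // ...and App Server N can read the same session if server 3 goes down.
            Map<String, Object> recovered = sessions.get("session-3");
            System.out.println("Recovered session: " + recovered);
        }
    }
}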
Sharing Data
Cache as a Service (CaaS) is a new computing buzzword. CaaS is used to share data between applications and build a common data access layer across an organization. In the healthcare domain, Charges & Services, Claims, Scheduling, Reporting, and Patient Management are some of the important modules. Organizations can develop them in any programming language the team is comfortable with, in a microservice fashion. The applications can still share data using Apache Ignite's in-memory data grid; there is no need to create a local caching infrastructure for each application:
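As a rough sketch of this idea, the following Java snippet connects to the common Ignite cluster as a client node and reads and writes a shared cache. The cache name patients, the record format, and the module names are assumptions for the example, and it presumes at least one Ignite server node is already running:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;

public class SharedCacheClient {
    public static void main(String[] args) {
        // Connect to the common Ignite cluster as a lightweight client node.
        IgniteConfiguration cfg = new IgniteConfiguration().setClientMode(true);

        try (Ignite ignite = Ignition.start(cfg)) {
            // Both the Claims and the Reporting modules reference the same cache,
            // so there is no per-application local caching layer to keep in sync.
            IgniteCache<String, String> patients = ignite.getOrCreateCache("patients");

            // The Claims service stores a record...
            patients.put("patient-1001", "claim=CLM-77;status=APPROVED");

            // ...and the Reporting service, running this same client code in its
            // own process, would read it back from the shared grid.
            System.out.println(patients.get("patient-1001"));
        }
    }
}

Because every module talks to the same cluster, a record written by one application is immediately visible to the others, regardless of the language or framework each team chose.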
Moving Computation To Data
Microservices offer many advantages over a traditional monolithic architecture. One of the main disadvantages of a distributed microservice-based deployment, however, is service-to-service communication for data access. Apache Ignite provides a mechanism to move the application logic closer to the data and process requests faster: microservices can be deployed directly on Apache Ignite nodes, which is faster than a conventional app server, filesystem-based deployment because the service runs alongside the data it processes.
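One concrete way to see computation moving to data with Ignite is collocated processing. The following sketch uses affinityRun() to ship a small job to the node that owns a given cache key, so the job reads the value locally instead of pulling it over the network; the cache name orders and the key are illustrative:

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class CollocatedComputeExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Integer, String> orders = ignite.getOrCreateCache("orders");
            orders.put(1, "order #1: 3 items");

            // The closure is sent to the node that owns key 1, so the cache read
            // below is a local read on that node: the computation moves to the data.
            ignite.compute().affinityRun("orders", 1, () -> {
                IgniteCache<Integer, String> local = Ignition.localIgnite().cache("orders");
                System.out.println("Processed locally: " + local.get(1));
            });
        }
    }
}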
We are going to cover many more in-memory grid architecture refactoring styles and use cases in detail.
Now, it is time to get your hands dirty with Apache Ignite.