Learning AWS
Design, build, and deploy responsive applications using AWS Cloud components

Product type: Paperback
Published in: Feb 2018
Publisher: Packt
ISBN-13: 9781787281066
Length: 412 pages
Edition: 2nd Edition
Authors: Amit Shah and Aurobindo Sarkar

Table of Contents (12 chapters)
Preface
1. Cloud 101 – Understanding the Basics
2. Designing Cloud Applications
3. Introducing AWS Components
4. Designing for and Implementing Scalability
5. Designing for and Implementing High Availability
6. Designing for and Implementing Security
7. Deploying to Production and Going Live
8. Designing a Big Data Application
9. Implementing a Big Data Application
10. Deploying a Big Data System
11. Other Books You May Enjoy

Understanding cloud-based workloads

In this section, we will discuss the various workloads being deployed on the cloud. These include on-premises systems moved to the cloud, on-premises product versions replaced by cloud-based offerings, and new applications developed specifically for cloud-only environments.

Migrating on-premises applications to the cloud

There are several reasons why organizations want to migrate their applications to the cloud. These typically include driving cost efficiency, improving productivity, supporting faster go-to-market strategies, and achieving better operational efficiency. There are also several different strategies for moving a portfolio of applications to the cloud.

One of the most commonly used approaches is the lift-and-shift, or rehosting, of existing applications in the cloud. This approach can lead to some cost savings, especially if the infrastructure is right-sized and expensive commercial licenses for proprietary products are replaced with cloud-based services (from the cloud service provider or third-party service providers) or with equivalent open-source products.

This approach is popular compared to other approaches because it can be quicker to implement, and some benefits may be realized right away. However, design limitations and inefficiencies in the existing on-premises application get migrated to the cloud along with it. Typically, steady-state applications that are service-oriented, loosely coupled, and have minimal interdependencies with other applications are the best candidates for this approach.

A rehosting strategy can lead to disappointment when the changeover to a cloud environment does not yield the expected level of cost savings or a simpler operating environment. This is because the full benefits of the cloud are realized only when cloud-native designs are implemented for various parts of the architecture. Right-sizing the infrastructure as per application requirements, or replatforming the application to use cloud services or open-source products, typically leads to greater cost advantages, but it also takes longer to implement.
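
As a simple illustration of right-sizing, the following sketch uses the AWS SDK for Python (boto3) to move an over-provisioned EC2 instance to a smaller instance type. The instance ID and target type shown here are hypothetical, and the instance must be stopped before its type can be changed:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# An EC2 instance must be stopped before its instance type can be changed.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Switch from an over-provisioned type to one that matches the observed load.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.medium"},  # hypothetical target size
)

ec2.start_instances(InstanceIds=[instance_id])
```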

In many cases, subscribing to a product's cloud-based offering or shifting to an equivalent or better cloud product can prove to be an advantageous strategy. For example, shifting to the cloud-based offerings of SAP, or moving to Salesforce for CRM functionality, is increasingly becoming a favored strategy in many organizations. Finally, for some systems, it is best to refactor and/or re-architect the application to derive the maximum benefits from a migration to the cloud. Whatever the reasons and the strategy for migrating systems to the cloud, the migration needs to be a well-planned exercise that includes infrastructure, application, and data migration, with significant verification and validation effort at each step in the process.

Typically, migration projects start with an analysis of the existing portfolio of applications to determine the sequence and strategy for each system to be migrated. The speed of such projects then picks up as the team gains exposure to the cloud environment from the initial set of migrations. In many cases, the overall strategy is a mass lift-and-shift followed by iterative improvements to the application architecture over a period of time. Sometimes these migrations are timed to avoid expensive lease and license renewals and/or hardware refreshes. The portfolio analysis exercise often consolidates and/or rationalizes the hardware and software stacks used in an organization, identifies applications that can be retired at specific points along the journey, and flags others that will never be migrated due to regulatory or other concerns.

Building cloud-native applications

Cloud-native applications are specifically designed and implemented to operate in cloud-only environments. The nature of the application, its infrastructure requirements, and its data volumes can significantly influence the decision to use the cloud. Smaller organizations and startups often use the cloud for all their infrastructure and software/application needs. Many such organizations also offer their products to their customers on a subscription-based licensing model (the SaaS model).

Applications with wide variability in their usage patterns are great candidates for the cloud. The infrastructure costs in such cases can be reduced significantly by scaling up resources to match increased demand and subsequently scaling them down to serve lighter loads. Similarly, it is common to scale up for a specific task, such as periodically training a machine learning model, instead of maintaining high-capacity infrastructure continuously. Specialized workloads requiring high memory, short bursts of high-compute server usage, GPUs, and so on can leverage the ability to provision resources on demand, as per the requirements. For example, running large-scale deep learning workloads typically requires GPU-based instances for quicker turnaround times. These server instances can be spun up and used only when they are actually required, as the sketch below illustrates.
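
The following is a minimal boto3 sketch of this on-demand pattern: a GPU-backed instance is launched just for the duration of a training job and terminated as soon as the job completes. The AMI ID, region, and instance type shown are placeholders, not specific recommendations:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

# Launch a single GPU-backed instance only for the duration of the training job.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI (e.g., a deep learning AMI)
    InstanceType="p3.2xlarge",        # GPU instance type, used here as an example
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# ... run the training workload on the instance ...

# Terminate the instance as soon as the job completes,
# so that no idle GPU capacity is being billed.
ec2.terminate_instances(InstanceIds=[instance_id])
```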

Both streaming applications with incoming data at very high velocities and batch systems with very high data volumes can benefit from the easy availability and scalability of cloud resources. Additionally, applications using unstructured data, such as vast document corpuses, image repositories, and audio and video libraries, can leverage the storage and processing power available on tap in the cloud. The variety and number of ready-to-use cloud services available to developers (via simple APIs) allow them to build applications without having to worry about the complexities of the underlying services.
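
As a brief example of such ready-to-use services, the sketch below stores an image in Amazon S3 and then calls Amazon Rekognition via boto3 to label its contents; the bucket name and object key are hypothetical, and the bucket is assumed to already exist:

```python
import boto3

s3 = boto3.client("s3")
rekognition = boto3.client("rekognition")

bucket = "my-image-repository"  # hypothetical, pre-existing bucket
key = "photos/sample.jpg"

# Store the unstructured data (an image file) in object storage.
s3.upload_file("sample.jpg", bucket, key)

# Call a managed image-analysis service through a simple API,
# without operating any of the underlying infrastructure.
result = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": bucket, "Name": key}},
    MaxLabels=5,
)
for label in result["Labels"]:
    print(label["Name"], label["Confidence"])
```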

We would like to conclude our introduction to cloud computing by getting you started on AWS, right away. The next section will help you set up your AWS account and familiarize you with the AWS management console.

You have been reading a chapter from Learning AWS - Second Edition (Packt, Feb 2018, ISBN-13 9781787281066).