OpenStack for Architects: Design production-ready private cloud infrastructure
By Michael Solberg and Ben Silverman
Packt, 2nd Edition, May 2018, ISBN-13 9781788624510, 256 pages

Common OpenStack use cases

In spite of immense interest, huge investment, and public success, we've seen a number of cases where well-intentioned OpenStack projects fail or are at least perceived as a failure by the people who have funded them. When OpenStack projects fail, the technology itself is rarely the root cause. Thomas Bittman at Gartner noticed this trend and wrote an influential blog post entitled "Why are Private Clouds Failing?" in September 2014.

Bittman's findings echo many of our experiences from the field. In short, the reason that most private cloud projects fail is that improper expectations were set from the beginning, and the business goals for the cloud weren't realized by the end result.

First and foremost, OpenStack deployments should be seen as an investment with returns and not a project to reduce operational costs. Although we've certainly seen dramatic reductions in operational workloads through the automation that OpenStack provides, it is difficult to accurately quantify those reductions in order to justify the operational investment required to run an efficient cloud platform. Organizations that are entirely focused on cutting costs through automation should first look at automating existing virtual environments instead of deploying new environments.

We've also seen a lot of projects that had poorly quantified goals. OpenStack is an enabler of use cases and not an IT panacea. If the use cases are not agreed upon before investment in the platform begins, it will prove very difficult to justify the investment in the end. This is why the role of the Architect is so critical in OpenStack deployments—it is their job to ensure that concrete requirements are written upfront so that all of the stakeholders can quantify the success of the platform once deployed.

With this in mind, let's take a look at some typical use cases for OpenStack deployments.

Public hosting

As we mentioned before, OpenStack was originally created with code contributions from NASA and Rackspace. NASA's interest in OpenStack sprang from their desire to create a private elastic compute cloud, whereas the primary goal for Rackspace was to create an open source platform that could replace their public shared hosting infrastructure. As of April 2015, the Rackspace Public Cloud offering had been ported to OpenStack and had passed the OpenStack Powered Platform certification.

The Rackspace implementation offers both Compute and Object Storage services, but some implementations may choose to offer only Compute or Object Storage and receive certification for just those services. DreamHost, another OpenStack-based public cloud provider, for example, has chosen to break its managed services down into DreamCompute and DreamObjects, which implement the services separately. The DreamObjects service was implemented and offered first, as a complement to DreamHost's existing shared web hosting, and the DreamCompute service was introduced later.

Most public hosting providers focus primarily on the Compute service, and many do not yet offer software-defined networking through the Neutron network service (DreamCompute being a notable exception). Architects of hosting platforms will focus first on tenancy issues, second on chargeback issues, and finally on scale.
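
As a concrete illustration of those tenancy and chargeback concerns, the following is a minimal sketch of onboarding a new hosting customer with the openstacksdk cloud layer: one Keystone project per tenant, capped by compute quotas whose consumption feeds usage reporting. The cloud name, project name, and quota values here are hypothetical, not prescriptions.

import openstack

# Connect using a (hypothetical) "hosting-cloud" entry in clouds.yaml.
conn = openstack.connect(cloud='hosting-cloud')

# One Keystone project per customer provides the isolation boundary
# that quotas and usage reports are scoped to.
project = conn.create_project(
    name='customer-acme',
    domain_id='default',
    description='Hosting tenant for ACME Corp',
)

# Quotas cap what the tenant can consume; usage against these limits
# is the raw input for chargeback.
conn.set_compute_quotas(project.id, cores=64, instances=32, ram=131072)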

High-performance computing

The first production deployment of OpenStack outside NASA and Rackspace was at a Canadian not-for-profit organization named Cybera. Cybera deployed OpenStack as a technology platform in 2011 for its DAIR program, which provides free compute and storage to Canadian researchers, entrepreneurs, and small businesses.

Architects at Cybera, NASA, and CERN have all commented on how their services share many of the same concerns as those in the public hosting space. They provide compute and storage resources to researchers and don't have much insight into how those resources will actually be used. Thus, concerns about secure multitenancy apply to these environments just as much as they do in the hosting space.

High-performance computing (HPC) clouds have an added focus on performance, though. Although hosting providers will look to economize with commodity hardware, research clouds will look to maximize performance by configuring their compute, storage, and network hardware to support high-volume, high-throughput operations. Where most clouds work best by scaling low-to-midrange commodity hardware horizontally, high-performance clouds tend to be very specific about the performance profiles of their hardware selection. Cybera has published performance benchmarks comparing its DAIR platform to Amazon EC2. Architects of research clouds may also look to use hardware pass-through capabilities or other low-level hypervisor features to enable specific workloads.
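
To make that hardware tuning concrete, here is a minimal sketch of an HPC-oriented Nova flavor, assuming a recent openstacksdk and a PCI alias named gpu that an operator has already defined in nova.conf; the alias, flavor name, and sizes are all assumptions.

import openstack

conn = openstack.connect(cloud='research-cloud')

flavor = conn.compute.create_flavor(
    name='hpc.large', ram=65536, vcpus=16, disk=100)

# Pin guest vCPUs to dedicated host cores, back memory with huge
# pages, and pass one GPU through to each instance.
conn.compute.create_flavor_extra_specs(flavor, {
    'hw:cpu_policy': 'dedicated',
    'hw:mem_page_size': 'large',
    'pci_passthrough:alias': 'gpu:1',
})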

Rapid application development

Over the past couple of years, a third significant use case has emerged for OpenStack—enterprise application development environments. While public hosting and high-performance compute implementations may have huge regions with hundreds of compute nodes and thousands of cores, enterprise implementations tend to have regions of 20 to 50 compute nodes. Enterprise adopters have a strong interest in software-defined networking.

The primary driver for enterprise adoption of OpenStack has been the increasing use of continuous integration and continuous delivery in the application development workflow. A typical Continuous Integration and Continuous Delivery (CI/CD) workflow will deploy a complete application for every developer commit that passes basic unit tests, in order to perform automated integration testing. These application deployments live only as long as it takes to run the integration tests, and an automated process then tears the deployment down once the tests pass or fail. This workflow is easily facilitated with a combination of OpenStack compute and network services.
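
A minimal sketch of that ephemeral-environment pattern, again using the openstacksdk cloud layer, might look like the following. The image, flavor, and network names are hypothetical, and run_integration_tests() stands in for whatever test harness the pipeline invokes.

import openstack

def test_commit(commit_sha):
    conn = openstack.connect(cloud='dev-cloud')
    # Boot a short-lived server dedicated to this commit.
    server = conn.create_server(
        name=f'ci-{commit_sha[:8]}',
        image='app-base-image',
        flavor='m1.medium',
        network='ci-net',
        auto_ip=True,
        wait=True,
    )
    try:
        # run_integration_tests() is a hypothetical test harness.
        return run_integration_tests(server.public_v4)
    finally:
        # Tear the environment down whether the tests pass or fail.
        conn.delete_server(server.id, wait=True)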

While Architects of hosting or HPC clouds spend a lot of time focusing on tenancy and scaling issues, Architects of enterprise deployments will spend a lot of time focusing on how to integrate OpenStack compute into their existing infrastructure. Enterprise deployments will frequently leverage existing service catalog implementations and identity management solutions. Many enterprise deployments will also need to integrate with existing IP address management (IPAM) and asset tracking systems.
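
As one example of that identity integration, Keystone can present an existing corporate directory as a separate domain, so applications authenticate with directory credentials rather than cloud-local accounts. The endpoint, domain, user, and project names in this sketch are hypothetical.

import openstack

# Authenticate against a Keystone domain backed by an existing
# directory service (for example, an LDAP-backed domain).
conn = openstack.connect(
    auth_url='https://keystone.example.com:5000/v3',
    username='jdoe',
    password='...',
    user_domain_name='corp-ldap',
    project_name='app-dev',
    project_domain_name='corp-ldap',
)
print(conn.current_user_id)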

Network Functions Virtualization

One of the largest areas for development and deployment of the OpenStack platform has been the telecommunications industry. Network Functions Virtualization (NFV) provides a common IaaS platform for that industry, which is in the process of replacing the purpose-built hardware devices that provide network services with virtualized appliances running on commodity hardware. These services include routing, proxying, content filtering, packet core services, and high-volume switching. Most of these appliances have intense compute requirements, and they are largely stateless. These workloads are well suited to the OpenStack compute model.

NFV use cases typically leverage hardware features that directly attach compute instances to physical network interfaces on compute nodes. Instances are also typically very sensitive to CPU and memory topology (NUMA), and virtual cores tend to be mapped directly to physical cores. Orchestration, whether through Heat or TOSCA, has also been a large focus for these deployments.
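
A minimal sketch of those placement and attachment techniques follows, under the assumption that the operator has already configured SR-IOV on a provider network named provider-sriov; the network and flavor names are hypothetical.

import openstack

conn = openstack.connect(cloud='nfv-cloud')

# A VNF flavor that pins vCPUs to physical cores and confines the
# guest to a single NUMA node.
flavor = conn.compute.create_flavor(
    name='vnf.medium', ram=16384, vcpus=8, disk=40)
conn.compute.create_flavor_extra_specs(flavor, {
    'hw:cpu_policy': 'dedicated',
    'hw:numa_nodes': '1',
})

# An SR-IOV port: vnic_type 'direct' asks Neutron to bind the port
# to a virtual function on the compute node's physical NIC.
net = conn.network.find_network('provider-sriov')
port = conn.network.create_port(
    network_id=net.id, binding_vnic_type='direct')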

Architects of NFV solutions will focus primarily on virtual instance placement and performance issues and less on tenancy and integration issues.
