Learning Docker

You're reading from Learning Docker: Build, ship, and scale faster

Product type: Paperback
Published in: May 2017
ISBN-13: 9781786462923
Length: 300 pages
Edition: 2nd Edition
Authors (3): Jeeva S. Chelladhurai, Pethuru Raj, Vinod Singh
Table of Contents (13 chapters)

Preface
1. Getting Started with Docker
2. Handling Docker Containers
3. Building Images
4. Publishing Images
5. Running Your Private Docker Infrastructure
6. Running Services in a Container
7. Sharing Data with Containers
8. Orchestrating Containers
9. Testing with Docker
10. Debugging Containers
11. Securing Docker Containers
12. The Docker Platform – Distinct Capabilities and Use Cases

The key drivers for Dockerization

The first and foremost driver for Docker-enabled containerization is to overcome the widely expressed limitations of the virtualization paradigm. We have been working with proven virtualization techniques and tools for a long time now in order to realize the much-demanded software portability. That is, with the goal of eliminating the inhibiting dependency between software and hardware, there have been several initiatives, including the matured and stabilized virtualization paradigm. Virtualization is a beneficial kind of abstraction, accomplished by incorporating an additional layer of indirection between hardware resources and software components. Through this abstraction layer (the hypervisor, or Virtual Machine Monitor (VMM)), any software application can, in principle, run on any underlying hardware without a hitch. In short, the longstanding goal of software portability is pursued through this middleware layer. However, even virtualization does not fully meet the much-published portability target. Hypervisor software from different vendors gets in the way of the much-needed application portability, and the distribution, version, edition, and patching differences of operating systems and application workloads further hinder the smooth movement of workloads across systems and locations.

Similarly, there are other drawbacks attached to the virtualization paradigm. In data centers and server farms, virtualization is typically used to carve multiple Virtual Machines (VMs) out of physical machines, with each VM carrying its own Operating System (OS). Through this solid and sound isolation, enacted via automated tools and controlled resource sharing, multiple heterogeneous applications can be accommodated on a single physical machine. That is, hardware-assisted virtualization enables disparate applications to run simultaneously on a single physical server. With the virtualization paradigm, various kinds of IT infrastructure (server machines, storage appliances, and networking solutions) become open, programmable, and remotely monitorable, manageable, and maintainable. However, because every VM carries its own full OS, VMs are verbose and bloated: provisioning a VM typically takes a few minutes, and such a delay is not acceptable in fast-moving production environments.
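To make the provisioning contrast concrete, the following illustrative shell session shows how quickly a container starts compared with booting a full VM. This is a hedged sketch: it assumes a working Docker installation and that the small `alpine` image has already been pulled; the timings are indicative, not measured results from this book.

```shell
# Launching a container is near-instant, because no guest OS boots --
# the Docker daemon simply starts a new, isolated process.
# (Illustrative; assumes Docker is installed and alpine is pulled.)
time docker run --rm alpine echo "hello from a container"
# Typically completes in well under a second, whereas provisioning
# and booting a VM with its own OS takes minutes.
```

The difference comes from the architecture: a container shares the host kernel, so "starting" it means starting a process, while a VM must initialize virtual hardware and boot an entire operating system.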

The other widely expressed drawback closely associated with virtualization is degraded performance: virtualized systems consume extra quantities of precious and expensive IT resources (processing, memory, storage, network bandwidth, and so on). Besides the longer provisioning time, the execution time of workloads inside VMs is higher because every operation passes through multiple layers: the guest OS, the hypervisor, and the underlying hardware.

Finally, compute virtualization has flourished, whereas the closely associated network and storage virtualization concepts are only just taking off. Precisely speaking, building distributed applications and fulfilling varying business expectations mandate fast and flexible provisioning, high availability, reliability, scalability, and maneuverability of all the participating IT resources. Computing, storage, and networking components need to work together to accomplish the varying IT and business needs, and this sharply increases the management complexity of virtual environments.

Enter the world of containerization, which resolves all the aforementioned barriers in a single stroke. That is, the evolving concept of application containerization contributes decisively to the long-sought software portability goal. A container generally packages one application: along with the primary application, all of its relevant libraries, binaries, and other dependencies are bundled into a comprehensive yet compact unit that can be readily shipped, run, and managed in any local or remote environment. Containers are exceptionally lightweight, highly portable, rapidly deployable, and extensible. Further, many industry leaders have come together in a consortium to embark on a decisive journey towards the systematic production, packaging, and delivery of industry-strength, standardized containers. This conscious and collective move makes Docker deeply penetrative, pervasive, and persuasive. The open source community is simultaneously spearheading containerization through an assortment of concerted activities that simplify and streamline the concept, and the container life cycle steps are being automated through a variety of tools.
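The packaging idea described above can be sketched with a minimal Dockerfile. This is an illustrative config fragment, not an example from the book: the application file `app.py`, the `requirements.txt` manifest, and the chosen base image are all hypothetical stand-ins.

```dockerfile
# Start from a small base image that supplies the runtime and OS libraries.
FROM python:3.11-slim

# Copy the (hypothetical) application and its dependency manifest,
# and install the dependencies into the image itself.
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# The container runs the application as its primary process.
CMD ["python", "app.py"]
```

Everything the application needs (runtime, libraries, and code) travels inside the resulting image, which is why the same container runs unchanged on a laptop, a data center server, or a cloud VM.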

The Docker ecosystem is also growing rapidly in order to bring as much automation as possible into the containerization landscape. Container clustering and orchestration are gaining a lot of ground; geographically distributed containers and their clusters can be readily linked up to produce bigger and better application-aware services. The distributed nature of cloud centers therefore stands to benefit immensely from the adroit advancements gaining a strong foothold in the container space. Cloud service providers and enterprise IT environments are all set to embrace this unique technology in order to escalate resource utilization and to take the much-insisted infrastructure optimization to the next level. On the performance side, plenty of tests demonstrate Docker containers achieving near-native system performance. In short, IT agility through DevOps is being guaranteed through the smart leverage of Dockerization, and this in turn leads to business agility, adaptivity, and affordability.
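As a taste of the clustering and orchestration tooling mentioned above, Docker's built-in swarm mode can join hosts into a cluster and scale a service across them. This is an illustrative sketch, assuming Docker is installed on each node; the service name `web` and the `nginx` image are examples, and the join token placeholder must be replaced with the value the manager prints.

```shell
# On the first host: initialize a swarm (this node becomes a manager).
docker swarm init

# On each additional host: join the cluster using the token
# printed by the manager (placeholders shown, not real values).
# docker swarm join --token <token> <manager-ip>:2377

# Create a replicated service, then scale it across the cluster.
docker service create --name web --replicas 3 -p 80:80 nginx
docker service scale web=10
```

The orchestrator, not the operator, decides which node runs each replica and restarts replicas that fail, which is what makes clusters of containers manageable at scale.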
