Why containers are driving DevOps

  • 5 min read
  • 12 Jun 2017


It has been a long ride since the days when a single application would take up a whole room of computing hardware. Research and innovation in information technology (IT) have taken us far and will surely keep moving faster every day. Let's talk a bit about the present state of DevOps and how containers are driving the scene.

What are containers?

According to Docker (the most popular container platform), a container is a stand-alone, lightweight package that has everything needed to execute a piece of software. It packs your code, runtime environment, system tools, libraries, binaries, and settings. Containers are available for Linux and Windows apps, and a container runs the same every time, regardless of where you run it. It adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure.
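To make that idea concrete, here is a minimal sketch of what such a package definition can look like. It is an illustrative Dockerfile for a hypothetical Python script; the base image, file names, and environment variable are assumptions for the example, not anything prescribed by Docker:

    # Runtime: a specific interpreter version, the same on every host
    FROM python:3.6-slim
    # Libraries: install the app's pinned dependencies
    COPY requirements.txt /app/requirements.txt
    RUN pip install -r /app/requirements.txt
    # Code and settings
    COPY app.py /app/app.py
    ENV APP_ENV=production
    # What to execute when the container starts
    CMD ["python", "/app/app.py"]

Building this file produces an image that carries the code, runtime, libraries, and settings together, which is why the result behaves the same wherever it runs.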

Containers sit one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many other blessings. There are tons of benefits you can take advantage of. Instead of virtualizing the whole operating system (like virtual machines [VMs] do), containers take advantage of sharing most of the host system's core and add only the binaries and libraries the host doesn't already provide; no more gigabytes of disk space lost to bloated operating systems full of repeated stuff. This means a lot of things: your deployments can be packed into a much smaller image than a full operating system, each deployment boots up far faster, idle resource usage is lower, and there is less configuration and more standardization (remember "convention over configuration"). Fewer things to manage and more isolated apps mean fewer ways to screw something up, which shrinks the attack surface and therefore improves security. But keep in mind that not everything is perfect, and there are many factors you need to take into account before getting into the containerization realm.

Considerations

It has been less than 10 years since containerization started, and in the technology world that is a lot, considering how fast other fields such as web front-end frameworks and artificial intelligence (AI) are moving. In just a few years this widely deployed technology has become mature and production-ready, and coupled with microservices it has been pushed into new corners of the DevOps world, becoming the de facto solution for many companies' application and service deployment flows. Before all this exciting movement started, VMs were the go-to answer for many of the problems encountered by IT people, including myself. And although VMs are a great way to solve many of those problems, there was still room for improvement. Nowadays the horizon looks really promising, with top technology companies backing tools, frameworks, services, and products built around containers, benefiting most of the code we develop, test, debug, and deploy every day.

These days, thanks to the work of many, it's possible to have a consistent, lightweight way to run, test, debug, and deploy code from whichever platform you work on. So if you code on Linux using Vim and your coworker works on Windows using VS Code, both of you can use the same local container with the same binaries and libraries where the code runs. This removes a lot of incompatibility issues and lets teams reproduce production environments on their own machines, without worrying about sharing configuration files, misconfiguration, versioning hassles, and so on.
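As a rough sketch of how that can look in practice, assuming a Node.js project (the image tag and file names are illustrative, not taken from the article): both developers build and run the same Dockerfile locally, so the runtime and libraries match regardless of the host operating system or editor.

    # Shared development image: built the same way on Linux and Windows hosts
    FROM node:8.1.0
    WORKDIR /usr/src/app
    # Install dependencies from the lockfile so every build resolves the same versions
    COPY package.json package-lock.json ./
    RUN npm install
    # Copy the source and run the test suite by default
    COPY . .
    CMD ["npm", "test"]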

It gets even better. Not only is there no need to maintain the same configuration files across different services; there is less configuration to handle as a whole. Templates do most of the work for us, allowing you and your team to focus on creating and deploying your products, improving and iterating your services, and changing and enhancing your code. In fewer than 10 lines you can specify a working template containing everything needed to run a simple Node.js service, a Ruby on Rails application, or even a Scala cron job. Containerization supports most, if not all, languages and stacks.
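For a rough idea of what such a template can look like, here is a sketch of a Dockerfile for a simple Node.js service in under 10 lines (the image tag, port, and file names are illustrative assumptions):

    FROM node:8-alpine
    WORKDIR /app
    # Install only the production dependencies
    COPY package.json .
    RUN npm install --production
    # Add the service code and declare how to run it
    COPY server.js .
    EXPOSE 3000
    CMD ["node", "server.js"]

Once built, the same image can be handed to development, testing, and production without any further environment-specific setup.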

Containers and virtualization

For many years virtualization has accelerated the speed at which we build things, and it will continue to provide better solutions as time goes by. Just as we went from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and finally Software as a Service (SaaS) and others (Anything as a Service? AaaS?), I am certain we will find more abstractions beyond containers, making our lives easier every day. Like most of today's tools, many virtualization and containerization tools are open source, with huge communities and support boards around them (and good ol' Stack Overflow is still there when you need it). So remember to give something back to the amazing open source community: open issues, report bugs, share what works well, and help fix and improve the parts that are lacking. But really, just try to learn these new and promising technologies; they give us IT people a huge bump in efficiency in pretty much every aspect.

About the author

Diego Rodriguez Baquero is a full stack developer specializing in DevOps and SysOps. He is also a WebTorrent core team member. He can be found at https://diegorbaquero.com/.