Learning CoreOS

Chapter 1. CoreOS, Yet Another Linux Distro?

As more and more applications move to the cloud with server virtualization, there is a clear need to deploy user applications and services quickly and reliably, with an assured SLA, on the right set of servers. This becomes more complex when the services are dynamic in nature, requiring them to be auto-provisioned and auto-scaled over a set of nodes. Orchestration of user applications is not limited to deploying the services on the right set of servers or virtual machines; it extends to providing network connectivity across these services in order to provide Infrastructure as a Service (IaaS). Compute, network, and storage are the three main resources a cloud provider must manage in order to provide IaaS. Currently, there are various mechanisms to handle these requirements in a more abstract fashion: multiple cloud orchestration frameworks can manage compute, storage, and networking resources. OpenStack, CloudStack, and VMware vSphere are some of the cloud platforms that orchestrate these resource pools and provide IaaS. For example, in OpenStack the Nova service manages the compute resource pool and creates VMs, while the Neutron service provides virtual network connectivity across VMs, and so on.

An IaaS cloud provider offers all three resources on demand to customers, in a pay-as-you-go model. The cloud provider maintains these resources as pools and allocates them to customers on demand. This gives customers the flexibility to start and stop services based on their business needs, saving OPEX. Typically, in an IaaS model, the cloud service provider offers these resources in virtualized form: a virtual machine for compute, a virtual network for networking, and virtual storage for storage. A hypervisor running on the physical server/compute nodes provides the required virtualization.

Typically, when an end user requests an IaaS offering with a specific OS, the cloud provider creates a new VM (Virtual Machine) running the requested OS on its cloud server infrastructure. The end user can then install their applications in this VM. When the user requests more than one VM, the cloud provider must also provide network connectivity across these VMs, so that the services running inside them can communicate. The cloud orchestration framework takes care of instantiating the VMs on one of the available compute nodes in the cluster, along with associated services such as virtual network connectivity across the VMs. Once a VM has been spawned, configuration management tools such as Chef or Puppet can be used to deploy the application services on it. Theoretically, this works very well.

There are three main problems with this approach:

  • Every VM in the system runs its own copy of the operating system, with its own memory management and virtual device drivers. Any application or service deployed on a VM is managed by the OS running in that VM. When multiple VMs run on a server, each runs a separate copy of the OS, which results in CPU and memory overhead. Also, because each VM boots its own operating system, the time taken to bring up a VM is very high.
  • The operating system doesn't provide service-level virtualization, that is, running a service/application over a set of VMs that are part of a cluster. The OS running in a VM is a general-purpose operating system that lacks the concepts of clustering and of deploying an application or service over a cluster. In short, the operating system provides machine-level virtualization, not service-level virtualization.
  • The management effort required to take a service/software from a development to a production environment is very high, because each software package typically has dependencies on other software. There are thousands of packages, each with its own set of configurations, and most combinations of configurations have implications for performance and scaling.

CoreOS addresses all of these problems. Before looking into how it solves them, let's start with a short introduction to CoreOS.

Introduction to CoreOS

CoreOS is a lightweight cloud service orchestration operating system based on Google's Chrome OS, developed primarily for orchestrating applications/services over a cluster of nodes. Every node in the cluster runs CoreOS, and one of the nodes is elected as the master by the etcd service. All the nodes in the cluster should have connectivity to the master node. All the slave nodes provide the master with information about the services running on them, along with their configuration parameters. To achieve this, fleet units can be configured so that when a unit is started with the fleetctl command, it pushes details such as its IP and port to the etcd service. It is the responsibility of the master node to receive this service information and publish it to all the other nodes in the cluster. Under normal circumstances, the slave nodes don't talk to each other about service availability. The etcd service running on every node in the cluster is responsible for electing the master node, and all nodes interact with the etcd service on the master node to get the service and configuration information for the services running on the other nodes. The following diagram depicts the CoreOS cluster architecture, in which all the nodes run CoreOS and its vital components, such as etcd and systemd. The etcd and fleet services are used for service discovery and cluster management respectively. Here, all three nodes are configured with the same cluster ID, so that they are part of a single cluster; a node cannot be part of multiple clusters.

Figure: CoreOS cluster
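
For concreteness, here is a minimal sketch of how a cluster ID is typically supplied on CoreOS: each node boots with a cloud-config that points etcd at a shared discovery URL. The token URL, addresses, and file name below are illustrative assumptions, not an example taken from this book.

# Generate a discovery token for a three-node cluster (the size is illustrative)
curl -w "\n" 'https://discovery.etcd.io/new?size=3'

# Minimal cloud-config passed to every node; the shared discovery URL
# acts as the cluster ID mentioned above
cat > cloud-config.yaml <<'EOF'
#cloud-config
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token-from-above>
    advertise-client-urls: http://$private_ipv4:2379
    initial-advertise-peer-urls: http://$private_ipv4:2380
    listen-client-urls: http://0.0.0.0:2379
    listen-peer-urls: http://$private_ipv4:2380
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service
      command: start
EOF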

All applications or services are deployed as Linux containers on CoreOS. A Linux container provides a lightweight server virtualization infrastructure without running its own operating system or any hypervisor; it uses operating-system-level virtualization techniques provided by the host OS, based on the namespace concept. This provides drastic improvements in the scale and performance of the virtualized instances running on a physical server, and addresses the first issue with running applications inside VMs.

The following diagram depicts the difference between applications running inside a VM and applications running in an LXC container. In the VM approach, a guest OS is installed in each VM in addition to the host OS. In a Linux-container-based implementation, the container doesn't have a separate copy of the operating system; rather, it uses the services provided by the host operating system for all OS-related functionality.

Figure: Virtual Machine versus Linux Container

CoreOS extends existing Linux services to work across a distributed cluster rather than being limited to a single node. For example, using the fleet tool, CoreOS extends systemd (the service management facility most Linux distributions use to start, stop, or restart applications/services) to operate over a cluster of nodes rather than a single node. Instead of running an application on its own node, services are submitted to fleet, which acts as a cluster manager and instantiates the service on any one of the nodes in the cluster. It is also possible to launch a container on a specific set of nodes by applying constraints. This addresses the second issue with using VMs, discussed earlier in this chapter.
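
As a quick sketch of this workflow, assuming a systemd unit file named hello.service (a hypothetical name) exists in the current directory:

# Submit the unit to the cluster; fleet decides which node runs it
fleetctl submit hello.service
fleetctl start hello.service

# See which machine the unit landed on
fleetctl list-units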

CoreOS uses Docker/Rocket containers to deploy services inside the CoreOS cluster. Docker provides an easy way to bundle a service and its dependent modules as a single monolithic image that can be shipped from development. At deployment time, a DevOps engineer can simply fetch the Docker image from the developer and deploy it directly onto the CoreOS nodes, without setting up a compilation or build environment or rebuilding the image on the target platform. This bridges the gap between developing a service and deploying it, and addresses the third issue with using VMs, discussed earlier in this chapter.
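
The following sketch illustrates this build-once, run-anywhere flow; the registry and image names are placeholders:

# Developer side: build the image once and push it to a registry
docker build -t registry.example.com/team/webapp:1.0 .
docker push registry.example.com/team/webapp:1.0

# On a CoreOS node: pull and run the very same image, with no rebuild
docker pull registry.example.com/team/webapp:1.0
docker run -d -p 80:8080 registry.example.com/team/webapp:1.0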

CoreOS versus other Linux distributions

Even though CoreOS is yet another Linux distribution, like Fedora or CentOS, the key differences between CoreOS and standard Linux distributions are as follows:

  • CoreOS is not designed to run applications or services directly. Any application to be run on CoreOS should be deployed as a container (either Docker or Rocket). It is therefore not possible to install software packages directly on CoreOS, and CoreOS has no package manager such as yum or apt. In short, CoreOS is a stripped-down Linux distribution with no built-in user applications or libraries installed.
  • Most Linux distributions are meant to run as a host operating system, either on a data center server or on a typical desktop PC. They are not developed to manage a cluster of nodes or a cloud; rather, they are part of the cloud being managed by other cloud orchestration platforms. CoreOS, however, is a Linux distribution built for managing massive server infrastructure through clustering. A CoreOS cluster is a group of physical or virtual machines running CoreOS with the same cluster ID. The services running on the cluster nodes are managed by fleet, the CoreOS orchestration tool. Software updates in a traditional Linux distribution are done by updating packages one by one; CoreOS instead supports a scheme called fast patch, wherein the entire CoreOS OS is updated at once. The CoreUpdate program is used for updating CoreOS across a server, a cluster, or a complete data center.
  • CoreOS is extremely lightweight when compared to traditional Linux distributions.

CoreOS high-level architecture

The CoreOS node in a cluster comprises the following main components:

  • etcd
  • systemd
  • fleet
  • Docker/Rocket containers
Figure: CoreOS high-level architecture

Every CoreOS node in the cluster runs the etcd, systemd, and fleet services. The etcd instances running on all the nodes talk to each other and elect one node as the master. All the services running on a node are advertised to this master node, which is how etcd provides a service discovery mechanism. Similarly, the fleet daemons running on the different nodes maintain the list of services running across the cluster in a service pool, which provides service-level orchestration. fleetctl and etcdctl are command-line utilities for configuring fleet and etcd respectively.

Refer to subsequent sections of this chapter to understand the functionality of each component in detail.

These components together provide three main functionalities for CoreOS as follows:

  • Service discovery
  • Cluster management
  • Container management

Service discovery

In the CoreOS environment, all user applications are deployed as services inside containers, which can be either Docker or Rocket containers. As different applications/services run as separate containers across the CoreOS cluster, it is essential that each node announces the services it provides to all the nodes in the cluster. Along with service availability, each service also needs to advertise its configuration parameters to other services. This service advertisement is very important when services are tightly coupled and depend on each other; for example, a web service should know the details of the database service, such as the connection string or the type of database. CoreOS provides a way for each service to advertise its service and configuration information using the etcd service. The data announced to etcd is distributed to all the nodes in the cluster by the master node.
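
A common way to implement this advertisement is to have the unit that starts a service also write its details into etcd around the service's lifetime. The sketch below uses hypothetical names, a hypothetical image, and illustrative key paths; %H is systemd's hostname specifier.

# webapp.service: announce the service in etcd for the lifetime of the process
cat > webapp.service <<'EOF'
[Unit]
Description=Web app that registers itself for discovery

[Service]
ExecStartPre=/usr/bin/etcdctl set /services/webapp/%H '{"port": 8080}'
ExecStart=/usr/bin/docker run --rm --name webapp -p 8080:8080 myorg/webapp:1.0
ExecStopPost=/usr/bin/etcdctl rm /services/webapp/%H
EOF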

etcd

etcd is a distributed key-value store that stores data across the CoreOS cluster. The etcd service is used to publish the services running on a node to all the other nodes in the cluster, so that every service can discover the other services and their configuration details. etcd is also responsible for electing the master node among the nodes in the cluster. All nodes publish their service and configuration information to the etcd service on the master node, which distributes it to all the other nodes in the cluster.
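
Basic etcdctl usage looks like this; the key and value are illustrative:

# Publish a service's connection details as a key/value pair
etcdctl set /services/database '{"host": "10.0.0.12", "port": 5432}'

# Any node in the cluster can read it back
etcdctl get /services/database

# Watch the key to react when the value changes
etcdctl watch /services/database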

Container management

The key element of the CoreOS building blocks is the container, which can be either Docker or Rocket. The initial version of CoreOS officially supported Docker as the means of running any service/application in the CoreOS cluster. Recent versions of CoreOS support a new container mechanism called Rocket, while maintaining backward compatibility with Docker. All customer applications/services are deployed as containers in the CoreOS cluster. When multiple services run on a server for different customers, it is essential to isolate one customer's execution environment from another's. Typically, in a VM-based environment, each customer is given a VM inside which they run their own services, providing complete isolation of execution environments between customers. A container also provides a lightweight virtualized environment, without running a separate copy of an operating system.

Linux Container

Linux Container (LXC) is a lightweight virtualization environment provided by the Linux kernel to deliver system-level virtualization without running a hypervisor. LXC provides multiple virtualized environments, each isolated from and invisible to the others. Thus, an application running inside one Linux container has no access to the other containers.

LXC combines three main concepts for resource isolation as follows:

  • Cgroups
  • Namespaces
  • Chroot

The following diagram shows LXC in detail, along with the utilities required to provide LXC support:

Figure: Linux Containers

libvirt is a C library toolkit used to interact with the virtualization capabilities provided by the Linux kernel. It acts as a wrapper layer over the APIs exposed by the kernel's virtualization layer.

cgroups

Linux cgroups (control groups) is a kernel feature that restricts access to system resources for a process or set of processes. Cgroups provide a way to reserve or allocate resources, such as CPU, system memory, network bandwidth, and so on, to a group of processes/tasks. An administrator can create a cgroup, set the access levels for these resources, and bind one or more processes to the group. This provides fine-grained control over system resources for different processes, as shown in the next diagram: the resources on the left-hand side are grouped into two different cgroups, cgroups-1 and cgroups-2, and task1 and task2 are assigned to cgroups-1, so that only the resources allocated to cgroups-1 are available to task1 and task2.

Figure: Linux cgroups

Managing cgroups consists of the following steps (a shell sketch follows the list):

  1. Create a cgroup.
  2. Assign a resource limit to the cgroup based on the problem statement. For example, if the administrator wants to restrict an application from consuming more than 50 percent of the CPU, he can set the limit accordingly.
  3. Add the process to the group.
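
Here is a minimal sketch of those three steps using the cgroup v1 filesystem interface; the group name demo and PID 1234 are illustrative, and newer systems using cgroup v2 expose different file names:

# 1. Create a cgroup under the CPU controller
sudo mkdir /sys/fs/cgroup/cpu/demo

# 2. Cap the group at 50 percent of one CPU (50 ms of runtime per 100 ms period)
echo 50000 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us
echo 100000 | sudo tee /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us

# 3. Add an already-running process to the group
echo 1234 | sudo tee /sys/fs/cgroup/cpu/demo/tasks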

As the creation of cgroups and the allocation of resources happen outside the application context, an application that is part of a cgroup is not aware of the cgroup or of the level of resources allocated to it.

Namespace

Namespaces are a feature introduced in Linux kernel version 2.6.23 to provide resource abstraction for a set of processes. A process in a namespace has visibility only to the resources and processes that are part of that namespace. There are six types of namespace abstraction supported in Linux, as follows:

  • PID/Process Namespace
  • Network Namespace
  • Mount Namespace
  • IPC Namespace
  • User Namespace
  • UTS Namespace

The process namespace provides a way of isolating processes in one execution environment from another. Processes that are part of one namespace have no visibility into processes that are part of other namespaces. Typically, in a Linux OS, all processes are maintained in a tree with a child-parent relationship. The root of this process tree is a specialized process called the init process, whose process ID is 1. The init process is the first process created in the system, and all subsequently created processes form the child nodes of the process tree. The process namespace introduces multiple process trees, one per namespace, which provides complete isolation between processes running in different namespaces. This also allows a single process to have two different PIDs: one in the global context and one in the namespace context.

In the following diagram, every namespaced process has two process IDs: one in the namespace context and one in the global process tree.

Figure: Process Namespace
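
This is easy to observe with the unshare utility from util-linux, assuming a kernel with PID namespace support:

# Start a shell in a new PID namespace, remounting /proc so ps reads it
sudo unshare --fork --pid --mount-proc /bin/bash

# Inside the namespace, the shell sees itself as PID 1, while the host's
# process tree shows the same shell under a different, global PID
ps -ef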

The network namespace provides per-container isolation of the networking stack provided by the operating system. Isolating the network stack for each namespace makes it possible to run multiple instances of the same service, say a web server, for different customers or containers. In the next diagram, the physical interface connected to the host's bridging process is the actual physical interface present in the system. Each container is provided with a virtual interface connected to the bridging process, which provides inter-container connectivity, allowing an application running in one container to talk to an application running in another container.

Figure: Network Namespace
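
A sketch using the iproute2 tools; the namespace name, interface names, and addresses are illustrative:

# Create a network namespace and a veth pair to reach it
sudo ip netns add ns1
sudo ip link add veth0 type veth peer name veth1

# Move one end into the namespace and configure both ends
sudo ip link set veth1 netns ns1
sudo ip addr add 10.200.0.1/24 dev veth0
sudo ip link set veth0 up
sudo ip netns exec ns1 ip addr add 10.200.0.2/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up

# The namespace now has its own isolated stack, reachable over the veth pair
sudo ip netns exec ns1 ping -c 1 10.200.0.1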

Chroot

Chroot is an operation supported by the Linux OS that changes the root directory of the current running process, and with it the root directory of its children. An application whose root directory has been changed has no access to the root directories of other applications. Running a process under a changed root is also called a chroot jail.
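
A minimal sketch, assuming a statically linked busybox is available to populate the jail; the directory layout is illustrative:

# Build a tiny root filesystem containing only a shell
sudo mkdir -p /srv/jail/bin
sudo cp /bin/busybox /srv/jail/bin/
sudo ln -s busybox /srv/jail/bin/sh

# Start a shell whose / is /srv/jail; the host's real / is invisible to it
sudo chroot /srv/jail /bin/sh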

Combining the cgroups, namespace, and chroot features of the Linux kernel provides a sophisticated virtualized resource isolation framework with clear segregation of the data and resources across various processes in the system.

In LXC, the chroot utility is used to separate filesystems: each container is assigned its own filesystem, giving it its own root filesystem. Every process in a container is assigned to the same cgroup, and each cgroup has its own resources, providing per-container resource isolation.

Docker

Docker provides a portable way to deploy a service on any Linux distribution by creating a single object that contains the service. All dependent services can be bundled together with it and deployed on any Linux-based server or virtual machine.

Docker is similar to LXC in most aspects. Like LXC, Docker is a lightweight server virtualization infrastructure that runs an application process in isolation, with resource isolation for CPU, memory, block I/O, network, and so on. But along with isolation, Docker provides a "Build, Ship, and Run" model, wherein any application and its dependencies can be built, shipped, and run as a separate virtualized process inside the namespace isolation provided by the Linux operating system.

Docker can be integrated with cloud platforms such as Amazon Web Services, Google Cloud Platform, IBM Bluemix, Jelastic, Jenkins, Microsoft Azure, OpenStack Nova, and OpenSVC, and with configuration tools such as Ansible, CFEngine, Chef, Puppet, Salt, and Vagrant.

The main objective of Docker is to support micro-service architectures, in which a monolithic application is divided into multiple small services or applications (called micro-services) that can be deployed independently on separate hosts. Each micro-service should be designed to perform a specific piece of business logic. There should be a clear operational boundary between micro-services, though each micro-service may need to expose APIs to other micro-services, similar to the service discovery mechanism described earlier. The main advantages of micro-services are quick development and deployment, ease of debugging, and parallel development of the different components of the system. Another key advantage is that each micro-service can be scaled individually, based on its complexity, bottlenecks, processing capability, and scalability requirements.

Docker versus LXC

Docker is designed for deploying applications, whereas LXC is designed for deploying a machine. LXC containers are treated as machines, on which any applications can be deployed and run inside the container. Docker is designed to run a specific service or application, providing container-as-an-application. However, when an application or service has dependencies on other services, those services can be packed into the same Docker image. Typically, a Docker container doesn't provide all the facilities a full OS does, such as init systems, syslog, cron, and so on. Because Docker is focused on deploying applications, it provides tools to create Docker containers and deploy services from source code.

Docker containers have a layered architecture, with each layer containing the changes from the previous version. This layered architecture lets Docker maintain versions of the complete container: like typical version-control tools such as Git/CVS, Docker containers support operations such as commit, rollback, version tracking, version diff, and so on. Changes made inside a running Docker container are kept in a writable layer on top of the read-only image layers until they are committed, at which point they become a new read-only layer.
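
A sketch of this versioning workflow; the container and image names are illustrative:

# Make a change inside a container started from a base image
docker run --name work ubuntu /bin/sh -c 'echo hello > /greeting'

# Inspect what changed relative to the image, then commit it as a new layer
docker diff work
docker commit work myorg/ubuntu-greeting:1.0

# The image history shows one read-only layer per committed change
docker history myorg/ubuntu-greeting:1.0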

Docker Hub contains more than 14,000 images of various well-known services that can be downloaded and deployed very easily.

Docker provides an efficient mechanism for chaining different Docker containers, which makes for a good service chaining mechanism. Docker containers can be connected to each other via the following mechanisms:

  • Docker links
  • Using the docker0 bridge
  • Having the container use the host network stack

Each mechanism has its own benefits. Refer to Chapter 7, Creating a Virtual Tenant Network and Service Chaining Using OVS, for more information about service chaining.
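
A sketch of the first mechanism, Docker links; the names and images are illustrative:

# Start a database container, then link an application container to it
docker run -d --name db redis
docker run --rm --link db:db busybox env

# The link injects DB_* environment variables and an /etc/hosts entry,
# so the application can reach the database by the alias "db"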

Docker uses libcontainer, which accesses the kernel's container facilities directly rather than going through LXC.

Figure: Docker versus LXC

Rocket

Historically, the main objective of CoreOS has been to run services as lightweight containers. Docker's original principles aligned with this requirement: simple, composable units as containers. Later, Docker added more and more features, packing functionality well beyond a standard container into a monolithic binary. These features include building overlay networks, tools for launching cloud servers with clustering, building images, running and uploading images, and so on. This makes Docker more of a platform than a simple container.

Against this backdrop, CoreOS started working on a new alternative to Docker with the following objectives:

  • Security
  • Composability
  • Speed
  • Image distribution

CoreOS announced the development of Rocket as an alternative to Docker that meets these objectives. Along with Rocket, CoreOS also started work on the App Container Specification, which describes container features such as the image format, the runtime environment, and the container discovery mechanism. CoreOS launched the first version of Rocket along with the App Container Specification in December 2014.

CoreOS cluster management

Clustering is the concept of grouping a set of machines into a single logical system (called a cluster) so that an application can be deployed on any machine in the cluster. Running different services/Docker containers over a cluster of machines is one of the main features of CoreOS. Historically, in most Linux distributions, services are managed using the systemd utility; CoreOS extends systemd from a single node to a cluster using the fleet utility. The main reasons CoreOS chose fleet to orchestrate services across the CoreOS cluster are as follows:

  • Performance
  • Journal support
  • Rich syntax in deploying the services

It is also possible for a CoreOS cluster to combine physical servers and virtual machines, as long as all the nodes in the cluster are connected to each other and reachable. All the nodes that want to participate in the CoreOS cluster should run CoreOS with the same cluster ID.

systemd

systemd is an init system utility used to stop, start, and restart Linux services or user programs. systemd has two main concepts: units and targets. A unit is a file that contains the configuration of a service to be started, and a target is a grouping mechanism for starting multiple services at the same time.
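
A minimal sketch of a unit file and the systemctl commands that drive it; the service name and the command it runs are hypothetical:

# A minimal unit file for a long-running service
sudo tee /etc/systemd/system/example.service <<'EOF'
[Unit]
Description=Example long-running service

[Service]
ExecStart=/bin/sh -c 'while true; do echo heartbeat; sleep 60; done'

[Install]
WantedBy=multi-user.target
EOF

# Tell systemd about the new unit, then manage it
sudo systemctl daemon-reload
sudo systemctl start example.service
systemctl status example.service
sudo systemctl stop example.service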

fleet

fleet makes all the nodes in the cluster appear as part of a single init system. fleet controls the systemd service at the cluster level rather than at the individual-node level, which allows it to manage services on any node in the cluster. fleet not only instantiates a service inside the cluster but also manages how services are moved from one node to another when a node in the cluster fails; thus, fleet guarantees that a service is running on some node in the cluster. fleet can also restrict a service to a particular node or set of nodes in the cluster. For example, if there are ten nodes in a cluster and a particular service, say a web server, is to be deployed over a set of three of them, this restriction can be enforced when fleet instantiates the service. Such restrictions are imposed by providing information about how jobs are to be distributed across the cluster, as shown in the sketch below. fleet has two main concepts: the engine and the agents. For more information about systemd and fleet, refer to Chapter 3, Creating Your CoreOS Cluster and Managing the Cluster.
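Placement restrictions like these are expressed in a unit's [X-Fleet] section. In this sketch we append one to the hypothetical webapp.service unit from the service discovery section; the metadata value is an assumption and must match what the target machines advertise:

# Extra section in the unit file read by fleet (not by systemd itself)
cat >> webapp.service <<'EOF'

[X-Fleet]
# Only machines advertising role=web are scheduling candidates
MachineMetadata=role=web
# Never co-locate two instances of this unit on the same machine
Conflicts=webapp@*.service
EOF

fleetctl start webapp.service
fleetctl list-units    # shows which machine fleet chose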

CoreOS and OpenStack

Is CoreOS yet another orchestration framework like OpenStack or CloudStack? No, it is not. CoreOS is not a standalone orchestration framework like OpenStack/CloudStack. In most server orchestration frameworks, the framework sits outside the managed cloud; in CoreOS, the orchestration framework sits alongside the existing business solution.

OpenStack is one of the most widely used cloud computing software platforms for providing IaaS. OpenStack is used for orchestrating the compute, storage, and network entities of the cloud, whereas CoreOS is used for service orchestration. Once the compute, storage, and network entities are instantiated, OpenStack plays no role in instantiating services inside the VMs.

Combining the orchestration provided by OpenStack and CoreOS yields a powerful IaaS in which the cloud provider has fine-grained control all the way down to service orchestration. CoreOS can thus co-exist with OpenStack: OpenStack can instantiate a set of VMs that run CoreOS and form a CoreOS cluster. That is, OpenStack can be used to create a CoreOS cluster as infrastructure. The CoreOS instances running inside the VMs form a cluster and instantiate services on any of the nodes in the cluster.
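
As a sketch of how the two layers meet, OpenStack can pass the cloud-config shown earlier in this chapter to each CoreOS VM as user data; the image, flavor, and key names here are placeholders:

# Boot three CoreOS VMs sharing one discovery URL, forming one cluster
for i in 1 2 3; do
  nova boot --image coreos --flavor m1.small \
    --key-name mykey --user-data cloud-config.yaml "coreos-node-$i"
done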

Figure: OpenStack and CoreOS

In the preceding diagram, OpenStack manages a server farm consisting of three servers: server1, server2, and server3. When a customer requests a set of VMs, OpenStack creates the necessary VMs on any of these servers as an IaaS offering. With CoreOS, all of these VMs run the CoreOS image with the same cluster ID, and hence are part of the same cluster. In the diagram, there are two CoreOS clusters, each allocated to a different customer. The services/applications to run on these VMs are instantiated by the fleet service of CoreOS, which takes care of running each service on any one of the VMs in the cluster. At any point in time, OpenStack can scale up the cluster's capacity by instantiating new VMs that run the CoreOS image with the same cluster ID, making them candidates for CoreOS to run new services.

Summary

CoreOS and Docker open up a new era of deploying services in a cluster, streamlining the development and deployment of applications. CoreOS and Docker bridge the gap between developing a service and deploying it in production, making server and service deployment far less effort-intensive. With lightweight containers, CoreOS provides very good performance and an easy way to auto-scale applications with little operator overhead. In this chapter, we covered the basics of containers and Docker, and the high-level architecture of CoreOS.

In the next few chapters, we are going to see the individual building blocks of CoreOS in detail.


Key benefits

  • Understand the features of CoreOS and learn to administer and secure a CoreOS environment
  • Develop, test, and deploy cloud services and applications more quickly and efficiently inside lightweight containers using CoreOS
  • A complete tutorial on CoreOS, the preferred OS for cloud computing, covering the components that facilitate cloud management

Description

CoreOS is an open source operating system built on the Linux kernel. The rise of CoreOS is directly related to the rise of Docker (a Linux container management system). It is a minimal operating system layer that takes a different approach to automating the deployment of containers. The major difference between CoreOS and other Linux distributions is that CoreOS was designed for deployments of hundreds of servers. CoreOS helps users create systems that are easy to scale and manage, making life easier for developers, QA engineers, and deployers alike. This book is all about setting up, deploying, and using CoreOS to manage clusters and clouds. It will help you understand what CoreOS is and its benefits as a cloud orchestration platform. First, we'll show you how to set up a simple CoreOS instance with a single node in the cluster and how to run a Docker container inside it. Next, you'll be introduced to fleet and systemd, and will deploy and distribute Docker services across different nodes in a cluster using fleet. Later, you'll learn about running services in a cluster with constraints, publishing services already running on the cluster to new services, and making your services interact with each other. We conclude by teaching you about advanced container networking. By the end of the book, you will know the salient features of CoreOS and will be able to deploy, administer, and secure a CoreOS environment.

Who is this book for?

This book is for cloud or enterprise administrators and application developers who would like to learn CoreOS in order to deploy a cloud application or micro-services on a cluster of cloud servers. It is also aimed at administrators with basic networking experience. You do not need any prior knowledge of CoreOS.

What you will learn

  • Understand the benefits of CoreOS as a cloud orchestration platform
  • Learn about lightweight containers and container frameworks such as Docker and rkt in CoreOS
  • Manage services and containers across a cluster using fleet and systemd
  • Set up a CoreOS environment using Vagrant
  • Create and manage CoreOS clusters
  • Discover service parameters using etcd
  • Chain services running on the cluster using Flannel/Rudder and Weave
  • Create a virtual tenant network and service chaining using OVS

Product Details

Publication date: Mar 22, 2016
Length: 190 pages
Edition: 1st
Language: English
ISBN-13: 9781785886935




Table of Contents

  1. CoreOS, Yet Another Linux Distro?
  2. Setting Up Your CoreOS Environment
  3. Creating Your CoreOS Cluster and Managing the Cluster
  4. Managing Services with User-Defined Constraints
  5. Discovering Services Running in a Cluster
  6. Service Chaining and Networking Across Services
  7. Creating a Virtual Tenant Network and Service Chaining Using OVS
  8. What Next?
  Index

Customer reviews

Overall rating: 2.5 out of 5 (2 ratings): 5 star 0%, 4 star 50%, 3 star 0%, 2 star 0%, 1 star 50%

vijey18, Aug 05, 2016, 4 stars (Amazon verified review):

Folks who read to keep up to date with the latest IT news will certainly have heard about CoreOS. Lots of people still have no idea what it is or how CoreOS differs from conventional Linux-flavored OSes; I was one of the novices who came across this book with the same agenda. For a first-timer picking up the nuances of Docker, this book is a good first read, and useful background for anyone trying to hack their way into the world of containers. What is in this book that a Google search cannot answer was my obvious question. In retrospect, the background of why containers are used in CoreOS, and the differentiation between CoreOS and regular Linux distributions, is explained in detail. The steps detailed in setting up CoreOS as a standalone host, along with examples of CoreOS clustering, will be useful for beginners and experts alike. The book is concise (170 pages from start to end) for the amount of information and the number of examples it packs. If I were to nitpick, it would be the textbook-like approach: for folks used to the Dummies and O'Reilly style of narration, this book is a departure, with its straight and linear approach to the topic of CoreOS.

Kevin Nabity, Aug 12, 2016, 1 star (Amazon verified review):

My concerns with this book were confirmed. Maybe there just isn't enough about CoreOS to make a book just about CoreOS; that could be a good thing for CoreOS. While there is a bit about CoreOS, most of this book is about Vagrant, Docker, and provisioning CoreOS VMs with Vagrant. I also wish there were a repository to copy the code from, so that I didn't have to retype it all manually. I often found it hard to figure out whether I had mistyped something or the book had another syntax typo I was unfamiliar with, like the ExecStartPre missing an equals sign in the ExecStart docker unit on 1316 at the time of this writing. I wasted a lot of time retyping and finding the mistakes in this book that prevented examples from working. I'm sorry to say I would not recommend this book; it was definitely not worth the $20 I spent on it.
