Learning PowerShell DSC

Why do we need Configuration Management?

Whether you manage a few servers or several thousand, the traditional methods of server and software installation and deployment are failing to address your current needs. These methods treat servers as special, singular entities that have to be protected and cared for, with special configurations that may or may not be documented, and if they go down, they take the business down with them.

For a long while, this has worked out. But as the number of servers and applications grows, and the number of configuration points grows, it becomes untenable to keep it all in your head or consistently documented by a set of people. New patches are released, feature sets change, employees turn over, and software is poorly documented; all of these introduce variance and change into the system. If not accounted for and handled, these "special" servers become ticking time bombs that will explode the moment a detail is missed.

Written installation or configuration specifications that have to be performed by humans, error-free, time and time again on numerous servers are self-evidently brittle and error-prone affairs. To further complicate things, despite the obvious interdependence of software development and other IT-related departments, software developers are often isolated from the realities faced by IT professionals during the deployment and maintenance of that software.

The answer to this is automation: defining a repeatable process that configures servers the right way, every time. Servers move from being special snowflakes to being disposable numbers on a list that can be created and destroyed without requiring someone to remember the specific incantation to make them work. Instead of a golden image that has to be kept up to date, with all the complexities of image storage and distribution, there is a set of steps that brings all servers to compliance regardless of whether they are a fresh installation or a number of years old.
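
To make this concrete, PowerShell DSC expresses such a repeatable process as a declarative configuration. The following is only a minimal sketch; the node name, feature, and paths are illustrative placeholders, not values prescribed by this book:

    Configuration WebServerBaseline {
        # Built-in resources ship with the PSDesiredStateConfiguration module
        Import-DscResource -ModuleName PSDesiredStateConfiguration

        Node 'Server01' {
            # Ensure the Web Server role is installed
            WindowsFeature IIS {
                Ensure = 'Present'
                Name   = 'Web-Server'
            }

            # Ensure the deployment folder exists
            File DeployFolder {
                Ensure          = 'Present'
                Type            = 'Directory'
                DestinationPath = 'C:\Deploy'
            }
        }
    }

    # Compiling the configuration produces one MOF file per node, which can be
    # applied to a fresh installation or to a server that is years old.
    WebServerBaseline -OutputPath 'C:\DSC\WebServerBaseline'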

What is being described is Configuration Management (CM). CM ensures that the current design and build state of a system is a known good state. It ensures trust by not relying on the knowledge of one person or a team of people; it's an objective truth that can be verified at any time. It also provides a historical record of what was changed, which is useful not only for reporting purposes (like for management), but also for troubleshooting purposes (this file used to be there, now it's not…). CM detects variance between builds, so changes to the environment are both easily apparent and well known to all who work on the system. It allows anyone to see what the given state of the system is at any time, at any granularity, whether on one system or over the span of thousands. If a target system fails, it's a matter of re-running the CM build on a fresh installation to bring the system back to a steady state.
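
In DSC terms, applying that known good state and then verifying it at any time looks roughly like the following sketch, which assumes the configuration compiled above and uses the standard DSC cmdlets:

    # Push the compiled configuration to the target node and wait for it to finish
    Start-DscConfiguration -Path 'C:\DSC\WebServerBaseline' -Wait -Verbose

    # At any later point, check whether the node still matches the declared state;
    # returns True when compliant and False when something has drifted
    Test-DscConfiguration

    # Inspect the current state of every resource the configuration manages
    Get-DscConfiguration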

CM is part of a set of ideas called Infrastructure as code. It requires that every step in provisioning an environment is automated and written down in files that can be run at any time to bring the environment to a known good state. While CM is infrastructure automation (replicating steps multiple times on any number of target nodes), Infrastructure as code takes things one step further and codifies every step required to get an entire environment running. It encompasses the knowledge of server provisioning, server configuration, and server deployment in a format that is readable by sysadmins, developers, and other technology staff. Like CM, Infrastructure as code uses existing best practices from software development, such as source control, automated code testing, and continuous integration, to ensure a redundant and repeatable process.
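
As a small illustration of borrowing software development practices, a configuration script kept in source control can be exercised by an automated test before it is ever pushed to a server, for example as part of a continuous integration build. The sketch below assumes a recent version of the Pester module and the hypothetical WebServerBaseline.ps1 file from the earlier example:

    # WebServerBaseline.Tests.ps1 -- run with Invoke-Pester
    Describe 'WebServerBaseline configuration' {
        It 'compiles a MOF for Server01' {
            # Dot-source the configuration script, then compile it into a temporary folder
            . "$PSScriptRoot\WebServerBaseline.ps1"
            WebServerBaseline -OutputPath "$TestDrive\output" | Out-Null
            "$TestDrive\output\Server01.mof" | Should -Exist
        }
    }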

The approaches being described are not that new and are part of a larger movement that has been slowly accepted among companies as the optimal way of managing servers and software, called DevOps.

What is DevOps?

The set of concepts we have been describing is collectively termed DevOps and is part of a larger process called "continuous delivery". DevOps is a contraction of development and operations, and it describes a close working relationship between the development of software and the deployment and operation of that software. Continuous delivery is a set of practices that enables software to be developed and continuously deployed to production systems on a frequent basis, usually in an automated fashion, often multiple times a week or even a day.

Each year, a company called Puppet Labs surveys over 4,000 IT operations professionals and developers about their operations procedures. Of those surveyed, the companies that have implemented DevOps practices report improved software deployment quality and more frequent software releases. Their report states that these companies shipped code 30 times faster and completed those deployments 8,000 times faster than their peers. They had 50% fewer failures and restored service 12 times faster than their peers.

Results like the ones shown in the Puppet Labs survey show that organizations that adopt DevOps are up to five times more likely to be high-performing than those that have not. It's a cumulative effect: the longer you practice, the greater the results from adoption and the easier it is to keep going. How DevOps enables this high performance centers on deployment frequency.

Defining and explaining the entirety of DevOps and continuous delivery is beyond the scope of this book, but for our purposes the goals can be summarized as follows: to improve deployment frequency, to lower the failure rate of new releases, and to shorten the recovery time when a new release is faulty. Even though the term implies that strict developer and operations roles are the only ones involved, the concept really applies to any person or department involved in the development, deployment, and maintenance of the product and the servers it runs on.

These goals work toward one end: minimizing the risk of software deployment by making changes safe through automation. The root cause of poor quality is variation, whether in the system, in software settings, or in the processes performing actions on the system or software. The solution to variation is repeatability. By figuring out how to perform an action in a repeatable way, you remove the variation from the process and can continually make small changes to the process without causing unforeseen problems.
