Why do we need Puppet anyway?
Managing applications and services in production is hard work, and there are a lot of steps involved. To start with, you need some servers to serve the services. Luckily, these are readily available from your local cloud provider, at low, low prices. So you've got a server, with a base operating system installed on it, and you can log into it. So now what? Before you can deploy, you need to do a number of things:
- Add user accounts and passwords
- Configure security settings and privileges
- Install all the packages needed to run the app
- Customize the configuration files for each of these packages
- Create databases and database user accounts; load some initial data
- Configure the services that should be running
- Deploy the app code and static assets
- Restart any affected services
- Configure the machine for monitoring
That's a lot to do—and for the next server you build, you'll need to do the exact same things all over again. There's something not right about that. Shouldn't there be an easier solution to this problem?
Wouldn't it be nice if you could write an executable specification of how the server should be set up, and you could apply it to as many machines as you liked?
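To give you a flavor of what such a specification might look like, here's a minimal sketch in Puppet's own language. The user, package, file, and service names are placeholders for illustration, not a recommendation:

```puppet
# Add a user account
user { 'deploy':
  ensure     => present,
  managehome => true,
}

# Install a package the app needs
package { 'nginx':
  ensure => installed,
}

# Customize its configuration file (assumes a hypothetical 'nginx'
# module supplying this file)
file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/nginx/nginx.conf',
  require => Package['nginx'],
  notify  => Service['nginx'],  # restart the service if the file changes
}

# Make sure the service is running, and starts at boot
service { 'nginx':
  ensure => running,
  enable => true,
}
```

Apply this to one machine or to a hundred; the result is the same.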
Keeping the configuration synchronized
Setting up servers manually is tedious. Even if you're the kind of person who enjoys tedium, though, there's another problem to consider. What happens the next time you set up a server, a few weeks or months later?
Your careful notes will no longer be up to date with reality. While you were on vacation, the developers installed a couple of new libraries that the app now depends on—I guess they forgot to tell you! They are under a lot of schedule pressure, of course. You could send out a sternly worded email demanding that people update the build document whenever they change something, and people might even comply with that. But even if they do update the documentation, no-one actually tests the new build process from scratch, so when you come to do it, you'll find it doesn't work anymore. Turns out that if you just upgrade the database in place, it's fine, but if you install the new version on a bare server, it's not.
Also, since the build document was updated, a new version of a critical library was released upstream. Because you always install the latest version as part of the build, your new server is now subtly different to the old one. This will lead to subtle problems which will take you three days, or three bottles of whiskey, to debug.
By the time you have four or five servers, they're all a little different. Which is the authoritative one? Or are they all slightly wrong? The longer they're around, the more they will drift apart. You wouldn't run four or five different versions of your app code at once, so what's up with that? Why is it acceptable for server configuration to be in a mess like this?
Wouldn't it be nice if the state of configuration on all your machines could be regularly checked and synchronized with a central, standard version?
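That's what a declarative, centrally managed configuration gives you. For example, rather than installing whatever version happens to be latest at build time, you can pin the version in code, so every server converges on the same state (the package name and version string here are made up for illustration):

```puppet
# Every server that applies this code gets exactly this version,
# whether it's built today or six months from now.
package { 'libwidget':
  ensure => '2.4.1-1',
}
```

And because the Puppet agent checks in regularly (every 30 minutes by default) and corrects any drift it finds, your servers don't just start out identical; they stay that way.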
Repeating changes across many servers
Humans just aren't good at accurately repeating complex tasks over and over; that's why we invented robots. It's easy to make mistakes, miss things out, or be interrupted and lose track of what you've done.
Changes happen all the time, and it becomes increasingly difficult to keep things up to date and in sync as your infrastructure grows. Again, when you make a change to your app code, you don't go and make that change manually with a text editor on each server. You change it once and roll it out everywhere. Isn't your firewall setup just as much part of your code as your user model?
Wouldn't it be nice if you only had to make changes in one place, and they rolled out to your whole network automatically?
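That's precisely the model Puppet gives you: you express the change once, in code, and every matching node picks it up on its next run. Here's a minimal sketch, with a hypothetical node pattern and class name:

```puppet
# site.pp: every node whose hostname matches the pattern gets the same
# configuration, firewall rules and all. The pattern and class name are
# placeholders for whatever your infrastructure actually uses.
node /^web\d+$/ {
  include profile::webserver
}
```

Change `profile::webserver` once, and the change rolls out to web1, web2, and web200 alike.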
Self-updating documentation
In real life, we're too busy to stop every five minutes and document what we just did. As we've seen, that documentation is of limited use anyway, even if it's kept fanatically up to date.
The only reliable documentation, in fact, is the state of the servers themselves. You can look at a server to see how it's configured, but that only applies while you still have the machine. If something goes wrong and you can't access the machine, or the data on it, your only option is to reconstruct the lost configuration from scratch.
Wouldn't it be nice if you had a clear, human-readable build procedure which was independent of your servers, and was guaranteed to be up to date, because the servers are actually built from it?
Version control and history
When you're making manual, ad hoc changes to systems, you can't roll them back to a point in time. It's hard to undo a whole series of changes; you don't have a way of keeping track of what you did and how things changed.
This is bad enough when there's just one of you. When you're working in a team, it gets even worse, with everybody making independent changes and getting in each other's way.
When you have a problem, you need a way to know what changed and when, and who did it. And you also need to be able to set your configuration back to any previously stable state.
Wouldn't it be nice if you could go back in time?
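When your configuration is code, you can, more or less: keep your manifests in a version control system such as Git, and the familiar tools give you history, blame, and rollback for your infrastructure. For example (the commit reference is a placeholder):

```
$ git log -p manifests/     # what changed, when, and who changed it
$ git revert <commit>       # back out a specific change
```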
Why not just write shell scripts?
Many people manage configuration with shell scripts, which is better than doing it manually, but not much. Some of the problems with shell scripts include the following:
- Fragile and non-portable
- Hard to maintain
- Not easy to read as documentation
- Very site-specific
- Not a good programming language
- Hard to apply changes to existing servers
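The last point is worth a concrete illustration. A script that runs install and restart commands behaves differently on a fresh server than on one that's already half-configured, whereas a declarative resource describes the end state and is safe to apply repeatedly (package and service names vary by platform; 'ntp' is just an example):

```puppet
# Declares what should be true, not what commands to run: applying this
# to a server that's already correct changes nothing, and applying it
# to one that's drifted fixes only what's wrong.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}
```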
Why not just use containers?
Containers! Is there any word more thrilling to the human soul? Many people feel as though containers are going to make configuration management problems just go away. This feeling rarely lasts beyond the first few hours of trying to containerize an app. Yes, containers make it easy to deploy and manage software, but where do containers come from? It turns out someone has to build and maintain them, and that means managing Dockerfiles, volumes, networks, clusters, image repositories, dependencies, and so on. In other words, configuration. There is an axiom of computer science which I just invented, called The Law of Conservation of Pain. If you save yourself pain in one place, it pops up again in another. Whatever cool new technology comes along, it won't solve all our problems; at best, it will replace them with refreshingly different problems.
Yes, containers are great, but the truth is, container-based systems require even more configuration management. You need to configure the nodes that run the containers, build and update the container images based on a central policy, create and maintain the container network and clusters, and so on.
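Puppet fits naturally here, too. At a minimum, the container hosts themselves are just servers that need configuring; the sketch below uses only core resources (the package and service names are the Debian/Ubuntu ones and will differ elsewhere), while modules such as puppetlabs/docker can go further and manage images and containers as well.

```puppet
# Managing a container host with core Puppet resources: install the
# engine and keep it running. Names are platform-dependent.
package { 'docker.io':
  ensure => installed,
}

service { 'docker':
  ensure  => running,
  enable  => true,
  require => Package['docker.io'],
}
```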
Why not just use serverless?
If containers are powered by magic pixies, serverless architectures are pure fairy dust. The promise is that you just push your app to the cloud, and the cloud takes care of deploying, scaling, load balancing, monitoring, and so forth. Like most things, the reality doesn't quite live up to the marketing. Unfortunately, serverless isn't actually serverless: it just means your business is running on servers you don't have direct control over, plus, you have higher fixed costs because you're paying someone else to run them for you. Serverless can be a good way to get started, but it's not a long-term solution, because ultimately, you need to own your own configuration.