What does all this mean for our Infrastructure-as-Code deployments?
So far, we have spoken a lot about the approaches and journeys people take to get to the point where they are considering Infrastructure as Code. So, before we look at the tooling itself in Chapter 2, Ansible and Terraform beyond the Documentation, let’s talk about some of the actual use cases.
In my opinion, the most significant advantage of using Infrastructure as Code is consistency – if you need to repeat a process or deployment more than once, then define your deployment as Infrastructure as Code.
This will make sure that resources are deployed the same every time, no matter who is deploying them; if everyone is using the same set of code, then it stands to reason that the outputs will be the same, apart from the values you deliberately expose as variables, such as SKUs, resource names, and so on.
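To make this a little more concrete, here is a minimal sketch of what such a deployment definition might look like as an Ansible playbook (one of the tools we will cover in Chapter 2). The modules come from the azure.azcollection collection, and the resource names, location, and SKU are purely illustrative values rather than recommendations:

```yaml
# site.yml - an illustrative playbook; the resource names, location,
# and SKU below are example values only
- name: Deploy the shared environment definition
  hosts: localhost
  connection: local
  vars:
    # Defaults live in source control, so everyone deploys the same values
    resource_group_name: rg-example-dev
    location: uksouth
    storage_account_name: examplestoredev001
    storage_account_sku: Standard_LRS
  tasks:
    - name: Ensure the resource group exists
      azure.azcollection.azure_rm_resourcegroup:
        name: "{{ resource_group_name }}"
        location: "{{ location }}"

    - name: Ensure the storage account exists with the agreed SKU
      azure.azcollection.azure_rm_storageaccount:
        resource_group: "{{ resource_group_name }}"
        name: "{{ storage_account_name }}"
        account_type: "{{ storage_account_sku }}"
```

Whoever runs this playbook, and however many times they run it, the same resources come out the other side; the only things that change between environments are the values you explicitly override, for example with `ansible-playbook site.yml -e "resource_group_name=rg-example-test"`.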
An Infrastructure-as-Code approach not only gives you consistency between the team members deploying the code but also between environments. Before I started defining my deployments as Infrastructure as Code, configuration drift between environments was quite a significant issue: environments were online for so long that tweaks were applied directly and never carried across, so when code moved between my development, test, and, finally, production environments, unexpected things would start to happen.
Next up is collaboration; as your infrastructure is defined in code, you can use the same development workflows you use for your applications. I am sure that most of you use a version control system for your code, more than likely Git, via hosted services such as GitHub, GitLab, Bitbucket, or Azure DevOps; if so, you already have everything in place to track changes and collaborate on your infrastructure configuration.
You can also extend this further by introducing branching and pull requests based on your existing procedures, so that changes are reviewed and tested before they are merged, making the ongoing maintenance and development of your Infrastructure-as-Code projects genuinely collaborative.
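To give a flavour of what that testing can look like, the following GitHub Actions workflow is a minimal, illustrative sketch that checks the formatting of a Terraform configuration and validates it whenever a pull request touches the infrastructure code; the directory layout and trigger paths are assumptions for the example rather than anything your project has to follow:

```yaml
# .github/workflows/pull-request-checks.yml - an illustrative sketch;
# the "infrastructure" directory is an assumed layout
name: Infrastructure pull request checks

on:
  pull_request:
    paths:
      - "infrastructure/**"

jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Check formatting and validate the configuration
        working-directory: infrastructure
        run: |
          terraform fmt -check
          terraform init -backend=false
          terraform validate
```

Reviewers then see the result of these checks alongside the proposed changes in the pull request, which keeps the conversation about a change in one place.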
Once you have your Infrastructure as Code hosted in version control, you can also take advantage of automation, reusing the same processes and pipelines you use to build your application, with services such as GitHub Actions or Azure DevOps Pipelines.
Using services such as these gives you the ability to execute tasks from a single location that is covered by the service’s role-based access control, rather than being reliant on each member of the team downloading and running the Infrastructure-as-Code deployments locally.
If team members were running the deployments locally, then each person who needs to deploy would also need quite a high level of access to the target resources, such as the public cloud you are deploying to.
Using automation solutions such as the ones mentioned previously means that you can allow people to use credentials in their pipelines without them having to know what the credentials are. This means you can grant the individuals a lower level of access to your resources – such as read-only – as they only need to view resources rather than manage them.
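As a rough illustration of that pattern, the following GitHub Actions workflow runs a Terraform deployment using credentials stored as repository secrets; the secret names are placeholders you would create yourself, and the same idea applies equally to Azure DevOps Pipelines or to an Ansible playbook run:

```yaml
# .github/workflows/deploy.yml - an illustrative sketch; the secret names
# are placeholders defined in the repository settings, not built-in values
name: Deploy infrastructure

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      # The pipeline consumes the credentials; team members never see them
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Terraform init and apply
        run: |
          terraform init
          terraform apply -auto-approve
```

Anyone on the team can propose a change through a pull request, but only the workflow itself ever holds the credentials that can modify the target subscription; the individuals can get by with read-only access for day-to-day troubleshooting.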
One significant side effect of this approach is that because people don’t have that level of access outside of the automation, they won’t be tempted to quickly jump into the portal and manually change something to fix it. Instead, they will need to update the code and run a deployment, meaning that the change is tracked and the execution logged, so you know who did what, when, and why.
Finally, something that we have already mentioned – cost savings. If you have your Infrastructure-as-Code deployments in version control and automated, then it’s not a stretch to deploy your infrastructure as needed rather than running it 24/7.
For example, if you have a pipeline that builds your application, then once that pipeline has executed successfully, it can trigger a second pipeline that builds the infrastructure; once the infrastructure is up, that in turn triggers a deployment of the application, and from there, your tests can run against the freshly deployed resources. The test results can be stored, and the infrastructure is then torn down as it is no longer needed.
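Sketched as a GitHub Actions workflow, and again purely as an illustration with placeholder script names standing in for whatever your project actually uses, that flow might look something like this:

```yaml
# .github/workflows/test-environment.yml - an illustrative sketch; the
# workflow name it watches and the scripts it calls are placeholders
name: Ephemeral test environment

on:
  workflow_run:
    workflows: ["Build application"]
    types: [completed]

jobs:
  provision:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Stand up the test infrastructure
        run: ./scripts/provision.sh       # e.g. terraform apply or ansible-playbook

  test:
    needs: provision
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy the build and run the tests
        run: ./scripts/deploy-and-test.sh

  teardown:
    needs: test
    if: always()                          # tear down even if the tests fail
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Destroy the test infrastructure
        run: ./scripts/teardown.sh        # e.g. terraform destroy
```

The important detail is the `if: always()` on the teardown job, which makes sure the environment is removed even when the tests fail, so you are never paying for a broken environment that nobody is using.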
This end-to-end process may take half an hour, but that’s half an hour’s worth of resource cost per run versus paying to keep those resources running 24/7, which I am sure you will agree is quite a saving.
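To put some purely illustrative numbers on that, suppose the test environment costs around $2 per hour to run: a half-hour pipeline run works out at roughly $1, whereas keeping the same environment running around the clock would cost in the region of $2 × 24 hours × 30 days, or about $1,440 a month. The exact figures will depend entirely on your resources and region, but the shape of the saving stays the same.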