
How-To Tutorials - DevOps

49 Articles

Xamarin Test Cloud for API Monitoring [Tutorial]

Gebin George
16 Jul 2018
7 min read
Xamarin Test Cloud can help us identify an application's functionality-related issues on real devices. It is a great source of application monitoring, because it lets us test on different mobile devices and with different versions of operating systems. Getting a detailed analysis of an application's functions is very important to make sure it runs as expected on our target devices. It is also critical that the application can run on different operating system versions, and that we analyze how it performs and how much memory it uses. In this mobile DevOps tutorial, we will discuss how to use Xamarin Test Cloud and the analytics it produces after running an application on different sets of devices. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

We will be using two different applications here to see the monitoring analytics and compare them, to get a better understanding of how this helps us identify various performance and functionality-related issues in our application. These are the applications we will be using: PhoneCallApp and Xamarin Store.

PhoneCallApp

Let's go through some steps to see how to monitor our PhoneCallApp:

1. Go to https://testcloud.xamarin.com/.
2. Click on the PhoneCallApp icon to get to the details of the test runs. On the next page, you'll see a list of tests run for the application. Because we have only run one test so far, Test Cloud does not provide us with the graphical metrics shown in the preceding screenshot. In the other examples we'll see next, you'll be able to see a more detailed comparison of different test runs.
3. Click on the test run from the list to see its results. The test run listed is the one we ran earlier in previous chapters and uploaded from our machine to Xamarin Test Cloud using the command line.

To get an idea of this interface, let's have a look at the different parts of Xamarin Test Cloud's interface. The overview screen shows a summary of all the tests run for this application: how many tests failed out of the total number of tests run, how many times the app ran on a device, how many devices these tests were run on, and much more. This screen is very useful for getting a brief idea of how your application is doing on different devices and OS versions.

The next thing you'll see in the left pane is the list of UITests included in the test run. This screen lists all the Xamarin.UITests that you included in your project. You can click on these different tests to see their respective results on the right side of the screen. Let's click on the test from the list in the preceding screen. This will take us to the next screen, which has detailed reports for the test run.

Have a close look at the left pane on this screen. It shows the steps of the test run on the device. These steps are simply what we had written previously in the code to take a screenshot of every activity the test performs. The steps are as follows (we are using the screens of the test code written in previous chapters here):

App started: Take a screenshot when the app starts; this was written in the BeforeEachTest() method in the Tests.cs file.
Call button pressed: This step is when the Xamarin.UITest presses the call button to make a call.
Failed step (the assert): This is the last step and is shown to provide proof of the failed step, so you can see the outcome we received and compare it with what was expected. This was the final assert that decides whether the test passes or not, based on the outcome of the Assert.IsTrue() condition.

You can click on each of these steps in the left pane and analyze the screenshots taken to see exactly what went on during the test. This is a great way to see exactly what went wrong when a test fails. Sometimes, though, the screenshots are not enough to identify the issue. For a more detailed analysis, Test Cloud also provides us with the Device Log, as shown in the following screenshot. Device logs are a great way to see what's going on under the hood and get more detailed information about the application's behavior and how the device itself behaves when the application runs on it. This can help pinpoint the issue when a test fails on a device; logs are always a savior in that sort of scenario. Click on the Device Log and you can see step-by-step logs for each screenshot on the same screen.

When a test fails, Test Cloud provides us with one more option, Test Failures. It's very useful for automated test developers to see the exception information when a test fails. Last but not least, there is also a Test Log option, which can be used to get a consolidated log of the entire test run.

Xamarin Store app

Now that we have seen the different options provided by Test Cloud to monitor our application and its functionality using test runs, let's see how the dashboard and tests look when we have multiple test runs on various physical devices with different OS versions. This will give us a better idea of how comparative monitoring can be done on Test Cloud to analyze an application's behavior on different devices, and compare them with one another. The Xamarin Store application is a sample application provided by Test Cloud on its platform to help you understand the platform and get an idea of the dashboard.

Let's go through the steps to understand how to monitor your application running on multiple devices, and how to compare different test runs:

1. Go to the Test Cloud home page, just like in the previous example, and click on the Xamarin Store icon.
2. On the next screen, you'll see a graphical representation of different test runs and brief information about how many tests failed out of the total tests run, what the application size is, and its peak memory usage during different test runs. This gives us a nice comparative look at how our application is performing across test runs. It is possible that the application was performing fine during the first run, and then some code changes made some functionality fail. So, this graph is very useful for monitoring a timeline of changes that affected application functionality.
3. You can further click on the graph or the test run to see an overview of it. This screen gives us a great view of how an application running on different devices can be monitored. It's a very nice way to keep track of the application on different devices and OS versions.
4. Let's click on one of the steps to see the results of the step on multiple devices. The red icon shows failed tests. This page shows all the devices you chose to run the test on: it shows the devices the test passed on, and shows a red flag on the devices where it failed. You can further click on each device to get device-specific screens and logs.

To summarize, we performed API monitoring efficiently using Xamarin Test Cloud. If you found this post useful, do check out the book Mobile DevOps to deliver continuous integration and delivery for mobile applications.

Read next
API Gateway and its Need
API and Intent-Driven Networking
What is Azure API Management?
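For reference, here is a minimal sketch of what the Xamarin.UITest code behind these steps might look like. The class name, the marked queries, and the assertion message are assumptions for illustration, not the book's exact code:

```csharp
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;

[TestFixture]
public class PhoneCallAppTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest()
    {
        app = ConfigureApp.Android.StartApp();
        // "App started" step: take a screenshot as soon as the app launches
        app.Screenshot("App started");
    }

    [Test]
    public void CallButtonPlacesCall()
    {
        // "Call button pressed" step: tap the call button and record a screenshot
        app.Tap(c => c.Marked("callButton"));   // "callButton" is an assumed automation id
        app.Screenshot("Call button pressed");

        // Final assert: this is the step Test Cloud reports as "Failed step (the assert)"
        AppResult[] results = app.Query(c => c.Marked("callStatusLabel")); // assumed label
        Assert.IsTrue(results.Any(), "Expected the call status label to be present.");
    }
}
```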


Create a TeamCity project [Tutorial]

Gebin George
12 Jul 2018
3 min read
TeamCity is one of the most prominent tools used by DevOps professionals to perform continuous integration and delivery effectively. It plays an important role when it comes to mobile-level DevOps implementation. In this article, we will see how to create a TeamCity project. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Once the installation is done, the TeamCity web user interface will open in the browser and we can create a new TeamCity project there. To do so, follow these steps:

1. Once you have logged in to the TeamCity UI, click on Create project.
2. To connect to our project from GitHub, click on From GitHub on the next screen. This will open a popup with instructions to add a TeamCity application to your GitHub account.
3. Click on the register TeamCity link and it should take you to the GitHub page where you can register a new OAuth app. Give the details of the application, homepage URL, and callback URL, as shown in the following screenshot, and register the OAuth app.
4. Once you register, on the next screen you'll get a Client ID and Client Secret; copy those details, since they will be required for the TeamCity project.
5. Go back to TeamCity, put the Client ID and Client Secret in the required fields, and click Save.
6. Next, you need to do a one-time sign-in to allow TeamCity to use GitHub repositories. Click on Sign in to GitHub, then authorize the TeamCity app to use GitHub by clicking on Authorize app.
7. Once authorized, select the PhoneCallApp repository from the list of repositories shown on TeamCity.
8. On the next screen, TeamCity will offer to create a new project from the selected URL. Give it a name and click Proceed.

This should create two things: first, a trigger in TeamCity so that each code check-in triggers a build, and second, a build step created automatically from the repository. We need to configure the build steps manually and use the build scripts described in the Creating a build script section. Use those scripts, in the order described in the previous steps, to create the build steps in TeamCity. Finally, your build steps should look like the following screenshot, consisting of all the steps mentioned in the Creating a build script section.

Now, your TeamCity continuous build is ready, and a trigger is already configured to perform this build on each code check-in, or whenever it finds any code changes in the repository. This finally provides you with an Android package that is ready to be distributed.

To summarize, we created a TeamCity project for Mobile DevOps. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your application development lifecycle.

Read next
Introduction to TeamCity
Getting Started with TeamCity
Jenkins 2.0: The impetus for DevOps Movement
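The build scripts from the Creating a build script section are not reproduced in this excerpt. As a rough illustration only, a command-line build step for a Xamarin.Android app such as PhoneCallApp typically restores packages and then produces a signed package; the solution name and configuration below are assumptions, not the book's exact script:

```sh
# Restore NuGet packages for the solution (solution name is a placeholder)
nuget restore PhoneCallApp.sln

# Build the solution and produce a signed Android package (.apk)
msbuild PhoneCallApp.sln /t:SignAndroidPackage /p:Configuration=Release
```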


Debugging Xamarin Application on Visual Studio [Tutorial]

Gebin George
11 Jul 2018
6 min read
Visual Studio is a great IDE for debugging any application, whether it's a web, mobile, or desktop application. It uses the same debugger that comes with the IDE for all three, and it is very easy to follow. In this tutorial, we will learn how to debug a mobile application using Visual Studio. This article is an excerpt from the book Mobile DevOps, written by Rohin Tak and Jhalak Modi.

Using the output window

The output window in Visual Studio is a window where you can see the output of what's happening. To view the output window in Visual Studio, go to View and click Output. This will open a small window at the bottom where you can see the current and useful output being written by Visual Studio. For example, this is what is shown in the output window when we rebuild the application.

Using the Console class to show useful output

The Console class can be used to print useful information, such as logs, to the output window to get an idea of what steps are being executed. This can help if a method is failing after certain steps, as that will be printed in the output window. To achieve this, C# has the Console class, which is a static class. This class has methods such as Write() and WriteLine() to write anything to the output window. The Write() method writes anything to the output window, and the WriteLine() method writes the same way, with a new line at the end. Look at the following screenshot and analyze how Console.WriteLine() is used to break the method down into several steps (it is the same Click event method that was written while developing PhoneCallApp). Add Console.WriteLine() to your code, as shown in the preceding screenshot. Now, run the application, perform the operation, and see the output written as per your code. This way, Console.WriteLine() can be used to write useful step-based outputs/logs to the output window, which can be analyzed to identify issues while debugging.

Using breakpoints

As described earlier, breakpoints are a great way to dig deep into the code without much hassle. They can help you check variables and their values, and the flow at a point or line in the code. Using breakpoints is very simple: the simplest way to add a breakpoint on a line is to click on the margin on the left side, in front of the line, or to click on the line and hit the F9 key. You'll see a red dot in the margin area where you clicked when the breakpoint is set, as shown in the preceding screenshot. Now, run the application and tap the call button; the flow should stop at the breakpoint and the line will turn yellow when it does. At this point, you can inspect the values of variables before the breakpoint line by hovering over them.

Setting a conditional breakpoint

You can also set a conditional breakpoint in the code, which is basically telling Visual Studio to pause the flow only when a certain condition is met. Right-click on the breakpoint set in the previous steps, and click Conditions. This will open a small window over the code to set a condition for the breakpoint. For example, in the following screenshot, a condition is set to when phoneNumber == "9900000700". So, the breakpoint will only be hit when this condition is met; otherwise, it will not be hit.

Stepping through the code

When a breakpoint has been reached, the debug tools enable you to take control over the program's execution flow. You'll see some buttons in the toolbar, allowing you to run and step through the code. You can hover over these buttons to see their respective names:

Step Over (F10): This executes the next line of code. If the next line is a function call, Step Over will execute the function and stop after it.
Step Into (F11): Step Into will stop at the next line in the case of a function call, allowing you to continue line-by-line debugging of the function. If the next line is not a function, it behaves the same as Step Over.
Step Out (Shift + F11): This will return to the line where the current function was called.
Continue: This will continue the execution and run until the next breakpoint is reached.
Stop Debugging: This will stop the debugging process.

Using a watch

A watch is a very useful function in debugging; it allows us to see the values, types, and other details related to variables, and to evaluate them in a better way than hovering over the variables. There are two types of watch tools available in Visual Studio.

QuickWatch

QuickWatch is similar to a watch, but as the name suggests, it allows us to evaluate values at that moment. To use QuickWatch in Visual Studio, right-click on the variable you want to analyze and click on QuickWatch. This will open a new window where you can see the type, value, and other details related to the variable. This is very useful when a variable has a long value or string that cannot be read and evaluated properly by just hovering over the variable.

Adding a watch

Adding a watch is similar to QuickWatch, but it is more useful when you have multiple variables to analyze, and looking at each variable's value can take a lot of time. To add a watch on a variable, right-click on the variable and click Add Watch. This will add the variable to the watch and always show you its value, reflecting any change at runtime. You can also see these variable values in a particular format for different data types, so you can have an XML value shown in XML format, or a JSON object value shown in .json format. It is a lifesaver when you want to evaluate a variable's value at each step of the code and see how it changes with every line.

To summarize, we learned how to debug a Xamarin application using Visual Studio. If you found this post useful, do check out the book Mobile DevOps, to continuously improve your mobile application development process.

Read next
Debugging Your .NET
Debugging in Vulkan
Debugging Your .NET Application
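As a rough sketch of the instrumentation described above, the PhoneCallApp call-button Click handler might look like the following once Console.WriteLine() calls are added. The activity, control names, and dialer intent are illustrative assumptions, not the book's exact code:

```csharp
using System;
using Android.App;
using Android.Content;
using Android.Widget;

public class MainActivity : Activity
{
    EditText phoneNumberText;   // assumed input field holding the number to dial

    void CallButton_Click(object sender, EventArgs e)
    {
        Console.WriteLine("Step 1: Call button clicked");

        string phoneNumber = phoneNumberText.Text;
        Console.WriteLine("Step 2: Phone number read: " + phoneNumber);

        if (string.IsNullOrWhiteSpace(phoneNumber))
        {
            Console.WriteLine("Step 3: No phone number entered, aborting");
            return;
        }

        // Launch the dialer with the entered number
        var intent = new Intent(Intent.ActionDial, Android.Net.Uri.Parse("tel:" + phoneNumber));
        StartActivity(intent);
        Console.WriteLine("Step 4: Dialer intent started");
    }
}
```

Each of these messages appears in the output window as the handler runs, which makes it easy to see after which step a failure occurred. A conditional breakpoint on the StartActivity(intent) line with the condition phoneNumber == "9900000700" would pause execution only for that number, as described above.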


SDLC puts process at the center of software engineering

Richard Gall
22 May 2018
7 min read
What is SDLC?

SDLC stands for software development lifecycle. It refers to all of the different steps that software engineers need to take when building software. This includes planning, creating, building, and then deploying software, but maintenance is crucial too. In some instances you may need to change or replace software - that is part of the software development lifecycle as well.

SDLC is about software quality and development efficiency

SDLC is about more than just the steps in the software development process. It's also about managing that process in a way that improves quality while also improving efficiency. Ultimately, there are numerous ways of approaching the software development lifecycle - Waterfall and Agile are the two most well-known methodologies for managing the development lifecycle. There are plenty of reasons you might choose one over another. What is most important is that you pay close attention to what the software development lifecycle looks like. It sounds obvious, but it is very difficult to build software without a plan in place. Things can get chaotic very quickly. If they do, that's bad news for you, as the developer, and bad news for users as well. When you don't follow the software development lifecycle properly, you're likely to miss user requirements, and faults will also find their way into your code.

The stages of the software development lifecycle (SDLC)

There are a number of ways you might see an SDLC presented, but the core should always be the same. And yes, different software methodologies like Agile and Waterfall outline very different ways of working, but broadly the steps should be the same. What differs between methodologies is how each step fits together.

Step 1: Requirement analysis

This is the first step in any SDLC. It is about understanding everything that needs to be understood in as much practical detail as possible. It might mean you need to find out about specific problems that need to be solved. Or, there might be certain things that users need that you have to make sure are in the software. To do this, you need to do good quality research, from discussing user needs with a product manager to revisiting documentation on your current systems. It is often this step that is the most challenging in the software development lifecycle, because you need to involve a wide range of stakeholders. Some of these might not be technical, and sometimes you might simply use a different vocabulary. It's essential that you have a shared language to describe everything from the user needs to the problems you might be trying to solve.

Step 2: Design the software

Once you have done a requirement analysis you can begin designing the software. You do this by turning all the requirements and software specifications into a design document. This might feel like it slows down the development process, but if you don't do this, not only are you wasting the time taken to do your requirement analysis, you're also likely to build poor quality or even faulty software. While it's important not to design by committee or get slowed down by intensive navel-gazing, keeping stakeholders updated and requesting feedback and input where necessary can be incredibly important. Sometimes it's worth taking that extra bit of time, as it could solve a lot of problems later in the SDLC.

Step 3: Plan the project

Once you have captured requirements and feel you have properly understood exactly what needs to be delivered - as well as any potential constraints - you need to plan out how you're going to build that software. To do this you'll need an overview of the resources at your disposal. These are the sorts of questions you'll need to consider at this stage: Who is available? Are there any risks? How can we mitigate them? What budget do we have for this project? Are there any other competing projects? In truth, you'll probably do this during the design stage. The design document you create should, of course, be developed with context in mind. It's pointless creating a stunning design document outlining a detailed and extensive software development project if it's simply not realistic for your team to deliver it.

Step 4: Start building the software

Now you can finally get down to the business of actually writing code. With all the work you have done in the previous steps, this should be a little easier. However, it's important to remember that imperfection is part and parcel of software engineering. There will always be flaws in your software. That doesn't necessarily mean bugs or errors; it could be small compromises that need to be made in order to ensure something works. The best approach here is to deliver rapidly. The sooner you can get software 'out there', the faster you can make changes and improvements if (or more likely when) they're needed. It's worth involving stakeholders at this stage - transparency in the development process is a good way to build collaboration and ensure the end result delivers on what was initially in the requirements.

Step 5: Testing the software

Testing is, of course, an essential step in the software development lifecycle. This is where you identify any problems. That might be errors or performance issues, but you may also find you haven't quite been able to deliver what you said you would in the design document. A continuous integration server is important here, as it can help detect problems with the software automatically. The rise of automated software testing has been incredibly valuable; it means that instead of spending time manually running tests, engineers can dedicate more time to fixing problems and optimizing code.

Step 6: Deploy the software

The next step is to deploy the software to production. All the elements of the software should now be in place, and you want it to simply be used. It's important to remember that there will be problems here. Testing can never capture every issue, and feedback and insight from users are going to be much more valuable than automated tests run on a server. Continuous delivery pipelines allow you to deploy software very efficiently. They make the build-test-deploy steps of the software development lifecycle relatively frictionless. Okay, maybe not frictionless - there's going to be plenty of friction when you're developing software. But they do allow you to push software into production very quickly.

Step 7: Maintaining software

Software maintenance is a core part of the day-to-day life of a software engineer, and it's a crucial step in the SDLC. There are two forms of software maintenance, both of equal importance: evolutive maintenance and corrective maintenance.

Evolutive maintenance

As the name suggests, evolutive maintenance is where you evolve software by adding new functionality or making larger changes to the logic of the software. These changes should be a response to feedback from stakeholders or, more importantly, users. There may be times when business needs dictate this type of maintenance - this is never ideal, but it is nevertheless an important part of a software engineer's work.

Corrective maintenance

Corrective maintenance isn't quite as interesting or creative as evolutive maintenance - it's about fixing bugs and errors in the code. This sort of maintenance can feel like a chore, and ideally you want to minimize the amount of time you spend doing it. However, if you're following the SDLC closely, you shouldn't find too many bugs in your software.

The benefits of SDLC are obvious

The benefits of SDLC are clear. It puts process at the center of software engineering. Without those processes it becomes incredibly difficult to build the software that stakeholders and users want. And if you don't care about users then, really, why build software at all? It's true that DevOps has done a lot to change SDLC. Arguably, it is an area that is more important and more hotly debated than ever before. It's not difficult to find someone with an opinion on the best way to build something. Equally, as software becomes more fragmented and mutable, thanks to the emergence of cloud and architectural trends like microservices and serverless, the way we design, build and deploy software has never felt more urgent.

Read next
DevOps Engineering and Full-Stack Development – 2 Sides of the Same Agile Coin


What is a multi layered software architecture?

Packt Editorial Staff
17 May 2018
7 min read
Multi layered software architecture is one of the most popular architectural patterns today. It moderates the increasing complexity of modern applications. It also makes it easier to work in a more agile manner. That's important when you consider the dominance of DevOps and other similar methodologies today. Sometimes called tiered architecture, or n-tier architecture, a multi layered software architecture consists of various layers, each of which corresponds to a different service or integration. Because each layer is separate, making changes to each layer is easier than having to tackle the entire architecture. Let's take a look at how a multi layered software architecture works, and what its advantages and disadvantages are. This has been taken from the book Architectural Patterns. Find it here.

What does a layered software architecture consist of?

Before we get into multi layered architecture, let's start with the simplest form of layered architecture - three tiered architecture. This is a good place to start because all layered software architectures contain these three elements. These are the foundations:

Presentation layer: This is the first and topmost layer present in the application. This tier provides presentation services, that is, the presentation of content to the end user through a GUI. It can be accessed through any type of client device, like a desktop, laptop, tablet, mobile, thin client, and so on. For the content to be displayed to the user, the relevant web pages should be fetched by the web browser or other presentation component running on the client device. To present the content, it is essential for this tier to interact with the other tiers beneath it.

Application layer: This is the middle tier of the architecture. This is the tier in which the business logic of the application runs. Business logic is the set of rules that are required for running the application as per the guidelines laid down by the organization. The components of this tier typically run on one or more application servers.

Data layer: This is the lowest tier of the architecture and is mainly concerned with the storage and retrieval of application data. The application data is typically stored in a database server, file server, or any other device or media that supports data access logic and provides the necessary steps to ensure that only the data is exposed, without providing any access to the data storage and retrieval mechanisms. The data tier does this by providing an API to the application tier. The provision of this API ensures complete transparency for the data operations done in this tier, without affecting the application tier. For example, updates or upgrades to the systems in this tier do not affect the application tier of the architecture.

The diagram below shows how a simple layered architecture with 3 tiers works.

These three layers are essential. But other layers can be built on top of them. That's when we get into multi layered architecture. It's sometimes called n-tiered architecture because the number of tiers or layers (n) could be anything! It depends on what you need and how much complexity you're able to handle.

Multi layered software architecture

A multi layered software architecture still has the presentation layer and data layer. It simply splits up and expands the application layer. These additional aspects within the application layer are essentially different services. This means your software should now be more scalable and have extra dimensions of functionality. Of course, the distribution of application code and functions among the various tiers will vary from one architectural design to another, but the concept remains the same. The diagram below illustrates what a multi layered software architecture looks like. As you can see, it's a little more complex than a three-tiered architecture, but it does increase scalability quite significantly.

What are the benefits of a layered software architecture?

A layered software architecture has a number of benefits - that's why it has become such a popular architectural pattern in recent years. Most importantly, tiered segregation allows you to manage and maintain each layer accordingly. In theory it should greatly simplify the way you manage your software infrastructure. The multi layered approach is particularly good for developing web-scale, production-grade, and cloud-hosted applications very quickly and relatively risk-free. It also makes it easier to update any legacy systems - when your architecture is broken up into multiple layers, the changes that need to be made should be simpler and less extensive than they might otherwise have to be.

When should you use a multi layered software architecture?

The argument for a multi layered software architecture is pretty clear. However, there are some instances when it is particularly appropriate:

If you are building a system in which it is possible to split the application logic into smaller components that could be spread across several servers. This could lead to the design of multiple tiers in the application tier.
If the system under consideration requires faster network communications, high reliability, and great performance, then n-tier can provide that, as this architectural pattern is designed to reduce the overhead caused by network traffic.

An example of a multi layered software architecture

We can illustrate the working of a multi layered architecture with the example of a shopping cart web application, which is present on all e-commerce sites. The shopping cart web application is used by the e-commerce site user to complete the purchase of items through the site. You'd expect the application to have several features that allow the user to:

Add selected items to the cart
Change the quantity of items in their cart
Make payments

The client tier, which is present in the shopping cart application, interacts with the end user through a GUI. The client tier also interacts with the application that runs on the application servers present in multiple tiers. Since the shopping cart is a web application, the client tier contains the web browser. The presentation tier present in the shopping cart application displays information related to services like browsing merchandise, buying items, adding them to the shopping cart, and so on. The presentation tier communicates with the other tiers by sending results to the client tier and all other tiers present in the network. The presentation tier also makes calls to database stored procedures and web services. All these activities are done with the objective of providing a quick response time to the end user. The presentation tier plays a vital role by acting as the glue that binds the entire shopping cart application together, allowing the functions present in different tiers to communicate with each other and display the outputs to the end user through the web browser.

In this multi layered architecture, the business logic required for processing activities like the calculation of shipping costs and so on is pulled from the application tier to the presentation tier. The application tier also acts as the integration layer and allows the applications to communicate seamlessly with both the data tier and the presentation tier. The last tier, the data tier, is used to maintain data. This layer typically contains database servers. It maintains data independently from the application server and the business logic. This approach provides enhanced scalability and performance to the data tier.

Read next
Microservices and Service Oriented Architecture
What is serverless architecture and why should I be interested?
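As a rough sketch of the separation described above (the types and method names are illustrative assumptions, not taken from the book), the three tiers of the shopping cart example might be expressed like this, with each layer only talking to the layer directly beneath it:

```csharp
using System.Collections.Generic;
using System.Linq;

// Data tier: exposes data operations through an API and hides the storage mechanism.
public interface ICartRepository
{
    IList<CartItem> GetItems(int cartId);
    void SaveItem(int cartId, CartItem item);
}

public record CartItem(string Sku, int Quantity, decimal UnitPrice);

// Application tier: business logic such as pricing rules runs here.
public class CartService
{
    private readonly ICartRepository repository;

    public CartService(ICartRepository repository) => this.repository = repository;

    public void AddItem(int cartId, CartItem item) => repository.SaveItem(cartId, item);

    public decimal GetTotal(int cartId) =>
        repository.GetItems(cartId).Sum(i => i.Quantity * i.UnitPrice);
}

// Presentation tier: formats results for the GUI and never touches storage directly.
public class CartPage
{
    private readonly CartService service;

    public CartPage(CartService service) => this.service = service;

    public string RenderTotal(int cartId) => $"Cart total: {service.GetTotal(cartId):C}";
}
```

Because each tier depends only on the one beneath it, swapping the database behind ICartRepository or adding another service alongside CartService does not require changes to the presentation code.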


Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)

Savia Lobo
09 May 2018
2 min read
At Build 2018, Microsoft announced that Azure Container Service (ACS), its managed Kubernetes service, is now Azure Kubernetes Service (AKS), which is currently in preview and will soon be generally available. AKS is also part of the Kubernetes Conformance Program, a certification program run by the Cloud Native Computing Foundation.

Azure Kubernetes Service (AKS) adds automated support for upgrades and scaling capabilities. It also includes self-healing aspects that aim to make spinning up containers on Kubernetes easier for developers. Developers now have added advantages with AKS, which include:

DevOps Project support for AKS: Now, with a few clicks, developers can create a new AKS cluster, containerize their applications, deploy with a VSTS CI/CD pipeline, and view integrated App Insights telemetry with the DevOps project.
New Azure Portal experience for AKS: This includes AKS create and browse experiences inside the Azure Portal, which makes it easier for cluster operators to configure and manage Kubernetes.

Some features of AKS:

Custom VNET with Azure CNI: AKS now supports deploying Kubernetes nodes into custom VNETs using Azure CNI, with configurable IP ranges for Kubernetes networking components.
Integration with Azure Monitor: AKS is now integrated directly into Azure Monitor for control plane telemetry, log aggregation, and container health monitoring. This provides operational visibility into one's Kubernetes environment directly from the Azure portal.
HTTP application routing: AKS also supports exposing public applications natively, using an Azure-integrated Kubernetes ingress controller. With this, customers can access their applications without having to configure DNS records and nameservers.

Microsoft has also introduced a new Dev Spaces capability. With AKS and Dev Spaces, all a new developer needs is their IDE and the Azure CLI. Developers can simply create a new Dev Space inside AKS and begin working on any component of their microservice environment safely, without impeding production traffic flows. Dev Spaces for AKS makes developing against a complex microservices environment simple. It is now available in private preview.

To know more about AKS in detail, visit the Microsoft Azure Blog. Here is a quick recap of what happened at Day 1 of the Microsoft Build Conference 2018, if you are interested.

Read next
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
Kubernetes 1.10 released
The key differences between Kubernetes and Docker Swarm
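For context, creating an AKS cluster and connecting to it from the Azure CLI looks roughly like the following; the resource group and cluster names are placeholders:

```sh
# Create a resource group and an AKS cluster (names are placeholders)
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster \
    --node-count 2 --enable-addons monitoring --generate-ssh-keys

# Fetch credentials so kubectl talks to the new cluster, then verify the nodes
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```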

Building Docker images using Dockerfiles

Aarthi Kumaraswamy
12 Apr 2018
8 min read
Docker images are read-only templates. They give us containers during runtime. Central to this is the concept of a 'base image'. Layers then sit on top of this base image. For example, you might have a base image of Fedora or Ubuntu, but you can then install packages or make modifications over the base image to create a new layer. The base image and new layer can then be treated as a completely new image. In the image below, Debian is the base image and emacs and Apache are the two layers added on top of it. They are highly portable and can be shared easily (Source: Docker Image layers).

Layers are transparently laid on top of the base image to create a single coherent filesystem. There are a couple of ways to create images: one is by manually committing layers, and the other is through Dockerfiles. In this recipe, we'll create images with Dockerfiles. Dockerfiles help us automate image creation and get precisely the same image every time we want it. The Docker builder reads instructions from a text file (a Dockerfile) and executes them one after the other, in order. It can be compared to Vagrantfiles, which let you configure VMs in a predictable manner.

Getting ready

A Dockerfile with build instructions.

Create an empty directory:

$ mkdir sample_image
$ cd sample_image

Create a file named Dockerfile with the following content:

$ cat Dockerfile
# Pick up the base image
FROM fedora
# Add author name
MAINTAINER Neependra Khare
# Add the command to run at the start of container
CMD date

How to do it…

Run the following command inside the directory where we created the Dockerfile to build the image:

$ docker build .

We did not specify any repository or tag name while building the image. We can give those with the -t option as follows:

$ docker build -t fedora/test .

The preceding output is different from what we did earlier. However, here we are using a cache after each instruction. Docker tries to save the intermediate images, as we saw earlier, and tries to use them in subsequent builds to accelerate the build process. If you don't want to cache the intermediate images, then add the --no-cache option to the build. Let's take a look at the available images now.

How it works…

A context defines the files used to build the Docker image. In the preceding command, we define the context for the build. The build is done by the Docker daemon and the entire context is transferred to the daemon. This is why we see the Sending build context to Docker daemon 2.048 kB message. If there is a file named .dockerignore in the current working directory with a list of files and directories (newline separated), then those files and directories will be ignored by the build context. More details about .dockerignore can be found at https://docs.docker.com/reference/builder/#the-dockerignore-file.

After executing each instruction, Docker commits the intermediate image and runs a container with it for the next instruction. After the next instruction has run, Docker will again commit the container to create the intermediate image and remove the intermediate container created in the previous step. For example, in the preceding screenshot, eb9f10384509 is an intermediate image and c5d4dd2b3db9 and ffb9303ab124 are the intermediate containers. After the last instruction is executed, the final image will be created. In this case, the final image is 4778dd1f1a7a. The -a option can be specified with the docker images command to look for intermediate layers:

$ docker images -a

There's more…

The format of the Dockerfile is:

INSTRUCTION arguments

Generally, instructions are given in uppercase, but they are not case sensitive. They are evaluated in order. A # at the beginning of a line is treated as a comment. Let's take a look at the different types of instructions:

FROM: This must be the first instruction of any Dockerfile, and it sets the base image for subsequent instructions. By default, the latest tag is assumed:

FROM <image>

Alternatively, a tag can be given:

FROM <image>:<tag>

There can be more than one FROM instruction in one Dockerfile to create multiple images. If only image names, such as Fedora and Ubuntu, are given, then the images will be downloaded from the default Docker registry (Docker Hub). If you want to use private or third-party images, then you have to mention them as follows:

[registry_hostname[:port]/][user_name/](repository_name:version_tag)

Here is an example using the preceding syntax:

FROM registry-host:5000/nkhare/f20:httpd

MAINTAINER: This sets the author for the generated image: MAINTAINER <name>.

RUN: We can execute the RUN instruction in two ways. First, run it in the shell (sh -c):

RUN <command> <param1> ... <paramN>

Second, directly run an executable:

RUN ["executable", "param1",...,"paramN"]

As we know, with Docker we create an overlay - a layer on top of another layer - to make the resulting image. Through each RUN instruction, we create and commit a layer on top of the earlier committed layer. A container can be started from any of the committed layers. By default, Docker tries to cache the layers committed by different RUN instructions, so that they can be used in subsequent builds. However, this behavior can be turned off using the --no-cache flag while building the image.

LABEL: Docker 1.6 added a new feature to attach arbitrary key-value pairs to Docker images and containers. We covered part of this in the Labeling and filtering containers recipe in Chapter 2, Working with Docker Containers. To give a label to an image, we use the LABEL instruction in the Dockerfile, for example LABEL distro=fedora21.

CMD: The CMD instruction provides a default executable while starting a container. If the CMD instruction does not have an executable (the second form below), then it provides arguments to ENTRYPOINT:

CMD ["executable", "param1",...,"paramN"]
CMD ["param1", ... , "paramN"]
CMD <command> <param1> ... <paramN>

Only one CMD instruction is allowed in a Dockerfile. If more than one is specified, then only the last one will be honored.

ENTRYPOINT: This helps us configure the container as an executable. Similar to CMD, there can be at most one ENTRYPOINT instruction; if more than one is specified, then only the last one will be honored:

ENTRYPOINT ["executable", "param1",...,"paramN"]
ENTRYPOINT <command> <param1> ... <paramN>

Once the parameters are defined with the ENTRYPOINT instruction, they cannot be overwritten at runtime. However, ENTRYPOINT can be used with CMD if we want to pass different parameters to ENTRYPOINT.

EXPOSE: This exposes the network ports on the container on which it will listen at runtime:

EXPOSE <port> [<port> ... ]

We can also expose a port while starting the container. We covered this in the Exposing a port while starting a container recipe in Chapter 2, Working with Docker Containers.

ENV: This will set the environment variable <key> to <value>. It will be passed to all future instructions and will persist when a container is run from the resulting image:

ENV <key> <value>

ADD: This copies files from the source to the destination:

ADD <src> <dest>

The following form is for paths containing whitespace:

ADD ["<src>"... "<dest>"]

<src>: This must be a file or directory inside the build directory from which we are building the image, which is also called the context of the build. A source can be a remote URL as well.
<dest>: This must be the absolute path inside the container to which the files/directories from the source will be copied.

COPY: This is similar to ADD:

COPY <src> <dest>
COPY ["<src>"... "<dest>"]

VOLUME: This instruction will create a mount point with the given name and flag it as mounting an external volume, using the following syntax:

VOLUME ["/data"]

Alternatively, you can use the following form:

VOLUME /data

USER: This sets the username for any of the run instructions that follow it, using the following syntax:

USER <username>/<UID>

WORKDIR: This sets the working directory for the RUN, CMD, and ENTRYPOINT instructions that follow it. It can have multiple entries in the same Dockerfile. A relative path can be given, which will be relative to the earlier WORKDIR instruction, using the following syntax:

WORKDIR <PATH>

ONBUILD: This adds trigger instructions to the image that will be executed later, when this image is used as the base image of another image. The trigger will run as part of the FROM instruction in the downstream Dockerfile, using the following syntax:

ONBUILD [INSTRUCTION]

See also

Look at the help option of docker build:

$ docker build --help

The documentation on the Docker website: https://docs.docker.com/reference/builder/

You just enjoyed an excerpt from the book DevOps: Puppet, Docker, and Kubernetes by Thomas Uphill, John Arundel, Neependra Khare, Hideto Saito, Hui-Chuan Chloe Lee, and Ke-Jou Carol Hsu. To master working with Docker containers, images and much more, check out this book today!

Read other posts:
How to publish Docker and integrate with Maven
Building Scalable Microservices
How to deploy RethinkDB using Docker
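To tie several of these instructions together, here is a small illustrative Dockerfile; the package, file, and port choices are arbitrary examples rather than part of the recipe:

```dockerfile
# Illustrative Dockerfile combining several of the instructions described above
FROM fedora
MAINTAINER Example Author <author@example.com>
LABEL distro=fedora21
# Install a web server in a new layer on top of the base image
RUN dnf install -y httpd && dnf clean all
# Copy site content from the build context into the image
COPY index.html /var/www/html/index.html
ENV WEB_ROOT /var/www/html
EXPOSE 80
# Run Apache in the foreground when a container starts
ENTRYPOINT ["/usr/sbin/httpd"]
CMD ["-DFOREGROUND"]
```

Building it with a tag follows the same pattern as earlier: docker build -t fedora/httpd-example .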


GitLab's new DevOps solution

Erik Kappelman
17 Jan 2018
5 min read
Can it be real? The complete DevOps toolchain integrated into one tool, one UI, and one process? GitLab seems to think so. GitLab has already made huge strides in terms of centralizing the DevOps process into a single tool. Up until now, most of the focus has been on creating a seamless development system, and operations have not been as important. What's new is the extension of the tool to include the operations side of DevOps as well as the development side.

Let's talk a little bit about what DevOps is in order to fully appreciate the advances offered by GitLab. DevOps is basically a holistic approach to software development, quality assurance, and operations. While each of these elements of software creation is distinct, they are all heavily reliant on the other elements to be effective. The DevOps approach is to acknowledge this interdependence and then try to leverage it to increase productivity and to enhance the final user experience. Two of the most talked about elements of DevOps are continuous integration and continuous deployment.

Continuous integration and deployment

Continuous integration and deployment are aimed at continuously integrating changes to a codebase, potentially from multiple sources, and then continuously deploying these changes into production. These tools require a pretty sophisticated automation and testing framework in order to be really effective. There are plenty of tools for one or the other, but the notion behind GitLab is essentially that if you can affect both of these processes from the same UI, these processes will be that much more efficient. GitLab has shown this to be true. There is also the human side to consider, that is, coming up with what tasks need to be performed, assigning these tasks to developers, and monitoring their progress. GitLab offers tools that help streamline this process as well. You can track issues and create issue boards to organize workflow, and these issue boards can be sliced a number of different ways so that most imaginable human organizational needs can be met.

Monitoring and delivery

So far, we've seen that DevOps is about bringing everything together into a smooth process, and GitLab wants that process to occur in one place. GitLab can help you from planning to deployment and everywhere in between. But GitLab isn't satisfied with stopping at deployment, and they shouldn't be. When we think about the three legs of DevOps - development, operations, and quality assurance and testing - what I've said about GitLab really only applies to the development leg. This is an unfortunately common problem with DevOps tools and organizational strategies. They seem to cater to developers and basically no one else. Maybe devs complain the most, I don't know. GitLab has basically solved the DevOps problems between planning and deployment and, naturally, wants to move on to the monitoring and delivery of applications.

This is a really exciting direction. After all, software is ultimately about making things happen. Sometimes it's easy to lose sight of this and only focus on the tools that make the software. It is sometimes tempting to view software development as being inherently important, but it's really not; it's a process of making stuff for people to use. If you get too far away from that truth, things can get sticky. I think this is part of the reason the Ops side of DevOps is often overlooked. Operations is concerned with managing the software out there in the wild. This includes dealing with network and hardware considerations and end users. GitLab wants operations to take place using the same UI as development. And why not? It's the same application, isn't it? And in addition to technical performance, what about how the users are interacting with the application? If the application is somehow monetized, why shouldn't that information also be available in the same UI as everything else having to do with this application? Again, it's still the same application.

One tool to rule them all

If you take a minute to step back and appreciate the vision of GitLab's current direction, I think you can see why this is so exciting. If GitLab is successful in the long term at extending its reach into every element of an application's lifecycle, including user interactions, productivity would skyrocket. This idea isn't really new. The 'one tool to rule them all' isn't even that imaginative of a concept. It's just that no one has ever really created this 'one tool.' I believe we are about to enter, or have already entered, a DevOps space race. I believe GitLab is comfortably leading the pack, but they will need to keep working hard if they want it to stay that way. I believe we will be getting the one tool to rule them all, and I believe it is going to be soon. The way things are looking, GitLab is going to be the one to bring it to us, but only time will tell.

Erik Kappelman wears many hats including blogger, developer, data consultant, economist, and transportation planner. He lives in Helena, Montana and works for the Department of Transportation as a transportation demand modeler.


The Cloud and the DevOps Revolution

Packt
14 Aug 2017
12 min read
Cloud and DevOps are two of the most important trends to emerge in technology. The reasons are clear - it's all about the amount of data that needs to be processed and managed in the applications and websites we use every day. The amount of data being processed and handled is huge. Every day, over a billion people visit Facebook; every hour, 18,000 hours of video are uploaded to YouTube; every second, Google processes 40,000 search queries. Being able to handle such a staggering scale isn't easy. Through the use of Amazon Web Services (AWS), you will be able to build out the key components needed to succeed at minimum cost and effort. This is an extract from Effective DevOps on AWS.

Thinking in terms of cloud and not infrastructure

The day I discovered that noise can damage hard drives: December 2011, sometime between Christmas and New Year's Eve. I started to receive dozens of alerts from our monitoring system. Apparently we had just lost connectivity to our European datacenter in Luxembourg. I rushed into the network operating center (NOC) hoping that it was only a small glitch in our monitoring system, maybe just a joke; after all, with so much redundancy, how could everything go offline? Unfortunately, when I got into the room, the big monitoring screens were all red - not a good sign. This was just the beginning of a very long nightmare. An electrician working in our datacenter had mistakenly triggered the fire alarm; within seconds the fire suppression system set off and released its argonite on top of our server racks. Unfortunately, this kind of fire suppression system makes so much noise when it releases its gas that the sound wave instantly killed hundreds and hundreds of hard drives, effectively shutting down our only European facility. It took months for us to get back on our feet. Where is the cloud when you need it! As Charles Phillips said it best: "Friends don't let friends build a datacenter."

Deploying your own hardware versus in the cloud

It wasn't long ago that tech companies small and large had to have a proper technical operations organization able to build out infrastructure. The process went a little bit like this: Fly to the location you want to put your infrastructure in, and tour the different datacenters and their facilities. Look at the floor considerations, power considerations, HVAC, fire prevention systems, physical security, and so on. Shop for an internet provider; ultimately, even though you are talking about servers and a lot more bandwidth, the process is the same - you want to get internet connectivity for your servers. Once that's done, it's time to get your hardware. Make the right decisions, because you are probably going to spend a big portion of your company's money on buying servers, switches, routers, firewalls, storage, UPSes (for when you have a power outage), KVMs, network cables, the labeler so dear to every system administrator's heart, and a bunch of spare parts: hard drives, RAID controllers, memory, power cables, you name it.

At that point, once the hardware is bought and shipped to the datacenter location, you can rack everything, wire all the servers, and power everything on. Your network team can then kick in and start establishing connectivity to the new datacenter using various links, configuring the edge routers, switches, top-of-rack switches, KVMs, and sometimes firewalls. Your storage team is next and provides the much-needed NAS or SAN; then comes your sysops team, who will image the servers, sometimes upgrade the BIOS, configure hardware RAID, and finally put an OS on those servers. Not only is this a full-time job for a big team, it also takes lots of time and money to even get there. Getting new servers up and running with AWS will take us minutes. In fact, more than just providing a server within minutes, we will soon see how to deploy and run a service in minutes, just when you need it.

Cost Analysis

From a cost standpoint, deploying in a cloud infrastructure such as AWS usually ends up being a lot cheaper than buying your own hardware. If you want to deploy your own hardware, you have to pay upfront for all the hardware (servers, network equipment) and sometimes license software as well. In a cloud environment you pay as you go. You can add and remove servers in no time. Also, if you take advantage of PaaS and SaaS applications, you usually end up saving even more money by lowering your operating costs, as you don't need as much staff to administrate your databases, storage, and so on. Most cloud providers, AWS included, also offer tiered pricing and volume discounts. As your service gets bigger and bigger, you end up paying less for each unit of storage, bandwidth, and so on.

Just-in-time infrastructure

As we just saw, when deploying in the cloud, you pay as you go. Most cloud companies use that to their advantage to scale their infrastructure up and down as the traffic to their sites changes. This ability to add and remove new servers and services in no time and on demand is one of the main differentiators of an effective cloud infrastructure. Here is a diagram from a 2015 presentation that shows the annual traffic going to https://www.amazon.com/ (the online store). © 2016, Amazon Web Services, Inc. or its affiliates. All rights reserved. As you can see, with the holidays, the end of the year is a busy time for https://www.amazon.com/: their traffic triples. If they were hosting their service in the "old-fashioned" way, they would have only 24% of their infrastructure used on average every year, but thanks to being able to scale dynamically they are able to provision only what they really need. © 2016, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Here at Medium, we also see the benefits of having fast auto scaling capabilities on a very regular basis. Very often, stories go viral and the amount of traffic going to Medium can change drastically. On January 21st, 2015, to our surprise, the White House posted the transcript of the State of the Union minutes before President Obama started his speech: https://medium.com/@WhiteHouse/. As you can see in the following graph, thanks to being in the cloud and having auto scaling capabilities, our platform was able to absorb the 5x instant spike of traffic that the announcement caused by doubling the number of servers our front service uses. Later, as the traffic started to drain naturally, we automatically removed some hosts from our fleet.
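To give a sense of what scaling dynamically looks like in practice with the command-line tool, an Auto Scaling group that grows and shrinks a fleet can be created roughly as follows; the names, AMI ID, and availability zone are placeholders:

```sh
# Define how new instances should be launched (AMI ID is a placeholder)
aws autoscaling create-launch-configuration \
    --launch-configuration-name web-launch-config \
    --image-id ami-0123456789abcdef0 \
    --instance-type t2.micro

# Create the group that AWS will automatically keep between 2 and 10 instances
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-configuration-name web-launch-config \
    --min-size 2 --max-size 10 --desired-capacity 2 \
    --availability-zones us-east-1a
```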
The different layers of building a cloud

Cloud computing is often broken up into three different types of service:

Infrastructure as a Service (IaaS): This is the fundamental block on top of which everything cloud is built. It is usually a computing resource in a virtualized environment, offering a combination of processing power, memory, storage, and network. The most common IaaS entities you will find are virtual machines (VMs), network equipment such as load balancers or virtual Ethernet interfaces, and storage such as block devices. This layer is very close to the hardware and gives you the full flexibility that you would get deploying your software outside of a cloud. If you have any physical datacenter knowledge, it will mostly apply to this layer as well.

Platform as a Service (PaaS): This is where things start to get really interesting with the cloud. When building an application, you will likely need a certain number of common components, such as a data store, a queue, and so on. The PaaS layer provides a number of ready-to-use applications to help you build your own services without worrying about administrating and operating those third-party services, such as a database server.

Software as a Service (SaaS): This is the icing on the cake. Similarly to the PaaS layer, you get access to managed services, but this time those services are complete solutions dedicated to certain purposes, such as management or monitoring tools.

When building an application, relying on those services makes a big difference compared to a more traditional environment outside of a cloud. Another key element to succeeding when deploying or migrating to a new infrastructure is to adopt a DevOps mindset.

Deploying in AWS

AWS is at the forefront of the cloud providers. Launched in 2006 with SQS and EC2, Amazon quickly became the biggest IaaS provider. They have the biggest infrastructure and the biggest ecosystem, and they constantly add new features and release new services. In 2015 they passed the mark of 1 million active customers. Over the last few years, they have managed to change people's mindset about the cloud, and now deploying new services to the cloud is the new normal. Using the AWS managed tools and services is a drastic way to improve your productivity and keep your team lean. Amazon continually listens to its customers' feedback and looks at market trends; therefore, as the DevOps movement started to get established, Amazon released a number of new services tailored toward implementing some of the DevOps best practices. We will also see how those services synergize with the DevOps culture.

How to take advantage of the AWS ecosystem

When you talk to application architects, there are usually two trains of thought. The first one is to stay as platform agnostic as possible. The idea behind that is that if you aren't happy with AWS anymore, you can easily switch cloud providers or even build your own private cloud. The second train of thought is the complete opposite: the idea is that you are going to stick to AWS no matter what. It feels a bit extreme to think of it that way, but the reward is worth the risk, and more and more companies agree with that. That's also where I stand. When you build a product nowadays, the scarcity is always time and people. If you can outsource what is not your core business to a company that provides a similar service or technology, support, and expertise, and that you can just pay for on a SaaS model, then do so.
If, like me, you agree that using managed services is the way to go, then being a cloud architect is like playing with Lego. With Lego, you have lots of pieces of different shapes, sizes, and colors, and you assemble them to build your own creation. Amazon's services are like those Lego pieces. If you can picture your final product, then you can explore the different services and start combining them to build the supporting stack needed to quickly and efficiently build your product. Of course, in this case, the "if" is a big if, and unlike Lego, understanding what each piece can do is a lot less visual and colorful than Lego pieces.

How AWS synergizes with the DevOps culture

Having a DevOps culture is about rethinking how engineering teams work together, by breaking down the developer and operations silos and bringing in a new set of tools to implement some best practices. AWS helps accomplish that in many different ways. For some developers, the world of operations can be scary and confusing, but if you want better cooperation between engineers, it is important to expose every aspect of running a service to the entire engineering organization. As an operations engineer, you can't have a gatekeeper mentality toward developers; instead, it's better to make them comfortable accessing production and working on the different components of the platform. A good way to get started with that is the AWS console. While a bit overwhelming, it is still a much better experience for people not familiar with this world to navigate that web interface than to refer to constantly out-of-date documentation, or to use SSH and random plays to discover the topology and configuration of the service. Of course, as your expertise grows, as your application becomes more and more complex, and as the need to operate it faster increases, the web interface starts showing some weaknesses. To get around that issue, AWS provides a very DevOps-friendly alternative: an API. Accessible through a command-line tool and a number of SDKs (which include Java, JavaScript, Python, .NET, PHP, Ruby, Go, and C++), the SDKs let you administrate and use the managed services. Finally, AWS offers a number of DevOps tools. AWS has a source control service similar to GitHub called CodeCommit. For automation, in addition to allowing you to control everything via the SDKs, AWS provides the ability to create templates of your infrastructure via CloudFormation, and also a configuration management system called OpsWorks. It also knows how to scale fleets of servers up and down using Auto Scaling groups. For continuous delivery, AWS provides a service called CodePipeline, and for continuous deployment, a service called CodeDeploy. With regard to measuring everything, we will rely on CloudWatch and later Elasticsearch/Kibana to visualize metrics and logs. Finally, we will see how to use Docker via ECS, which will let us create containers to improve server density (we will be able to reduce VM consumption as we will be able to colocate services together in one VM while still keeping fairly good isolation), improve the developer environment as we will now be able to run something closer to the production environment, and improve testing time, as starting containers is a lot faster than starting virtual machines.
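As a small illustration of the programmatic access mentioned above, here is a minimal sketch (not part of the original extract) that uses boto3, the Python SDK, to list the instances running in a region; the region name is an assumed value:

import boto3

# Connect to the EC2 API in an assumed region.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Print the ID and state of every instance: the kind of topology discovery
# that would otherwise mean SSH-ing around or reading stale documentation.
response = ec2.describe_instances()
for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])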

DevOps Concepts and Assessment Framework

Packt
05 Jul 2017
21 min read
In this article by Mitesh Soni, the author of the book DevOps Bootcamp, we will discuss how to get a quick understanding of DevOps from 10,000 feet, with real-world examples of how to prepare for changing a culture. This will allow us to build the foundation of the DevOps concepts by discussing what our goals are, as well as getting buy-in from organization management. Basically, we will try to cover DevOps practices that can make application lifecycle management easy and effective. It is very important to understand that DevOps is not a framework, tool, or technology; it is more about the culture of an organization. It is also the way people work in an organization, using defined processes and utilizing automation tools to make daily work more effective and less manual. To understand the basic importance of DevOps, we will cover the following topics in this article:

Need for DevOps
How DevOps culture can evolve?
Importance of PPT – People, Process, and Technology
Why DevOps is not all about Tools
DevOps Assessment Questions

(For more resources related to this topic, see here.)

Need for DevOps

There is a famous quote by Harriet Tubman, which you can find on http://harriettubmanbiography.com. It says: Every great dream begins with a dreamer. Always remember, you have within you the strength, the patience, and the passion to reach for the stars to change the world. Change is the law of life, and that applies to organizations as well. If any organization or individual looks only at past or present patterns, culture, or practices, then they are certain to miss the future best practices. In the dynamic IT world, we need to keep pace with technology evolution. We can relate to George Bernard Shaw's saying: Progress is impossible without change, and those who cannot change their minds cannot change anything. Here we are focusing on changing the way we manage the application lifecycle. The important question is whether we really need this change. Do we really need to go through the pain of this change? The answer is yes. One may argue that this kind of change in business or culture must not be forced, and that is a fair point. Let's understand the pain points faced by organizations in application lifecycle management in the modern world with the help of the following figure. Considering the changing patterns and competitive environment in business, it is the need of the hour to improve application lifecycle management. Are there any factors in these modern times that can help us improve application lifecycle management? Yes. Cloud computing has changed the game. It has opened doors for many path-breaking solutions and innovations. Let's understand what cloud computing is, and then we will see an overview of DevOps and how the cloud is useful in DevOps.

Overview of Cloud Computing

Cloud computing is a type of computing that provides multi-tenant or dedicated computing resources, such as compute, storage, and network, which are delivered to cloud consumers on demand. It comes in different flavors, including cloud deployment models and cloud service models. The most important aspect is the way its pricing model works: pay as you go.
Cloud deployment models describe the way cloud resources are deployed: behind the firewall and on premises, exclusively for a specific organization (Private Cloud); cloud resources that are available to all organizations and individuals (Public Cloud); cloud resources that are available to a specific set of organizations that share similar types of interests or requirements (Community Cloud); or cloud resources that combine two or more deployment models (Hybrid Cloud).

Cloud service models describe the way cloud resources are made available to cloud consumers. They can take the form of pure infrastructure, where virtual machines are accessible and controlled by the cloud consumer or end user (Infrastructure as a Service, or IaaS); a platform, where runtime environments are provided and the installation and configuration of all software needed to run the application are managed by the cloud service provider (Platform as a Service, or PaaS); or Software as a Service (SaaS), where the whole application is made available by the cloud service provider, with responsibility for the infrastructure and platform remaining with the provider. Many service models have emerged during the last few years, but IaaS, PaaS, and SaaS are based on the National Institute of Standards and Technology (NIST) definition.

Cloud computing has a few significant characteristics: multi-tenancy; pay-as-you-use billing, similar to an electricity or gas connection; on-demand self-service; resource pooling for better utilization of compute, storage, and network resources; rapid elasticity for scaling resources up and down automatically based on need; and measured service for billing.

Over the years, usage of the different cloud deployment models has varied based on use cases. Initially, the Public Cloud was used for applications that were considered non-critical, while the Private Cloud was used for critical applications where security was a major concern. Hybrid and Public Cloud usage evolved over time with experience and confidence in the services provided by cloud service providers. Similarly, usage of the different cloud service models has varied based on use cases and flexibility. IaaS was the most popular in the early days, but PaaS is catching up in maturity and ease of use, with enterprise capabilities.

Overview of DevOps

DevOps is all about the culture of an organization, its processes, and technology to develop communication and collaboration between development and IT operations teams, in order to manage the application lifecycle more effectively than the existing ways of doing it. We often tend to work based on patterns, to find reusable solutions to similar kinds of problems or challenges. Over the years, achievements and failed experiments, best practices, automation scripts, configuration management tools, and methodologies become an integral part of culture. This helps to define practices for a way of designing, developing, testing, setting up resources, managing environments, managing configuration, deploying an application, gathering feedback, improving code, and innovating. The following are some of the visible benefits that can be achieved by implementing DevOps practices.
DevOps culture is considered an innovative package to integrate the Dev and Ops teams in an effective manner. It includes components such as continuous build integration, continuous testing, cloud resource provisioning, continuous delivery, continuous deployment, continuous monitoring, continuous feedback, continuous improvement, and continuous innovation, to make application delivery faster, as the agile methodology demands. However, it is not only the development and operations teams that are involved: the testing team, business analysts, build engineers, the automation team, the cloud team, and many other stakeholders are involved in this exercise of evolving the existing culture. DevOps culture is not much different from the organization's culture, which has shared values and behavioral aspects. It needs adjustments in mindsets and processes to align with new technology and tools.

Challenges for Development and Operations Teams

There are reasons why this scenario has occurred, and why DevOps is trending upward and is the talk of the town in all IT-related discussions.

Challenges for the Development Team

Developers are enthusiastic and willing to adopt new technologies and approaches to solve problems. However, they face many challenges, including the following:
The competitive market creates pressure for on-time delivery
They have to take care of production-ready code management and new feature implementation
The release cycle is often long, and hence the development team has to make assumptions before the application deployment finally takes place. In such a scenario, it takes more time to fix issues that occur during deployment in the staging or production environment

Challenges for the Operations Team

The operations team is always careful about changing resources or using any new technologies or approaches, as they want stability. However, they face many challenges, including the following:
Resource contention: it's difficult to handle increasing resource demands
Redesigning or tweaking: this is needed to run the application in the production environment
Diagnosing and rectifying: they are supposed to diagnose and rectify issues after application deployment in isolation

Considering all the challenges faced by the development and operations teams, how should we improve existing processes, make use of automation tools to make processes more effective, and change people's mindsets? Let's see in the next section how to evolve a DevOps culture in the organization and improve efficiency and effectiveness.

How DevOps culture can evolve?

Inefficient estimation, long time to market, and other issues led to a change in the waterfall model, resulting in the agile model. Evolving a culture is not a time-bound or overnight process. It can be a step-by-step, stage-wise process that can be achieved without dependencies between the stages. We can achieve continuous integration without cloud provisioning. We can achieve cloud provisioning without configuration management. We can achieve continuous testing without any other DevOps practices. The following are the different stages for achieving DevOps practices.

Agile Development

Agile development, or agile-based methodology, is useful for building an application by empowering individuals and encouraging interactions, giving importance to working software, collaborating with customers (using feedback for improvement in subsequent steps), and responding to change in an efficient manner.
One of the most attractive benefits of agile development is continuous delivery in short time frames or, in agile terms, sprints. Thus, the agile approach to application development, improvements in technology, and disruptive innovations and approaches have created a gap between development and operations teams.

DevOps

DevOps attempts to fill these gaps by developing a partnership between the development and operations teams. The DevOps movement emphasizes communication, collaboration, and integration between software developers and IT operations. DevOps promotes collaboration, and collaboration is facilitated by automation and orchestration in order to improve processes. In other words, DevOps essentially extends the continuous development goals of the agile movement to continuous integration and release. DevOps is a combination of agile practices and processes leveraging the benefits of cloud solutions. Agile development and testing methodologies help us meet the goals of continuously integrating, developing, building, deploying, testing, and releasing applications.

Build Automation

An automated build helps us create an application build using build automation tools such as Gradle, Apache Ant, and Apache Maven. An automated build process includes activities such as compiling source code into class files or binary files, providing references to third-party library files, providing the path of configuration files, packaging class files or binary files into package files, executing automated test cases, deploying package files on local or remote machines, and reducing the manual effort involved in creating the package file.

Continuous Integration

In simple words, continuous integration, or CI, is a software engineering practice where each check-in made by a developer is verified by either of the following: a pull mechanism (executing an automated build at a scheduled time) or a push mechanism (executing an automated build when changes are saved in the repository). This step is followed by executing a unit test against the latest changes available in the source code repository. Continuous integration is a popular DevOps practice that requires developers to integrate code into a code repository such as Git or SVN multiple times a day to verify the integrity of the code. Each check-in is then verified by an automated build, allowing teams to detect problems early.

Cloud Provisioning

Cloud provisioning has opened the door to treating infrastructure as code, and that makes the entire process extremely efficient and effective, as we are automating a process that used to involve a huge amount of manual intervention. The pay-as-you-go billing model has made the required resources more affordable, not only to large organizations but also to mid- and small-scale organizations as well as individuals. It helps drive improvements and innovations where resource constraints previously blocked organizations from going the extra mile because of cost and maintenance. Once we have agility in infrastructure resources, we can think about automating the installation and configuration of the packages that are required to run the application.

Configuration Management

Configuration management (CM) manages changes in the system or, to be more specific, the server runtime environment. There are many tools available in the market with which we can achieve configuration management. Popular tools are Chef, Puppet, Ansible, Salt, and so on. Let's consider an example where we need to manage multiple servers with the same kind of configuration.
For example, we need to install Tomcat on each server. What if we need to change the port on all servers, update some packages, or provide rights to some users? Any kind of modification in this scenario is a manual and, as such, error-prone process. As the same configuration is being used for all the servers, automation can be useful here.

Continuous Delivery

Continuous delivery and continuous deployment are used interchangeably; however, there is a small difference between them. Continuous delivery is the process of deploying an application to any environment in an automated fashion and providing continuous feedback to improve its quality. The automated approach may not change between continuous delivery and continuous deployment; the approval process and some other minor things can change.

Continuous Testing and Deployment

Continuous testing is a very important phase of the end-to-end application lifecycle management process. It involves functional testing, performance testing, security testing, and so on. Selenium, Appium, Apache JMeter, and many other tools can be utilized for this. Continuous deployment, on the other hand, is all about deploying an application with the latest changes to the production environment.

Continuous Monitoring

Continuous monitoring is the backbone of the end-to-end delivery pipeline, and open source monitoring tools are like toppings on an ice cream scoop. It is desirable to have monitoring at almost every stage in order to have transparency about all the processes, as shown in the following diagram. It also helps us troubleshoot quickly. Monitoring should be a well thought-out implementation of a plan. Let's try to depict the entire process as a continuous approach in the diagram below. We need to understand that this is a phased approach, and it is not necessary to automate every phase at once. It is more effective to take one DevOps practice at a time, implement it, and realize its benefit before implementing another one. This way we are safe enough to assess the improvements of changing the culture in the organization and to remove manual effort from application lifecycle management.

Importance of PPT – People, Process, and Technology

PPT is an important word in any organization. Wait! We are not talking about a PowerPoint presentation. Here, we are focusing on People, Processes, and Tools/Technology. Let's understand why and how they are important in changing the culture of any organization.

People

As per the famous quote from Jack Canfield: Successful people maintain a positive focus in life no matter what is going on around them. They stay focused on their past successes rather than their past failures, and on the next action steps they need to take to get them closer to the fulfillment of their goals rather than all the other distractions that life presents to them. A curious question might be: why do people matter? If we try to answer it in one sentence, it would be: because we are trying to change culture. People are an important part of any culture, and only people can drive the change, or change themselves to adapt to new processes, define new processes, and learn new tools or technologies. Let's understand how and why with the "Formula for Change". David Gleicher created the "Formula for Change" in the early 1960s, as per the references available on Wikipedia. Kathie Dannemiller refined it in 1980. This formula provides a model to assess the relative strengths affecting the possible success of organizational change initiatives.
Gleicher (original) version: C = (A x B x D) > X, where C = change, A = the status quo dissatisfaction, B = a desired clear state, D = practical steps to the desired state, and X = the cost of the change.

Dannemiller version: D x V x F > R, where D, V, and F must all be present for organizational change to take place: D = dissatisfaction with how things are now, V = vision of what is possible, and F = first concrete steps that can be taken towards the vision. If the product of these three factors is greater than R (resistance), then change is possible.

Essentially, it implies that there has to be strong dissatisfaction with existing things or processes, a vision of what is possible with new trends, technologies, and innovations with respect to the market scenario, and concrete steps that can be taken towards achieving that vision. For more details on the Formula for Change, you can visit this wiki page: https://en.wikipedia.org/wiki/Formula_for_change#cite_note-myth-1

If it comes to sharing an experience, I would say it is very important to train people to adopt a new culture. It is a game of patience. We can't change people's mindsets overnight, and we need to understand that before changing the culture. I often see job openings asking for DevOps knowledge or DevOps engineers, and I feel that such people should not be imported; rather, people should be trained in the existing environment, changing things gradually to manage resistance. We don't need a special DevOps team; we need more communication and collaboration between developers, test teams, automation enablers, and the cloud or infrastructure team. It is essential for all of them to understand each other's pain points. In a number of organizations I have worked in, we used to have a COE (Center of Excellence) in place to manage new technologies, innovations, or culture. As automation enablers and part of the DevOps team, we should be working as facilitators only, and not as part of a silo.

Processes

Here is a famous quote from Tom Peters, which says: Almost all quality improvement comes via simplification of design, manufacturing… layout, processes, and procedures. Quality is extremely important when we are dealing with evolving a culture. We need processes and policies for doing things in the proper way, standardized across projects, so that sequences of operations, constraints, rules, and so on are well defined to measure success. We need to set processes for the following things:
Agile Planning
Resource Planning and Provisioning
Configuration Management
Role-based Access Control to Cloud resources and other tools used in Automation
Static Code Analysis – rules for programming languages
Testing Methodology and Tools
Release Management
These processes are also important for measuring success in the process of evolving a DevOps culture.

Technology

Here is a famous quote from Steve Jobs, which says: Technology is nothing. What's important is that you have a faith in people, that they're basically good and smart, and if you give them tools, they'll do wonderful things with them. Technology helps people and organizations bring creativity and innovation while changing the culture. Without technology, it is difficult to achieve speed and effectiveness in daily and routine automation operations. Cloud computing, configuration management tools, and build pipelines are among the few that are useful in resource provisioning, installing runtime environments, and orchestration. Essentially, technology helps to speed up different aspects of application lifecycle management.
Why DevOps is not all about Tools

Yes, tools are nothing; they are not that important a factor in changing the culture of an organization. The reason is very simple: no matter what technology we use, we will still perform continuous integration, cloud provisioning, configuration management, continuous delivery, continuous deployment, continuous monitoring, and so on. Different tool sets can be used in each category, but they all perform similar things; it is just the way a tool performs an operation that differs, while the outcome is the same. The following are some tools, based on category:

Build Automation: Nant, MSBuild, Maven, Ant, Gradle
Repository: Git, SVN
Static Code Analysis: Sonar, PMD
Continuous Integration: Jenkins, Atlassian Bamboo, VSTS
Configuration Management: Chef, Puppet, Ansible, Salt
Cloud Platforms: AWS, Microsoft Azure
Cloud Management Tool: RightScale
Application Deployment: Shell scripts, plugins
Functional Testing: Selenium, Appium
Load Testing: Apache JMeter
Repositories: Artifactory, Nexus, Fabric

Let's see how different tools can be useful at different stages for different operations. This may change based on the number of environments or the number of DevOps practices we follow in different organizations. If we need to categorize tools based on different DevOps best practices, we can categorize them into open source and commercial categories. Below are just sample examples:

Component | Open Source | IBM UrbanCode | Electric Cloud
Build Tools | Ant or Maven or MSBuild | Ant or Maven or MSBuild | Ant or Maven or MSBuild
Code Repositories | Git or Subversion | Git or Atlassian Stash or Subversion or StarTeam | Git or Subversion or StarTeam
Code Analysis Tools | Sonar | Sonar | Sonar
Continuous Integration | Jenkins | Jenkins or Atlassian Bamboo | Jenkins or ElectricAccelerator
Continuous Delivery | Chef | Artifactory and IBM UrbanCode Deploy | ElectricFlow

In this book we will try to focus on the open source category as well as commercial tools. We will use Jenkins and Visual Studio Team Services for all the major automation and orchestration related activities.

DevOps Assessment Questions

DevOps is a culture, and we are very much aware of that fact. However, before implementing automation, putting processes in place, and evolving the culture, we need to understand the existing status of the organization's culture and whether we need to introduce new processes or automation tools. We need to be very clear that we need to make the existing culture more efficient rather than importing a culture. Accommodating an assessment framework is difficult, but we will try to provide some questions and hints based on which it will be easier to create one. Create categories for which we want to ask questions and get responses for a specific application. A few sample questions:

Do you follow Agile principles / Scrum or Kanban?
Do you use any tool to keep track of Scrum or Kanban?
What is the normal sprint duration (2 weeks or 3 weeks)?
Is there a definitive and explicit definition of done for all phases of work?
Are you using any source code repository?
Which source code repository do you use?
Are you using any build automation tool, such as Ant, Maven, or Gradle?
Are you using any custom scripts for build automation?
Do you have Android and iOS based applications?
Are you using any tools for static code analysis?
Are you using multiple environments for application deployment for different teams, such as dev, test, stage, pre-prod, prod, and so on?
Are you using on-premise infrastructure or cloud-based infrastructure?
Are you using any configuration management tool or script for installing application packages or runtime environments?
Are you using any automated scripts to deploy applications in prod and non-prod environments?
Are you using manual approval before an application release to any specific environment?
Are you using any orchestration tool or script for application lifecycle management?
Are you using automation tools for functional testing, load testing, security testing, and mobile testing?
Are you using any tools for application and infrastructure monitoring?
How are defects logged, triaged, and prioritized for resolution based on priority?
Are you using notification services to let stakeholders know about the status of application lifecycle management?

Once the questions are ready, prepare responses and, based on the responses, decide a rating for each response given to the above questions. Make the framework flexible, so that even if we change any question in any category it is handled automatically. Once a rating is given, capture the responses and calculate overall ratings by introducing different conditions and intelligence into the framework. Create category-wise final ratings, and create different kinds of charts from the final ratings to improve their readability. The important thing to note here is the significance of the organization's expertise in each area of application lifecycle management. It will give the assessment framework a new dimension, add intelligence, and make it more effective.

Summary

In this article, we have set many goals to achieve throughout this book. We have covered continuous integration, resource provisioning in the cloud environment, configuration management, continuous delivery, continuous deployment, and continuous monitoring. Setting goals is the first step in turning the invisible into the visible. (Tony Robbins) We have seen how cloud computing has changed the way innovation was perceived earlier and how feasible it has become now. We have also covered the need for DevOps and all the different DevOps practices in brief. People, Processes, and Technology are also important in this whole process of changing the existing culture of an organization, and we touched upon the reasons why they are important. Tools are important but not the showstopper; any toolset can be utilized, and changing a culture doesn't need a specific set of tools. We have also discussed the DevOps assessment framework in brief. It will help to get going on the path of changing culture.

Resources for Article: Further resources on this subject: Introduction to DevOps [article] DevOps Tools and Technologies [article] Command Line Tools for DevOps [article]

Getting Started with Ansible 2

Packt
23 Jun 2017
5 min read
In this article, by Jonathan McAllister, author of the book Implementing DevOps with Ansible 2, we will learn what Ansible is, how users can leverage it, its architecture, and the key differentiators that set Ansible apart from other configuration management tools. We will also look at organizations that have successfully leveraged Ansible. (For more resources related to this topic, see here.)

What is Ansible?

Ansible is a relatively new addition to the DevOps and configuration management space. Its simplicity, structured automation format, and development paradigm have caught the eyes of small and large corporations alike. Organizations as large as Twitter have successfully managed to leverage Ansible for highly scaled deployments and configuration management across thousands of servers simultaneously. And Twitter isn't the only organization that has managed to implement Ansible at scale; other well-known organizations that have successfully leveraged Ansible include Logitech, NASA, NEC, Microsoft, and hundreds more. As it stands today, Ansible is in use by major players around the world, managing thousands of deployments and configuration management solutions worldwide.

Ansible's Automation Architecture

Ansible was created with an incredibly flexible and scalable automation engine. It allows users to leverage it in many diverse ways and can be conformed to the way that best suits your specific needs. Since Ansible is agentless (meaning there is no permanently running daemon on the systems it manages or executes from), it can be used locally to control a single system (without any network connectivity), or leveraged to orchestrate and execute automation against many systems via a control server. In addition to the aforementioned architectures, Ansible can also be leveraged via Vagrant or Docker to provision infrastructure automatically. This type of solution basically allows the Ansible user to bootstrap their hardware or infrastructure provisioning by running one or more Ansible playbooks. If you happen to be a Vagrant user, there are instructions for the HashiCorp Ansible provisioner located at https://www.vagrantup.com/docs/provisioning/ansible.html.

Ansible is open source, module based, pluggable, and agentless. These key differentiators from other configuration management solutions give Ansible a significant edge. Let's take a look at each of these differentiators in detail and see what they actually mean for Ansible developers and users:

Open source: It is no secret that successful open source solutions are usually extraordinarily feature rich. This is because, instead of a simple 8-person (or even 100-person) engineering team, there are potentially thousands of developers, and each development and enhancement has been designed to fit a unique need. As a result, the end product provides the consumers of Ansible with a very well-rounded solution that can be adapted or leveraged in numerous ways.

Module based: Ansible has been developed with the intention of integrating with numerous other open and closed source software solutions. This means that Ansible is currently compatible with multiple flavors of Linux, Windows, and cloud providers. Aside from its OS-level support, Ansible currently integrates with hundreds of other software solutions, including EC2, Jira, Jenkins, Bamboo, Microsoft Azure, DigitalOcean, Docker, Google, and many, many more.
For a complete list of Ansible modules, please consult the official Ansible module support list located at http://docs.ansible.com/ansible/modules_by_category.html.

Agentless: One of the key differentiators that gives Ansible an edge over the competition is the fact that it is 100% agentless. This means there are no daemons that need to be installed on remote machines, no firewall ports that need to be opened (besides traditional SSH), no monitoring that needs to be performed on the remote machines, and no management that needs to be performed on the infrastructure fleet. In effect, this makes Ansible very self-sufficient. Since Ansible can be implemented in a few different ways, the aim of this section is to highlight those options and help us get familiar with the architecture types that Ansible supports. Generally, the architecture of Ansible can be categorized into three distinct types, described next.

Pluggable: While Ansible comes out of the box with a wide spectrum of software integration support, it is often a requirement to integrate the solution with a company's internal software or with a software solution that has not already been integrated into Ansible's robust playbook suite. The answer to such a requirement is to create a plugin-based solution for Ansible, providing the custom functionality necessary.

Summary

In this article, we discussed the architecture of Ansible and the key differentiators that set Ansible apart from other configuration management tools. We learnt that Ansible can also be leveraged via Vagrant or Docker to provision infrastructure automatically. We also saw that Ansible has been successfully leveraged by large organizations like Twitter, Microsoft, and many more.

Resources for Article: Further resources on this subject: Getting Started with Ansible, Introduction to Ansible, System Architecture and Design of Ansible

Start Treating your Infrastructure as Code

Packt
26 Dec 2016
18 min read
This article is by Veselin Kantsev, the author of the book Implementing DevOps on AWS. Ladies and gentlemen, put your hands in the atmosphere, for Programmable Infrastructure is here!

Perhaps Infrastructure as Code (IaC) is not an entirely new concept, considering how long configuration management has been around. Codifying server, storage, and networking infrastructure and their relationships, however, is a relatively recent tendency brought about by the rise of cloud computing. But let us leave configuration management for later and focus our attention on that second aspect of IaC. You would recall from the previous chapter some of the benefits of storing all the things as code:

Code can be kept under version control
Code can be shared/collaborated on easily
Code doubles as documentation
Code is reproducible

(For more resources related to this topic, see here.)

That last point was a big win for me personally. Automated provisioning helped reduce the time it took to deploy a full-featured cloud environment from four hours down to one, and the occurrences of human error to almost zero (one shall not be trusted with an input field). Being able to rapidly provision resources becomes a significant advantage when a team starts using multiple environments in parallel and needs those brought up or down on demand. In this article we examine in detail how to describe (in code) and deploy one such environment on AWS with minimal manual interaction. For implementing IaC in the cloud, we will look at two tools or services: Terraform and CloudFormation. We will go through examples of how to:

Configure the tool
Write an IaC template
Deploy a template
Deploy subsequent changes to the template
Delete a template and remove the provisioned infrastructure

For the purpose of these examples, let us assume our application requires a Virtual Private Cloud (VPC) which hosts a Relational Database Service (RDS) back end and a couple of Elastic Compute Cloud (EC2) instances behind an Elastic Load Balancing (ELB) load balancer. We will keep most components behind Network Address Translation (NAT), allowing only the load balancer to be accessed externally.

IaC using Terraform

One of the tools that can help deploy infrastructure on AWS is HashiCorp's Terraform (https://www.terraform.io). HashiCorp is that genius bunch which gave us Vagrant, Packer, and Consul. I would recommend you look up their website if you have not already. Using Terraform (TF), we will be able to write a template describing an environment, perform a dry run to see what is about to happen and whether it is expected, deploy the template, and make any late adjustments where necessary, all without leaving the shell prompt.

Configuration

Firstly, you will need to have a copy of TF (https://www.terraform.io/downloads.html) on your machine and available on the CLI. You should be able to query the currently installed version, which in my case is 0.6.15:

$ terraform --version
Terraform v0.6.15

Since TF makes use of the AWS APIs, it requires a set of authentication keys and some level of access to your AWS account.
In order to deploy the examples in this article, you could create a new Identity and Access Management (IAM) user with the following permissions:

"autoscaling:CreateAutoScalingGroup", "autoscaling:CreateLaunchConfiguration", "autoscaling:DeleteLaunchConfiguration", "autoscaling:Describe*", "autoscaling:UpdateAutoScalingGroup", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateInternetGateway", "ec2:CreateNatGateway", "ec2:CreateRoute", "ec2:CreateRouteTable", "ec2:CreateSecurityGroup", "ec2:CreateSubnet", "ec2:CreateTags", "ec2:CreateVpc", "ec2:Describe*", "ec2:ModifySubnetAttribute", "ec2:RevokeSecurityGroupEgress", "elasticloadbalancing:AddTags", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:Describe*", "elasticloadbalancing:ModifyLoadBalancerAttributes", "rds:CreateDBInstance", "rds:CreateDBSubnetGroup", "rds:Describe*"

Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/iam_user_policy.json

One way to make the credentials of the IAM user available to TF is by exporting the following environment variables:

$ export AWS_ACCESS_KEY_ID='user_access_key'
$ export AWS_SECRET_ACCESS_KEY='user_secret_access_key'

This should be sufficient to get us started.

Template design

Before we get to coding, here are some of the rules:

You could choose to write a TF template as a single large file or a combination of smaller ones.
Templates can be written in pure JSON or TF's own format.
Terraform will look for files with extensions .tf or .tf.json in a given folder and load these in alphabetical order.
TF templates are declarative, hence the order in which resources appear in them does not affect the flow of execution.

A Terraform template generally consists of three sections: resources, variables, and outputs. As mentioned in the preceding section, it is a matter of personal preference how you arrange these; however, for better readability I suggest we make use of the TF format and write each section to a separate file. Also, while the file extensions are of importance, the file names are up to you.

Resources

In a way, this file holds the main part of a template, as the resources represent the actual components that end up being provisioned. For example, we will be using a VPC resource, an RDS instance, an ELB, and a few others. Since template elements can be written in any order, Terraform determines the flow of execution by examining any references that it finds (for example, a VPC should exist before an ELB which is said to belong to it is created). Alternatively, explicit flow control attributes such as depends_on are used, as we will observe shortly. To find out more, let us go through the contents of the resources.tf file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/resources.tf

First we tell Terraform what provider to use for our infrastructure:

# Set a Provider
provider "aws" {
  region = "${var.aws-region}"
}

You will notice that no credentials are specified, since we set those as environment variables earlier.
Now we can add the VPC and its networking components:

# Create a VPC
resource "aws_vpc" "terraform-vpc" {
  cidr_block = "${var.vpc-cidr}"

  tags {
    Name = "${var.vpc-name}"
  }
}

# Create an Internet Gateway
resource "aws_internet_gateway" "terraform-igw" {
  vpc_id = "${aws_vpc.terraform-vpc.id}"
}

# Create NAT
resource "aws_eip" "nat-eip" {
  vpc = true
}

So far we have declared the VPC, its Internet and NAT gateways, plus a set of public and private subnets with matching routing tables. It will help clarify the syntax if we examine some of those resource blocks line by line:

resource "aws_subnet" "public-1" {

The first argument is the type of the resource, followed by an arbitrary name.

vpc_id = "${aws_vpc.terraform-vpc.id}"

The aws_subnet resource named public-1 has a property vpc_id which refers to the id attribute of a different resource of type aws_vpc named terraform-vpc. Such references to other resources implicitly define the execution flow, that is to say, the VPC needs to exist before the subnet can be created.

cidr_block = "${cidrsubnet(var.vpc-cidr, 8, 1)}"

We will talk more about variables in a moment, but the format is var.var_name. Here we use the cidrsubnet function with the vpc-cidr variable, which returns a cidr_block to be assigned to the public-1 subnet. Please refer to the Terraform documentation for this and other useful functions. Next we add an RDS instance to the VPC:

resource "aws_db_instance" "terraform" {
  identifier = "${var.rds-identifier}"
  allocated_storage = "${var.rds-storage-size}"
  storage_type = "${var.rds-storage-type}"
  engine = "${var.rds-engine}"
  engine_version = "${var.rds-engine-version}"
  instance_class = "${var.rds-instance-class}"
  username = "${var.rds-username}"
  password = "${var.rds-password}"
  port = "${var.rds-port}"
  vpc_security_group_ids = ["${aws_security_group.terraform-rds.id}"]
  db_subnet_group_name = "${aws_db_subnet_group.rds.id}"
}

Here we see mostly references to variables, with a few calls to other resources. Following the RDS is an ELB:

resource "aws_elb" "terraform-elb" {
  name = "terraform-elb"
  security_groups = ["${aws_security_group.terraform-elb.id}"]
  subnets = ["${aws_subnet.public-1.id}", "${aws_subnet.public-2.id}"]

  listener {
    instance_port = 80
    instance_protocol = "http"
    lb_port = 80
    lb_protocol = "http"
  }

  tags {
    Name = "terraform-elb"
  }
}

Lastly, we define the EC2 Auto Scaling group and related resources:

resource "aws_launch_configuration" "terraform-lcfg" {
  image_id = "${var.autoscaling-group-image-id}"
  instance_type = "${var.autoscaling-group-instance-type}"
  key_name = "${var.autoscaling-group-key-name}"
  security_groups = ["${aws_security_group.terraform-ec2.id}"]
  user_data = "#!/bin/bash \n set -euf -o pipefail \n exec 1> >(logger -s -t $(basename $0)) 2>&1 \n yum -y install nginx; chkconfig nginx on; service nginx start"

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "terraform-asg" {
  name = "terraform"
  launch_configuration = "${aws_launch_configuration.terraform-lcfg.id}"
  vpc_zone_identifier = ["${aws_subnet.private-1.id}", "${aws_subnet.private-2.id}"]
  min_size = "${var.autoscaling-group-minsize}"
  max_size = "${var.autoscaling-group-maxsize}"
  load_balancers = ["${aws_elb.terraform-elb.name}"]
  depends_on = ["aws_db_instance.terraform"]

  tag {
    key = "Name"
    value = "terraform"
    propagate_at_launch = true
  }
}

The user_data shell script above will install and start NGINX on the EC2 node(s).

Variables

We have made great use of variables to define our resources, making the template as reusable as possible.
Let us now look inside variables.tf to study these further. Similarly to the resources list, we start with the VPC. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/variables.tf

variable "aws-region" {
  type = "string"
  description = "AWS region"
}
variable "aws-availability-zones" {
  type = "string"
  description = "AWS zones"
}
variable "vpc-cidr" {
  type = "string"
  description = "VPC CIDR"
}
variable "vpc-name" {
  type = "string"
  description = "VPC name"
}

The syntax is:

variable "variable_name" { variable properties }

where variable_name is arbitrary, but needs to match the relevant var.var_name references made in other parts of the template. For example, the variable aws-region will satisfy the ${var.aws-region} reference we made earlier when describing the region of the provider aws resource. We will mostly use string variables; however, there is another useful type called map which can hold lookup tables. Maps are queried in a similar way to looking up values in a hash/dict (please see https://www.terraform.io/docs/configuration/variables.html). Next comes RDS:

variable "rds-identifier" {
  type = "string"
  description = "RDS instance identifier"
}
variable "rds-storage-size" {
  type = "string"
  description = "Storage size in GB"
}
variable "rds-storage-type" {
  type = "string"
  description = "Storage type"
}
variable "rds-engine" {
  type = "string"
  description = "RDS type"
}
variable "rds-engine-version" {
  type = "string"
  description = "RDS version"
}
variable "rds-instance-class" {
  type = "string"
  description = "RDS instance class"
}
variable "rds-username" {
  type = "string"
  description = "RDS username"
}
variable "rds-password" {
  type = "string"
  description = "RDS password"
}
variable "rds-port" {
  type = "string"
  description = "RDS port number"
}

Finally, EC2:

variable "autoscaling-group-minsize" {
  type = "string"
  description = "Min size of the ASG"
}
variable "autoscaling-group-maxsize" {
  type = "string"
  description = "Max size of the ASG"
}
variable "autoscaling-group-image-id" {
  type = "string"
  description = "EC2 AMI identifier"
}
variable "autoscaling-group-instance-type" {
  type = "string"
  description = "EC2 instance type"
}
variable "autoscaling-group-key-name" {
  type = "string"
  description = "EC2 ssh key name"
}

We now have the type and description of all our variables defined in variables.tf; however, no values have been assigned to them yet. Terraform is quite flexible with how this can be done. We could:

Assign default values directly in variables.tf, for example: variable "aws-region" { type = "string" description = "AWS region" default = "us-east-1" }
Not assign a value to a variable, in which case Terraform will prompt for it at run time
Pass a -var 'key=value' argument(s) directly to the Terraform command, like so: -var 'aws-region=us-east-1'
Store key=value pairs in a file
Use environment variables prefixed with TF_VAR, as in TF_VAR_aws-region

Using a key=value pairs file proves to be quite convenient within teams, as each engineer can have a private copy (excluded from revision control). If the file is named terraform.tfvars, it will be read automatically by Terraform; alternatively, -var-file can be used on the command line to specify a different source.
Below is the content of our sample terraform.tfvars file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/terraform.tfvars

autoscaling-group-image-id = "ami-08111162"
autoscaling-group-instance-type = "t2.nano"
autoscaling-group-key-name = "terraform"
autoscaling-group-maxsize = "1"
autoscaling-group-minsize = "1"
aws-availability-zones = "us-east-1b,us-east-1c"
aws-region = "us-east-1"
rds-engine = "postgres"
rds-engine-version = "9.5.2"
rds-identifier = "terraform-rds"
rds-instance-class = "db.t2.micro"
rds-port = "5432"
rds-storage-size = "5"
rds-storage-type = "gp2"
rds-username = "dbroot"
rds-password = "donotusethispassword"
vpc-cidr = "10.0.0.0/16"
vpc-name = "Terraform"

A point of interest is aws-availability-zones: it holds multiple values, which we interact with using the element and split functions, as seen in resources.tf.

Outputs

The third, mostly informational part of our template contains the Terraform outputs. These allow selected values to be returned to the user when testing, deploying, or after a template has been deployed. The concept is similar to how echo statements are commonly used in shell scripts to display useful information during execution. Let us add outputs to our template by creating an outputs.tf file. Please refer to: ${GIT_URL}/Examples/Chapter-2/Terraform/outputs.tf

output "VPC ID" {
  value = "${aws_vpc.terraform-vpc.id}"
}
output "NAT EIP" {
  value = "${aws_nat_gateway.terraform-nat.public_ip}"
}
output "ELB URI" {
  value = "${aws_elb.terraform-elb.dns_name}"
}
output "RDS Endpoint" {
  value = "${aws_db_instance.terraform.endpoint}"
}

To configure an output, you simply reference a given resource and its attribute. As shown in the preceding code, we have chosen the ID of the VPC, the Elastic IP address of the NAT gateway, the DNS name of the ELB, and the endpoint address of the RDS instance. The Outputs section completes the template in this example. You should now have four files in your template folder: resources.tf, variables.tf, terraform.tfvars, and outputs.tf.

Operations

We shall examine five main Terraform operations:

Validating a template
Testing (dry-run)
Initial deployment
Updating a deployment
Removal of a deployment

In the following command-line examples, terraform is run within the folder which contains the template files.

Validation

Before going any further, a basic syntax check should be done with the terraform validate command. After renaming one of the variables in resources.tf, validate returns an unknown variable error:

$ terraform validate
Error validating: 1 error(s) occurred:
* provider config 'aws': unknown variable referenced: 'aws-region-1'. define it with 'variable' blocks

Once the variable name has been corrected, re-running validate returns no output, meaning OK.

Dry-run

The next step is to perform a test/dry-run execution with terraform plan, which displays what would happen during an actual deployment. The command returns a colour-coded list of resources and their properties, or more precisely:

$ terraform plan

Resources are shown in alphabetical order for quick scanning. Green resources will be created (or destroyed and then created if an existing resource exists), yellow resources are being changed in-place, and red resources will be destroyed.
To literally get the picture of what the to-be-deployed infrastructure looks like, you could use terraform graph:

$ terraform graph > my_graph.dot

DOT files can be manipulated with the Graphviz open source software (please see http://www.graphviz.org) or many online readers/converters. Below is a portion of a larger graph representing the template we designed earlier:

Deployment

If you are happy with the plan and graph, the template can now be deployed using terraform apply:

$ terraform apply
aws_eip.nat-eip: Creating...
  allocation_id: "" => "<computed>"
  association_id: "" => "<computed>"
  domain: "" => "<computed>"
  instance: "" => "<computed>"
  network_interface: "" => "<computed>"
  private_ip: "" => "<computed>"
  public_ip: "" => "<computed>"
  vpc: "" => "1"
aws_vpc.terraform-vpc: Creating...
  cidr_block: "" => "10.0.0.0/16"
  default_network_acl_id: "" => "<computed>"
  default_security_group_id: "" => "<computed>"
  dhcp_options_id: "" => "<computed>"
  enable_classiclink: "" => "<computed>"
  enable_dns_hostnames: "" => "<computed>"

Apply complete! Resources: 22 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the following path. This state is required to modify and destroy your infrastructure, so keep it safe. To inspect the complete state use the terraform show command.

State path: terraform.tfstate

Outputs:
ELB URI = terraform-elb-xxxxxx.us-east-1.elb.amazonaws.com
NAT EIP = x.x.x.x
RDS Endpoint = terraform-rds.xxxxxx.us-east-1.rds.amazonaws.com:5432
VPC ID = vpc-xxxxxx

At the end of a successful deployment, you will notice the outputs we configured earlier and a message about another important part of Terraform: the state file (please refer to https://www.terraform.io/docs/state/). Terraform stores the state of your managed infrastructure from the last time Terraform was run. By default this state is stored in a local file named terraform.tfstate, but it can also be stored remotely, which works better in a team environment. Terraform uses this local state to create plans and make changes to your infrastructure. Prior to any operation, Terraform does a refresh to update the state with the real infrastructure. In a sense, the state file contains a snapshot of your infrastructure and is used to calculate any changes when a template has been modified. Normally you would keep the terraform.tfstate file under version control alongside your templates. In a team environment, however, if you encounter too many merge conflicts, you can switch to storing the state file(s) in an alternative location such as S3 (please see https://www.terraform.io/docs/state/remote/index.html). Allow a few minutes for the EC2 node to fully initialize, then try loading the ELB URI from the preceding outputs in your browser. You should be greeted by NGINX, as shown in the following screenshot:

Updates

As per Murphy's Law, as soon as we deploy a template, a change to it will become necessary. Fortunately, all that is needed for this is to update and re-deploy the given template. Let us say we need to add a new rule to the ELB security group (shown in bold below).
Update the resource "aws_security_group" "terraform-elb" block in resources.tf:

resource "aws_security_group" "terraform-elb" {
  name        = "terraform-elb"
  description = "ELB security group"
  vpc_id      = "${aws_vpc.terraform-vpc.id}"

  ingress {
    from_port   = "80"
    to_port     = "80"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = "443"
    to_port     = "443"
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

Verify what is about to change:

$ terraform plan
...
~ aws_security_group.terraform-elb
    ingress.#: "1" => "2"
    ingress.2214680975.cidr_blocks.#: "1" => "1"
    ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" => "0.0.0.0/0"
    ingress.2214680975.from_port: "80" => "80"
    ingress.2214680975.protocol: "tcp" => "tcp"
    ingress.2214680975.security_groups.#: "0" => "0"
    ingress.2214680975.self: "0" => "0"
    ingress.2214680975.to_port: "80" => "80"
    ingress.2617001939.cidr_blocks.#: "0" => "1"
    ingress.2617001939.cidr_blocks.0: "" => "0.0.0.0/0"
    ingress.2617001939.from_port: "" => "443"
    ingress.2617001939.protocol: "" => "tcp"
    ingress.2617001939.security_groups.#: "0" => "0"
    ingress.2617001939.self: "" => "0"
    ingress.2617001939.to_port: "" => "443"

Plan: 0 to add, 1 to change, 0 to destroy.

Deploy the change:

$ terraform apply
...
aws_security_group.terraform-elb: Modifying...
  ingress.#: "1" => "2"
  ingress.2214680975.cidr_blocks.#: "1" => "1"
  ingress.2214680975.cidr_blocks.0: "0.0.0.0/0" => "0.0.0.0/0"
  ingress.2214680975.from_port: "80" => "80"
  ingress.2214680975.protocol: "tcp" => "tcp"
  ingress.2214680975.security_groups.#: "0" => "0"
  ingress.2214680975.self: "0" => "0"
  ingress.2214680975.to_port: "80" => "80"
  ingress.2617001939.cidr_blocks.#: "0" => "1"
  ingress.2617001939.cidr_blocks.0: "" => "0.0.0.0/0"
  ingress.2617001939.from_port: "" => "443"
  ingress.2617001939.protocol: "" => "tcp"
  ingress.2617001939.security_groups.#: "0" => "0"
  ingress.2617001939.self: "" => "0"
  ingress.2617001939.to_port: "" => "443"
aws_security_group.terraform-elb: Modifications complete
...
Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Some update operations can be destructive (please refer to: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html). You should always check the CloudFormation documentation on the resource you are planning to modify to see whether a change is going to cause any interruption. Terraform provides some protection via the prevent_destroy lifecycle property (please refer to: https://www.terraform.io/docs/configuration/resources.html#prevent_destroy); an illustrative sketch of it follows at the end of this section.

Removal

This is a friendly reminder to always remove AWS resources after you are done experimenting with them to avoid any unexpected charges.

Before performing any delete operations, we will need to grant such privileges to the (terraform) IAM user we created in the beginning of this article. As a shortcut, you could temporarily attach the AdministratorAccess managed policy to the user via the AWS Console, as shown in the following figure.

To remove the VPC and all associated resources that we created as part of this example, we will use terraform destroy:

$ terraform destroy
Do you really want to destroy?
  Terraform will delete all your managed infrastructure.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

Terraform asks for a confirmation, then proceeds to destroy resources, ending with:

Apply complete! Resources: 0 added, 0 changed, 22 destroyed.
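One related safeguard worth mentioning before cleaning up: the prevent_destroy lifecycle property referenced in the Updates section can be added to any resource that should never be removed by a destroy or a destructive update. The following is only a minimal sketch; applying it to the NAT Elastic IP is an arbitrary choice for illustration, and the actual definition of this resource in resources.tf may differ:

# Illustrative example only - the actual aws_eip.nat-eip definition in resources.tf may differ
resource "aws_eip" "nat-eip" {
  vpc = true

  lifecycle {
    prevent_destroy = true
  }
}

With the flag set, Terraform aborts with an error on any plan that would delete the resource, so it has to be removed or set to false before a terraform destroy can go through.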
Next, we remove the temporary admin access we granted to the IAM user by detaching the AdministratorAccess managed policy, as shown in the following screenshot. Then verify that the VPC is no longer visible in the AWS Console.

Summary

In this article, we looked at the importance and usefulness of Infrastructure as Code and ways to implement it using Terraform or AWS CloudFormation. We examined the structure and individual components of both a Terraform and a CF template, then practiced deploying those onto AWS using the CLI. I trust that the examples we went through have demonstrated the benefits and immediate gains from the practice of deploying infrastructure as code.

So far however, we have only done half the job. With the provisioning stage completed, you would naturally want to start configuring your infrastructure.

Resources for Article:

Further resources on this subject:

Provision IaaS with Terraform [article]
Ansible – An Introduction [article]
Design with Spring AOP [article]

DevOps Tools and Technologies

Packt
11 Nov 2016
15 min read
In this article by Ritesh Modi, the author of the book DevOps with Windows Server 2016, we will introduce foundational platforms and technologies instrumental in enabling and implementing DevOps practices. (For more resources related to this topic, see here.) These include:

Technology stack for implementing Continuous Integration, Continuous Deployment, Continuous Delivery, Configuration Management, and Continuous Improvement. These form the backbone for DevOps processes and include source code services, build services, and release services through Visual Studio Team Services.
Platform and technology used to create and deploy a sample web application. This includes technologies such as Microsoft .NET, ASP.NET, and SQL Server databases.
Tools and technology for configuration management, testing of code and application, authoring infrastructure as code, and deployment of environments. Examples of these tools and technologies are Pester for environment validation, environment provisioning through Azure Resource Manager (ARM) templates, Desired State Configuration (DSC) and PowerShell, application hosting on containers through Windows Containers and Docker, application and database deployment through Web Deploy packages, and SQL Server bacpacs.

Cloud technology

Cloud is ubiquitous. Cloud is used for our development environment, implementation of DevOps practices, and deployment of applications. Cloud is a relatively new paradigm in the infrastructure provisioning, application deployment, and hosting space. The only options prior to the advent of cloud were either self-hosted on-premises deployments or using services from a hosting service provider. However, cloud is changing the way enterprises look at their strategy in relation to infrastructure and application development, deployment, and hosting. In fact, the change is so enormous that it has found its way into every aspect of an organization's software development processes, tools, and practices.

Cloud computing refers to the practice of deploying applications and services on the Internet with a cloud provider. A cloud provider provides multiple types of services on cloud. They are divided into three categories based on their level of abstraction and degree of control on services. These categories are as follows:

Infrastructure as a Service (IaaS)
Platform as a Service (PaaS)
Software as a Service (SaaS)

These three categories differ based on the level of control a cloud provider exercises compared to the cloud consumer. The services provided by a cloud provider can be divided into layers, with each layer providing a type of service. As we move higher in the stack of layers, the level of abstraction increases in line with the cloud provider's control over services. In other words, the cloud consumer starts to lose control over services as you move higher in each column:

Figure 1: Cloud Services – IaaS, PaaS and SaaS

Figure 1 shows the three types of service available through cloud providers and the layers that comprise these services. These layers are stacked vertically on each other and show the level of control a cloud provider has compared to a consumer. From Figure 1, it is clear that for IaaS, a cloud provider is responsible for providing, controlling, and managing layers from the network layer up to the virtualization layer. Similarly, for PaaS, a cloud provider controls and manages from the hardware layer up to the runtime layer, while the consumer controls only the application and data layers.
Infrastructure as a Service (IaaS)

As the name suggests, Infrastructure as a Service is an infrastructure service provided by a cloud provider. This service includes the physical hardware and its configuration, network hardware and its configuration, storage hardware and its configuration, load balancers, compute, and virtualization. Any layer above virtualization is the responsibility of the consumer to provision, configure, and manage. The consumer can decide to use the provided underlying infrastructure in whatever way best suits their requirements. Consumers can consume the storage, network, and virtualization layers to provision their virtual machines on top of them. It is the consumer's responsibility to manage and control the virtual machines and the things deployed within them.

Platform as a Service (PaaS)

Platform as a Service enables consumers to deploy their applications and services on the provided platform, consuming the underlying runtime, middleware, and services. The cloud provider provides the services from infrastructure to runtime. The consumers cannot provision virtual machines as they cannot access and control them. Instead, they can only control and manage their applications. This is a comparatively faster method of development and deployment because now the consumer can focus on application development and deployment. Examples of Platform as a Service include Azure Automation, Azure SQL, and Azure App Services.

Software as a Service (SaaS)

Software as a Service provides complete control of the service to the cloud provider. The cloud provider provisions, configures, and manages everything from infrastructure to the application. It includes the provisioning of infrastructure, deployment and configuration of applications, and provides application access to the consumer. The consumer does not control and manage the application, and can use and configure only parts of the application. They control only their data and configuration. Generally, multi-tenant applications used by multiple consumers, such as Office 365 and Visual Studio Team Services, are examples of SaaS.

Advantages of using cloud computing

There are multiple distinct advantages of using cloud technologies. The major ones among them are as follows:

Cost effective: Cloud computing helps organizations to reduce the cost of storage, networks, and physical infrastructure. It also prevents them from having to buy expensive software licenses. The operational cost of managing these infrastructures also reduces due to lesser effort and manpower requirements.
Unlimited capacity: Cloud provides unlimited resources to the consumer. This ensures applications will never get throttled due to limited resource availability.
Elasticity: Cloud computing provides the notion of unlimited capacity, and applications deployed on it can scale up or down on an as-needed basis. When demand for the application increases, cloud can be configured to scale up the infrastructure and application by adding additional resources. At the same time, it can scale down unnecessary resources during periods of low demand.
Pay as you go: Using cloud eliminates capital expenditure and organizations pay only for what they use, thereby providing maximum return on investment. Organizations do not need to build additional infrastructure to host their application for times of peak demand.
Faster and better: Cloud provides ready-to-use applications and faster provisioning and deployment of environments.
Moreover, organizations get better-managed services from their cloud provider with higher service-level agreements.

We will use Azure as our preferred cloud computing provider for the purpose of demonstrating samples and examples. However, you can use any cloud provider that provides complete end-to-end services for DevOps. We will use multiple features and services provided by Azure across IaaS and PaaS. We will consume Operational Insights and Application Insights to monitor our environment and application, which will help capture relevant telemetry for auditing purposes. We will provision Azure virtual machines running Windows and Docker Containers as a hosting platform, use Windows Server 2016 as the target operating system for our applications on cloud, and provision environments through Azure Resource Manager (ARM) templates. We will also use Desired State Configuration and PowerShell as our configuration management platform and tool.

We will use Visual Studio Team Services (VSTS), a suite of PaaS services on cloud provided by Microsoft, to set up and implement our end-to-end DevOps practices. Microsoft also provides the same services as part of Team Foundation Services (TFS) as an on-premises solution. Technologies like Pester, DSC, and PowerShell can be deployed and configured to run on any platform. These will help both in the validation of our environment and in the configuration of both application and environment, as part of our Configuration management process.

Windows Server 2016 is a breakthrough operating system from Microsoft, also referred to as a Cloud Operating System. We will look into Windows Server 2016 in the following section.

Windows Server 2016

Windows Server 2016 has come a long way: all the way from Windows NT to Windows 2000 and 2003, then Windows 2008 (R2) and 2012 (R2), and now Windows Server 2016. Windows NT was the first popular Windows server among enterprises. However, the true enterprise servers were Windows 2000 and Windows 2003. The popularity of Windows Server 2003 was unprecedented and it was widely adopted. With Windows Server 2008 and 2008 R2, the idea of the data center took priority and enterprises with their own data center adopted it. Even the Windows Server 2008 series was quite popular among enterprises. In 2010, the Microsoft cloud, Azure, was launched. The first steps towards a cloud operating system were Windows Server 2012 and 2012 R2. They had the blueprints and technology to be seamlessly provisioned on Azure. Now, when Azure and cloud are gaining enormous popularity, Windows Server 2016 is released as a true cloud operating system. The evolution of Windows Server is shown in Figure 2:

Figure 2: Windows Server evolution

Windows Server 2016 is referred to as a cloud operating system. It is built with cloud in mind. It is also referred to as the first operating system that enables DevOps seamlessly by providing relevant tools and technologies. It makes implementing DevOps simpler and easier through its productivity tools. Let us look briefly into these tools and technologies.

Multiple choices for Application platform

Windows Server 2016 comes with many choices of application platform. It provides the following:

Windows Server 2016
Nano Server
Windows and Docker Containers
Hyper-V Containers
Nested virtual machines

Windows Server as a hosting platform

Windows Server 2016 can be used in the ways it has always been used, such as hosting applications and providing server functionalities.
It provides the services necessary to make applications secure, scalable, and highly available. It also provides virtualization, directory services, certificate services, web server, databases, and more. These services can be consumed by the enterprise's services and applications.

Nano Server

Windows Server provides a new option to host applications and services. This is a new variety of lightweight, scaled-down Windows server containing only the kernel and drivers necessary to run as an operating system. They are also known as headless servers. They do not have any graphical user interface, and the only way to interact with and manage them is through remote PowerShell. Out of the box, they do not contain any service or feature; the services need to be added to Nano Servers explicitly before use. So far, they are the most secure servers from Microsoft. They are very lightweight, and their resource requirements and consumption are less than 80% of those of a normal Windows server. The number of services running, the number of ports open, the number of processes running, and the amount of memory and storage required are also less than 80% compared to a normal Windows server. Even though Nano Server out of the box has just the kernel and drivers, its capabilities can be enhanced by adding features and deploying any Windows application on it.

Windows Containers and Docker

Containers are one of the most revolutionary features added to Windows Server 2016 after Nano Server. With the popularity and adoption of Docker Containers, which primarily run on Linux, Microsoft decided to introduce container services to Windows Server 2016. Containers are operating system virtualization. This means that multiple containers can be deployed on the same operating system and each one of them will share the host operating system kernel. It is the next level of virtualization after server virtualization (virtual machines). Containers generate the notion of complete operating system isolation and independence, even though they use the same host operating system underneath. This is possible through the use of namespace isolation and image layering.

Containers are created from images. Images are immutable and cannot be modified. Each image has a base operating system and a series of instructions that are executed against it. Each instruction creates a new image on top of the previous image and contains only the modification. Finally, a writable image is stacked on top of these images. These images are combined into a single image, which can then be used for provisioning containers. A container made up of multiple image layers is shown in Figure 3:

Figure 3: Containers made up of multiple image layers

Namespace isolation helps provide containers with pristine new environments. The containers cannot see the host resources and the host cannot view the container resources. For the application within the container, a complete new installation of the operating system is available. The containers share the host's memory, CPU, and storage. Containers offer operating system virtualization, which means the containers can host only those operating systems supported by the host operating system. There cannot be a Windows container running on a Linux host, and a Linux container cannot run on a Windows host operating system.

Hyper-V containers

Another type of container technology Windows Server 2016 provides is Hyper-V Containers. These containers are similar to Windows Containers.
They are managed through the same Docker client and extend the same Docker APIs. However, these containers contain their own scaled-down operating system kernel. They do not share the host operating system but have their own dedicated operating system, and their own dedicated memory and CPU assigned in exactly the same way virtual machines are assigned resources. Hyper-V Containers bring in a higher level of isolation of containers from the host. While Windows Containers run in full trust on the host operating system, Hyper-V Containers do not have full trust from the host's perspective. It is this isolation that differentiates Hyper-V Containers from Windows Containers. Hyper-V Containers are ideal for hosting applications that might harm the host server, affecting every other container and service on it. Scenarios where users can bring in and execute their own code are examples of such applications. Hyper-V Containers provide adequate isolation and security to ensure that applications cannot access the host resources and change them.

Nested virtual machines

Another breakthrough innovation of Windows Server 2016 is that virtual machines can now host virtual machines. We can deploy multiple virtual machines containing all tiers of an application within a single virtual machine. This is made possible through software-defined networks and storage.

Enabling Microservices

Nano Servers and Containers help provide advanced lightweight deployment options through which we can now decompose the entire application into multiple smaller, independent services, each with their own scalability and high availability configuration, and deploy them independently of each other. Microservices help in making the entire DevOps lifecycle agile. With microservices, changes to one service do not demand that every other microservice undergo every test validation. Only the changed service needs to be tested rigorously, along with its integration with other services. Compare this to a monolithic application, where even a single small change will result in having to test the entire application. Microservices require smaller teams for their development, testing of a service can happen independently of other services, and deployment can be done for each service in isolation. Continuous Integration, Continuous Deployment, and Continuous Delivery for each service can be executed in isolation rather than compiling, testing, and deploying the whole application every time there is a change.

Reduced maintenance

Because of their intrinsic nature, Windows Nano Servers and Containers are lightweight and quick to provision. They help to quickly provision and configure environments, thereby reducing the overall time needed for Continuous Integration and deployment. Also, these resources can be provisioned on Azure on demand without waiting for hours. Because of their small footprint in terms of size, storage, memory, and features, they need less maintenance. These servers are patched less often, with fewer fixes, they are secure by default, and have less chance of failing applications, which makes them ideal for operations. The operations team needs to spend fewer hours maintaining these servers compared to normal servers. This reduces the overall cost for the organization and helps DevOps ensure a high-quality delivery.

Configuration management tools

Windows Server 2016 comes with Windows Management Framework 5.0 installed by default.
Desired State Configuration (DSC) is the new configuration management platform available out of the box in Windows Server 2016. It has a rich, mature set of features that enables configuration management for both environments and applications. With DSC, the desired state and configuration of environments are authored as part of Infrastructure as Code and executed on every server on a scheduled basis. They help check the current state of servers against the documented desired state and bring them back to the desired state. DSC is available as part of PowerShell, and PowerShell helps with authoring these configuration documents.

Windows Server 2016 provides a PowerShell unit testing framework known as Pester. Historically, unit testing for infrastructure environments was always missing as a feature. Pester enables the testing of infrastructure provisioned either manually or through Infrastructure as Code using DSC configurations or ARM templates. This helps with the operational validation of the entire environment, bringing in a high level of cadence and confidence in Continuous Integration and deployment processes.

Deployment and packaging

Package management and the deployment of utilities and tools through automation is a new concept in the Windows world. Package management has been ubiquitous in the Linux world for a long time. Package management helps search, save, install, deploy, upgrade, and remove software packages from multiple sources and repositories on demand. There are public repositories such as Chocolatey and PSGallery available for storing readily deployable packages. Tools such as NuGet can connect to these repositories and help with package management. They also help with the versioning of packages. Applications that rely on a specific package version can download it on an as-needed basis. Package management helps with the building of environments and application deployment. Package deployment is much easier and faster with this out-of-the-box Windows feature.

Summary

We have covered a lot of ground in this article. DevOps concepts were discussed, mapping technology to those concepts. In this we saw the impetus DevOps can get from technology. We looked at cloud computing and the different services provided by cloud providers. From there, we went on to look at the benefits Windows Server 2016 brings to DevOps practices and how Windows Server 2016 makes DevOps easier and faster with its native tools and features.

Resources for Article:

Further resources on this subject:

Introducing Dynamics CRM [article]
Features of Dynamics GP [article]
Creating Your First Plug-in [article]

Bringing DevOps to Network Operations

Packt
14 Oct 2016
37 min read
In this article by Steven Armstrong, author of the book DevOps for Networking, we will focus on people and process with regards to DevOps. The DevOps initiative was initially about breaking down silos between development and operations teams and changing companies' operational models. It will highlight methods to unblock IT staff and allow them to work in a more productive fashion, but these mindsets have since been extended to quality assurance testing, security, and now network operations. This article will primarily focus on the evolving role of the network engineer, which is changing like the operations engineer before them, and the need for network engineers to learn new skills that will allow them to remain as valuable as they are today as the industry moves towards a completely programmatically controlled operational model. (For more resources related to this topic, see here.)

This article will look at two differing roles, that of the CTO / senior manager and the engineer, discussing at length some of the initiatives that can be utilized to facilitate the desired cultural changes that are required to create a successful DevOps transformation for a whole organization, or even just to allow a single department to improve their internal processes by automating everything they do.

In this article, the following topics will be covered:

Initiating a change in behavior
Top-down DevOps initiatives for networking teams
Bottom-up DevOps initiatives for networking teams

Initiating a change in behavior

The networking OSI model contains seven layers, but it is widely suggested that the OSI model has an additional 8th layer, named the user layer, which governs how end users integrate and interact with the network. People are undoubtedly a harder beast to master and manage than technology, so there is no one-size-fits-all solution to the vast amount of people issues that exist. The seven layers of the OSI model are shown in the following image:

Initiating cultural change and changes in behavior is the most difficult task an organization will face, and it won't occur overnight. To change behavior there must first be obvious business benefits. It is important to first outline the benefits that these cultural changes will bring to an organization, which will enable managers or change agents to make business justifications to implement the required changes.

Cultural change and dealing with people and processes is notoriously hard, so divorcing the tools and dealing with people and processes is paramount to the success of any DevOps initiative or project. Cultural change is not something that simply happens; it needs to be planned and treated as a company initiative. In a recent study by Gartner, it was shown that selecting the wrong tooling was not the main reason that cloud projects failed; instead, the top reason was a failure to change the operational model.

Reasons to implement DevOps

When implementing DevOps, some myths are often perpetuated, such as DevOps only works for start-ups, it won't bring any value to a particular team, or that it is simply a buzzword and a fad. The quantifiable benefits of DevOps initiatives are undeniable when done correctly.
Some of these benefits include improvements to the following:

The velocity of change
Mean time to resolve
Improved uptime
Increased number of deployments
Cross-skilling between teams
The removal of the bus factor of one

Any team in the IT industry would benefit from these improvements, so really teams can't afford not to adopt DevOps, as it will undoubtedly improve their business functions. Implementing a DevOps initiative promotes repeatability, measurement, and automation. Implementing automation naturally improves the velocity of change, the number of deployments a team can do in any given day, and time to market. Automation of the deployment process allows teams to push fixes through to production quickly, as well as allowing an organization to push new products and features to market.

A byproduct of automation is that the mean time to resolve will also become quicker for infrastructure issues. If infrastructure or network changes are automated, they can be applied much more efficiently than if they were carried out manually. Manual changes depend on the velocity of the engineer implementing the change, rather than an automated script that can be measured more accurately.

Implementing DevOps also means measuring and monitoring efficiently, so having effective monitoring is crucial on all parts of infrastructure and networking, as it means the pace at which root cause analysis can be carried out improves. Having effective monitoring helps to facilitate the process of mean time to resolve, so when a production issue occurs, the source of the issue can be found more quickly than numerous engineers logging onto consoles and servers trying to debug issues. Instead, a well-implemented monitoring system can provide a quick notification to localize the source of the issue, silencing any resultant alarms that stem from the initial root cause, allowing the issue to be highlighted and fixed efficiently. The monitoring then hands over to the repeatable automation, which can push out the localized fix to production.

This process provides a highly accurate feedback loop, where processes will improve daily. If alerts are missed, they will ideally be built into the monitoring system over time as part of the incident post-mortem. Effective monitoring and automation result in quicker mean time to resolve, which leads to happier customers and results in improved uptime of products.

Utilizing automation and effective monitoring also means that all members of a team have access to see how processes work and how fixes and new features are pushed out. This will mean less of a reliance on key individuals, removing the bus factor of one, where a key engineer needs to do the majority of tasks in the team as they are the most highly skilled individual and have all of the system knowledge stored in their head. Using a DevOps model means that the very highly skilled engineer can instead use their talents to help cross-skill other team members and create effective monitoring that can help any team member carry out the root cause analysis they normally do manually. This builds the talented engineer's deep knowledge into the monitoring system, so the monitoring system, as opposed to the talented engineer, becomes the go-to point of reference when an issue first occurs, or ideally the monitoring system becomes the source of truth that alerts on events to prevent customer-facing issues.
To improve cross-skilling, the talented engineer should ideally help write automation too, so they are not the only member of the team that can carry out specific tasks.

Reasons to implement DevOps for networking

So how do some of those DevOps benefits apply to traditional networking teams? Some of the common complaints with siloed networking teams today are the following:

Reactive
Slow, often using ticketing systems to collaborate
Manual processes carried out using admin terminals
Lack of preproduction testing
Manual mistakes leading to network outages
Constantly in firefighting mode
Lack of automation in daily processes

Network teams, like infrastructure teams before them, are essentially used to working in siloed teams, interacting with other teams in large organizations via ticketing systems or using suboptimal processes. This is not a streamlined or optimized way of working, which led to the DevOps initiative that sought to break down barriers between development and operations staff, but its remit has since widened. Networking does not seem to have been included in this DevOps movement yet, but software delivery can only operate as fast as the slowest component. The slowest component will eventually become the bottleneck or blocker of the entire delivery process. That slowest component often becomes the star engineer in a siloed team who can't process enough tickets in a day manually to keep up with demand, thus becoming the bus factor of one. If that engineer goes off sick, then work is blocked; the company becomes too reliant on them and cannot function efficiently without them. If a team is not operating in the same way as the rest of the business, then all other departments will be slowed down, as the siloed department is not agile enough.

Put simply, the reason networking teams exist in most companies is to provide a service to development teams. Development teams require networking to be deployed so they can deliver applications to production and the business can make money from those products. So networking changes to ACL policies, load-balancing rules, and the provisioning of new subnets for new applications can no longer be deemed acceptable if they take days, weeks, or even months. Networking has a direct impact on the velocity of change, mean time to resolve, uptime, as well as the number of deployments, which are four of the key performance indicators of a successful DevOps initiative. So networking needs to be included in a DevOps model by companies, otherwise all of these quantifiable benefits will become constrained.

Given the rapid way AWS, Microsoft Azure, OpenStack, and Software-defined Networking (SDN) can be used to provision network functions in the private and public cloud, it is no longer acceptable for network teams not to adapt their operational processes and learn new skills. But the caveat is that the evolution of networking has been quick, and they need the support and time to do this.

If a cloud solution is implemented and the operational model does not change, then no real quantifiable benefits will be felt by the organization. Cloud projects traditionally do not fail because of technology; cloud projects fail because of the incumbent operational models that hinder them from being a success.
There is zero value to be had from building a brand new OpenStack private cloud, with its open set of extensible APIs to manage compute, networking, and storage, if a company doesn't change its operational model and allow end users to use those APIs to self-service their requests. If network engineers are still using the GUI to point and click and cut and paste, then this doesn't bring any real business value, as the network engineer that cuts and pastes the slowest is the bottleneck. The company may as well stick with their current processes, as implementing a private cloud solution with manual processes will not result in a speeding up of time to market or mean time to recover from failure.

However, cloud should not be used as an excuse to deride your internal network staff, as incumbent operational models in companies are typically not designed or set up by current staff; they are normally inherited. Moving to public cloud doesn't solve the problem of the operational agility of a company's network team; it is a quick fix and bandage that disguises the deeper-rooted cultural challenges that exist. However, smarter ways of working allied with the use of automation, measurement, and monitoring can help network teams refine their internal processes and facilitate the developers and operations staff that they work with daily.

Cultural change can be initiated in two different ways: grass-roots bottom-up initiatives coming from engineers, or top-down management initiatives.

Top-down DevOps initiatives for networking teams

Top-down DevOps initiatives are when a CTO, directors, or senior managers have buy in from the company to make changes to the operational model. These changes are required as the incumbent operational model is deemed suboptimal and not set up to deliver software at the speed of competitors, which inherently delays new products or crucial fixes from being delivered to market.

When doing DevOps transformations from a top-down management level, it is imperative that some groundwork is done with the teams involved; if large changes are going to be made to the operational model, it can often cause unrest or stress to staff on the ground. When implementing operational changes, upper management need to have the buy in of the people on the ground, as they will operate within that model daily. Having teams buy in is a very important aspect; otherwise, the company will end up with an unhappy workforce, which will mean the best staff will ultimately leave. It is very important that upper management engage staff when implementing new operational processes and deal with any concerns transparently from the outset, as opposed to going for an offsite management meeting and coming back with an enforced plan, which is all too common a theme.

Management should survey the teams to understand how they operate on a daily basis, what they like about the current processes, and where their frustrations lie. The biggest impediment to changing an operational model is misunderstanding the current operational model. All initiatives should ideally be led and not enforced. So let's focus on some specific top-down initiatives that could be used to help.

Analyzing successful teams

One approach would be for management to look at other teams within the organization whose processes are working well and that are delivering in an incremental agile fashion; if no other team in the organization is working in this fashion, then reach out to other companies.
Ask if it would be possible to go and look at the way another company operates for a day. Most companies will happily use successful projects as reference cases to public audiences at conferences or meet-ups, as they enjoy showing off their achievements, so it shouldn't be difficult to seek out companies that have overcome similar cultural challenges. It is good to attend some DevOps conferences and look at who is speaking, so approach the speakers and they will undoubtedly be happy to help.

Management teams should initially book a meeting with the high-performing team and do a question and answer session focusing on the following points; if it is an external vendor, then an introduction phone call can suffice. Some important questions to ask in the initial meeting are the following:

Which processes normally work well?
What tools do they actually use on a daily basis?
How is work assigned?
How do they track work?
What is the team structure?
How do other teams make requests to the team?
How is work prioritized?
How do they deal with interruptions?
How are meetings structured?

It is important not to reinvent the wheel: if a team in the organization already has a proven template that works well, then that team could also be invaluable in helping facilitate cultural change within the networks team. It will be slightly more challenging if focus is put on an external team as the evangelist, as it opens up excuses such as it being easier for them because of x, y, and z in their company.

A good strategy, when utilizing a local team in the organization as the evangelist, is to embed a network engineer in that team for a few weeks and have them observe and give feedback on how the other teams operate and document their findings. This is imperative, so the network engineers on the ground understand the processes. Flexibility is also important, as only some of the successful team's processes may be applicable to a network team, so don't expect two teams to work identically. The sum of parts and the individuals in the team really do mean that every team is different, so focus on goals rather than the implementation of strict process. If teams achieve the same outcomes in slightly different ways, then as long as work can be tracked and is visible to management, it shouldn't be an issue, as long as it can be easily reported on.

Make sure pace is prioritized, and select specific change agents to make sure teams are comfortable with new processes. Empower change agents in the network team to choose how they want to work by engaging the team in creating new processes, and also put them in charge of eventual tool selection. However, before selecting any tooling, it is important to start with process and agree on the new operational model, to prevent tooling driving processes; this is a common mistake in IT.

Mapping out activity diagrams

A good piece of advice is to use an activity diagram as a visual aid to understand how a team's interactions work and where they can be improved. A typical development activity diagram, with manual hand-off to a quality assurance team, is shown here:

Utilizing activity diagrams as a visual aid is important, as it highlights suboptimal business process flows. In the example, we see a development team's activity diagram. This process is suboptimal as it doesn't include the quality assurance team in the Test locally and Peer review phases.
Instead, it has a formalized QA hand-off phase very late in the development cycle, which is a suboptimal way of working as it promotes a development and QA silo, which is a DevOps anti-pattern. A better approach would be to have QA engineers work on creating test tasks and automated tests while the development team works on coding tasks. This would allow the development Peer review process to have QA engineers review and test developer code earlier in the development lifecycle, and make sure that every piece of code written has appropriate test coverage before the code is checked in.

Another shortcoming in the process is that it does not cater for software bugs found by the quality assurance team or in production by customers, so mapping these streams of work into the activity diagram would also be useful to show all potential feedback loops. If a feedback loop is missed in the overall activity diagram, then it can cause a breakdown in the process flow, so it is important to capture all permutations in the overarching flow that could occur before mapping tooling to facilitate the process. Each team should look at ways of shortening interactions to aid mean time to resolve and improve the velocity of change at which work can flow through the overall process.

Management should dedicate some time in their schedule with the development, infrastructure, networking, and test teams and map out what they believe the team processes to be in their individual teams. Keep it high level; this should represent a simple activity swim-lane utilizing the start point where they accept work and the process the team goes through to deliver that work. Once each team has mapped out the initial approach, they should focus on optimizing it, removing the parts of the process they dislike, and discussing ways the process could be improved as a team. It may take many iterations before this is mapped out effectively, so don't rush this process; it should be used as a learning experience for each team.

The finalized activity diagram will normally include management and technical functions combined in an optimized way to show the overall process flow. Try not to bother using Business Process Management (BPM) software at this stage; a simple whiteboard will suffice to keep it simple and informal. It is good practice to utilize two layers of an activity diagram, so the first layer can be a box that simply says Peer review, which then references a nested activity diagram outlining what the team's peer review process is. Both need to be refined, but the nested tier of business processes should be dictated by the individual teams, as these are specific to their needs, so it's important to leave teams the flexibility they need at this level.

It is important to split out the two tiers; otherwise, the overall top layer of the activity diagram will be too complex to extract any real value from, so try and minimize the complexity at the top layer, as this will need to be integrated with other teams' processes. The activity diagram doesn't need to contain team-specific details, such as how an internal team's Peer review process operates, as this will always be subjective to that team; this should be included but will be a nested layer activity that won't be shared. Another team should be able to look at a team's top layer activity diagram and understand the process without explanation.
It can sometimes be useful to first map out a high-performing team's top layer activity diagram to show how an integrated, joined-up business process should look. This will help teams that struggle a bit more with these concepts and allow them to use that team's activity diagram as a guide. This can be used as a point of reference to show how these teams have solved their cross-team interaction issues and facilitated one or more teams interacting without friction. The main aim of this exercise is to join up business processes, so they are not siloed between teams, and so the planning and execution of work is as integrated as possible for joined-up initiatives.

Once each team has completed their individual activity diagram and optimized it to the way the team wants, the second phase of the process can begin. This involves layering each team's top layer activity diagrams together to create a joined-up process. Teams should use this layering exercise as an excuse to talk about suboptimal processes and how the overall business process should look end to end. Utilize this session to remove perceived bottlenecks between teams, completely ignoring existing tools and the constraints of current tools; this whole exercise should be focused on process, not tooling.

A good example of a suboptimal process flow that is constrained by tooling would be a stage on a top layer activity diagram that says raise ticket with ticketing system. This should be broken down so work is people focused: what does the person requesting the change actually require? Developers' day job involves writing code and building great features and products, so if a new feature needs a network change, then networking should be treated as part of that feature change. So the time taken for the network changes needs to be catered for as part of the planning and estimation for that feature, rather than a ticketed request that will hinder the velocity of change when it is done reactively as an afterthought.

This is normally a very successful exercise when engagement is good. It is good to utilize a senior engineer and manager from each team in the combined activity diagram layering exercise, with more junior engineers involved in each team-specific activity diagram exercise.

Changing the network team's operational model

The network team's operational model at the end of the activity diagram exercise should ideally be fully integrated with the rest of the business. Once the new operational model has been agreed with all teams, it is time to implement it. It is important to note that because the teams on the ground created the operational model and joined-up activity diagram, it should be signed off by all parties as the new business process. This removes the issue of an enforced model from management, as those using it have been involved in creating it.

The operational model can be iterated and improved over time, but interactions shouldn't change greatly, although new interaction points may be added that were initially missed. A master copy of the business process can then be stored and updated, so anyone new joining the company knows exactly how to interact with other teams. Short term, it may seem the new approach is slowing down development estimates, as automation is not in place for network functions, so estimation for developer features becomes higher when they require network changes.
This is often just a truer reflection of reality, as estimations didn't take into account network changes, which then became blockers as they were tickets; but once reported, it can be optimized and improved over time. Once the overall activity diagram has been merged together and agreed with all the teams, it is important to remember that if the processes are properly optimized, there should not be pages and pages of high-level operations on the diagram. If the interactions are too verbose, it will take any change hours and hours to traverse each of the steps on the activity diagram.

The activity diagram below shows a joined-up business process, where work is defined from a single roadmap producing user stories for all teams. New user stories, which are units of work, are then estimated out by cross-functional teams, including developers, infrastructure, quality assurance, and network engineers. Each team will review the user story, working out which cross-functional tasks are involved to deliver the feature. The user story then becomes part of the sprint, with the cross-functional teams working on the user story together, making sure that it has everything it needs to work prior to the check-in. After Peer review, the feature or change is then handed off to the automated processes to deliver the code, infrastructure, and network changes to production.

The checked-in feature then flows through unit testing, quality assurance, integration, and performance testing quality gates, which will include any new tests that were written by the quality assurance team before check-in. Once every stage is passed, the automation is invoked by a button press to push the changes to production. Each environment has the same network changes applied, so network changes are made first on test environments before production. This relies on treating networking as code, meaning automated network processes need to be created so the network team can be as agile as the developers.

Once the agreed operational model is mapped out, only then should the DevOps transformation begin. This will involve selecting best of breed tools at every stage to deliver the desired outcome, with the focus on the following benefits:

The velocity of change
Mean time to resolve
Improved uptime
Increased number of deployments
Cross-skilling between teams
The removal of the bus factor of one

All business processes will be different for each company, so it is important to engage each department and have the buy in of all managers to make this activity a success.

Changing the network team's behavior

Once a new operational model has been established in the business, it is important to help prevent the network team from becoming the bottleneck in a DevOps-focused continuous delivery model. Traditionally, network engineers will be used to operating command lines and logging into admin consoles on network devices to make changes. Infrastructure engineers adjusted to automation as they already had scripting experience in Bash and PowerShell, coupled with a firm grounding in Linux or Windows operating systems, so transitioning to configuration management tooling was not a huge step. However, it may be more difficult to persuade network engineers to make that same transition initially.
Moving network engineers towards coding against APIs and adopting configuration management tools may initially appear daunting, as it is a higher barrier to entry, but having an experienced automation engineer on hand can help network engineers make this transition. It is important to be patient, so try to change this behavior gradually by setting some automation initiatives for the network team in their objectives. This will encourage the correct behavior, so try to incentivize it too. It may be useful to start off automation initiatives by offering training or purchasing particular coding books for teams.

It may also be useful to hold an initial automation hack day; this will give network engineers a day away from their day jobs and time to attempt to automate a small process which is repeated every day by network engineers. If possible, make this a mandatory exercise, and make other teams available to cover for the network team, so they aren't distracted. This is a good way of seeing which members of the network team may be open to evangelizing DevOps and automation. If any particular individual stands out, then work with them to help push automation initiatives forward to the rest of the team by making them the champion for automation.

Establishing an internal DevOps meet-up where teams present back their automation achievements is also a good way of promoting automation in network teams and keeping the momentum going. Encourage each team across the business to present back interesting things they have achieved each quarter, and incentivize this too by allowing each team time off from their day job to attend if they participate. This leads to a sense of community and illustrates to teams that they are part of a bigger movement that is bringing real cost benefits to the business. This also helps to focus teams on the common goal of making the company better, and breaks down barriers between teams in the process.

One approach that should be avoided at all costs is having other teams write all the network automation for networking teams. Ideally, it should be the networking team that evolves and adopts automation, so giving the network team a sense of ownership over the network automation is very important. This, though, requires full buy in from networking teams and the discipline not to revert back to manual tasks at any point, even if issues occur.

To ease the transition, offer to put an automation engineer into the network team from infrastructure or development, but this should only be a temporary measure. It is important to select an automation engineer that is respected by the network team and knowledgeable in networking, as no one should ever attempt to automate something that they cannot operate by hand, so having someone well-versed in networking to help with network automation is crucial, as they will be training the network team and so have to be respected. If an automation engineer is assigned to the network team and isn't knowledgeable or respected, then the initiative will likely fail, so choose wisely.

It is important to accept at an early stage that this transition towards DevOps and automation may not be for everyone, so not every network engineer will be able to make the journey. It is all about the network team seizing the opportunity and showing initiative and a willingness to pick up and learn new skills. It is important to stamp out disruptive behavior early on, which may be a bad influence on the team.
It is fine for people to have a cynical skepticism at first, but not attempting to change or build new skills shouldn't be tolerated, as it will disrupt the team dynamic. This should be monitored so it doesn't cause automation initiatives to fail or stall just because individuals are proving to be blockers or being disruptive.

It is important to note that every organization has its own unique culture, and a company's rate of change will be subject to the cultural uptake of the new processes and ways of working. When initiating cultural change, change agents are necessary and can come from internal IT staff or external sources, depending on the aptitude and appetite of the staff to change. Every change project is different, but it is important that it has the correct individuals involved to make it a success, along with the correct management sponsorship and backing.

Bottom-up DevOps initiatives for networking teams

Bottom-up DevOps initiatives are when an engineer, team leads, or lower management don't necessarily have buy in from the company to make changes to the operational model. However, they realize that although changes can't be made to the overall incumbent operational model, they can try to facilitate positive changes using DevOps philosophies within their team that can help the team perform better and make their productivity more efficient.

When implementing DevOps initiatives from a bottom-up initiative, it is much more difficult and challenging at times, as some individuals or teams may not be willing to change the way they work and operate, as they don't have to. But it is important not to become disheartened and to do the best possible job for the business. It is still possible to eventually convince upper management to implement a DevOps initiative by using grass-roots initiatives to prove the process brings real business benefits.

Evangelizing DevOps in the networking team

It is important to try and stay positive at all times. Working on a bottom-up initiative can be tiring, but it is important to roll with the punches and not take things too personally. Always remain positive and try to focus on evangelizing the benefits associated with DevOps processes and positive behavior first within your own team. The first challenge is to convince your own team of the merits of adopting a DevOps approach before even attempting to convince other teams in the business.

A good way of doing this is by showing the benefits that a DevOps approach has brought to other companies, such as Google, Facebook, and Etsy, focusing on what they have done in the networking space. A pushback from individuals may be the fact that these companies are unicorns and DevOps has only worked for these companies for that reason, so be prepared to be challenged. Seek out initiatives that have been implemented by these companies that the networking team could adopt and that are actually applicable to your company.

In order to facilitate an environment of change, work out what your colleagues' drivers are; what motivates them?
Tailor the sell to individual motivations; the sell to an engineer and the sell to a manager may be completely different. An engineer on the ground may be motivated by the following:

- Doing more interesting work
- Developing skills and experience
- Helping automate menial daily tasks
- Learning sought-after configuration management skills
- Understanding the development lifecycle
- Learning to code

A manager, on the other hand, will probably be more motivated by offering to measure KPIs that make their team look better, such as:

- Time taken to implement changes
- Mean time to resolve failures
- Improved uptime of the network

Another way to promote engagement is to invite your networking team to DevOps meet-ups arranged by forward-thinking networking vendors. They may not yet be aware that most networking and load-balancing vendors are now actively promoting automation and DevOps. Some of the new innovations in this space may be enough to change their opinions and make them interested in picking up some of the new approaches so that they can keep pace with the industry.

Seeking sponsorship from a respected manager or engineer

After making the network team aware of the DevOps initiatives, it is important to take this to the next stage. Seek out a respected manager or senior engineer in the networking team who may be open to trying out DevOps and automation. It is important to sell this person the dream: state how you are passionate about implementing some changes to help the team and that you are keen to utilize proven best practices that have worked well for other successful companies. It is important to be humble; try not to rant or spew generalized DevOps jargon at your peers, which can be very off-putting. Always make reasonable arguments and justify them, while avoiding sweeping statements or generalizations. Try not to appear to be undermining the manager or senior engineer; instead, ask for their help to achieve the goal by seeking their approval to back the initiative or idea. A charm offensive may be necessary at this stage to convince the manager or engineer that it's a good idea, but gradually building up to the request can help, as it may appear insincere if the request comes out of the blue. Potentially analyze the situation over lunch or drinks and gauge whether it is something they would be interested in; there is little point trying to convince people who are stubborn, as they probably will not budge unless the initiative comes from above.

Once you have found the courage to broach the subject, it is time to put forward suggestions on how the team could work differently, with the help of a mediator who could take the form of a project manager. Ask for the opportunity to try this out on a small scale, offer to lead the initiative, and ask for their support and backing. It is likely that the manager or senior engineer will be impressed at your initiative and allow you to run with the idea, but they may choose the initiative you implement. Never suggest anything you can't achieve; you may only get one opportunity at this, so it is important to make a good impression. Focus on a small task to start with, typically a pain point, and attempt to automate it. Anyone can write an automation script, but try to make the automation process easy to use; find what the team likes in the current process and incorporate aspects of it.
For example, if they often see the output from a command line displayed in a particular way, write the automation script so that it still displays the same output, so the process is not completely alien to them. Try not to hardcode values into scripts; extract them into a configuration file instead to make the automation more flexible, so it can be reused in different ways. By showing engineers the flexibility of automation, you will encourage them to use it more; show others in the team how you wrote the automation and ways they could adapt it to other activities. If this is done wisely, automation will be adopted by enthusiastic members of the team, and you will gain enough momentum to impress the sponsor and take it forward onto more complex tasks.

Automate a complex problem with the networking team

The next stage of the process, after building confidence by automating small repeatable tasks, is to take on a more complex problem; this can be used to cement the use of automation within the networking team going forward. This part of the process is about empowering others to take charge and lead automation initiatives themselves in the future, so it will be more time-consuming. It is imperative that the engineers who are more difficult to work with, and who may have been deliberately avoided while building out the initial automation, are involved this time. These engineers have more than likely not been involved in automation at all at this stage. This probably means the most certified person in the team, the alpha of the team; nobody said it was going to be easy, but it will be worth it in the long run to convince the biggest skeptics of the merits of DevOps and automation. At this stage, automation within the network team should have enough credibility and momentum to broach the subject, citing successful use cases. It is easier to involve all difficult individuals in the process rather than presenting ideas back to them at the end of it. Difficult senior engineers or managers are less likely to shoot down your ideas in front of your peers if they are involved in the creation of the process and have contributed in some way.

Try to be respectful, even if you do not agree with their viewpoints, but don't back down if you believe that you are correct, and don't give up. Make arguments fact-based and non-emotive, write down pros and cons, and document any concerns without ignoring them; you have to be willing to compromise, but not to the point of devaluing the solution. There may be genuine risks involved that need to be addressed, so valid points should not be glossed over or ignored. Where possible, seek backup from your sponsor if you are not sure on some of the points or feel individuals are being unreasonable. When implementing the complex automation task, work as a team, not as an individual; this is a learning experience for others as well as yourself. Try to teach the network team a configuration management tool; they may just be scared to try out new things, so go with a gentle approach, potentially stopping at times to try out some online tutorials to familiarize everyone with the tool and to try out various approaches that solve problems in the easiest way possible. Show the network engineers how easy it is to use configuration management tools and what the benefits are. Don't use complicated configuration management tools, as that may put them off. The majority of network engineers can't currently code, something that will potentially change in the coming years.
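As a gentle starting point, here is a minimal Bash sketch of the kind of small, config-driven wrapper described earlier in this section. The devices.conf file, the device names, and the ping check are purely illustrative placeholders rather than any particular vendor's tooling; the point is simply that the values live in a separate configuration file and the output stays in a familiar column format.

```bash
#!/usr/bin/env bash
# devices.conf is a hypothetical config file, one "hostname ip" pair per line, for example:
#   edge-router-01 10.0.0.1
#   edge-router-02 10.0.0.2
set -euo pipefail

CONFIG_FILE="${1:-devices.conf}"   # the device list is passed in, not hard-coded

if [[ ! -f "$CONFIG_FILE" ]]; then
    echo "Config file '$CONFIG_FILE' not found" >&2
    exit 1
fi

while read -r hostname ip; do
    # Skip blank lines and comments so the config file stays readable
    [[ -z "$hostname" || "$hostname" == \#* ]] && continue

    # Placeholder for the real check (SSH, SNMP, a vendor CLI, and so on);
    # a simple ping keeps the sketch runnable anywhere
    if ping -c 1 -W 2 "$ip" > /dev/null 2>&1; then
        status="reachable"
    else
        status="UNREACHABLE"
    fi

    # Keep the output in a familiar "hostname  ip  status" column layout
    printf '%-20s %-15s %s\n' "$hostname" "$ip" "$status"
done < "$CONFIG_FILE"
```

Because the device list lives in devices.conf rather than in the script itself, the same wrapper can be pointed at a different file and reused for other device groups, which is exactly the kind of flexibility that encourages wider adoption.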
As stated before, infrastructure engineers at least had a grounding in Bash or PowerShell to help them get started, so pick tooling that the network engineers like and give them options; try not to enforce tools they are not comfortable with. When utilizing automation, one of the key concerns for network engineers is peer review, as they have a natural distrust that the automation has worked. Build in gated processes to address these concerns; automation doesn't mean there is no peer review, so create a lightweight process to help. Make the automation easy to review by utilizing source control to show diffs, and educate the network engineers on how to do this. Coding can be a scary prospect initially, so propose some team exercises each week on a coding or configuration management task and work on them as a team. This makes it less threatening, and it is important to listen to feedback. If the consensus is that something isn't working well or isn't of benefit, look at alternative ways to achieve the same goal that work for the whole team. Before releasing any new automated process, test it in a preproduction environment alongside an experienced engineer, have them peer review it, and try to make it fail against numerous test cases. There is only one opportunity to make a first impression with a new process, so make sure it is a successful one. Set up knowledge-sharing sessions between the team to discuss the automation, and make sure everyone knows how to perform the operations manually too, so they can easily debug any future issues or extend or amend the automation. Make sure that output and logging are clear to all users, as they will all need to support the automation when it is used in production.

Summary

In this article, we covered practical initiatives which, when combined, will allow IT staff to implement successful DevOps models in their organization. Rather than just focusing on departmental issues, it has promoted a set of practical strategies to change the day-to-day operational models that constrain teams. It also focuses on the need for network engineers to learn new skills and techniques in order to make the most of a new operational model and not become the bottleneck for delivery. This article has provided practical real-world examples that could help senior managers and engineers to improve their own companies, emphasizing collaboration between teams and showing that networking departments are now required to automate all network operations to deliver at the pace expected by businesses.
Key takeaways from this article are:

- DevOps is not just about development and operations staff; it can be applied to network teams
- Before starting a DevOps initiative, analyze successful teams or companies and what made them successful
- Senior management sponsorship is key to creating a successful DevOps model
- Your own company's model will not identically mirror other companies' models, so try not to copy like for like; adapt it so that it works in your own organization
- Allow teams to create their own processes and don't dictate processes
- Allow change agents to initiate changes that teams are comfortable with
- Automate all operational work; start small and build up to larger, more complex problems once the team is comfortable with the new ways of working
- Successful change will not happen overnight; it will only work through a model of continuous improvement

Useful links on DevOps are:

- https://www.youtube.com/watch?v=TdAmAj3eaFI
- https://www.youtube.com/watch?v=gqmuVHw-hQw

Resources for Article:

Further resources on this subject:

- Jenkins 2.0: The impetus for DevOps Movement [article]
- Introduction to DevOps [article]
- Command Line Tools for DevOps [article]
Jenkins 2.0: The impetus for DevOps Movement

Packt
19 Sep 2016
15 min read
In this article, Mitesh Soni, the author of the book DevOps for Web Development, provides some insight into the DevOps movement, the benefits of a DevOps culture, the DevOps lifecycle, how Jenkins 2.0 bridges the gap between continuous integration and continuous delivery with new features and UI improvements, and the installation and configuration of Jenkins 2.0. (For more resources related to this topic, see here.)

Understanding the DevOps movement

Let's try to understand what DevOps is. Is it a real, technical word? No, because DevOps is not just about technical stuff. It is neither simply a technology nor an innovation. In simple terms, DevOps is a blend of complex terminologies. It can be considered a concept, a culture, a development and operational philosophy, or a movement. To understand DevOps, let's revisit the old days of any IT organization. Consider that there are multiple environments where an application is deployed. The following sequence of events takes place when any new feature is implemented or a bug is fixed:

1. The development team writes code to implement a new feature or fix a bug.
2. This new code is deployed to the development environment and generally tested by the development team.
3. The new code is deployed to the QA environment, where it is verified by the testing team.
4. The code is then provided to the operations team for deploying it to the production environment.
5. The operations team is responsible for managing and maintaining the code.

Let's list the possible issues in this approach:

- The transition of the current application build from the development environment to the production environment takes weeks or months.
- The priorities of the development team, QA team, and IT operations team are different in an organization, and effective, efficient coordination becomes a necessity for smooth operations.
- The development team is focused on the latest development release, while the operations team cares about the stability of the production environment.
- The development and operations teams are not aware of each other's work and work culture.
- Both teams work in different types of environments; there is a possibility that the development team has resource constraints and therefore uses a different kind of configuration. It may work on localhost or in the dev environment. The operations team works on production resources, so there will be a huge gap in the configuration and deployment environments. It may not work where it needs to run: the production environment.
- Assumptions are key in such a scenario, and it is improbable that both teams will work under the same set of assumptions.
- There is manual work involved in setting up the runtime environment and in configuration and deployment activities. The biggest issue with the manual application-deployment process is its nonrepeatability and error-prone nature.
- The development team has the executable files, configuration files, database scripts, and deployment documentation, and provides them to the operations team. All these artifacts are verified in the development environment and not in production or staging.
- Each team may take a different approach to setting up the runtime environment and the configuration and deployment activities, considering resource constraints and resource availability.
- In addition, the deployment process needs to be documented for future use. Maintaining the documentation is a time-consuming task that requires collaboration between different stakeholders.
- Both teams work separately, and hence there can be a situation where they use different automation techniques.
- Both teams are unaware of the challenges faced by each other and hence may not be able to visualize or understand an ideal scenario in which the application works.
- While the operations team is busy with deployment activities, the development team may get another request for a feature implementation or bug fix; in such a case, if the operations team faces any issues in deployment, they may try to consult the development team, who are already occupied with the new implementation request. This results in communication gaps, and the required collaboration may not happen.
- There is hardly any collaboration between the development team and the operations team. Poor collaboration causes many issues in the application's deployment to different environments, resulting in back-and-forth communication through e-mail, chat, calls, meetings, and so on, and it often ends in quick fixes.

Challenges for the development team:

- The competitive market creates pressure for on-time delivery.
- They have to take care of production-ready code management and new feature implementation.
- The release cycle is often long, and hence the development team has to make assumptions before the application deployment finally takes place. In such a scenario, it takes more time to fix the issues that occurred during deployment in the staging or production environment.

Challenges for the operations team:

- Resource contention: it's difficult to handle increasing resource demands
- Redesigning or tweaking: this is needed to run the application in the production environment
- Diagnosing and rectifying: they are supposed to diagnose and rectify issues after application deployment in isolation

The benefits of DevOps

This diagram covers all the benefits of DevOps: collaboration among different stakeholders brings many business and technical benefits that help organizations achieve their business goals.

The DevOps lifecycle – it's all about "continuous"

Continuous Integration (CI), Continuous Testing (CT), and Continuous Delivery (CD) are significant parts of the DevOps culture. CI includes automating builds, unit tests, and packaging processes, while CD is concerned with the application delivery pipeline across different environments. CI and CD accelerate the application development process through automation across different phases, such as build, test, and code analysis, and enable users to achieve end-to-end automation in the application delivery lifecycle. Continuous integration and continuous delivery or deployment are well supported by cloud provisioning and configuration management. Continuous monitoring helps identify issues or bottlenecks in the end-to-end pipeline and helps make the pipeline effective. Continuous feedback is an integral part of this pipeline; it tells the stakeholders whether they are close to the required outcome or heading in a different direction.

"Continuous effort – not strength or intelligence – is the key to unlocking our potential"
- Winston Churchill

Continuous integration

What is continuous integration?
In simple words, CI is a software engineering practice where each check-in made by a developer is verified by either of the following:

- Pull mechanism: executing an automated build at a scheduled time
- Push mechanism: executing an automated build when changes are saved in the repository

This step is followed by executing unit tests against the latest changes available in the source code repository. The main benefit of continuous integration is quick feedback based on the result of build execution. If it is successful, all is well; otherwise, assign responsibility to the developer whose commit has broken the build, notify all stakeholders, and fix the issue. Read more about CI at http://martinfowler.com/articles/continuousIntegration.html.

So why is CI needed? Because it makes things simple and helps us identify bugs or errors in the code at a very early stage of development, when it is relatively easy to fix them. Just imagine if the same scenario takes place after a long duration and there are too many dependencies and complexities to manage. In the early stages, it is far easier to cure and fix issues; consider health issues as an analogy, and things will be clearer in this context. Continuous integration is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is then verified by an automated build, allowing teams to detect problems early. CI is a significant part of, and in fact the base for, the release-management strategy of any organization that wants to develop a DevOps culture.

The following are immediate benefits of CI:

- Automated integration with a pull or push mechanism
- A repeatable process without any manual intervention
- Automated test case execution
- Coding standard verification
- Execution of scripts based on requirements
- Quick feedback: build status notification to stakeholders via e-mail
- Teams focused on their work and not on managing processes

Jenkins, Apache Continuum, Buildbot, GitLab CI, and so on are some examples of open source CI tools. AnthillPro, Atlassian Bamboo, TeamCity, Team Foundation Server, and so on are some examples of commercial CI tools.

Continuous integration tools – Jenkins

Jenkins was originally open source continuous integration software written in Java under the MIT License. Jenkins 2, however, is an open source automation server that focuses on any kind of automation, including continuous integration and continuous delivery. Jenkins can be used across different platforms, such as Windows, Ubuntu/Debian, Red Hat/Fedora, Mac OS X, openSUSE, and FreeBSD. Jenkins enables users to utilize continuous integration services for software development in an agile environment. It can be used to build freestyle software projects based on Apache Ant and Maven 2/Maven 3. It can also execute Windows batch commands and shell scripts. It can be easily customized with the use of plugins. There are different kinds of plugins available for customizing Jenkins based on specific needs for setting up continuous integration. Categories of plugins include source code management (the Git, CVS, and Bazaar plugins), build triggers (the Accelerated Build Now and Build Flow plugins), build reports (the Code Scanner and Disk Usage plugins), authentication and user management (the Active Directory and GitHub OAuth plugins), and cluster management and distributed builds (the Amazon EC2 and Azure Slave plugins). To know more about all the plugins, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugins.
To explore how to create a new plugin, visit https://wiki.jenkins-ci.org/display/JENKINS/Plugin+tutorial. To download different versions of plugins, visit https://updates.jenkins-ci.org/download/plugins/. Visit the Jenkins website at http://jenkins.io/. Jenkins accelerates the software development process through automation.

Key features and benefits

Here are some striking benefits of Jenkins:

- Easy to install, upgrade, and configure.
- Supported platforms: Windows, Ubuntu/Debian, Red Hat/Fedora/CentOS, Mac OS X, openSUSE, FreeBSD, OpenBSD, Solaris, and Gentoo.
- Manages and controls development lifecycle processes.
- Supports non-Java projects, such as .NET, Ruby, PHP, Drupal, Perl, C++, Node.js, Python, Android, and Scala.
- Enables a development methodology of daily integrations verified by automated builds; every commit can trigger a build.
- Jenkins is a fully featured technology platform that enables users to implement CI and CD. The use of Jenkins is not limited to CI and CD: it is possible to model and orchestrate the entire pipeline with Jenkins, as it supports shell and Windows batch command execution. Jenkins 2.0 supports a delivery pipeline that uses a Domain-Specific Language (DSL) for modeling entire deployments or delivery pipelines. Pipeline as code provides a common language (the DSL) to help the development and operations teams collaborate effectively. Jenkins 2 brings a new GUI with a stage view to observe progress across the delivery pipeline.
- Jenkins 2.0 is fully backward compatible with the Jenkins 1.x series. Jenkins 2 now requires Servlet 3.1 to run; you can use the embedded Winstone-Jetty or a container that supports Servlet 3.1 (such as Tomcat 8).
- GitHub, CollabNet, SVN, TFS code repositories, and so on are supported by Jenkins for collaborative development.
- Continuous integration: automates build and test, that is, automated testing (continuous testing), packaging, and static code analysis. Supports common test frameworks such as HP ALM Tools, JUnit, Selenium, and MSTest. For continuous testing, Jenkins has plugins for both; Jenkins slaves can execute test suites on different platforms. Jenkins supports static code analysis tools such as code verification by CheckStyle and FindBugs, and it also integrates with Sonar.
- Continuous delivery and continuous deployment: automates the application deployment pipeline, integrates with popular configuration management tools, and automates environment provisioning. To achieve continuous delivery and deployment, Jenkins supports automatic deployment; it provides a plugin for direct integration with IBM uDeploy.
- Highly configurable: a plugin-based architecture that provides support for many technologies, repositories, build tools, and test tools; it is an open source CI server and provides over 400 plugins for extensibility.
- Supports distributed builds: Jenkins supports a "master/slave" mode, where the workload of building projects is delegated to multiple slave nodes.
- It has a machine-consumable remote access API to retrieve information from Jenkins for programmatic consumption, to trigger a new build, and so on.
- It delivers a better application faster by automating the application development lifecycle, allowing faster delivery.
- The Jenkins build pipeline (quality gate system) provides a build pipeline view of upstream and downstream connected jobs, as a chain of jobs, each one subjecting the build to quality-assurance steps.
- It has the ability to define manual triggers for jobs that require intervention prior to execution, such as an approval process outside of Jenkins.

The following diagram illustrates quality gates and the orchestration of the build pipeline.

Jenkins can be used with the following tools in different categories, as shown here:

- Language: Java, .NET
- Code repositories: Subversion, Git, CVS, StarTeam
- Build tools: Ant, Maven (Java); NAnt, MSBuild (.NET)
- Code analysis tools: Sonar, CheckStyle, FindBugs, NCover, Visual Studio Code Metrics, PowerTool
- Continuous integration: Jenkins
- Continuous testing: Jenkins plugins (HP Quality Center 10.00 with the QuickTest Professional add-in, HP Unified Functional Testing 11.5x and 12.0x, HP Service Test 11.20 and 11.50, HP LoadRunner 11.52 and 12.0x, HP Performance Center 12.xx, HP QuickTest Professional 11.00, HP Application Lifecycle Management 11.00, 11.52, and 12.xx, HP ALM Lab Management 11.50, 11.52, and 12.xx, JUnit, MSTest, and VsTest)
- Infrastructure provisioning: configuration management tool (Chef)
- Virtualization/cloud service provider: VMware, AWS, Microsoft Azure (IaaS), traditional environment
- Continuous delivery/deployment: Chef, deployment plugins, shell scripting, PowerShell scripts, Windows batch commands

Installing Jenkins

Jenkins provides us with multiple ways to install it for all types of users. We can install it on at least the following operating systems:

- Ubuntu/Debian
- Windows
- Mac OS X
- OpenBSD
- FreeBSD
- openSUSE
- Gentoo
- CentOS/Fedora/Red Hat

One of the easiest options I recommend is to use a WAR file. A WAR file can be used with or without a container or web application server. Having Java installed is a must before we try to use the WAR file for Jenkins, which can be done as follows:

1. Download the jenkins.war file from https://jenkins.io/.
2. Open Command Prompt in Windows or a terminal in Linux, go to the directory where the jenkins.war file is stored, and execute the following command: java -jar jenkins.war
3. Once Jenkins is fully up and running, as shown in the following screenshot, explore it in the web browser by visiting http://localhost:8080. By default, Jenkins works on port 8080. To run it on a different port, execute the following command from the command line: java -jar jenkins.war --httpPort=9999. For HTTPS, use the following command: java -jar jenkins.war --httpsPort=8888
4. Once Jenkins is running, visit the Jenkins home directory. In our case, we have installed Jenkins 2 on a CentOS 6.7 virtual machine. Go to /home/<username>/.jenkins, as shown in the following screenshot. If you can't see the .jenkins directory, make sure hidden files are visible. In CentOS, press Ctrl+H to make hidden files visible.

Setting up Jenkins

Now that we have installed Jenkins, let's verify that Jenkins is running. Open a browser and navigate to http://localhost:8080 or http://<IP_ADDRESS>:8080. If you've used Jenkins earlier and recently downloaded the Jenkins 2 WAR file, it will ask for a security setup. To unlock Jenkins, follow these steps:

1. Go to the .jenkins directory and open the initialAdminPassword file from the secrets subdirectory.
2. Copy the password in that file, paste it in the Administrator password box, and click on Continue, as shown here.
3. Clicking on Continue will redirect you to the Customize Jenkins page. Click on Install suggested plugins. The installation of the plugins will start; make sure that you have a working Internet connection.
4. Once all the required plugins have been installed, you will see the Create First Admin User page. Provide the required details, and click on Save and Finish. Jenkins is ready!
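As a quick aside for command-line users, the unlock password from step 1 can also be read directly from the Jenkins home directory, and you can check that the server is responding before opening a browser. This is a small sketch that assumes the default ~/.jenkins home directory and the default port of 8080 used above; adjust the path and port if your installation differs.

```bash
# Print the initial admin password shown on the Unlock Jenkins page
cat ~/.jenkins/secrets/initialAdminPassword

# Confirm Jenkins is up by checking the HTTP status of the login page
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/login
```

A 200 response indicates that Jenkins is up and serving pages; anything else suggests the server is still starting or is listening on a different port.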
Our Jenkins setup is complete. Click on Start using Jenkins. You can get Jenkins plugins from https://wiki.jenkins-ci.org/display/JENKINS/Plugins.

Summary

We have covered some brief details on DevOps culture and on Jenkins 2.0 and its new features. DevOps for Web Development provides more details on extending continuous integration to continuous delivery and continuous deployment using configuration management tools such as Chef and cloud computing platforms such as Microsoft Azure (App Services) and AWS (Amazon EC2 and AWS Elastic Beanstalk); you can refer to it at https://www.packtpub.com/networking-and-servers/devops-web-development. For more details on Jenkins, refer to Jenkins Essentials, https://www.packtpub.com/application-development/jenkins-essentials.

Resources for Article:

Further resources on this subject:

- Setting Up and Cleaning Up [article]
- Maven and Jenkins Plugin [article]
- Exploring Jenkins [article]