
Tech Guides - DevOps

29 Articles

Why containers are driving DevOps

Diego Rodriguez
12 Jun 2017
5 min read
It has been a long ride since the days when one application would take up a full room of computing hardware. Research and innovation in information technology (IT) have taken us far and will surely keep moving even faster every day. Let's talk a bit about the present state of DevOps, and how containers are driving the scene.

What are containers?

According to Docker (the most popular container platform), a container is a stand-alone, lightweight package that has everything needed to execute a piece of software. It packs your code, runtime environment, system tools, libraries, binaries, and settings. It's available for Linux and Windows apps. It runs the same every time, regardless of where you run it. It adds a layer of isolation, helping reduce conflicts between teams running different software on the same infrastructure.

Containers sit one level deeper in the virtualization stack, allowing lighter environments, more isolation, more security, more standardization, and many other benefits you can take advantage of. Instead of virtualizing the whole operating system (as virtual machines [VMs] do), containers share most of the host system's core and add only the required binaries and libraries that are not already on the host; no more gigabytes of disk space lost to bloated operating systems full of duplicated files. This means several things: your deployments ship in a much smaller image than a full operating system; each deployment boots up far faster; idle resource usage is lower; there is less configuration and more standardization (remember "convention over configuration"); and fewer things to manage plus more isolated apps means fewer ways to break something, hence a smaller attack surface and more security.
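As a concrete sketch of the packaging idea above, here is what a minimal container image definition can look like as a Dockerfile. This is an illustrative example only; the base image, file names, and service are assumptions, not taken from the article:

```dockerfile
# Start from a small base image; the container shares the host kernel,
# so only the Node.js runtime and its libraries are added on top.
FROM node:alpine

WORKDIR /app

# Copy only what the service needs, keeping the image small.
COPY package.json .
RUN npm install
COPY server.js .

# The same image now runs identically wherever a container runtime exists.
CMD ["node", "server.js"]
```

Building and running it would be a matter of `docker build -t myservice .` followed by `docker run myservice`, whether the host is a developer laptop or a production server.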
But keep in mind, not everything is perfect, and there are many factors to take into account before getting into the containerization realm.

Considerations

It has been less than 10 years since containerization started, and in the technology world that is a long time, considering how fast other technologies such as web front-end frameworks and artificial intelligence (AI) are moving. In just a few years, this widely deployed technology has become mature and production-ready. Coupled with microservices, the boost has taken it to new parts of the DevOps world, and it is now the de facto solution for many companies in their application and services deployment flow. Just before all this exciting movement started, VMs were the go-to answer for many of the problems encountered by IT people, including myself. And although VMs are a great way to solve many of these problems, there was still room for improvement.

Nowadays, the horizon looks really promising, with top technology companies backing tools, frameworks, services, and products built around containers, benefiting most of the code we develop, test, debug, and deploy on a daily basis. Thanks to the work of many, it's now possible to have a consistent, lightweight way to run, test, debug, and deploy code from whichever platform you work on. So if you code on Linux using Vim, but your coworker works on Windows using VS Code, both of you can have the same local container with the same binaries and libraries where your code is run. This removes a lot of incompatibility issues and lets teams reproduce production environments on their own machines, without worrying about sharing configuration files, misconfiguration, versioning hassles, and so on. It gets even better. Not only is there no need to maintain the same configuration files across different services: there is less configuration to handle as a whole.
Templates do most of the work for us, allowing you and your team to focus on creating and deploying your products, improving and iterating your services, and changing and enhancing your code. In fewer than 10 lines you can specify a working template containing everything needed to run a simple Node.js service, a Ruby on Rails application, or a Scala cron job. Containerization supports most, if not all, languages and stacks.

Containers and virtualization

Virtualization has accelerated the speed at which we build things for many years, and it will continue to provide better solutions as time goes by. Just as we went from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS) and finally Software as a Service (SaaS) and others (Anything as a Service? AaaS?), I am certain that we will find more abstraction beyond containers, making our lives easier every day. Like most of today's tools, many virtualization and containerization tools are open source, with huge communities and support boards around them; and there is always good ol' Stack Overflow. So remember to give something back to the amazing open source community: open issues, report bugs, share what works, and help fix and improve the parts that are lacking. But above all, try to learn these new and promising technologies, which give us IT people a huge bump in efficiency in pretty much all aspects.

About the author

Diego Rodriguez Baquero is a full stack developer specializing in DevOps and SysOps. He is also a WebTorrent core team member. He can be found at https://diegorbaquero.com/.


Promising DevOps Projects

Julian Ursell
29 Jan 2015
3 min read
The DevOps movement is currently driving a wave of innovation in technology, contributing to the development of powerful systems and software development architectures, as well as a transformation in "systems thinking". Game changers like Docker have revolutionized the way system engineers, administrators, and application developers approach their jobs, and there is now a concerted push to iterate on the new paradigms that have emerged. The crystallization of container virtualization methods is producing a different perspective on service infrastructures, enabling a modularity and granularity hardly imaginable a decade ago. Powerful configuration management tools such as Chef, Puppet, and Ansible allow infrastructure to be defined literally as code. The flame of innovation is burning brightly in this space, and the concept of the "DevOps engineer" is becoming a reality rather than the idealistic myth it once appeared to be.

Now that DevOps engineers know roughly where they're going, a feverish development drive is gathering pace, with projects looking to build upon the flagship technologies that provided the initial spark. The next few years are going to be fascinating in terms of seeing how the DevOps foundations laid down so far will be built upon. The major foundation of modern DevOps development is the refinement of the concept and implementation of containerization. Docker has demonstrated how containers can be leveraged to host, run, and deploy applications, servers, and services in an incredibly lightweight fashion, abstracting resources by isolating parts of the operating system in separate containers. The sea change in thinking this has created has been resounding. Still, a particular challenge for DevOps engineers working with containers at scale is developing effective orchestration services.
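The "infrastructure defined literally as code" idea mentioned above can be sketched with a short Ansible play. This is a hypothetical example; the host group and package are illustrative assumptions:

```yaml
# A minimal Ansible play: it declares the desired state
# (nginx installed and running) rather than the steps to get there.
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure nginx is running
      service:
        name: nginx
        state: started
```

The same declarative stance appears in Chef recipes and Puppet manifests; the tool's job is to converge each machine on the declared state.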
Enter Kubernetes (apparently meaning "helmsman" in Greek), the project open sourced by Google for the orchestration and management of container clusters. The value of Kubernetes is that it works alongside Docker, going beyond simply booting containers to allow a finer degree of management and monitoring. It uses units called "pods" that facilitate communication and data sharing between Docker containers and the grouping of application-specific containers. The Docker project has actually taken the orchestration service Fig under its wing for further development, but there are myriad ways in which containers can be orchestrated. Kubernetes illustrates how the wave of DevOps-oriented technologies like Docker is driving large companies to open source their own solutions and contribute to the spirit of collaboration that underlines the movement.

The influence of DevOps can also be seen in the reappraisal of operating system architectures. CoreOS, for example, is a Linux distribution designed with scale, flexibility, and lightweight resource consumption in mind. It hosts applications as Docker containers, and makes the development of large-scale distributed systems easier by being "natively" clustered: it is adapted naturally for use across multiple machines. Under the hood it offers powerful tools, including Fleet (CoreOS's cluster orchestration system) and etcd for service discovery and information sharing between cluster nodes.

A tool to watch in the future is Terraform (built by the team behind Vagrant), which offers at its core the ability to build infrastructures from the combined resources of multiple service providers, such as DigitalOcean, AWS, and Heroku, describing this infrastructure as code with an abstracted configuration syntax. It will be fascinating to see whether Terraform catches on and opens up to a greater number of major service providers.
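The "pods" mentioned above are themselves described declaratively. A minimal pod manifest might look like the following sketch (the names and image are illustrative assumptions, not from the article):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  labels:
    app: example
spec:
  containers:
    - name: web
      image: nginx        # any container image would do here
      ports:
        - containerPort: 80
```

Submitting such a manifest to a cluster (for example with `kubectl apply -f pod.yaml`) asks Kubernetes to converge on the declared state, which is exactly the management layer on top of plain Docker that the article describes.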
Kubernetes, CoreOS, and Terraform all convey the immense development pull generated by the DevOps movement, and one that looks set to roll on for some time yet.


Bridging the gap between data science and DevOps with DataOps

Richard Gall
23 Mar 2016
5 min read
What's the real value of data science? Hailed as the sexiest job of the 21st century just a few years ago, there are now rumors that it's not quite proving its worth. Gianmario Spacagna, a data scientist for Barclays bank in London, told Computing magazine at Spark Summit Europe in October 2015 that, in many instances, there's not enough impact from data science teams. "It's not a playground. It's not academic," he said. His solution sounds simple: we need to build a bridge between data science and DevOps, and DataOps is perhaps the answer. He says:

"If you're a start-up, the smartest person you want to hire is your DevOps guy, not a data scientist. And you need engineers, machine learning specialists, mathematicians, statisticians, agile experts. You need to cover everything otherwise you have a very hard time to actually create proper applications that bring value."

This idea makes a lot of sense. It's become clear over the past few years that 'data' by itself isn't enough; it might even be distracting for some organizations. Sometimes too much time is spent in spreadsheets and not enough time actually doing stuff. Making decisions, building relationships, building things: that's where real value comes from. What Spacagna has identified is ultimately a strategic flaw in how data science is used in many organizations. There's often too much focus on what data we have and what we can get, rather than on who can access it and what they can do with it. If data science isn't joining the dots, DevOps can help. True, a large part of the problem is strategic, but DevOps engineers can also provide practical solutions by building dashboards and creating APIs. These sorts of things immediately give data additional value by making it more accessible and, put simply, more usable. Even for a modest, medium-sized business, data scientists and analysts will have minimal impact if they are not successfully integrated into the wider culture.
While it's true that many organizations still struggle with this, Airbnb demonstrates how to do it incredibly effectively. Take a look at the Airbnb Engineering and Data Science publication on Medium, where they talk about the importance of scaling knowledge effectively. Although they don't specifically refer to DevOps, it's clear that DevOps thinking has informed their approach. In the products they've built to scale knowledge, for example, the team demonstrates a very real concern for accessibility and efficiency. What they build is created so people can do exactly what they want and get what they need from data. It's a form of strict discipline underpinned by a desire for greater freedom.

If you keep reading Airbnb's publication, another aspect of 'DevOps thinking' emerges: a relentless focus on customer experience. By this, I don't simply mean that the work done by the Airbnb engineers is informed by a desire to improve customer experiences; that's obvious. Instead, it's the sense that the tools through which internal collaboration and decision making take place should themselves resemble a customer experience. They need to be elegant, engaging, and intuitive. This doesn't mean seeing every relationship as purely transactional, based on some perverse logic of self-interest, but rather having a deeper respect for how people interact and share ideas. If DevOps is an agile methodology that bridges the gap between development and operations, it can also help to bridge the gap between data and operations.

DataOps - bringing DevOps and data science together

This isn't a new idea. As much as I'd like to, I can't claim credit for inventing 'DataOps'. But there's not much point in asserting that distinction anyway. DataOps is simply another buzzword for the managerial class, and while some buzzwords have value, I'm not so sure we need another one. More importantly, why create another gap between data and development?
That gap doesn't make sense in the world we're building with software today. Even for web developers and designers, the products being created are so driven by data that separating the data from the dev is absurd. Perhaps, then, it's not enough just to ask more of our data science, as Gianmario Spacagna does. DevOps offers a solution, but we're going to miss the bigger picture if we simply ask for more DevOps engineers and some space for them to sit next to the data team. We also need to ask how data science can inform DevOps. It's about opening up a dialogue between these different elements. While DevOps evangelists might argue that DevOps has already started that conversation, the way forward is to push for more dialogue, more integration, and more collaboration.

As we look towards the future, with the API economy becoming ever more important to the success of both startups and huge corporations, the relationships between all these different areas are going to become more and more complex. If we want to build better and build smarter, we're going to have to talk more. DevOps and DataOps both offer a good place to start the conversation, but it's important to remember it's just the start.


Getting Started with DevOps

Michael Herndon
10 Feb 2016
7 min read
DevOps requires you to know many facets of people and technology. If you're interested in starting your journey into the world of DevOps, take the time to know what you are getting yourself into, be ready to put in some work, and be ready to push out of your comfort zone.

Know What You're Getting Yourself Into

Working in a DevOps job where you're responsible for both coding and operational tasks means that you need to be able to shift mental gears. Mental context switching comes at a cost. You need to be able to pull yourself out of one mindset and switch to another, and you need to be able to prioritize. Accept your limitations and know when it's prudent to request more resources to handle the load. The amount of context switching will vary depending on the business.

Let's say that you join a startup and you're the only DevOps person on the team. In this scenario, you're most likely the operations team, and still responsible for some coding tasks as well. This means that you need to tackle operations tasks as they come in. In this instance, Scrum and Agile will only carry you so far; you'll have to take more of a Getting Things Done (GTD) approach. If you come from a development background, you will be tempted to put coding first because you have deadlines. However, if you are the operations team, then operations must come first. When you become part of the operations team, the employees at your business are your customers too. Some days you can churn out code; other days are an onslaught of important, time-sensitive requests.

At the business I currently work for, I took on the DevOps role so that other developers could focus on coding. One of the developers I work with has exceptional code output, but operational tasks were impeding his productivity. It was an obvious choice for me to jump in and take over the operational tasks so that he could focus his efforts on bringing new features to customers. It's simply good business.
Ego can get in the way of good business and DevOps. Leave your ego at home. In a bigger business, you may have a DevOps team with more breathing room to focus on the things that interest you most, whether that's more coding or working with systems. Emergencies happen. When an emergency arises, you need to be able to calmly assess the situation, avoid the blame game, and provide solutions. Don't react; respond. If you're too excitable or easily get caught up in the emotions of a situation, DevOps may be your trial by fire. Work on pulling yourself outside of a situation so that you can see the whole picture and work towards solving the problem. Never play the blame game. Be the person who gets things done.

Dive Into DevOps

Start small. Taking on too much will overwhelm you and stifle progress. After you've done a few iterations of taking small steps, you'll be further along the journey than you realize. "It's a dangerous business, Frodo, going out your door. You step onto the road, and if you don't keep your feet, there's no knowing where you might be swept off to." - Bilbo Baggins

If you're a developer, take one of your side projects and set up continuous delivery for it. Keep it simple: use something like Travis CI or AppVeyor and have your final output published somewhere. If you're using something like Node, you could set up nightly builds for npm. If it's .NET, you could use a service like MyGet. The second thing I would do as a developer is focus on learning SSH, security access, and scheduled tasks. One thing I've seen developers struggle with is locking down systems, so it's worth taking the time to dive into user access permissions. If you're on Windows, learn the Windows Task Scheduler. If you're on Linux, learn to set up cron jobs. If you're from the operations and systems side of things, pick a scripting language that suits your needs.
If you're working for a company that uses Microsoft technology, I'd suggest learning the PowerShell scripting language and a language that compiles to .NET, like C# or F#. If you're using open source technologies, I'd suggest learning Bash and a language like Ruby or Python. Puppet and Chef use Ruby; SaltStack uses Python. Build a simple web application with the language of your choice. That should give you enough familiarity with a language to start creating scripts that automate tasks. Read DevOps books like Continuous Delivery or Continuous Delivery and DevOps: A Quickstart Guide. Expand your knowledge. Explore tools. Work on your communication skills. Create a list of tasks that you wish to automate, then make a habit out of reducing that list.

Build a Habit Out of Automating Infrastructure

Make it a habit to find time to automate your infrastructure while continuing to support your business. It's rare to land a position whose sole focus is automating infrastructure, so it's important to be able to carve out time to remove mundane work, freeing you to focus your time and value on tasks that can't be automated. A habit loop is made up of three things: a cue, a routine, and a reward. For example, at 2pm your alarm goes off (cue). You go for a short run (routine). You feel awake and refreshed (reward). Design a cue that works for you. For example, every Friday at 2pm you could switch gears to work on automation. Spend some time automating a task or an infrastructure need (routine), then find a reward that suits your lifestyle. A reward could be having a treat on Friday to celebrate all the hard work of the week, or going home early (if your business permits it). Maybe learning something new is the reward, in which case you spend a little time each week with a new DevOps-related technology.
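As a concrete example of the small task automation suggested earlier in this article, here is a short Python sketch that prunes old log files. The function name, directory, and age threshold are illustrative assumptions, not from the article:

```python
import time
from pathlib import Path


def remove_old_files(directory, max_age_days):
    """Delete regular files older than max_age_days; return the names removed."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(directory).iterdir():
        if path.is_file() and path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)

# Hypothetical usage: prune week-old logs from an application's log directory.
# remove_old_files("/var/log/myapp", max_age_days=7)
```

Dropped into a cron job or a scheduled task, a script like this removes one small piece of mundane work for good.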
Once you've removed some of the repetitive tasks that waste time, you'll find yourself with enough time to take on bigger automation projects that seemed impossible to get to before. Repeat this process ad infinitum (to infinity and beyond).

Lastly, Always Write and Communicate

Whether you plan on going into DevOps or not, the ability to communicate will set you apart from others in your field. In DevOps, communication becomes a necessity because the value you provide may not always be apparent to everyone around you. Furthermore, you need to be able to resolve group conflicts, persuasively elicit buy-in, and provide a vision that people can follow. Always strive to improve your communication skills. Read books. Write. Work on your non-verbal communication skills; it is often claimed that non-verbal cues account for as much as 93% of communication, and the messages your body language sends could be preventing you from getting your ideas across. Your goal is to communicate in plain language that the least technical member of your intended audience can follow. Technical and non-technical people alike need to understand problems, solutions, and the value you are giving them. Learn to use the right adjectives to paint bright illustrations in the minds of your readers and help them conceptualize hard-to-understand topics. The ability to persuade with writing is almost a lost art. It is a skill that transcends careers, disciplines, and fields of study. Used correctly, it can provide the vision to guide your business into becoming a lean competitor that delivers exceptional value to customers. At the end of the day, DevOps exists so that you can provide exceptional value to customers. Let your words guide and inspire the people around you.

Off You Go

All of this is easier said than done. It takes time, practice, and years of experience. Don't be discouraged and don't give up.
Instead, find the things that light up your passion and focus on taking small, incremental steps that allow you to win. You'll be there before you know it.

About the author

Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all-around mischievous nerdy guy.


DevOps is not continuous delivery

Xavier Bruhiere
11 Apr 2017
5 min read
What is the difference between DevOps and continuous delivery? The tech world is full of buzzwords; DevOps is undoubtedly one of the best known of the last few years. Essentially, DevOps is a concept that attempts to solve two key problems for modern IT departments and development teams: the complexity of a given infrastructure or service topology, and market agility. Or, to put it simply, there are lots of moving parts in modern software infrastructures, which make changing and fixing things hard.

Project managers who know their agile inside out, not to mention customers, need developers to:

Quickly release new features based on client feedback
Keep the service available, even during large deployments
Have no lasting crashes, regressions, or interruptions

How do you do that? You cultivate a DevOps philosophy and build a continuous integration pipeline. The key thing to notice there are the italicized verbs: DevOps is cultural, while continuous delivery is a process you construct as a team. But there's more to it than that.

Why do people confuse DevOps and continuous delivery?

So, we've established that there's a lot of confusion around DevOps and continuous delivery (CD). Let's take a look at what the experts say. DevOps is defined on AWS as: "The combination of cultural philosophies, practices, and tools that increases an organization's ability to deliver applications and services at high velocity." Continuous delivery, as stated by Carl Caum from Puppet, "…is a series of practices designed to ensure that code can be rapidly and safely deployed to production by delivering every change to a production-like environment and ensuring that business applications and services function as expected through rigorous automated testing." So yes, both are about delivering code. Both try to enforce practices and tools to improve the velocity and reliability of software in production. Both want the IT release pipeline to be as cost-effective and agile as possible.
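The continuous integration pipeline mentioned above is itself usually expressed declaratively, as configuration checked into the repository. A minimal sketch might look like the following; the stage names, commands, and exact syntax are illustrative assumptions (real syntax varies between CI services):

```yaml
stages:
  - test
  - deploy

unit_tests:
  stage: test
  script:
    - npm install
    - npm test          # unit and integration suites run on every change

deploy_production:
  stage: deploy
  script:
    - ./deploy.sh       # runs only when every previous stage has passed
  only:
    - main
```

Every change flows through the same gates, which is what makes the "safely released at any time" promise of CD credible.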
But if we're getting into the details, DevOps is focused on managing challenging time-to-market expectations, while continuous delivery was a process to manage greater service complexity: making sure the code you ship is solid, basically.

Human problems and coding problems

In its definition of DevOps, Atlassian puts forward a neat formulation: "DevOps doesn't solve tooling problems. It solves human problems." DevOps, according to this understanding, promotes the idea that development and operational teams should work seamlessly together, and argues that they should design tools and processes to ensure rapid and efficient development-to-production cycles. Continuous delivery, on the other hand, narrows this scope to a single mantra: your code should always be able to be safely released. It means that any change goes through an automated pipeline of tests (unit, integration, end-to-end) before being promoted to production. Martin Fowler nicely sums up the immediate benefits of this sophisticated deployment routine: "reduced deployment risk, believable progress, and user feedback."

You can't have continuous delivery without DevOps

Applying CD is difficult and requires advanced operational knowledge and enough resources to set up a pipeline that works for the team. Without a DevOps culture, your team won't communicate properly and technical resources won't be properly designed. That will hurt the most critical IT pipeline: longer release cycles, more unexpected behavior in production, and a slow feedback loop. Developers and management might come to fear the deployment step and become less agile.

You can have DevOps without continuous delivery... but it's a waste of time

The reverse situation, DevOps without CD, is slightly less dangerous, but it is, unfortunately, pretty inefficient. While DevOps is a culture, or a philosophy, it is by no means supposed to remain theoretical. It's supposed to be put into practice.
After all, the main aim isn't chin-stroking intellectualism; it's to help teams build better tools and develop processes to deliver code. The time (i.e. money) spent to bootstrap such a culture shouldn't be zeroed out by a lack of concrete action. CD is a powerful asset for projects trying to conquer a market in a lean fashion. It repays the investment by letting teams of developers focus on business problems and deliver tested solutions to clients as fast as they are ready.

Take DevOps and continuous delivery seriously

What we have, then, are two different but related concepts in how modern development teams understand operations. Ignoring either one leads to wasted resources and poor engineering efficiency. However, it is important to remember that the scope of DevOps, potentially involving an entire organizational culture, and the complexity of continuous delivery mean that adoption shouldn't be rushed or taken lightly. You need to make an effort to do it properly. It might need a mid- or long-term roadmap, and will undoubtedly require buy-in from a range of stakeholders. So keep communication channels open, consider using existing cloud services where appropriate, understand the value of automated tests and feedback loops, and, most importantly, hire awesome people to take responsibility. A final word of caution: as sophisticated as they are, DevOps and continuous delivery aren't magical methodologies. A service as critical as AWS S3 claims 99.999999999% durability thanks to rigorous engineering methods, and yet, on February 28, 2017, it suffered a large service disruption. Code delivery is hard, so keep your processes sharp!

About the author

Xavier Bruhiere is a Senior Data Engineer at Kpler. He is a curious, sharp entrepreneur and engineer who has built many projects, broken most of them, and launched and scaled what was left, learning from them all.


The most important skills you need in DevOps

Rick Blaisdell
19 Jul 2017
4 min read
During the last couple of years, we've seen how DevOps has exploded to become one of the most competitive differentiators for every organization, regardless of size. When talking about DevOps, we refer to agility and collaboration, the keys that unlock a business's success. However, to make it work for your business, you first have to understand how DevOps works and what skills are required to adopt this agile business culture. Let's look at this in more detail.

DevOps culture

Leaving the benefits aside, here are the three basic principles of a successful DevOps approach:

Well-defined processes
Enhanced collaboration across business functions
Efficient tools and automation

DevOps skills you need

Recently, I came across an infographic showing the top positions that technology companies are struggling to fill, and DevOps was number one on the list. Surprising? Not really. If we look at the skills required for a successful DevOps methodology, we will understand why finding a good DevOps engineer is akin to finding a needle in a haystack. Besides communication and collaboration, which are the most obvious skills a DevOps engineer must have, here is what draws the line between success and failure.

Knowledge of infrastructure – whether we are talking about datacenter-based or cloud infrastructure, a DevOps engineer needs a deep understanding of different types of infrastructure and their components (virtualization, networking, load balancing, etc.).

Experience with infrastructure automation tools – given that DevOps is mainly about automation, a DevOps engineer must be able to implement automation tools at any level.

Coding – when talking about coding skills for DevOps engineers, I don't just mean writing code, but delivering solutions. In a DevOps organization, you need well-experienced engineers who are capable of delivering solutions.
Experience with configuration management tools – tools such as Puppet, Chef, or Ansible are mandatory for optimizing software deployment, and you need engineers with the know-how.

Understanding continuous integration – an essential part of a DevOps culture, continuous integration is the process that increases engagement across the entire team and allows source code updates to be run whenever required.

Understanding security incident response – security is the hot button for all organizations and one of the most pressing challenges to overcome. Having engineers with a strong understanding of how to address various security incidents and develop a recovery plan is mandatory for creating a solid DevOps culture.

Besides the skills DevOps engineers should have, there are also skills that companies need to adopt:

Agile development – an agile environment is the foundation on which the DevOps approach has been built. To get the most out of this innovative approach, your team needs strong collaboration capabilities to improve its delivery and quality. You can create your dream team by teaching different agile approaches such as Scrum, Kaizen, and Kanban.

Process re-engineering – forget everything you knew. This is one good piece of advice. The DevOps approach was developed to polish and improve the traditional software development lifecycle, but also to highlight and encourage collaboration among teams, so an element of unlearning is required.

The DevOps approach has changed the way people collaborate with each other, improving not only processes but products and services as well. Here are the benefits:

Faster delivery times – every business owner wants to see their product or service on the market as soon as possible, and the DevOps approach delivers exactly that. Moreover, since you decrease the time to market, you increase your ROI; what more could you ask for?
Continuous release and deployment – having strong continuous release and deployment practices, the DevOps approach is the perfect way to ensure the team is continuously delivering quality software within shorter timeframes. Improve collaboration between teams – there has always been a gap between the development and operation teams, a gap that has disappeared once DevOps was born. Today, in order to deliver high-quality software, the devs and ops are forced to collaborate, share, and revise strategies together, acting as a single unit.  Bottom line, DevOps is an essential approach that has changed not only results and processes, but also the way in which people interact with each other. Judging by the way it has progressed, it’s safe to assume that it's here to stay.  About the Author Rick Blaisdell is an experienced CTO, offering cloud services and creating technical strategies, which reduce IT operational costs and improve efficiency. He has 20 years of product, business development, and high-tech experience with Fortune 500 companies, developing innovative technology strategies. 
The Tao of DevOps

Joakim Verona
21 Apr 2016
4 min read
What is Tao? It's a complex idea - but one method of thinking of it is to find the natural way of doing things, and then making sure you do things that way. It is intuitive knowing, an approach that can't be grasped just in principle but only by putting it into practice in daily life. The principles of Tao can be applied to almost anything. We've seen the Tao of Physics, the Tao of Pooh, even the Tao of Dating. Tao principles apply just as well to DevOps - because who can know fully what DevOps actually is? It is an idiom as hard to define as "quality" - and good DevOps is closely tied to the good quality of a software product. Want a simple example? A recipe for cooking a dish normally starts with a list of ingredients, because that's the most efficient way of describing cooking. When making a simple dessert, the recipe starts with a title: "Strawberries And Cream". Already we can infer a number of steps in making the dish. We must acquire strawberries and cream, and probably put them together on a plate. The recipe will continue to describe the preparation of the dish in more detail, but even if we read only the heading, we will make few mistakes. So what does this mean for DevOps and product creation? When you are putting things together and building things, the intuitive and natural way to describe the process is to do it declaratively. Describe the "whats" rather than the "hows", and then the "hows" can be inferred.

The Tao of Building Software

Most build tools have at their core a way of declaring relationships between software components. Here's a Make snippet:

    a : b
        cc b

And here's an Ant snippet:

    <build>
      cc b
    </build>

And a Maven snippet:

    <dependency>
      lala
    </dependency>

Many people think they wound up in a Lovecraftian hell when they see XML, even though the brackets are perfectly Euclidean. But if you squint hard enough, you will see that most tools at their core describe dependency trees.
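However different the surface syntax, each of these snippets declares a dependency tree, and the build order - the "hows" - can be inferred from it. Here is a minimal sketch of that inference in Python rather than in any particular build tool; the target names are made up for illustration:

```python
# Declarative build description: each target lists what it depends on,
# never how or when to build it. Target names are invented examples.
deps = {
    "app": ["lib", "config"],
    "lib": ["config"],
    "config": [],
}

def build_order(deps):
    """Infer a build order from declared dependencies (topological sort)."""
    order, seen = [], set()

    def visit(target):
        if target in seen:
            return
        seen.add(target)
        for dep in deps[target]:
            visit(dep)  # a target's dependencies are always built first
        order.append(target)

    for target in deps:
        visit(target)
    return order

print(build_order(deps))  # config comes before lib, lib before app
```

This is, in essence, what Make, Ant, and Maven do under the hood: the declared "whats" are enough to derive the concrete build steps.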
The Apache Maven tool is well-known, and very explicit about the declarative approach. So, let's focus on that and try to find the Tao of Maven. When we are having a good day with Maven and we are following the ways of Tao, we describe what type of software artifact we want to build, and the components we are going to use to put it together. That's all. The concrete building steps are inferred. Of course, since life is interesting and complex, we will often encounter situations where the way of Tao eludes us. Consider this example:

    type:pom
    antcall tar together ../*/target/*.jar

Although abbreviated, I have observed this antipattern several times in real world projects. What's wrong with it? After all, this antipattern occurs because the alternatives are non-obvious, or more verbose. You might think it's fine. But first of all, notice that we are not describing "whats" (at least not in a way that Maven can interpret). We are describing "hows". Fixing this will probably require a lot of work, but any larger build will ensure that it eventually becomes mandatory to find a fix. Pause (perhaps in your Zen Garden) and consider that dependency trees are already described within the code of most programming languages. Isn't the "import" statement of Java, Python and the like enough? In theory this is adequate - if we disregard the dynamism afforded by Java, where it is possible to construct a class name as a string and load it. In practice, there are a lot of different artifact types that might contain various resources. Even so, it is clearly possible in theory to package all required code if the language just supported it. JSR 294 - "modularity in Java" - is an effort to provide such support at the language level.

In Summary

So what have we learned? The two most important lessons are simple - when building software (or indeed, any product), focus on the "Whats" before the "Hows".
And when you're empowered with build tools such as Maven, make sure you work with the tool rather than around it.

About the Author

Joakim Verona is a consultant with a specialty in Continuous Delivery and DevOps, and the author of Practical DevOps. He has worked as the lead implementer of complex multilayered systems such as web systems, multimedia systems, and mixed software/hardware systems. His wide-ranging technical interests led him to the emerging field of DevOps in 2004, where he has stayed ever since. Joakim completed his master's in computer science at Linköping Institute of Technology. He is a certified Scrum master, Scrum product owner, and Java professional.

DevOps: An Evolution and a Revolution

Julian Ursell
24 Jul 2014
4 min read
Are we DevOps now?

The system-wide software development methodology that breaks down the problematic divide between development and operations is still at the stage where enterprises implementing the idea are probably asking that question, working out whether they've reached the endgame of effective collaboration between the two spheres and a systematic automation of their IT service infrastructure. Considered to be the natural evolution of Agile development and practices, the idea and business implementation of DevOps is rapidly gaining traction and adoption in significant commercial enterprises, and we're very likely to see a saturation of DevOps implementations in serious businesses in the near future. The benefits of DevOps for scaling and automating the daily operations of businesses are wide-reaching and becoming more and more crucial, both from the perspective of enabling rapid software development and of delivering products to clients who demand and need more and more frequent releases of up-to-date applications. The movement towards DevOps systems moves in close synchronization with the growing demand for experiencing and accessing everything in real time, as it produces the level of infrastructural agility needed to roll out release after release with minimal delays. DevOps has been adopted prominently by players as big as Spotify, who have embraced the DevOps culture throughout the formative years of their organization and still hold this philosophy now. The idea that DevOps is an evolution is not a new one. However, there's also the argument to be made that the actual evolution from a non-DevOps system to a DevOps one entails a revolution in thinking. From a software perspective, an argument could be made that DevOps has inspired a minor technological revolution, with the spawning of multiple technologies geared towards enabling DevOps workflows.

Docker, Chef, Puppet, Ansible, and Vagrant are all powerful key tools in this space and vastly increase the productivity of developers and engineers working with software at scale. However, it is one thing to mobilize DevOps tools and implement them physically into a system (not easy in itself), but it is another thing entirely to bring the thinking of an organization around to a collaborative culture where developers and administrators live and breathe in the same DevOps atmosphere. As a way of thinking, it requires a substantial cultural overhaul and a breaking down of entrenched programming habits and the silo-ization of the two spheres. It's not easy to transform the day-to-day mindset of a developer so that they incorporate thinking in ops (monitoring, configuration, availability), or vice versa of a systems engineer so that they are thinking in terms of design and development. One can imagine it is difficult to cultivate this sort of culture and atmosphere within a large enterprise system with many large moving parts, as opposed to a startup which may have the "day zero" flexibility to employ a DevOps approach from the roots up. To reach the "state" of DevOps is a long journey, and one that involves a revolution in thinking. From a systematic as well as cultural point of view, it takes a considerable degree of groundbreaking in order to shatter (what is sometimes) the monolithic wall between development and operations. But for organizations that realize that they need the responsiveness to adapt to clients on demand and have the foresight to put in place system mechanics that allow them to scale their services in the future, the long-term benefits of a DevOps revolution are invaluable. Continuous and automated deployment, shorter testing times, consistent application monitoring and performance visibility, flexibility when scaling, and a greater margin for error all stem from a successful DevOps implementation.

On top of that, a survey showed that engineers working in a DevOps environment spent less time firefighting and more productive time focusing on self-improvement, infrastructure improvement, and product quality. Getting to the point where engineers can say "we're DevOps now!" however is a bit of a misconception, because it's more than a matter of sharing common tools, and there will be times when keeping the bridge between devs and ops stable and productive is challenging. There is always the potential that new engineers joining an organization can dilute the DevOps culture, and also the fact that DevOps engineers don't grow overnight. It is an ongoing philosophy, and as much an evolution as it is a revolution worth having.

Trick Question: What is DevOps?

Michael Herndon
10 Dec 2015
7 min read
An issue that plagues DevOps is the lack of a clearly defined definition. A Google search displays results that state that DevOps is empathy, culture, or a movement. There are also derivations of DevOps like ChatOps and HugOps.

A lot of speakers mentions DEVOPS but no-one seemed to have a widely agreed definition of what DEVOPS actually means. — Stephen Booth (@stephenbooth_uk) November 19, 2015

Proposed Definition of DevOps: "Getting more done with fewer meetings." — DevOps Research (@devopsresearch) October 12, 2015

The real head-scratchers are the number of job postings for DevOps engineers and the number of certifications for DevOps that are popping up all over the web. The job title Software Engineer is contentious within the technology community, so the job title DevOps engineer is just begging to take pointless debates to a new level. How do you create a curriculum and certification course that has any significant value on an unclear subject? For a methodology that has an emphasis on people, empathy, and communication, it falls woefully short of heeding its own advice and values. On any given day, you can see the meaning debated in blog posts and tweets.

My current understanding of DevOps and why it exists

DevOps is an extension of the agile methodology that is hyper-focused on bringing customers extraordinary value without compromising creativity (development) or stability (operations). DevOps comes from the two merged worlds of development and operations. Operations in this context includes all aspects of IT, such as system administration, maintenance, and so on. Creation and stability are naturally at odds with each other. The ripple effect is a good way to explain how these two concepts have friction. Stability wants to keep the pond from becoming turbulent and causing harm. Creation leads to change, which can act as a random rock thrown into the water, sending ripples throughout the whole pond and leading to undesired side effects that harm the whole ecosystem.

DevOps seeks to leverage the momentum of controlled ripples to bring about effective change without causing enough turbulence to impact the whole pond negatively. The natural friction between these two needs often drives a wedge between development and operations. Operations worry that a product update may include broken functionality that customers have come to depend on, and developers worry that sorely needed new features may not make it to customers because of operations' resistance to change. Instead of taking polarizing positions, DevOps is focused on blending those two positions into a force that effectively provides value to the customer without compromising the creativity and stability that a product or service needs to compete in an ever-evolving world.

Why is a clear singular meaning needed for DevOps?

The understanding of a meaning is an important part of sending a message. If an unclear word is used to send a message, then the word risks becoming noise and the message risks going uninterpreted or being misinterpreted. Without a clear singular meaning, you risk losing the message that you want people to hear. In technology, I see messages get drowned in noise all the time.

The problem of multiple meanings

In communication theory, noise is anything that interferes with understanding. Noise is more than just the sounds of static, loud music, or machinery. Creating noise can be as simple as using obscure words to explain a topic or providing an unclear definition that muddles the comprehension of a given subject. DevOps suffers from too much noise that increases people's uncertainty about the word. After reading a few posts on DevOps, each one with its own declaration of the essence of DevOps, DevOps becomes confusing. DevOps is empathy! DevOps is culture! DevOps is a movement! Because of noise, DevOps seems to stand for multiple ideas plus agile operations without setting any prioritization or definitive context. OK, so which is it?

Is it one of those, or is it all of them? Which idea is the most important? Furthermore, these ideas can cause friction, as not everyone shares the same view on these topics. DevOps is supposed to reduce friction between naturally opposing groups within a business, not create more of it. People can get behind making more money and working fewer hours by strategically providing customers with extraordinary value. Once you start going into things that people consider personal, people can start to feel excluded for not wanting to mix the two topics, and thus you diminish the reach of the message that you once had. When writing about empathy, one should practice empathy and consider that not everyone wants to be emotionally vulnerable in the workplace. Forcing people to be emotionally vulnerable or fit a certain mold for culture can cause people to shut down. I would argue that all businesses need people that are capable of empathy to argue on behalf of the customer and other employees, but it's not a requirement that all employees are empathetic. At the other end of the spectrum, you need people that are not empathetic to make hard and calculating decisions. One last point on empathy: I've seen people write on empathy and users in a way that should have been about the psychology of users or something else entirely. Empathy is strictly understanding and sharing the feelings of another. It doesn't cover physical needs or intellectual ones, just the emotional. So another issue with crossing multiple topics into one definition is that you risk damaging two topics at once. This doesn't mean people should avoid writing about these topics. Each topic stands on its own merit. Each topic deserves its own slate. Empathy and culture are causes that any business can take up without adopting DevOps. They are worth writing about; just make sure that you don't mix messages to avoid confusing people. Stick to one message.

Write to the lowest common denominator

Another aspect of noise is using wording that is a barrier to understanding a given definition.

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support. - the agile admin

People who are coming from outside the world of agile and development are going to have a hard time piecing together the meaning of a definition like that. What my mind sees when reading something like that is the same sound that the teacher in Charlie Brown makes. Blah, Blah Blah. blah! Be kind to your readers. When you want them to remember something, make it easy to understand and memorable.

Write to appeal to all personality styles

In marketing, you're taught to write to appeal to four personality styles: driver, analytical, expressive, and amiable. Getting people to work together in the workplace also requires appealing to these four personality types. There is a need for a single definition of DevOps that appeals to the four personality styles, or at the very least refrains from being a barrier to entry. If a person needs to persuade someone with a driver type of personality, but the definition includes language that invokes an automatic no, then it puts people who want to use DevOps at a disadvantage. Give people every advantage you can for them to adopt DevOps.

It's time for a definition of DevOps

One of the main points of communication is to reduce uncertainty. It's hypocritical to introduce a word that touches upon the importance of communication without a definite meaning, and then expect people to take it seriously when the definition constantly changes. It's time that we have a singular definition for DevOps, so that people can use it for the hiring process and certifications, and so that the market can do so without the risk of the message being lost or co-opted into something that it is not.
About the author Michael Herndon is the head of DevOps at Solovis, creator of badmishka.co, and all around mischievous nerdy guy.

8 ways Artificial Intelligence can improve DevOps

Prasad Ramesh
01 Sep 2018
6 min read
DevOps combines development and operations in an agile manner. ITOps refers to network infrastructure, computer operations, and device management. AIOps is artificial intelligence applied to ITOps, a term coined by Gartner. That makes us wonder what AI applied to DevOps would look like. Currently, there are some problem areas in DevOps that mainly revolve around data: accessing the large pool of data, taking action on it, managing alerts, and so on. Moreover, there are errors caused by human intervention. AI works heavily with data and can help improve DevOps in numerous ways. Before we get into how AI can improve DevOps, let's take a look at some of the problem areas in DevOps today.

The trouble with DevOps

Human errors: When testing or deployment is performed manually and there is an error, it is hard to repeat and fix. Software development is often outsourced, and in such cases there is a lack of coordination between the dev and ops teams.

Environment inconsistency: Software functionality breaks when the code moves to different environments, as each environment has different configurations. Teams can waste a lot of time on bugs when the software works fine in one environment but not in another.

Change management: Many companies have change management processes well in place, but they are outdated for DevOps. The time taken for reviews, passing a new module, and so on is manual and proves to be a bottleneck. Changes happen frequently in DevOps, and the functioning suffers due to old processes.

Monitoring: Monitoring is key to ensuring smooth functioning in Agile. Many companies do not have the expertise to monitor the pipeline and infrastructure. Moreover, monitoring only the infrastructure is not enough; application performance also needs to be monitored, solutions need to be logged, and analytics need to be tracked.

Now let's take a look at 8 ways AI can improve DevOps given the above context.

1. Better data access

One of the most critical issues faced by DevOps teams is the lack of unregulated access to data. There is also a large amount of data, while the teams rarely view all of it and focus on the outliers. The outliers only work as an indicator but do not give robust information. Artificial intelligence can compile and organize data from multiple sources for repeated use. Organized data is much easier to access and understand than heaps of raw data. This will help in predictive analysis and eventually a better decision-making process. This is very important and enables many of the other ways listed below.

2. Superior implementation efficiency

Artificially intelligent systems can work with minimal or no human intervention. Currently, a rules-based environment managed by humans is followed in DevOps teams. AI can transform this into self-governed systems to greatly improve operational efficiency. There are limits to the volume and complexity of analysis a human can perform. Given the large volumes of data to be analyzed and processed, AI systems, being good at exactly that, can set optimal rules to maximize operational efficiencies.

3. Root cause analysis

Conducting root cause analysis is very important to fix an issue permanently. Not getting to the root cause allows the cause to persist and affect other areas further down the line. Often, engineers don't investigate failures in depth and are more focused on getting the release out. This is not surprising given the limited amount of time they have to work with. If fixing a superficial area gets things working, the root cause is not found. AI can take all data into account and see patterns between activity and cause to find the root cause of a failure.

4. Automation

Complete automation is a problem in DevOps; many routine tasks still need to be done by humans. An AI model can automate these repeatable tasks and speed up the process significantly. A well-trained model increases the scope and complexity of the tasks that can be automated by machines. AI can help achieve minimal human intervention so that developers can focus on more complex interactive problems. Complete automation also allows errors to be reproduced and fixed promptly.

5. Reduce operational complexity

AI can be used to simplify operations by providing a unified view. An engineer can view all the alerts and relevant data produced by the tools in a single place. This improves the current scenario, where engineers have to switch between different tools to manually analyze and correlate data. Alert prioritization, root cause analysis, and evaluating unusual behavior are complex, time-consuming tasks that depend on data. An organized, singular view is a great benefit when looking up data. "AI and machine learning makes it possible to get a high-level view of the tool-chain, but at the same time zoom in when it is required." -SignifAI

6. Predicting failures

A critical failure in a particular tool or area in DevOps can cripple the process and delay cycles. With enough data, machine learning models can predict when an error will occur. This goes beyond simple predictions. If a past fault is known to produce certain readings, AI can read patterns and predict the signs of failure. AI can see indicators that humans may not be able to. Early failure prediction and notification enable the team to fix issues before they can affect the software development life cycle (SDLC).

7. Optimizing a specific metric

AI can work towards solutions where uptime is maximized. An adaptive machine learning system can learn how the system works and improve it. Improving could mean tweaking a specific metric in the workflow for optimized performance. Configurations can be changed by AI for optimal performance as required during different production phases. Real-time analysis plays a big part in this.

8. Managing alerts

DevOps systems can be flooded with alerts, which are hard for humans to read and act upon. AI can analyze these alerts in real time and categorize them. Assigning priority to alerts helps teams work on fixing them rather than going through a long list. The alerts can simply be tagged with a common ID for specific areas, or AI can be trained to classify good and bad alerts. Prioritizing alerts so that flaws are shown first to be fixed will help smooth functioning.

Conclusion

As we saw, most of these areas depend heavily on data. So getting the system right to enhance data accessibility is the first step to take. Predictions work better when data is organized, and performing root cause analysis is also easier. Automation can take over mundane tasks and allow engineers to focus on more interactive problems that machines cannot handle. With machine learning, the overall operational efficiency, simplicity, and speed of DevOps teams can be improved.
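The alert triage described above can be sketched as a toy scorer. The severity keywords and weights below are invented for illustration; a real system would learn them from labeled alert history rather than a hand-written table:

```python
# Toy alert triage: score incoming alert messages by severity keywords and
# return them highest-priority first. Keywords and weights are made up here;
# in practice they would be learned from past incidents.
SEVERITY = {"outage": 3, "error": 2, "timeout": 2, "warning": 1}

def prioritize(alerts):
    def score(alert):
        text = alert.lower()
        return sum(w for kw, w in SEVERITY.items() if kw in text)
    return sorted(alerts, key=score, reverse=True)

alerts = [
    "warning: disk 80% full",
    "payment service outage detected",
    "timeout talking to auth service",
]
print(prioritize(alerts)[0])  # the outage surfaces first
```

Even this crude version shows the payoff: engineers start from the most likely fire instead of scanning the whole list.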
The key differences between Kubernetes and Docker Swarm

Richard Gall
02 Apr 2018
4 min read
The orchestration war between Kubernetes and Docker Swarm appears to be over. Back in October, Docker announced that its Enterprise Edition could be integrated with Kubernetes. This move was widely seen as the Docker team conceding to Kubernetes' dominance as an orchestration tool. But Docker Swarm nevertheless remains popular; it doesn't look like it's about to fall off the face of the earth. So what is the difference between Kubernetes and Docker Swarm? And why should you choose one over the other? To start with, it's worth saying that both container orchestration tools have a lot in common. Both let you run a cluster of containers, allowing you to increase the scale of your container deployments significantly without cloning yourself to mess about with the Docker CLI (although as you'll see, you could argue that one is more suited to scalability than the other). Ultimately, you'll need to view the various features and key differences between Docker Swarm and Kubernetes in terms of what you want to achieve. Do you want to get up and running quickly? Are you looking to deploy containers on a huge scale? Here's a brief but useful comparison of Kubernetes and Docker Swarm. It should help you decide which container orchestration tool you should be using.

Docker Swarm is easier to use than Kubernetes

One of the main reasons you'd choose Docker Swarm over Kubernetes is that it has a much more straightforward learning curve. As popular as it is, Kubernetes is regarded by many developers as complex. Many people complain that it is difficult to configure. Docker Swarm, meanwhile, is actually pretty simple. It's much more accessible for less experienced programmers. And if you need a container orchestration solution now, simplicity is likely going to be an important factor in your decision making.

...But Docker Swarm isn't as customizable

Although ease of use is definitely one thing Docker Swarm has over Kubernetes, it also means there's less you can actually do with it. Yes, it gets you up and running, but if you want to do something a little different, you can't. You can configure Kubernetes in a much more tailored way than Docker Swarm. That means that while the learning curve is steeper, the possibilities and opportunities open to you will be far greater.

Kubernetes gives you auto-scaling - Docker Swarm doesn't

When it comes to scalability, it's a close race. Both tools are able to run around 30,000 containers on 1,000 nodes, which is impressive. However, when it comes to auto-scaling, Kubernetes wins because Docker doesn't offer that functionality out of the box.

Monitoring container deployments is easier with Kubernetes

This is where Kubernetes has the edge. It has in-built monitoring and logging solutions. With Docker Swarm you'll have to use third-party applications. That isn't necessarily a huge problem, but it does make life ever so slightly more difficult. Whether that slight extra difficulty outweighs the steeper Kubernetes learning curve, however, is another matter…

Is Kubernetes or Docker Swarm better?

Clearly, Kubernetes is a more advanced tool than Docker Swarm. That's one of the reasons why the Docker team backed down and opened up their enterprise tool for integration with Kubernetes. Kubernetes is simply the software that's defining container orchestration. And that's fine - Docker has cemented its position within the stack of technologies that support software automation and deployment. It's time to let someone else take on the challenge of orchestration. But although Kubernetes is the more 'advanced' tool, that doesn't mean you should overlook Docker Swarm. If you want to begin deploying container clusters, without the need for specific configurations, then don't allow yourself to be seduced by something shinier, something ostensibly more popular. As with everything else in software development, understand and define what job needs to be done - then choose the right tool for the job.
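The auto-scaling difference above comes down to a simple control loop. Kubernetes' Horizontal Pod Autoscaler roughly computes the desired replica count as the current count scaled by observed-over-target load; here is a hedged sketch of that calculation, with an arbitrary 50% CPU target and an invented replica cap:

```python
# Naive auto-scaling decision of the kind Kubernetes automates and Docker
# Swarm leaves to you: desired = ceil(current * observed / target).
# The 0.5 CPU target and the cap of 10 replicas are example values only.
import math

def desired_replicas(current, observed_cpu, target_cpu=0.5, max_replicas=10):
    if current == 0:
        return 1  # bootstrap: always keep at least one replica
    desired = math.ceil(current * observed_cpu / target_cpu)
    return max(1, min(desired, max_replicas))

print(desired_replicas(4, 0.9))  # load above target -> scale out
print(desired_replicas(4, 0.2))  # load below target -> scale in
```

With Swarm, you would run logic like this yourself (or via a third-party tool) and call `docker service scale`; with Kubernetes, the autoscaler runs it for you.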

So you want to be a DevOps engineer

Darrell Pratt
20 Oct 2016
5 min read
The DevOps movement has come about to accomplish the long sought-after goal of removing the barriers between the traditional development and operations organizations. Historically, development teams have written code for an application and passed that code over to the operations team to both test and deploy onto the company's servers. This practice generates many mistakes and misunderstandings in the software development lifecycle, in addition to the lack of ownership that grows among developers when they own little of the deployment pipeline and production responsibilities. The new DevOps teams that are appearing now start as blended groups of developers, system administrators, and release engineers. The thought is that the developers can assist the operations team members in building and more deeply understanding the applications, and the operations team members can shed light on the environments and deployment processes that they must master to keep the applications running. As these teams evolve, we are seeing the trend to specifically hire people into the role of the DevOps engineer. What this role is, and what type of skills you might need to succeed as a DevOps engineer, is what we will cover in this article.

The Basics

Almost every job description you are going to find for a DevOps engineer is going to require some level of proficiency in the desired production operating systems. Linux is probably the most common. You will need to have a very good understanding of how to administer and use a Linux-based machine. Words like grep, sed, awk, chmod, chown, ifconfig, netstat and others should not scare you. In the role of DevOps engineer, you are the go-to person for developers when they have issues with the server or cloud. Make sure that you have a good understanding of where the failure points can be in these systems and the commands that can be used to pinpoint the issues.
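Pinpointing those failure points often starts with a connectivity check of the kind you would otherwise run with netstat or curl. Here is a minimal sketch in Python; the host and port in the example are placeholders, not references to any real service:

```python
# Tiny health probe: check whether anything is listening on a host:port.
# Host and port below are example values only.
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open("localhost", 22))  # is sshd reachable on this machine?
```

Checks like this are the building blocks of the monitoring scripts a DevOps engineer is routinely asked to write when a developer says "the server is down."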
Learn the package manager systems for the various distributions of Linux to better understand the underpinnings of how they work. From RPM and Yum to Apt and Apk, the managers vary widely but the common ideas are very similar in each. You should understand how to use the managers to script machine configurations and understand how modern containers are built.

Coding

The type of language you need for a DevOps role is going to depend quite a bit on the particular company. Java, C#, JavaScript, Ruby and Python are all popular languages. If you are a devout Java follower then choosing a .NET shop might not be your best choice. Use your discretion here, but the job is going to require a working knowledge of coding in one or more focused languages. At a minimum, you will need to understand how the build chain of the language works, and you should be comfortable reading the error logging of the system and understanding what those logs are telling you.

Cloud Management

Gone are the days of uploading a war file to a directory on the server. It’s very likely that you are going to be responsible for getting applications up and running on a cloud provider. Amazon Web Services is the gorilla in the space, and having a good level of hands-on experience with the various services that make up a standard AWS deployment is a much sought-after skill set. From standard AMIs to load balancing, CloudFormation and security groups, AWS can be complicated, but luckily it is very inexpensive to experiment with, and there are many training classes on the different components.

Source Code Control

Git is the tool of choice currently for source code control. Git gives a team a decentralized SCM system that is built to handle branching and merging operations with ease. Workflows that teams use are varied, but a good understanding of how to merge branches, rebase and fix commit issues is required in the role.
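A throwaway repository is the easiest place to build that muscle memory; here is a minimal sketch of a branch-and-merge cycle (the branch names and commit layout are illustrative):

```shell
# Practice a branch-and-merge cycle in a disposable repository.
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev

base=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
git commit -q --allow-empty -m "initial commit"

git checkout -q -b feature              # branch off and do the feature work
git commit -q --allow-empty -m "feature work"

git checkout -q "$base"                 # back to the base branch, then merge
git merge -q --no-edit feature
git log --oneline                       # both commits are now on the base branch
```

From here, `git rebase` and interactive history fix-ups can be tried with no risk to real code.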
The DevOps engineers are usually looked to for help on addressing “interesting” Git issues, so good, hands-on experience is vital.

Automation Tooling

A new automation tool has probably been released in the time it takes to read this article. There will be new tools and platforms in this part of the DevOps space, but the most common are Chef, Puppet and Ansible. Each system provides a framework for treating the setup and maintenance of your infrastructure as code. Each system has a slightly different take on the method for writing the configurations and deploying them, but the concepts are similar, and a good background in any one of these is more often than not a requirement for any DevOps role. Each of these systems requires a good understanding of either Ruby or Python, and these languages appear quite a bit in the various tools used in the DevOps space.

A desire to improve systems and processes

While not an exhaustive list, mastering this set of skills will accelerate anyone’s journey towards becoming a DevOps engineer. If you can augment these skills with a strong desire to improve upon the systems and processes that are used in the development lifecycle, you will be an excellent DevOps engineer.

About the author

Darrell Pratt is the director of software development and delivery at Cars.com, where he is responsible for a wide range of technologies that drive the Cars.com website and mobile applications. He is passionate about technology and still finds time to write a bit of code and hack on hardware projects. You can find him on Twitter here: @darrellpratt.
7 crucial DevOps metrics that you need to track

Guest Contributor
20 Aug 2019
9 min read
DevOps has taken the IT world by storm and is increasingly becoming the de facto industry standard for software development. The DevOps principles have the potential to result in a competitive differentiation, allowing teams to deliver high-quality software at a faster rate while adequately meeting customer requirements. DevOps prevents the development and operations teams from functioning in two distinct silos and ensures seamless collaboration between all the stakeholders.

Collection of feedback and its subsequent incorporation plays a critical role in DevOps implementation and the formulation of a CI/CD pipeline. Successful transition to DevOps is a journey, not a destination. Setting up benchmarks, measuring yourself against them and tracking your progress is important for determining the stage of DevOps architecture you are in and ensuring a smooth journey onward.

Feedback loops are a critical enabler for delivery of the application, and metrics help transform qualitative feedback into quantitative form. Collecting the feedback from the stakeholders is only half the work; gathering insights and communicating them through the DevOps team to keep the CI/CD pipeline on track is equally important. This is where the role of metrics comes in. DevOps metrics are the tools that your team needs for ensuring that the feedback is collected and communicated with the right people to improve upon the existing processes and functions in a unit.

Here are 7 DevOps metrics that your team needs to track for a successful DevOps transformation:

1. Deployment frequency

Quick iteration and continuous delivery are key measurements of DevOps success. This metric tracks how long the software takes to deploy and how often deployments take place. Keeping track of the frequency with which new code is deployed helps keep track of the development process. The ultimate goal of deployment is to be able to release smaller deployments of code as quickly as possible.
Smaller deployments are easier to test and release. They also improve the discoverability of bugs in the code, allowing for their faster and timely resolution. Determining the frequency of deployments needs to be done separately for development, testing, staging, and production environments. Keeping track of the frequency of deployment to QA or pre-production environments is also an important consideration.

A high deployment frequency is a tell-tale sign that things are going smoothly in the production cycle. Smaller deployments are easier to test and release, so higher deployment frequency directly corresponds with higher efficiency. No wonder tech giants such as Amazon and Netflix deploy code thousands of times a day. Amazon has built a deployment engine called Apollo that has made more than 50 million deployments in 12 months, which is more than one deployment per second. This results in reduced outages and decreased downtimes.

2. Failed deployments

Any deployment that causes issues or outages for your users is a failed deployment. Tracking the percentage of deployments that result in negative feedback from the user’s end is an important DevOps metric. The DevOps teams are expected to build quality into the product right from the beginning of the project. The responsibility for ensuring the quality of the software is also disseminated through the entire team and not just centered around QA.

While in an ideal scenario there should be no failed deployments, that’s often not the case. Tracking the percentage of deployments that result in negative sentiment in the project helps you ascertain the ground-level realities and makes you better prepared for such occurrences in the future. Only if you know what is wrong can you formulate a plan to fix it. While a failure rate of 0 is the magic number, less than 5% failed deployments is considered workable.
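That 5% rule of thumb is easy to turn into an automated check; here is a minimal sketch with invented deployment counts:

```shell
# Compare the failed-deployment percentage against the ~5% "workable" threshold.
total=120
failed=4
rate=$(awk -v f="$failed" -v t="$total" 'BEGIN {printf "%.1f", 100 * f / t}')
echo "failure rate: ${rate}%"

if awk -v r="$rate" 'BEGIN {exit !(r < 5)}'; then
  echo "within the workable range"
else
  echo "investigate the pipeline"
fi
```

Fed from your deployment tracker instead of hard-coded numbers, a check like this can run after every release.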
In case the metric consistently shows spikes of failed deployments over 10%, the existing process needs to be broken down into smaller segments with mini-deployments. Fixing 5 issues in 100 deployments is any day easier than fixing 50 in 1000 within the same time-frame.

3. Code committed

Code committed is a DevOps metric that tracks the number of commits the team makes to the software before it can be deployed into production. This serves as an indicator of the development velocity as well as the code quality. The number of code commits that a team makes has to be within the optimum range defined by the DevOps team. Too many commits may be indicative of low quality or lack of direction in development. Similarly, if the commits are too low, it may be an indicator that the team is too taxed and non-productive. Uncovering the reason behind the variation in code committed is important for maintaining the productivity and project velocity while also ensuring optimal satisfaction within the team members.

4. Lead Time

The software development cycle is a continuous process where new code is constantly developed and successfully deployed to production. Lead time for changes in DevOps is the time taken to go from code committed to code successfully running in production. It is an important indicator for determining the efficiency of the existing process and identifying possible areas of improvement. The lead time and mean time to change (MTTC) give the DevOps team a better hold of the project. By measuring the amount of time passing between a change’s inception and its actual deployment to production, the team’s ability to adapt as the project requirements evolve can be computed.

5. Error rate

Errors in any software application are inevitable. A few occasional errors aren’t a red flag, but keeping track of the error rates and being on the lookout for any unusual spikes is important for the health of your application.
A significant rise in error rate is an indicator of inherent quality problems and ongoing performance-related issues. The errors that you encounter can be of two types: bugs and production issues. Bugs are the exceptions in the code discovered after deployment. Production issues, on the other hand, are issues related to database connections and query timeouts.

The error rate is calculated as a function of the transactions that result in an error during a particular time window. For a specified time duration, out of 1000 transactions, if 20 have errors, the error rate is calculated as 20/1000 or 2 percent. A few intermittent errors throughout the application life cycle are a normal occurrence, but any unusual spikes need to be watched for. The process needs to be analysed for bugs and production issues, and the exceptions that occur need to be handled concurrently.

6. Mean time to detection

Issues happen in every project, but how fast you discover the issues is what matters. Having robust application monitoring and optimal coverage will help you find out about any issues as quickly as possible. The mean time to detection (MTTD) metric is the amount of time that passes between the beginning of the issue and the time when the issue gets detected and some remedial action is taken. The time to fix the issues is not covered under MTTD. Ideally, the DevOps teams need to strive to keep the MTTD as low as possible (ideally close to zero), i.e. the DevOps teams should be able to detect any issues as soon as they occur. There needs to be a proper protocol established, and communication channels need to be in place, to help the team discover the error quickly and respond to its correction in a rapid manner.

7. Mean time to recovery

Time to restore service, or mean time to recovery (MTTR), is a critical part of any project. It is the average time taken by the team to repair a failure in the system.
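The error-rate arithmetic under metric 5 (20 errored transactions out of 1000) scripts just as easily; here is a minimal sketch with the same sample numbers:

```shell
# Error rate for a time window: errored transactions / total transactions.
transactions=1000
errored=20
awk -v e="$errored" -v t="$transactions" 'BEGIN {printf "error rate: %.1f%%\n", 100 * e / t}'
```

In practice, the two counts would come from your monitoring system rather than shell variables.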
It comprises the time taken from failure detection till the time the project starts operating in the normal manner. Recovery and resilience are key components that determine the market readiness of a project. MTTR is an important DevOps metric because it allows for tracking of complex issues and failures while judging the capability of the team to handle change and bounce back again. The ideal recovery time for the fix to take place should be as low as possible, thus minimizing the overall system downtime.

System downtimes and outages, though undesirable, are unavoidable. This especially runs true in the current development scenario where companies are making the move to the cloud. Designing for failure is a concept that needs to be ingrained right from the start. Even major applications like Facebook and WhatsApp, Twitter, Cloudflare, and Slack are not free of outages. What matters is that the downtime is kept minimal. Mean time to recovery thus becomes critical to realize the time the DevOps teams would need to bring the system back on track.

Closing words

DevOps isn’t just about tracking metrics; it is primarily about the culture. Organizations that make the transition to DevOps place immense emphasis on one goal: rapid delivery of stable, high-quality software through automation and continuous delivery. Simply having a bunch of numbers in the form of DevOps metrics isn’t going to get you across the line. You need a long-term vision combined with the valuable insights that the metrics provide. It is only by monitoring these over a period of time, and tracking your team’s progress in achieving the goals that you have set, that you can hope to reap the true benefits that DevOps offers.

Author Bio

Vinati Kamani writes about emerging technologies and their applications across various industries for Arkenea, a custom software development company and devops consulting company.
When she's not at her desk penning down articles or reading up on the recent trends, she can be found traveling to remote places and soaking up different cultural experiences.

Read next

DevOps engineering and full-stack development – 2 sides of the same agile coin
Introducing kdevops, modern devops framework for Linux kernel development
Why do IT teams need to transition from DevOps to DevSecOps?
What are lightweight Architecture Decision Records?

Richard Gall
16 May 2018
4 min read
Architecture Decision Records (ADRs) document all the decisions made about software. Every change is recorded in a plain text file sitting inside a version control system (like GitHub). The record should be a complement to the information you can find in a version control system. The ADR provides context and information around every decision made about a piece of software.

Why are lightweight Architecture Decision Records needed?

We are always making decisions when we build software. Even the simplest piece of software will have required the engineer to take a number of different decisions. Often these decisions aren't obvious. If you've ever had to work with code written by someone else you're probably familiar with this sort of situation. You might have even found that when you come across someone else's code, you need to make a further decision. Either you can simply accept what has been written, and merely surmise and assume why it has been done in the way that it is, or you can decide to change it, based on your own judgement. Neither option is ideal.

This was what Michael Nygard identified in this blog post in 2011, which is when the concept of Architecture Decision Records first emerged. An ADR should prevent situations like this arising. That makes life easier for you. More importantly, it should mean that every decision is transparent to everyone involved. So, instead of blindly accepting something or immediately changing it, you can simply check the Architecture Decision Record. This will then inform how you proceed. Perhaps you need to make a change. But perhaps you also now understand the context of why something was built in the way it was. Any questions you might have should be explicitly answered in the architecture decision record. So, when you start asking yourself why has she done it like that?, instead of floundering helplessly, you can find the answer in the ADR.

Why lightweight Architecture Decision Records now?
Architecture Decision Records aren't a new thing. Nygard wrote his post all the way back in 2011, after all. But the fact remains that the context from which Nygard was writing in 2011 was very specific. Today it is mainstream. As we've moved away from monolithic architecture towards microservices or serverless, decision making has become more and more important in software engineering. This is a point that is well explained in a blog post here:

"The rise of lean development and microservices... complicates the ability to communicate architecture decisions. While these concepts are not inherently opposed to documentation, their processes often fail to effectively capture decision-making processes and reasoning. Another possible inefficiency when recording decisions is bad or out-of-date documentation. It's often a herculean effort to keep large, complex architecture documents current, making maintenance one of the most common barriers to entry."

ADRs are, then, a way of managing the complexity in modern software engineering. They are a response to a fundamental need to better communicate decisions. Most importantly, they codify decision-making within the development process. It is when they are lightweight and sit within the project itself that they are most effective.

Architecture Decision Record template

Architecture Decision Records must follow a template. Not only does that mean everyone is working off the same page, it also means people are actually more likely to document their decisions. Think about it: if you're asked to note how you decide to do something without any guidelines, you're probably not going to do it at all. Below, you'll find an Architecture Decision Record example template. There are a number of different templates you can use, but it's probably best to sit down with your team and agree on what needs to be captured.
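One way to make a template habitual is to scaffold each new record from a script; here is a minimal sketch (the `docs/adr` path, the numbering scheme, and the sample decision name are illustrative, not a standard):

```shell
# Scaffold a numbered, dated ADR file next to the code it describes.
cd "$(mktemp -d)"                        # stand-in for your project root
mkdir -p docs/adr
next=$(printf '%04d' $(( $(ls docs/adr | wc -l) + 1 )))
file="docs/adr/${next}-use-message-queue.md"

cat > "$file" <<EOF
# ${next}. Use a message queue between services

Date: $(date +%Y-%m-%d)

## Decision makers
[who was involved in the decision taken]

## Category
[which part of the architecture does this decision pertain to]

## Contextual outline
[why this decision was made; key considerations and assumptions]

## Impact consequences
[what this decision means for the project and for future decisions]
EOF

echo "created $file"
```

Committing the record in the same pull request as the change it explains keeps it lightweight and inside version control, which is exactly where the article argues it belongs.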
An Architecture Decision Record example

Date

Decision makers [who was involved in the decision taken]

Category [which part of the architecture does this decision pertain to]

Contextual outline [Explain why this decision was made. Outline the key considerations and assumptions at play]

Impact consequences [What does this decision mean for the project? What should someone reading this be aware of in terms of future decisions?]

As I've already noted, there are a huge number of ways you may want to approach this. Use this as a starting point.

Read next

Enterprise Architecture Concepts
Reactive Programming and the Flux Architecture
Why do IT teams need to transition from DevOps to DevSecOps?

Guest Contributor
13 Jul 2019
8 min read
Does your team perform security testing during development? If not, why not? Cybercrime is on the rise, and formjacking, ransomware, and IoT attacks have increased alarmingly in the last year. This makes security a priority at every stage of development.

In this kind of ominous environment, development teams around the globe should take a more proactive approach to threat detection. This can be done in a number of ways. There are some basic techniques that development teams can use to protect their development environments. But ultimately, what is needed is an integration of threat identification and management into the development process itself. Integrated processes like this are referred to as DevSecOps, and in this guide, we’ll take you through some of the advantages of transitioning to DevSecOps.

Protect Your Development Environment

First, though, let’s look at some basic measures that can help to protect your development environment. For both individuals and enterprises, online privacy is perhaps the most valuable currency of all. Proxy servers, Tor, and virtual private networks (VPNs) have slowly crept into the lexicon of internet users as cost-effective privacy tools to consider if you want to avoid drawing the attention of hackers. But what about enterprises? Should they use the same tools? They would prefer to avoid hackers as well. The answer is more complicated.

Encryption and authentication should be addressed early in the development process, especially given the common practice of using open source libraries for app coding. The advanced security protocols that power many popular consumer VPN services make them a good first step to protecting code and any proprietary technology. Additional controls like using 2-factor authentication and limiting who has access will further protect the development environment and procedures.
Beyond these basic measures, though, it is also worth looking in detail at your entire development process and integrating security management at every stage. This is the shift from DevOps to DevSecOps.

DevOps vs. DevSecOps: What's the Difference?

DevOps and DevSecOps are not separate entities, but different facets of the development process. Traditionally, DevOps teams work to integrate software development and implementation in order to facilitate the rapid delivery of new business applications. Since this process omits security testing and solutions, many security flaws and vulnerabilities aren't addressed early enough in the development process.

With the new approach, DevSecOps, this omission is addressed by automating security-related tasks and integrating controls and functions like composition analysis and configuration management into the development process. Previously, DevSec focused only on automating security code testing, but it is gradually transitioning to incorporate an operations-centric approach. This helps in reconciling two environments that are opposite by nature: DevOps is forward-looking because it's geared toward rapid deployment, while development security looks backward to analyze and predict future issues. By prioritizing security analysis and automation, teams can still improve delivery speed without the need to retroactively find and deal with threats.

Best Practices: How DevSecOps Should Work

The goal of current DevSecOps best practices is to implement a shift towards real-time threat detection rather than undergoing historical analysis. This enables more efficient application development that recognizes and deals with issues as they happen rather than waiting until there's a problem. This can be done by developing a more effective strategy while adopting DevSecOps practices.
When all areas of concern are addressed, it results in:

- Automatic code procurement: Automatic code procurement eliminates the problem of human error and the incorporation of weak or flawed coding. This benefits developers by allowing vulnerabilities and flaws to be discovered and corrected earlier in the process.
- Uninterrupted security deployment: Uninterrupted security deployment through the use of automation tools that work in real time. This is done by creating closed-loop testing and reporting and real-time threat resolution.
- Leveraged security resources: Leveraged security resources through automation. Automated DevSecOps typically addresses areas related to threat assessment, event monitoring, and code security. This frees your IT or security team to focus on other areas, like threat remediation and elimination.

There are five areas that need to be addressed in order for DevSecOps to be effective:

Code analysis

By delivering code in smaller modules, teams are able to identify and address vulnerabilities faster.

Management changes

Adapting the protocol for changes in management or admins allows users to improve on changes faster, as well as enabling security teams to analyze their impact in real time. This eliminates the problem of getting calls about problems with system access after the application is deployed.

Compliance

Addressing compliance with the Payment Card Industry Data Security Standard (PCI DSS) and the new General Data Protection Regulation (GDPR) earlier helps prevent audits and heavy fines. It also ensures that you have all of your reporting ready to go in the event of a compliance audit.

Automating threat and vulnerability detection

Threats evolve and proliferate fast, so security should be agile enough to deal with emerging threats each time coding is updated or altered. Automating threat detection earlier in the development process improves response times considerably.
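What that automation often looks like in practice is a gate in the CI pipeline; here is a minimal sketch using a made-up scanner report format (real scanners emit their own formats, so the parsing would differ):

```shell
# Simulate a scanner report, then gate the build on high-severity findings.
cat > /tmp/scan-report.txt <<'EOF'
HIGH  sql-injection        src/db/query.js
LOW   verbose-error-page   src/web/errors.js
HIGH  hardcoded-secret     src/config.js
EOF

high=$(grep -c '^HIGH' /tmp/scan-report.txt)
echo "high-severity findings: $high"

if [ "$high" -gt 0 ]; then
  echo "build should fail until findings are triaged"
  # exit 1   # enable in a real pipeline
fi
```

Because the check runs on every commit, a vulnerability is caught minutes after it is introduced rather than weeks later in a security review.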
Training programs

Comprehensive security response begins with proper IT security training. Developers should craft a training protocol that ensures all personnel who are responsible for security are up to date and on the same page. Organizations should bring security and IT staff into the process sooner. That means advising current team members of current procedures and ensuring that all new staff are thoroughly trained.

Finding the Right Tools for DevSecOps Success

Does a doctor operate with a chainsaw? Hopefully not. Likewise, all of the above points are nearly impossible to achieve without the right tools to get the job done with precision. What should your DevSec team keep in their toolbox?

Automation tools

Automation tools provide scripted remediation recommendations for security threats detected. One such tool is Automate DAST, which scans new or modified code against security vulnerabilities listed on the Open Web Application Security Project's (OWASP) list of the most common flaws, such as SQL injection errors. These are flaws you might have missed during static analysis of your application code.

Attack modeling tools

Attack modeling tools create models of possible attack matrices and map their implications. There are plenty of attack modeling tools available, but a good one for identifying cloud vulnerabilities is Infection Monkey, which simulates attacks against the parts of your infrastructure that run on major public cloud hosts like Google Cloud, AWS, and Azure, as well as most cloud storage providers like Dropbox and pCloud.

Visualization tools

Visualization tools are used for evolving, identifying, and sharing findings with the operations team. An example of this type of tool is PortVis, developed by a team led by professor Kwan-Liu Ma at the University of California, Davis.
PortVis is designed to display activity by host or port in three different modes: a grid visualization, in which all network activity is displayed on a single grid; a volume visualization, which extends the grid to a three-dimensional volume; and a port visualization, which allows devs to visualize the activity on specific ports over time. Using this tool, different types of attack can be easily distinguished from each other.

Alerting tools

Alerting tools prioritize threats and send alerts so that the most hazardous vulnerabilities can be addressed immediately. WhiteSource Bolt, for instance, is a useful tool of this type, designed to improve the security of open source components. It does this by checking these components against known security threats and providing security alerts to devs. These alerts also auto-generate issues within GitHub. Here, devs can see details such as references for the CVE, its CVSS rating, and a suggested fix, and there is even an option to assign the vulnerability to another team member using the milestones feature.

The Bottom Line

Combining DevOps and DevSec is not a meshing of two separate disciplines, but rather the natural transition of development to a more comprehensive approach that takes security into account earlier in the process, and does it in a more meaningful way. This saves a lot of time and hassle by addressing enterprise security requirements before deployment rather than probing for flaws later. The sooner your team hops on board with DevSecOps, the better.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum foundation as well as an active GitHub contributor.

Read next

Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Does it make sense to talk about DevOps engineers or DevOps tools?
How Visual Studio Code can help bridge the gap between full-stack development and DevOps