
Author Posts - DevOps

10 Articles

Is DevOps experiencing an identity crisis? [Interview]

Packt Editorial Staff
07 Jan 2020
7 min read
The definition of DevOps is a hotly disputed topic among amateur practitioners and experienced engineers alike. Ironically, DevOps was actually supposed to bring some order into the messy and chaotic environment of IT software development. In DevOps Paradox, DevOps expert Viktor Farcic talks to fellow industry figures who reveal their perspectives on the trend and what it means to them. In this article, we’ll see what some prominent people in the DevOps community have to say about DevOps. The quotes in this article are taken directly from the book. So, how are we supposed to incorporate DevOps into our organizations if we don't even know what it is? Let’s hear Viktor’s thoughts about what DevOps is and what its trends and future aspects are.

What is DevOps and why do we need it?

What is DevOps and why do we need it? What is the most important thing DevOps helps us achieve? What are the factors that drive the development of DevOps?

Viktor Farcic: Almost everyone gives a different answer to the question “What is DevOps?”—there is a huge discrepancy between the idea and the implementation. I believe that the main objective of DevOps is to enable self-sufficient product-oriented teams capable of having full control of their products. That is in stark contrast with the way many companies operate today. Normally, the lifecycle of an application is split between many teams. Business analysts define requirements, architects work on guidelines that must be followed and frameworks that must be used, developers write code, testers are in charge of validations, operators deploy new releases, and so on and so forth. The problem is that each of those groups belongs to a different department and has different and often opposing objectives. Instead of fostering collaboration towards a common goal, different teams (departments) are looking out for their own shortsighted interests. DevOps tries to remove organization based on the type of tasks performed and unite all the expertise required for the whole lifecycle of an application into a single team reporting to a single person. It forces us to work together and it builds empathy.

Is DevOps a process? Or a set of technologies? What's your perspective on this area of debate?

Viktor: DevOps is neither of the two. Unlike some agile frameworks (e.g. Scrum), there is no prescribed process to follow. Similarly, there is no technology we can adopt that will convert us into “DevOps” teams. It is only an idea that developers, operators, and everyone else need to work together instead of being isolated in different silos. That does not mean that technology is not important; it is, but often for reasons other than the obvious ones. Every new technology is created by a group of people that worked together to create it. As such, it always reflects the processes of those involved in creating it. Those processes, in turn, are a result of the culture of the people following them. In other words, every tech is a result of certain processes created as a result of the culture of the team that worked on it. So, even though we use an end result, it is a product of a process created in a specific culture. If we adopt a technology that does not match our own processes and culture, it will produce suboptimal results at best. So, we must either adopt technology that matches our processes and culture or use it to change them. One cannot work without the other. All in all, DevOps is an idea, not much more than that.
It’s up to us to figure out which processes and technology will help us make it a reality.

What does a DevOps engineer do? Is it even a real job role? What are the core roles of DevOps engineers in terms of development and infrastructure?

Viktor: I don't think there is such a thing as a “DevOps Engineer.” The term was invented by people who were not ready to apply the changes DevOps leads to. Most of the time, a “DevOps Engineer” is just a different name for someone working in shared services, operations, infrastructure, or whichever department was first to be renamed DevOps.

Do you think DevOps is experiencing an identity crisis?

Viktor: DevOps was never defined as a process. Agile, for example, got quite a few implementations that tell people what to do. Among others, we got Scrum, which clearly defines what to do. We could even argue that Scrum, being a set of practices that must be followed, is against the spirit of Agile, but that's a conversation for some other time. What matters is that no one defined the process behind DevOps. There is no such thing as a set of steps that must be followed daily or weekly. It is just an idea that we should work together and not throw things over department walls. As such, the way to accomplish that is open to interpretation. So, DevOps never had a clear identity, and it cannot have an identity crisis either. It's just an idea, and it's up to each one of us to try to figure out how to make it a reality.

The biggest challenges in DevOps today

What are the biggest challenges in DevOps at the moment?

Viktor: Currently, DevOps is mostly misunderstood. More often than not, companies just rename a department. In some companies, shared services become DevOps teams; in others it is infrastructure, operations, or any other department. It's as if it was a race and the first department to change its name to “DevOps” was the winner. Logically, changing the name means nothing and does not result in any tangible improvement. The key challenges are related to people and culture. DevOps is not easy because it challenges the current organizational structure, it restructures power within an organization, and it questions the need for the existence of many departments. As such, middle management is often against it because it is perceived as a risk to their position. At the same time, people who spent many years doing the same thing over and over again feel that their credibility is at risk if the structure that allowed them to climb the company ladder is removed.

Congratulations on the release of DevOps Paradox. Could you talk a little bit about the idea behind it and what you hope it achieves?

Viktor: I go to a lot of conferences and I realized that scheduled talks are not the main takeaway from them. True, I learned things by listening to them, but the primary reason I continue attending is the “corridor talks.” Conferences are a great opportunity for me to find interesting people and have amazing discussions. Unlike scheduled talks, those conversations are not structured. I do not prepare a list of questions for the next person I'll meet in between talks or at a party. Instead, we'd just start talking about a random thing that happens to be interesting. I wanted to bring those types of conversations to people who cannot travel the world and be every month at a different conference in a different country. So, I did not have any real goals for this book, other than speaking with people about any topic, as long as it is related to DevOps.
Since DevOps can be anything related to software development, you can say that the scope of the book is as broad as it can be. My true goal was to enjoy having conversations with people. I did not prepare questions in advance. Instead, I just gathered people I'd like to speak with if I met them at a conference and said, “Let's have a coffee and see what you were up to since the last time we met.” Some of those I interviewed are my friends, while others I met for the first time. Some work for huge enterprises, while others work in startups. Some have worked in the software industry for many years, while others are young up-and-coming experts. I wanted to make sure that the book gives as many different opinions as possible.

Find Viktor Farcic's DevOps Paradox on the Packt store. Read the first chapter for free on the Packt subscription platform.


Luis Weir explains how APIs can power business growth [Interview]

Packt Editorial Staff
06 Jan 2020
10 min read
API management is a discipline that has evolved to deliver the processes and tools required to discover, design, implement, use, and operate enterprise-grade APIs. The discipline spans two distinct communities and deserves the attention of both: developers who build APIs, and business and IT leaders looking at APIs to drive growth. In Enterprise API Management, Luis Weir shows how to define the right architecture, implement the right patterns, and define the right organization model for business-driven APIs. The book explores architectural decisions, implementation patterns, and management practices for successful enterprise APIs. It also gives clear, actionable advice on choosing and executing the right API strategy in your enterprise. Let's see what Luis has to say about API management and the key principles that improve API design for enterprise organizations.

What API management involves

What does API management mean and involve?

Luis Weir: In simple terms, it's the discipline that aligns tools with processes and people in order to realize the value from implementing enterprise-grade APIs throughout their full lifecycle. By enterprise-grade, I mean APIs that comply with a minimum set of quality standards, not just in the actual API itself (e.g. use of normalized semantics, well-documented interfaces, and good user experience), but also in the engineering processes behind their delivery (e.g. CI/CD pipelines and robust automation at all levels, different levels of testing, and so on).

Guiding principles for API design

What are some guiding principles that can improve API design?

LW: First and foremost is the identification of APIs themselves. It's not just about building an API for the sake of it and expecting value to follow. Without adopting a process (e.g. ideation) that can help identify APIs that can truly add value, there is a real risk that an API might just end up being DOA (dead on arrival), as there might not even be a need for it. Assuming such a process has taken place and APIs that have real potential to add value have been identified, the next step is to conceptualize a design. It is at this point that disciplines such as domain-driven design can help produce such a design in a way that both business and IT people can relate to. This design should capture things such as consuming applications, producing applications (data sources), data entities, and services involved in the concept. It should clearly and simply define the relationships between the components and define boundaries (bounded contexts), as these will be key not just in the actual implementation of the API or APIs (as it may end up being more than just one), but also in the creation of the API specifications themselves through IDLs (e.g. an OAS file, API Blueprint, GraphQL schema, or .proto file in gRPC, to name a few). The next and very important step for producing a good API is to follow an API design-first process. This process ensures that the API specifications and API mocks (produced from the API specifications themselves) undergo a series of validations by potential consumers of the API as well as other relevant parties. The idea is to obtain as much feedback as possible through multiple iterations (or feedback loops) to ensure that the API is fit for purpose and that it also delivers a good user experience. For more details, please refer to the API life cycle section in my book.
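To make the IDL idea concrete, here is a minimal, hypothetical sketch (not taken from the book): a fragment of an OpenAPI-style contract for an imaginary orders endpoint, expressed as a Python dictionary, with a mock response validated against the contract's schema. This is the kind of artifact a design-first process circulates for feedback before any implementation exists; the endpoint, fields, and schema are all assumptions made purely for illustration.

```python
# Hypothetical "orders" contract, sketched as a Python dict for brevity.
# In practice this would live in its own OAS (YAML or JSON) file.
from jsonschema import validate  # pip install jsonschema

ORDER_SCHEMA = {
    "type": "object",
    "required": ["orderId", "status", "total"],
    "properties": {
        "orderId": {"type": "string"},
        "status": {"type": "string", "enum": ["PENDING", "SHIPPED", "DELIVERED"]},
        "total": {"type": "number", "minimum": 0},
    },
}

OPENAPI_FRAGMENT = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API (illustrative only)", "version": "0.1.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "responses": {
                    "200": {
                        "description": "A single order",
                        "content": {"application/json": {"schema": ORDER_SCHEMA}},
                    }
                }
            }
        }
    },
}

# A mock response that a potential consumer might exercise during a design
# review, long before any implementation exists.
mock_response = {"orderId": "A-1001", "status": "SHIPPED", "total": 42.5}
validate(instance=mock_response, schema=ORDER_SCHEMA)
print("mock response conforms to the draft contract")
```

In a real design-first workflow the mocks would typically be generated from the specification file itself, but the feedback loop is the same: consumers exercise the contract, not the code.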
Testing APIs

What are different API testing approaches?

LW: At the very minimum, API testing should involve the following approaches:

- Interface testing
- Functional testing
- Performance testing
- Security testing

Interface testing is used to validate that an API implementation conforms to the API specification. Functional testing is used to validate that the API delivers the functionality it is meant to deliver, with the expected behavior. Performance testing ensures that APIs can actually handle the expected volume and scale as required. Security testing ensures that the API is not vulnerable to common threats such as those described in the OWASP Top 10 project. Other, more sophisticated testing approaches may include A/B testing and chaos testing. A/B testing dynamically tests new API features against a subset of the API audience in a running environment (even production). Chaos testing (e.g. randomly shutting down components of the solution in production to ensure the API is resilient) should be considered as the API initiative matures.
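As a rough illustration of the first two approaches, the sketch below runs an interface check (does the response conform to the agreed schema?) and a functional check (does the endpoint return the order that was requested?) against a hypothetical orders endpoint. The base URL, schema, and test conventions are assumptions for the sketch, not examples from the book.

```python
# Illustrative interface + functional checks against a hypothetical endpoint.
import requests                   # pip install requests
from jsonschema import validate   # pip install jsonschema

BASE_URL = "https://api.example.com"  # assumption: a staging deployment

ORDER_SCHEMA = {
    "type": "object",
    "required": ["orderId", "status"],
    "properties": {
        "orderId": {"type": "string"},
        "status": {"type": "string"},
    },
}

def test_get_order():
    resp = requests.get(f"{BASE_URL}/orders/A-1001", timeout=5)

    # Interface testing: the implementation conforms to the specification.
    assert resp.status_code == 200
    body = resp.json()
    validate(instance=body, schema=ORDER_SCHEMA)

    # Functional testing: the API behaves as intended for this request.
    assert body["orderId"] == "A-1001"
```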
Understanding API gateways

What are the key features of an API gateway?

LW: There are many capabilities expected of an API gateway, and these are all well described in the API exposure section in my book. However, in addition to such capabilities, which in my view are all essential, there are some key features that set modern API gateways (3rd generation) apart from more traditional ones (1st and 2nd gen). These are:

- Lightweight: requires minimal disk space, CPU, and RAM to run.
- Hybrid: can run on-premises, in the cloud, and on multiple cloud platforms (e.g. AWS, Azure, Google, Oracle, etc.).
- Kubernetes ready: K8s has become the most popular runtime platform for microservices. Modern APIs should be easily deployed into the K8s runtime and support many of the patterns described in my book.
- Common control plane: if the management of APIs deployed on gateways isn't centralized in some way, shape, or form, then allowing enterprise users to discover and (re)use already built (or being built) APIs will be extremely difficult and will lead to a lot of duplication. We've already seen this in the SOA days. Modern API gateways should, therefore, be pluggable into control planes that take care of things like API lifecycle management and gateway infrastructure management.
- Phone-home: this is a key feature, and one that still not many modern gateways support. The ability for an API gateway to establish communication with the management tier via the control plane (phone-home) using standard ports is key in hybrid architectures to avoid networking and other security constraints.

Enterprise API Management, I think, provides a pretty comprehensive overview of what modern API platforms look like and how to differentiate them from more traditional ones.

Common mistakes in API management

What are the common mistakes people make in API management?

LW: Throughout my time as an API strategist and practitioner I've seen many mistakes, and I've also made some myself. The important thing is being able to recognize what they are and learn from them. The top three that come to mind:

Thinking that API management is just about implementing a product or tools, without having business and customer value at the epicentre of the API strategy. (Sometimes there isn't even an API strategy.) This is perhaps the most common one, and one that happened a lot in the old SOA days… and unfortunately it still occurs in the modern API-led era. My book, Enterprise API Management, can be used as a guideline on how to make an API management initiative less about tools and more about business/customer value, people, and processes.

Thinking that all APIs are the same and therefore treating them all the same way. In some cases this just happens accidentally; in other cases it happens to avoid 'layering' APIs because 'microservices architectures and practitioners say so'. The fact of the matter is that an API built specifically in support of a given mobile application will be less generic and less suited for use outside of the 'context' in which it was built, compared to an API that was built without any specific consuming application in mind (and thus is not coupled to any application lifecycle).

Adopting the wrong organizational model to provide API capabilities across the enterprise. For example, this could be a model that centralizes all API efforts and capabilities, thus becoming a bottleneck and eventually becoming slow (aka traditional IT). Modern API initiatives should think about adopting platform models with self-service at the epicentre.

In addition to the above three, there are many common pitfalls when it comes to API architecture and design. However, to cover these I strongly recommend my talk on the 7 deadly sins of API design... https://www.youtube.com/watch?v=Sx2_etbb9JA

API management and DevOps

What are your thoughts about 3rd generation API management having a huge impact on DevOps?

LW: Succeeding in modern API management and microservices architectures requires changes beyond technology; it also requires diving deep into the organization and its culture. It means moving away from traditional project-based deliveries, wherein teams assemble just for the duration of a project and hand over the delivered software (e.g. an API and related services) to different support teams. Instead, move towards a product-based organization, wherein teams are assembled around business capabilities and retain accountability and ownership through the entire lifecycle of the product. This fundamental change of approach in delivering software means that there is no longer a split between development and operations teams, as a product team has full ownership and accountability over its product. With that said, in order to avoid (re)building these product teams and maintaining core IT capabilities from scratch (e.g. API platforms and service runtimes), a platform operating model can be adopted. This model can offer common IT capabilities, although in a decentralized, on-demand, and self-service way. And for me, accomplishing the above is true DevOps. It is at this point that organizations can become more agile and can truly improve their time to market.

What were your goals and objectives in this book, and how well do you feel you achieved them?

LW: When I started defining and implementing API and microservices strategies in large enterprises (many of them Fortune 500), although there was plenty of content around to get inspiration from (much of it referenced in my book), I had to literally go through several articles, books, videos, and other sources in order to conceive a top-down, business-led approach towards delivering end-to-end API and microservices strategies. When I say end to end, it doesn't mean just defining PowerPoints and lengthy Word documents explaining how to deliver API/microservices strategies and then just walking away.
Or worse, sitting on the sidelines with an opinion but no accountability (unfortunately only too common in the consulting world: lots of senior consultants with strong opinions but little or no real practical knowledge and experience). Rather, it means walking the talk: defining the strategy and also delivering it, with all of its implications. With this book, I therefore wanted to share with the community an approach that I created, evolved through the years, and have seen working. It's not just theory, but a mix of theory with practice. It's not just ideas, but ideas that I have put into practice. This book is about sharing my real-life experiences and approach in delivering API and microservices strategies. Therefore, I think (or hope) that I have accomplished my goals with this book. I felt that there is great stuff out there focused on specific parts of the “end to end” but not the actual “end to end,” which is what I wanted to cover in this book. I didn't want to be too high level or too detailed. I wanted to give something to multiple audiences, as it requires multiple audiences (technical and non-technical) working together in order to successfully deliver API management. Ultimately, the readers will be the judge, but I think I have accomplished my goals with this book.

Find Enterprise API Management on the Packt store. Read the first chapter for free on Packt's subscription platform.


DevSecOps and the shift left in security: how Semmle is supporting software developers [Podcast]

Richard Gall
11 Nov 2019
2 min read
Software security has been 'shifting left' in recent years. Thanks to movements like Agile and Dev(Sec)Ops, software developers are finding that they have to take more responsibility for the security of their code. By moving performance and security testing earlier in the development lifecycle, it's much easier to identify and capture defects and issues. The reasons for this are largely rooted in the utter dominance of open source software and the increasingly distributed nature of the systems we're building. To put it bluntly, if our software is open and loosely connected, the opportunity for systems to be exploited by malignant actors grows vastly. To tackle this, we're starting to see a wealth of platforms and tools emerge that are trying to help developers embrace security as a fundamental part of the development process. One such platform is Semmle, a code analysis platform designed to help developers and engineers identify issues quickly. To find out more about Semmle - and the wider DevSecOps movement - we spoke to Chief Security Officer Fermin Serna in an edition of the Packt Podcast. He explained how Semmle works, what it's trying to achieve, and placed it in the broader context of this 'shift left' that's quickly becoming a new reality for many engineers.

Listen to the episode: https://soundcloud.com/packt-podcasts/we-need-to-democratize-security-how-semmle-is-improving-open-source-security

To learn more about Semmle, visit its website here. You can also follow Fermin Serna on Twitter: @fjserna.

Read next:
- 5 reasons poor communication can sink DevSecOps
- How Chaos Engineering can help predict and prevent cyber-attacks preemptively

Listen: Puppet's VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]

Richard Gall
23 Jul 2019
4 min read
We've been talking about DevOps a lot on the Packt Podcast. The reason for that is simple: it's a critical part of how we actually build software, from both a technical and an organizational perspective. And anything that draws us closer to the relationship between people and software can only be a good thing, right? For this edition of the Packt Podcast we spoke to Nigel Kersten, who's the VP of Ecosystem Engineering at Puppet. With Puppet playing an important role in the evolution of DevOps over the last decade or so, we thought he would be a great person to give an insight into how Puppet has been adapting to industry trends (yes, we're waving at you, Kubernetes).

Listen to the episode: https://soundcloud.com/packt-podcasts/puppets-vp-of-engineering-nigel-kersten-on-the-organizational-challenges-of-devops

Nigel Kersten talks DevOps

We covered a diverse range of topics in the episode, from Nigel's move from Google to Puppet (which, he tells us, slightly upset his mom...), through to the challenges - and pitfalls - engineering teams face when trying to implement DevOps.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

Key quotes from this podcast episode

How to automate workflows effectively

“One thing we definitely tell people to do is… don't automate one service from end to end. Don't pick one complicated three-tier web application, put a small team on it, and say 'your job is to puppetize all of this infrastructure.' What is, instead, a more powerful way to work: what are those low-level building blocks that are across all of your infrastructure...? What are the things that are common across all of your infrastructure? Automate those things, because they're often really simple to do, and the rewards are huge.”

“Look at the things that are causing you pain in production. If you go and talk to the people who are on call, in charge of deployments, any of those parts of your infrastructure, and ask them what would be the one thing that you would fix that would make your infrastructure more reliable, they will always have a shortlist of things… and when you do this, you start building trust across the whole organization.”

The fear of automation

“There's always fear about adopting automation.
There's always fear about people's jobs changing and adopting new tools and disciplines - sort of in an endless cycle of new tool adoption, people being told that they have to learn new things - the more you can actually show value across the whole organization that this thing's relatively easy, a small investment for large returns, the more powerful an effect you're actually going to have.”

DevOps challenges

“I think it's a huge mistake if people think they're embarking on a DevOps journey and they're not willing to actually make some of the cultural and organizational changes - it's about creating more cross-functional teams, it's about giving them more autonomy, and it's about actually letting people work across organizational boundaries without having to go up and down the hierarchy of the organization.”

“Most people are actually struggling pre-DevOps in many ways… the people who we've seen fail are the ones who have gone, look, we're going to jump exactly from where we are now and try to move to an incredibly automated environment without putting a lot of the groundwork in place - like building up trust within the org, giving teams more autonomy, allowing service owners to configure monitoring themselves - I think all of those sorts of things are really prerequisites for a whole organization succeeding at DevOps.”


Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]

Richard Gall
09 Jul 2019
2 min read
No one can seem to agree on what DevOps really is. Although it's been around for the better part of a decade, it still inspires a good deal of confusion within organizations and across engineering teams. But perhaps we're all overthinking it? To get to the heart of the issues and debates around DevOps, we spoke to Viktor Farcic in the latest episode of the Packt Podcast. Viktor is a consultant at CloudBees, but he's also a prolific author, having written multiple books for Packt and other publishers. Most recently he helped put together the series of interviews that make up DevOps Paradox, which was published in June.

Listen to the podcast here: https://soundcloud.com/packt-podcasts/why-devops-isnt-really-any-different-from-agile-an-interview-with-viktor-farcic

Viktor Farcic on DevOps and agile, and their importance in today's cloud-native world

In the podcast, Farcic talks about a huge range of issues within DevOps. From the way the term itself has been used and misused by technology leaders, to its relationship to containers, cloud, and serverless, he provides some clarifications of what he sees as common misconceptions.

What's covered in the podcast:

- What DevOps means today and its evolution over the last decade
- Its importance in the context of cloud and serverless
- DevOps tools
- Is DevOps a specialized role? Or is it something everyone that writes code should do?
- How it relates to roles like Site Reliability Engineering (SRE)

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

What Viktor had to say...

Viktor had this to say about the multiple ways in which DevOps is interpreted and practiced: "I work with a lot of companies, and every time I visit a company and they say “yes, we are doing DevOps” and I ask them “what is DevOps?” and I always get a different answer."

This highlights that some clarification is long overdue when it comes to DevOps. Hopefully this conversation will go some way to doing just that...


Listen to Uber engineer Yuri Shkuro discuss distributed tracing and observability [Podcast]

Richard Gall
17 May 2019
2 min read
We've been talking a lot about observability on the Packt Hub over the last few months. Back in March we spoke to Honeycomb CEO Charity Majors, who told us why observability is so important and why it can be so challenging for engineering teams to implement. It's clear it's a big topic with plenty of perspectives - but one that could have a ripple effect across the software industry. To get a further perspective on the topic, we spoke to Yuri Shkuro, who's an engineer at Uber and author of Mastering Distributed Tracing (which was published in February), to talk about how distributed tracing can help engineers build more observable systems. Yuri spoke in detail in the podcast about the value of observability in the context of complex distributed systems, as well as some of the challenges in implementing distributed tracing. As one of the creators of Jaeger, an open source tool built specifically for distributed tracing, he's well-placed to comment on how the ecosystem is evolving and how organizations can start thinking more seriously about observability. Read an extract from Yuri's book here.

The episode covers:

- The difference between monitoring and observability
- Some of the misconceptions around distributed tracing
- Who can benefit from distributed tracing - from DevOps to SREs
- Practical advice for getting started with distributed tracing

Listen on SoundCloud: https://soundcloud.com/packt-podcasts/if-youre-on-call-you-need-observability-tools-uber-engineer-yuri-shkuro-on-distributed-tracing

“Tracing is conceptually a white box instrumentation technique. You cannot do tracing in an application by purely observing it from the outside, because that feature of context propagation is simply not possible - if you have 10 incoming requests into an application concurrently, and it does 100 outbound requests, then how do you know which ones correlate to the incoming requests? That's what context propagation allows us to achieve: it allows us to establish causality within events.”
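Shkuro's point about context propagation is easier to see with a toy sketch. The snippet below is not the Jaeger or OpenTelemetry API; it simply shows the core idea under assumed names: each incoming request is assigned a trace id, and every outbound call carries that id (in practice via headers such as traceparent), so spans recorded in different services can be stitched back into one causal trace.

```python
# Toy context propagation: every span carries the trace id of the request
# that caused it, plus the id of its parent span.
import uuid

def new_span(context, operation):
    span = {
        "span_id": uuid.uuid4().hex[:16],
        "trace_id": context["trace_id"],
        "parent_id": context.get("span_id"),
        "operation": operation,
    }
    print(span)
    # Work done under this span inherits the same trace id.
    return {**context, "span_id": span["span_id"]}

def handle_incoming_request(payload):
    # Root of the trace: a fresh trace id for this incoming request.
    context = new_span({"trace_id": uuid.uuid4().hex}, "handle_request")
    call_inventory_service(context, payload)

def call_inventory_service(context, payload):
    # The outbound call carries the trace id (in real systems via an HTTP
    # header such as `traceparent`), which is what lets a tracing backend
    # correlate the 100 outbound requests with the 10 incoming ones.
    context = new_span(context, "call_inventory_service")
    print("outbound headers:", {"x-trace-id": context["trace_id"]})

handle_incoming_request({"sku": "ABC-123"})
```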

Honeycomb CEO Charity Majors discusses observability and dealing with "the coming armageddon of complexity" [Interview]

Richard Gall
13 Mar 2019
16 min read
Transparency is underrated in the tech industry. But as software systems grow in complexity and their relationship with the real world becomes increasingly fraught, it nevertheless remains a value worth fighting for. But to effectively fight for it, it's essential to remember that transparency is a technological issue, not just a communication one. Decisions about how software is built and why it's built in the way that it is lie at the heart of what it means to work in software engineering. Indeed, the industry is in trouble if we can't see just how important those questions are in relation to everything from system reliability to our collective mental health.

Observability, transparency, and humility

One term has recently emerged as a potential solution to these challenges: observability (or o11y as it's known in the community). This is a word that has been around for some time, but it's starting to find real purchase in the infrastructure engineering world. There are many reasons for this, but a good deal of credit needs to go to observability platform Honeycomb and its CEO Charity Majors.

[Image: Charity Majors]

Majors has been a passionate advocate for observability for years. You might even say Honeycomb evolved from that passion and her genuine belief that there is a better way for software engineers to work. With a career history spanning Parse and Facebook (who acquired Parse in 2011), Majors is well placed to understand, diagnose, and solve the challenges the software industry faces in terms of managing and maintaining complex distributed systems designed to work at scale. “It's way easier to build a complex system than it is to run one or to understand one,” she told me when I spoke to her in January. “We're unleashing all these poorly understood complex systems on the world, and later having to scramble to make sense of it.” Majors is talking primarily about her work as a systems engineer, but it's clear (to me at least) that this is true in lots of ways across tech, from the reliability of mobile apps to the accuracy of algorithms. And ultimately, impenetrable complexity can be damaging. Unreliable systems, after all, cost money. The first step, Majors suggests, to counteracting the challenges of distributed systems, is an acceptance of a certain degree of impotence. We need humility. She talks of “a shift from an era when you could feel like your systems were up and working to one where you have to be comfortable with the fact that it never is.” While this can be “uncomfortable and unsettling for people in the beginning,” in reality it's a positive step. It moves us towards a world where we build better software with better processes. And, most importantly, it cultivates more respect for people on all sides - engineers and users.

Charity Majors' (personal) history of observability

Observability is central to Charity Majors' and Honeycomb's purpose. But it isn't a straightforward concept, and it's also one that has drawn considerable debate in recent months. Ironically, although the term is all about clarity, it has been mired in confusion, with the waters of its specific meaning being more than a little muddied. “There are a lot of people in this space who are still invested in ‘oh observability is a generic synonym for telemetry,’” Majors complains.
However, she believes that “engineers are hungry for more technical terminology,” because the feeling of having to deal with problems for which you are not equipped - quite literally - is not uncommon in today's industry. With all the debate around what observability is, and its importance to Honeycomb, Majors is keen to ensure its definition remains clear. “When Honeycomb started up… observability was around as a term, but it was just being used as a generic synonym for telemetry… when we started… the hardest thing was trying to think about how to talk about it... because we knew what we were doing was different,” Majors explains.

Experimentation at Parse

The route to uncovering the very specific - but arguably more useful - definition of observability was through a period of sustained experimentation while at Parse. “Around the time we got acquired... I was coming to this horrifying realisation that we had built a system that was basically un-debuggable by some of the best engineers in the world.” The key challenge for Parse was dealing with the scale of mobile applications. Parse customers would tell Majors and her team that the service was down for them, underlining Parse's monitoring tools' inability to pick up these tiny pockets of failure (“Behold my wall of dashboards! They're all green, everything is fine!” Majors would tell them).

Scuba: The “butt-ugly” tool that formed the foundations of Honeycomb

The monitoring tools Parse was using at the time weren't that helpful because they couldn't deal with high-cardinality dimensions. Put simply, if you wanted to look at things on a granular, user-by-user basis, you just couldn't do it. “I tried everything out there… the one thing that helped us get a handle on this problem was this butt-ugly tool inside Facebook that was aggressively hostile to users and seemed very limited in its functionality, but did one thing really well… it let you slice and dice in real time on dimensions of arbitrarily high cardinality.” Despite its shortcomings, this set it apart from other monitoring tools, which are “geared towards low cardinality dimensions,” Majors explains.

[Image: More than just a quick fix (Credit: Charity Majors)]

So, when you're looking for “needles in a haystack,” as Parse engineers often were, the level of cardinality is essential. “It was like night and day. It went from hours, days, or impossible, to seconds. Maybe a minute.”
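A toy example helps show why high cardinality matters. The snippet below is not Scuba or Honeycomb; it only illustrates the idea of recording each request as a wide event with a high-cardinality field (user_id) and then slicing the raw events by that field after the fact, which is exactly what a pre-aggregated, low-cardinality dashboard cannot do. The event data is invented for illustration.

```python
# Each request is recorded as one wide event; user_id is a high-cardinality
# field (it can take as many values as you have users).
import statistics
from collections import defaultdict

events = [
    {"user_id": f"u{n}", "endpoint": "/push", "duration_ms": 40 + n, "status": 200}
    for n in range(1, 50)
]
events += [
    {"user_id": "u7", "endpoint": "/push", "duration_ms": 4200, "status": 500},
    {"user_id": "u7", "endpoint": "/query", "duration_ms": 3900, "status": 500},
]

# The aggregate view looks healthy...
print("median latency:", statistics.median(e["duration_ms"] for e in events), "ms")

# ...but slicing the raw events on an arbitrary high-cardinality dimension
# (worst latency per user) immediately surfaces the one customer in trouble.
worst_by_user = defaultdict(int)
for e in events:
    worst_by_user[e["user_id"]] = max(worst_by_user[e["user_id"]], e["duration_ms"])

for user, worst in sorted(worst_by_user.items(), key=lambda kv: -kv[1])[:3]:
    print(user, worst, "ms")
```

In this made-up data the median looks fine, but grouping by user surfaces the one customer whose requests are failing: the kind of "tiny pocket of failure" Majors describes.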
Observability: more than just a platform problem

This experience was significant for Majors and set the tone for Honeycomb. Her experience of working with Scuba became a frame for how she would approach all software problems. “It's not even just about, oh the site is down, debug it, it's, like, how do I decide what to build?” It had, she says, “become core to how I experienced the world.” Over the course of developing Honeycomb, it became clear to Majors that the problems the product was trying to address were actually deep: “a pure function of complexity.” “Modern infrastructure has become so ephemeral you may not even have servers, and all of our services are far flung and loosely coupled. Some of them are someone else's service,” Majors says. “So I realise that everyone is running into this problem and they just don't have the language for it. All we have is the language of monitoring and metrics when... this is inherently a distributed systems problem, and the reason we can't fix them is because we don't have distributed systems tools.”

Towards a definition of observability

Looking over my notes, I realised that we didn't actually talk that much about the definition of observability. At first I was annoyed, but in reality this is probably a good thing. Observability, I realised, is only important insofar as it produces real-world effects on how people work. From the tools they use to the way they work together, observability, like other tech terms such as DevOps, only really has value to the extent that it is applied and used by engineers.

[Image: It's not always easy to tell exactly what you're looking at (Credit: Charity Majors)]

“Every single term is overloaded in the data space - every term has been used - and I was reading the dictionary definition of the word ‘observability’ and... it's from control systems and it's about how much can you understand and reason about the inner workings of these systems just by observing them from the outside. I was like oh fuck, that's what we need to talk about!” In reality, then, observability is a pretty simple concept: how much can you understand and reason about the inner workings of these systems just by observing them from the outside.

Read next: How Gremlin is making chaos engineering accessible [Interview]

But things, as you might expect, get complicated when you try and actually apply the concept. It isn't easy. Indeed, that's one of the reasons Majors is so passionate about Honeycomb.

Putting observability into practice

Although Majors is a passionate advocate for Honeycomb, and arguably one of its most valuable salespeople, she warns against the tendency for tooling to be viewed as a silver-bullet solution to problems. “A lot of people have been sold this magic spell idea, which is that you don't have to think about instrumentation or explaining your code back to yourself,” Majors says. Erroneously, some people will think they “can just buy this tool for millions of dollars that will do it for you… it's like write code, buy tool, get magic… and it doesn't actually work, it never has and it never will.” This means that while observability is undoubtedly a tooling issue, it's just as much a cultural issue too. With this in mind, you definitely shouldn't make the mistake of viewing Honeycomb as magic. “It asks more of you up front,” Majors says. “There is no magic. At no point in the future are you going to get to just write code and lob it over the wall for ops to deal with. Those days are over, and anyone who is telling you anything else is selling you some very expensive magic beans. The systems of the future do require more of developers. They ask you to care a little bit more up front, in terms of instrumentation and operability, but over the lifetime of your code you reap that investment back hundreds or thousands of times over. We're asking you, and helping you, make the changes you need to deal with the coming Armageddon of complexity.” Observability is important, but it's a means to an end: the end goal is to empower software engineers to practice software ownership. They need to own the full lifecycle of their code.

How transparency can improve accountability

Because Honeycomb demands more ‘up front’ from its users, this requires engineering teams to be transparent (with one another) and fully aligned.
Think of it this way: if there's no transparency about what's happening and why, and little accountability for making sure things do or don't happen inside your software, Honeycomb is going to be pretty impotent. We can only really get to this world when everyone starts to care properly about their code and, more specifically, how their code runs in production. “Code isn't even interesting on its own… code is interesting when users interact with it,” Majors says. “It has to be in production.” That's all well and good (if a little idealistic), but Majors recognises there's another problem we still need to contend with. “We have a very underdeveloped set of tools and best practices for software ownership in production… we've leaned on ops to… be just this like repository of intuition… so you can't put a software engineer on call immediately and have them be productive…”

Observability as a force for developer well-being

This is obviously a problem that Honeycomb isn't going to fix. And yes, while it's a problem the Honeycomb marketing team would love to fix, it's not just about Honeycomb's profits. It's also about people's well-being.

[Image: The Honeycomb team (Credit: Charity Majors)]

“You should want to have ownership. Ownership is empowering. Ownership gives you the power to fix the thing you know you need to fix and the power to do a good job… People who find ownership is something to be avoided - that's a terrible sign of a toxic culture.” The impact of this ‘toxic culture’ manifests itself in a number of ways. The first is the all too common issue of developer burnout. This is because a working environment that doesn't actively promote code ownership and accountability leads to people having to work on code they don't understand. They might, for example, be working in production environments they haven't been trained to adequately work with. "You can't just ship your code and go home for the night and let ops deal with it," Majors asserts. "If you ship a change and it does something weird, the best person to find that problem is you. You understand your intent, you have all the context loaded in your head. It might take you 10 minutes to find a problem that would take anyone else hours and hours."

Superhero hackers

The second issue is one that many developers will recognise: the concept of the 'superhero hacker'.

Read next: Don't call us ninjas or rockstars, say developers

“I remember the days of like… something isn't working, and we'd sit around just trying random things or guessing... it turns out that is incredibly inefficient. It leads to all these cultural distortions, like the superhero hacker who does the best guessing. When you have good tooling, you don't have to guess. You just look and see.” Majors continues on this idea: “The source of truth about your systems can't live in one guy's head. It has to live in a tool where everyone has access to the same information about the system, one single source of truth... Otherwise you're gonna have that one guy who can't go on vacation ever.” While a cynic might say 'well, she would say that - it's a product pitch for Honeycomb', they'd ultimately be missing the point. This is undoubtedly a serious issue that's having a severe impact on our working lives. It leads directly to mental health problems and can even facilitate discrimination based on gender, race, age, and sexuality. At first glance, that might seem like a stretch.
But when you're not empowered - by the right tools and the right support - you quite literally have less power. That makes it much easier for you to be marginalized or discriminated against.

Complexity stops us from challenging the status quo

The problem really lies with complexity. Complexity has a habit of entrenching problems. It stops us from challenging the status quo by virtue of the fact that we simply don't know how to. This is something Majors takes aim at. In particular, she criticises "the incorrect application of complexity to the business problem it solves." She goes on to say that “when this happens, humans end up plugging the dikes with their thumbs in a continuous state of emergency. And that is terrible for us as humans."

How Honeycomb practices what it preaches

Majors' passion for what she believes is evidenced in Honeycomb's ethos and values. It's an organization that is quite deliberately doing things differently from both a technical and a cultural perspective.

[Image: Inside the Honeycomb HQ (Credit: Charity Majors)]

Majors tells me that when Honeycomb started, the intention was to build a team that didn't rely upon superstar engineers: “We made the very specific intention to not build a team of just super-senior expert engineers - we could have, they wanted to come work with us, but we wanted to hire some kids out of bootcamp, we wanted to hire a very well rounded team of lots of juniors and intermediates... This was a decision that I made for moral reasons, but I honestly didn't know if I believed that it would be better, full disclosure - I honestly didn't have full confidence that it would become the kind of high powered team that I felt so proud to work on earlier in my career. And yet... I am humbled to say this has been the most consistent high-performing engineering team that I have ever had the honor to work with. Because we empower them to collaborate and own the full lifecycle of their own code.”

Breaking open the black boxes that sustain internal power structures

This kind of workplace, where "the team is the unit you care about," is one that creates a positive and empowering environment, which is a vital foundation for a product like Honeycomb. In fact, the relationship between the product and the way the team behind it works is almost mimetic, as if one reflects the other. Majors says that “we're baking” Honeycomb's organizational culture “into the product in interesting ways.”

[Image: Teamwork (Credit: Charity Majors)]

She says that what's important isn't just the question of “how do we teach people to use Honeycomb, but how do we teach people to feel safe and understand their giant sprawling distributed systems. How do we help them feel oriented? How do we even help them feel a sense of safety and security?” Honeycomb is, according to Majors, like an "outsourced brain." It's a product that means you no longer need to worry about information about your software being locked in a single person's brain, as that information should be available and accessible inside the product. This gives individuals safety and security because it means that typical power structures, often based on experience or being "the guy who's been there the longest," become weaker. Black boxes might be mysterious, but they're also pretty powerful.
With a product like Honeycomb, or, indeed, the principles of observability more broadly, that mystery begins to lift, and the black box becomes ineffective.

Honeycomb: building a better way of developing software and developing together

In this context, Liz Fong-Jones' move to Honeycomb seems fitting. Fong-Jones (who you can find on Twitter @lizthegrey) was a Staff SRE at Google and a high profile critic of the company over product ethics and discrimination. She announced her departure at the beginning of 2019 (in fact, Fong-Jones started at Honeycomb in the last week of February). By subsequently joining Honeycomb, she left an environment where power was being routinely exploited, for one where the redistribution of power is at the very center of the product vision. Honeycomb is clearly a product and a company that offers solutions to problems far more extensive and important than it initially thought it would. Perhaps we're now living in a world where the problems it's trying to tackle are more profound than they first appear. You certainly wouldn't want to bet against its success with Charity Majors at the helm.

Follow Charity Majors on Twitter: @mipsytipsy

Learn more about Honeycomb and observability at honeycomb.io. You can try Honeycomb for yourself with a free trial.


Listen: We discuss why chaos engineering and observability will be important in 2019 [Podcast]

Richard Gall
21 Dec 2018
1 min read
This week I published a post that explored some of the key trends in software infrastructure that security engineers, SREs, and SysAdmins should be paying attention to in 2019. There was clearly a lot to discuss - which is why I sat down with my colleague Stacy Matthews to talk through some of the topics explored in the post in a little more detail. Enjoy!

https://soundcloud.com/packt-podcasts/why-observability-and-chaos-engineering-will-be-vital-in-2019

What do you think? Is chaos engineering too immature for widespread adoption? And how easy will it be to begin building for observability?


How Gremlin is making chaos engineering accessible [Interview]

Richard Gall
14 Jun 2018
10 min read
Despite considerable hype, chaos engineering doesn't appear to have yet completely captured the imagination of the wider software engineering world. According to this year's Skill Up survey, when asked, only 13% of developers said they were excited about it. But that doesn't mean we should disregard it - far from it. Like many of the best trends, it might blow up when we least expect it. It might find its way in front of your CTO's eyes in just a few months. As site reliability engineering grows as a discipline, and as businesses start to put a value on downtime, chaos engineering is likely to become a big part of the reliability and resilience toolkit.

Gremlin, chaos engineering, and the end of the age of downtime

“People are expected to always be up,” says Matt Fornaciari, co-founder and CTO of Gremlin, a product that offers “failure as a service” to businesses. I spoke to Fornaciari last month to get a deeper insight into Gremlin and the team and ideas behind it. He believes the world has changed in recent years, and that the days of service windows, when sites would just be taken down for an hour or two for an update or change, are over: “that's unacceptable to people these days.” Fornaciari isn't an unbiased observer, of course. The success of Gremlin depends on chaos engineering's adoption and acceptance. However, he's not going out on a limb; there's clear VC interest in Gremlin. At the end of 2017 the company received its first round of funding - more than 7 million USD. It's a cliche, but money does talk - and in this instance it seems to be saying that this approach might change the way we think about building our software. Arguably, chaos engineering - and by extension Gremlin - is a response to other trends in software. “I've seen a lot of signals that this is the way the world's going,” Fornaciari says. He's referring here to broader trends like cloud and microservices. He explains that because microservices is all about modularity, and breaking aspects of your software infrastructure into smaller pieces, “you end up with nodes in this network,” which “adds network complexity.” Consequently, this additional complexity means there is more that can go wrong - it becomes more unreliable.

Gremlin's bid to democratize chaos engineering

It's important to note here that chaos engineering has been around for some time - it's not a radically new methodology. But it has largely been locked away in some of the world's biggest tech companies, like Netflix and Amazon. Many of Gremlin's leaders actually worked at those companies - Fornaciari has worked at Salesforce and Amazon, for example. “The main goal was to democratize chaos engineering… we've [the Gremlin team] done it at the bigger companies and we're like, you know what, everyone can benefit from this.” That is the essential point around chaos engineering: if it's going to catch on in the mainstream tech world, it needs to be more accessible to different businesses. Fornaciari explains that many of Gremlin's customers are larger organizations. These are companies for whom downtime is of the utmost importance, where a site outage that lasts just an hour could cost thousands of dollars. That said, from a cultural perspective, many organizations find it difficult to adopt this sort of mindset. “Proving the value of something that doesn't happen,” Fornaciari says, is one of the biggest challenges for Gremlin. This is particularly true when selling their tool.
Pager pain: How Gremlin sells chaos engineering to customers

This is how Gremlin does it: “We have three qualifying questions: do you measure your downtime? Do you have somebody who's responsible for downtime? And do you actually have a dollar amount tied to it?” Presumably, for many organizations at least one answer to these questions is “no”. That's why customer support is so important for Gremlin. “Customer success and developer advocacy are two of our biggest initiatives… I've told people as we're recruiting them that half of our goal as a company is to educate people.” Gremlin's challenges as a product and as a business reflect the wider difficulties of managing upwards. The tension between those 'on the ground' and those at a more senior and managerial level is one that Gremlin is acutely aware of. This is where a lot of push back comes from, Fornaciari explains:

What we've seen so far is just push back from top down - like, why do we need this? We use the term pager pain to define the engineer on call - the closer you are to the ground the closer you are to the on call rotation and the more you feel those pains and the more you believe in this but as you raise up a couple of levels you maybe don't feel that as much… if you don't have that measure on uptime - unless someone is on the hook for that at a higher level there's oftentimes a why do we need this, why are we going to spend money on breaking things.

Pager pain is a nice concept - it captures the tension between different layers of management. It highlights the conflict between 'what do we need?' and 'what can we do?'

Read next: Blockchain can solve tech's trust issues

Safety, simplicity and security

To successfully sell Gremlin, the way the product is designed is everything. For that reason, the Gremlin team have three tenets built into their product: safety, security, simplicity. When you've got a “potentially dangerous tool,” as Fornaciari himself describes it, making sure things are safe and secure is absolutely essential. Arguably, the fact that chaos engineering is so hard to do well might be something that Gremlin can use to its advantage. “One thing we hear when we talk to companies about it is ‘well, we'll go build this ourselves’, and the fact is it's a really hard thing to do, and a hard thing to do well.” Gremlin is walking on a bit of a tightrope. On the one hand chaos engineering is for everyone, but on the other it's difficult and dangerous. It should be accessible, but not too accessible. “One of the reasons we don't have a free offering is because we are a little worried about protecting our customers and not doing any harm to people… I mean, this is essentially giving somebody a potentially dangerous tool... If they're not given the proper education then that could be a problem, right?” Gremlin isn't the only chaos engineering product out there. As with any trend, there are plenty of software platforms and tools emerging for technologically forward-thinking businesses. Fornaciari doesn't see these as a threat - he's confident, bullish even, about Gremlin's place in the market. “There are a lot of tools out there that people can go and use but they really lack the safety and simplicity.” Alongside its philosophy of safety, security and simplicity, a big selling point, according to Fornaciari, is the experience and expertise built into Gremlin's DNA. “We've got fifteen years of combined expertise in this space,” he says.
“Being the experts on it and having built it 3 or 4 times already in different big companies, it sort of gave us this leg up to go out there in the world.”

But while Fornaciari is eager to assert Gremlin’s knowledge, there’s no trace of elitism - sharing knowledge is a core part of the product offering. “We actually built out customer success tooling so we can see if particular attacks fail for them, and we can proactively reach out and be like, ‘hey, we saw you were trying to do this, maybe you meant to do this,’” Fornaciari explains.

Controlled chaos: chaos engineering and the scientific method

Control is central to Gremlin’s philosophy - it’s a combination of the team’s commitment to safety, security and simplicity. In fact, it’s this element of control that distinguishes chaos engineering today from what went before. Central to Gremlin’s mission to make chaos engineering accessible is redefining how it’s done.

“If you’re familiar with the Netflix Chaos Monkey mentality of randomly terminating services, well, that’s a good start, but safety is really lacking. We talked more about this controlled chaos… this idea that you start fairly small, with this small blast radius, and then as you become more confident you grow it out and grow it out, as opposed to just like, ‘cool, let’s just chuck a grenade in here and see what happens.’”

Fornaciari goes on to describe this ‘controlled chaos’ in a surprising way: “It’s much more like the scientific method actually. Applying that method to your infrastructure and your reliability in general.” This approach is essential if you’re going to do chaos engineering well.

How to do chaos engineering effectively

When I ask Fornaciari how engineering teams and businesses can do chaos engineering well, he emphasizes the importance of starting with a hypothesis: “You need to have a hypothesis that you’re trying to prove. Throwing random chaos at something is fine - it’ll sort of surface some of the unknown unknowns for you. But really having a hypothesis that you’re trying to prove is the best way to get value out of this [chaos engineering].”

If you’re going to take a scientific approach to testing your infrastructure using ‘chaos experiments’, managing scale is also incredibly important. Don’t run before you can walk is the message. “Keep it very small initially, then you start to grow the blast radius. You definitely want to make sure that you’re starting off with the smallest modicum that you can.”

Given the potential dangers of throwing metaphorical gremlins into your system, starting where you’re comfortable makes a lot of sense. “Start in staging, start where you’re comfortable, build your confidence. Make sure your system behaves well in front of non-customer-facing traffic before you go out to the world.” That said, Gremlin has had “some pretty bold customers” who go straight ahead and start running chaos experiments in production. “That was cool. It’s a little scary, but they were confident and they’ve been using Gremlin as part of their system ever since.”
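The approach Fornaciari describes - state a hypothesis, verify the steady state, inject one small failure, then check the hypothesis again - can be sketched in a few lines of plain Python. The snippet below is a minimal illustration of that loop, not Gremlin’s API; the health endpoint, target container name, and latency budget are assumptions made purely for the example.

```python
# A minimal, hypothesis-driven chaos experiment sketch (not Gremlin's API; the
# endpoint, container name, and latency budget below are hypothetical).
#
# Hypothesis: "If one replica of the checkout service is killed, /health still
# responds successfully within 500 ms."

import subprocess
import time
from urllib.request import urlopen
from urllib.error import URLError

SERVICE_HEALTH_URL = "http://localhost:8080/health"  # assumed staging endpoint
TARGET_CONTAINER = "checkout-replica-1"              # smallest possible blast radius
LATENCY_BUDGET_SECONDS = 0.5

def steady_state_ok() -> bool:
    """Check the hypothesis: the service answers quickly and successfully."""
    start = time.monotonic()
    try:
        with urlopen(SERVICE_HEALTH_URL, timeout=2) as response:
            healthy = response.status == 200
    except URLError:
        return False
    return healthy and (time.monotonic() - start) < LATENCY_BUDGET_SECONDS

def inject_failure() -> None:
    """Kill a single replica - the 'attack' in this experiment."""
    subprocess.run(["docker", "kill", TARGET_CONTAINER], check=True)

def run_experiment() -> None:
    assert steady_state_ok(), "System unhealthy before the experiment - abort."
    inject_failure()
    time.sleep(10)  # give the orchestrator time to react
    if steady_state_ok():
        print("Hypothesis held: the system tolerated the failure.")
    else:
        print("Hypothesis falsified: fix this before widening the blast radius.")

if __name__ == "__main__":
    run_experiment()
```

A real experiment would also automate rollback and record the result, but even this skeleton captures the discipline Fornaciari describes: a small blast radius and a hypothesis that is either proved or falsified.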
Chaos engineering requires confidence and control

Ultimately, if chaos engineering is going to take off - as Fornaciari believes it will - engineers will need to be confident on a number of levels. You need confidence that you’ll be able to handle a range of experiments and deploy them wisely, but you’ll also need confidence that you can manage the expectations of those in senior management.

It’s not hard to see the value of chaos engineering. As Fornaciari says, “if you prevent one outage one time, you’ve saved that money to pay for the tool to make sure it doesn’t happen again.” But it might be hard to find time for it, and hard to get buy-in and investment in the tools you need to do it. Gremlin is certainly going to play an important part in helping engineers make that case. But one of its biggest challenges - and perhaps one of its most noble missions too - is transforming a culture where people don’t really appreciate ‘pager pain’. If Fornaciari and Gremlin can help solve that, good luck to them.

You can follow Matt Fornaciari on Twitter: @callmeforni
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner

Aaron Lazar
30 May 2018
7 min read
In the past few years, Agile software development has seen tremendous growth. There is a huge demand for software delivery solutions that are fast, yet flexible enough to accommodate frequent changes. As a result, Continuous Integration (CI) and Continuous Delivery (CD) methodologies are gaining popularity. They are considered to be the cornerstones of DevOps and drive the possibilities of modern architectures like microservices and cloud native.

Author’s Bio

Nikhil Pathania, a DevOps practitioner at Siemens Gamesa Renewable Energy, started his career as an SCM engineer and later moved on to learn various tools and technologies in the fields of automation and DevOps. Throughout his career, Nikhil has promoted and implemented Continuous Integration and Continuous Delivery solutions across diverse IT projects. He is the author of Learning Continuous Integration with Jenkins. In this exclusive interview, Nikhil gives us a sneak peek into the trends and challenges of Continuous Integration in DevOps.

Key Takeaways

The main function of Continuous Integration is to provide feedback on integration issues.
When practicing DevOps, a continuous learning attitude, sharp debugging skills, and an urge to improvise processes are needed.
Pipeline as Code is a way of describing a Continuous Integration pipeline in a pre-defined syntax.
One of the main reasons for Jenkins’s popularity is its growing library of plugins.
Making yourself familiar with a scripting language like Shell or Python will help you accomplish difficult tasks related to CI/CD.
Continuous Integration is built on Agile and requires a fair understanding of its 12 principles.

Full Interview

On the popularity of DevOps

DevOps as a concept and culture is gaining a lot of traction these days. What is the reason for this rise in popularity? What role does Continuous Integration have to play in DevOps?

To understand this, we need to look back at the history of software development. For a long period, the Waterfall model was the predominant software development methodology in practice. Later, when there was a sudden surge in the usage and development of software applications, the Waterfall model proved to be inefficient, giving rise to the Agile model. This new model proposed coding, building, testing, packaging, and releasing software in a quick and incremental fashion.

As the Agile model gained momentum, more and more teams wanted to ship their applications faster and more frequently. This put huge pressure on the release management process. To cope with this pressure, engineers came up with new processes and techniques (collectively bundled as DevOps), such as improved branching strategies, Continuous Integration, Continuous Delivery, automated environment provisioning, monitoring, and configuration.

Continuous Integration involves continuously building and testing your integrated code; it’s an integral part of DevOps, dealing with automated builds, testing, and more. Its core function is to provide quick feedback on integration issues.
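To make that feedback loop concrete, here is a minimal sketch of what a CI job might run on every push. It is not tied to Jenkins or any particular CI server, and the build and test commands (compileall, pytest, the src and tests directories) are assumptions chosen purely for illustration; the point is simply that every integration is built and tested automatically, and failures are reported as fast as possible.

```python
# A minimal sketch of the CI feedback loop: build, test, and report quickly.
# The stage commands below are assumptions for illustration, not a real project.

import subprocess
import sys
import time

# Each stage is a command the CI job runs against the freshly integrated code.
STAGES = [
    ("build", ["python", "-m", "compileall", "src"]),
    ("unit tests", ["python", "-m", "pytest", "-q", "tests"]),
]

def run_pipeline() -> int:
    started = time.monotonic()
    for name, command in STAGES:
        print(f"--- {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: the whole point of CI is quick feedback on integration issues.
            print(f"FAILED at '{name}' after {time.monotonic() - started:.1f}s")
            return result.returncode
    print(f"SUCCESS in {time.monotonic() - started:.1f}s")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```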
On your journey as a DevOps engineer

You have been associated with DevOps for quite some time now and hold vast experience as a DevOps engineer and consultant. How and when did your journey start? Which tools did you master to help you with your day-to-day tasks?

I started my career as a Software Configuration Engineer and was trained in SCM and IBM Rational ClearCase. After working as a Build and Release Engineer for a while, I turned towards newer VCS tools such as Git, along with automation and scripting. This is when I was introduced to Jenkins, followed by a large number of other DevOps tools such as SonarQube, Artifactory, Chef, TeamCity, and more.

It’s hard to spell out the list of tools that you are required to master, since the list keeps growing by the day. There is always a new tool in the DevOps toolchain replacing the old one, and a DevOps tool itself changes a lot in its usage and workings over time. A continuous learning attitude, sharp debugging skills, and an urge to improvise processes is what is needed, I’ll say.

On the challenges of implementing Continuous Integration

What are some of the common challenges faced by engineers in implementing Continuous Integration?

Building the right mindset in your organization: By this I mean preparing teams in your organization to get Agile. Surprised? 50% of the time we spend at work is on migrating teams from old ways of working to the new ones. Implementing CI is one thing; making the team, the project, the development process, and the release process ready for CI is another.

Choosing the right VCS tool and CI tool: This is an important factor that will decide where your team will stand a few years down the line - rejoicing in the benefits of CI or shedding tears in distress.

On how the book helps overcome these challenges

How does your book ‘Learning Continuous Integration with Jenkins’ help DevOps professionals overcome the aforementioned challenges?

This is why I have a whole chapter (Concepts of Continuous Integration) explaining how Continuous Integration came into existence and why projects need it. It also talks a little bit about the software development methodologies that gave rise to it. The whole book is based on implementing CI using Jenkins, Git, Artifactory, SonarQube, and more.

About Pipeline as Code

Pipeline as Code was a great introduction in Jenkins 2. How does it simplify Continuous Integration?

Pipeline as Code is a way of describing your Continuous Integration pipeline in a pre-defined syntax. Since it’s in the form of code, it can be version-controlled along with your source code, and there are endless possibilities for programming it - something you cannot get with GUI-based pipelines.
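In Jenkins itself this idea takes the form of a Jenkinsfile - a Groovy-based pipeline definition checked into the repository. The short Python sketch below is not Jenkins syntax; the stage names and the stages_for helper are hypothetical, and serve only to illustrate why a pipeline expressed as version-controlled code is more flexible than one clicked together in a GUI.

```python
# pipeline.py - lives in the same repository as the application code, so every
# pipeline change is reviewed and versioned exactly like a source change.
# Not Jenkins syntax (Jenkins uses a Jenkinsfile); stage names and this helper
# are hypothetical, purely to illustrate the 'pipeline as code' idea.

def stages_for(branch: str, changed_paths: list[str]) -> list[str]:
    """Compute the pipeline for this change - logic a GUI pipeline can't easily express."""
    stages = ["checkout", "build", "unit-tests"]
    if all(path.startswith("docs/") for path in changed_paths):
        # Documentation-only changes skip the expensive stages.
        return stages
    stages += ["static-analysis", "integration-tests"]
    if branch == "main":
        stages.append("deploy-to-staging")
    return stages

if __name__ == "__main__":
    print(stages_for("main", ["src/app.py", "docs/readme.md"]))
```

Because the definition is ordinary code, it benefits from everything source code does: review, history, branching, and reuse.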
On the future of Jenkins and competition

Of late, tools such as TravisCI and CircleCI have got a lot of positive recognition. Do you foresee them going toe to toe with Jenkins in the near future?

Over the past few years, Jenkins has grown into a versatile CI/CD tool. What makes Jenkins interesting is its huge library of plugins that keeps growing: whenever there is a new tool or technology in the software arena, there is a corresponding Jenkins plugin for it. Jenkins is an open source tool backed by a large community of developers, which keeps it ever-evolving. On the other hand, tools like TravisCI and CircleCI are cloud-based, easy to start with, limited to CI in their functionality, and work with GitHub projects. They are gaining popularity mostly in teams and projects that are new. While it’s difficult to predict the future, what I can say for sure is that Jenkins will adapt to the ever-changing needs and demands of the software community.

On key takeaways from the book Learning Continuous Integration with Jenkins

Coming back to your book, what are the 3 key takeaways from it that readers will find particularly useful?

In-depth coverage of the concepts of Continuous Integration.
A step-by-step guide to implementing Continuous Integration and Continuous Delivery with Jenkins 2, using all the new features.
A practical usage guide to Jenkins’s future, Blue Ocean.

On the learning path for readers

Finally, what learning path would you recommend for someone who wants to start practicing DevOps and, specifically, Continuous Integration? What are the tools one must learn? Are there any specific certifications to take in order to form a solid resume?

To begin with, I would recommend learning a VCS tool (say Git), a CI/CD tool (Jenkins), a configuration management tool (Chef or Puppet, for example), a static code analysis tool, a cloud platform like AWS or DigitalOcean, and an artifact management tool (say Artifactory). Learn Docker. Build a solid foundation in the build, release, and deployment processes. Learn lots of scripting languages (Python, Ruby, Groovy, Perl, PowerShell, and Shell, to name a few), because the really nasty tasks are always accomplished by scripts. A good know-how of the software development process and methodologies (Agile) is always nice to have. Linux and Windows administration will always come in handy. And above all, a continuous learning attitude, an urge to improvise processes, and sharp debugging skills are what is needed.

If you enjoyed reading this interview, check out the latest edition of Nikhil’s book, Learning Continuous Integration with Jenkins.

Top 7 DevOps Tools in 2018
Everything you need to know about Jenkins X
5 things to remember when implementing DevOps