How-To Tutorials - DevOps

49 Articles

5 reasons poor communication can sink DevSecOps

Guest Contributor
27 Aug 2019
7 min read
In the last few years, a major shift has occurred within the software industry in terms of how companies organize themselves and manage product responsibilities. The result is a merging of development and operations roles under a single umbrella known as DevOps. However, that's not where the story ends. More and more businesses are beginning to realize that cybersecurity strategy should not be treated as an independent activity. Software reliability is completely dependent upon security and risk management; otherwise, you leave your organization vulnerable to external attacks. The result of this thinking has been an increase in the scope of the DevOps role to include security, a model known as DevSecOps. But not every company can shift to the DevSecOps model overnight, especially when communication issues get in the way. In this article, we'll cover five of the most common roadblocks and ways to avoid them.

1. Unclear responsibilities

Now that the DevSecOps trend has gone mainstream in the software industry, you'll see many new job listings popping up in this area. Unfortunately, some companies are using the term as a catch-all to throw various disconnected duties at a single person. The result can be quite frustrating. Leadership and management need to set a clear scope for the responsibilities of DevSecOps engineers and integrate them directly with other parts of the organization, including development, quality assurance, and cloud administration. DevSecOps can only work as a strategy if it's fully adopted and understood by people at every level.

It's also important not to characterize DevSecOps as a collection of tools. The focus should always remain on the individuals performing their duties and not the applications that they use. Clarifying this distinction can make it easier to communicate within your organization.

2. Missing connection to end-users

Engineers in the DevSecOps role should be involved at every phase of the software development lifecycle. Otherwise, they will not have a holistic view of the platform's security status and how it is functioning. In a worst-case scenario, the DevSecOps team is disconnected from end-users entirely. If an engineer does not understand how real people are using their application, then the product as a whole is likely doomed. User requirements should form the basis of every coding project, and supporting the development lifecycle is only possible if that link exists and is maintained.

3. Too many (and unsecured) communication tools

Engineers in a DevSecOps role often spend the majority of their days coordinating between other groups within the organization. This activity can't succeed unless there is a strong communication tool set at their disposal. One mistake many companies make is investing in dozens of different chat, messaging, and conferencing apps in the hope that it will make things easier. The problem is that easy online communication comes at the price of privacy. Some platforms retain your data for their own internal use or even sell it, in the form of uniquely identifiable IP addresses, to advertisers. Though it's relatively easy to hide your IP address, do you want to trust an app that plays fast and loose with your information in the first place?
One way a DevSecOps team can address this issue is to emphasize to decision-makers the security risks posed by many popular tools. Slack, WhatsApp, and Snapchat are recent examples of popular messaging apps that are now taking flak because of the different security risks they pose. When it comes to email, even Gmail, the most popular email service, has been caught allowing unfettered access to user email addresses. Our advice is to use an encrypted email tool such as ProtonMail or Mailfence rather than rely on the usual suspects with better name recognition. The more communication tools you use, the larger the threat surface exposed to attackers.

4. Alert fatigue

One key part of the DevSecOps suite of responsibilities is to streamline all monitoring and alerting functions across the organization. The goal is to make it easy for both managers and engineers to find out about live issues and quickly navigate to the source of the problem. In some cases, DevSecOps engineers will be asked to set very sensitive alerting protocols so that no potential problem is missed. There is a downside, though: having too many notifications can lead to alert fatigue. This is especially true if your monitoring tools are throwing false positives all day long. A string of unnecessary alerts will cause people to stop paying attention to them.

An approach to alerting should be well thought out and clearly documented in runbooks by the DevSecOps team. A runbook should explain exactly what an alert means and the steps required to address it. This level of documentation allows DevSecOps engineers to outsource incident response to a larger group.

5. Hidden dependencies

Because of the wide scope of the DevSecOps role, organizations sometimes expect engineers to be fortune-tellers, able to predict how changes will impact code, tests, and security. This level of confidence cannot be reached unless there is clear and consistent communication across the company. Take, for example, a decision to add firewall protection around a database server to block outside threats. This will probably seem like a simple change to the engineers working on the system, but they may not realize that a new firewall could cut off connections to other services within the same infrastructure. If DevSecOps had been involved in the meetings and decision making, this type of hidden dependency could have been uncovered earlier.

The DevSecOps model can only succeed if the organization has a strong policy of change management. Any modification to a live system should be thoroughly vetted by representatives of all teams. At that time, risks can be weighed and approvals made. Changes should be scheduled for times when the impact will be minimal.

Final thoughts

When browsing job listings, you'll surely see an influx of roles mentioning both DevOps and DevSecOps. These roles can have an incredibly wide scope and often play a critical part in the success of a software company. But breakdowns in communication have the potential to derail the goals of DevSecOps, putting the entire organization at risk.
Modern software development is all about being agile, meaning that requirements gathering and coding happen with great flexibility and fluidity. The same should be true for DevSecOps. The duties of this role should be evaluated and tweaked on a regular basis, and clear communication is the best way to go about it.

Author Bio

Gary Stevens is a front-end developer. He's a full-time blockchain geek and a volunteer working for the Ethereum Foundation, as well as an active GitHub contributor.

Why do IT teams need to transition from DevOps to DevSecOps?
The seven deadly sins of web design
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you


Docker 19.03 introduces an experimental rootless Docker mode that helps mitigate vulnerabilities by hardening the Docker daemon

Savia Lobo
29 Jul 2019
3 min read
Tõnis Tiigi, a software engineer at Docker and a maintainer of the Moby/Docker engine, explained in a recent post on Medium how users can now leverage Docker's non-root user privileges with the Docker 19.03 release. He explains that the Docker engine provides functionality that is often tightly coupled to the Linux kernel. For instance, because a component of container isolation is based on Linux namespaces, users have historically needed privileged capabilities to create them. "Historically Docker daemon has always needed to be started by the root user", Tiigi explains.

Docker is looking to change this by introducing rootless support. With the help of Moby and BuildKit maintainer Akihiro Suda, "we added rootless support to BuildKit image builder in 2018 and from February 2019 the same rootless support was merged to Moby upstream and is available for all Docker users to try in experimental mode", Tiigi mentions. The rootless mode will help reduce the security footprint of the daemon and expose Docker capabilities to systems where users cannot gain root privileges.

Rootless Docker and its benefits

As the name suggests, rootless mode in Docker allows a user to run the Docker daemon, including the containers, as a non-root user on the host. The benefit is that even if the daemon is compromised, the attacker will not be able to gain root access to the host. In his presentation "Hardening Docker daemon with Rootless mode", Akihiro Suda explains that rootless mode does not entirely fix vulnerabilities and misconfigurations but can mitigate attacks. With rootless mode, an attacker would not be able to access files owned by other users, modify firmware or the kernel with undetectable malware, or perform ARP spoofing.

A few caveats to the rootless Docker mode

Docker engineers say the rootless mode cannot be considered a replacement for the complete suite of Docker engine features. Some limitations of the rootless mode include:

cgroups resource controls, AppArmor security profiles, checkpoint/restore, overlay networks, etc. do not work in rootless mode.
Exposing ports from containers currently requires a manual socat helper process.
Only Ubuntu-based distros support overlay filesystems in rootless mode.
Rootless mode is currently only provided for nightly builds, which may not be as stable as you are used to.

As a lot of the Linux features that Docker needs require privileged capabilities, the rootless mode takes advantage of user namespaces. "User namespaces map a range of user ID-s so that the root user in the inner namespace maps to an unprivileged range in the parent namespace. A fresh process in user namespace also picks up a full set of process capabilities", Tiigi explains.

In the recent release of the experimental rootless mode on GitHub, engineers mention that rootless mode allows running dockerd as an unprivileged user, using user_namespaces(7), mount_namespaces(7), and network_namespaces(7). Users need to run dockerd-rootless.sh instead of dockerd:

$ dockerd-rootless.sh --experimental

As rootless mode is experimental, users always need to run dockerd-rootless.sh with --experimental. (A short end-to-end sketch of a rootless session follows at the end of this piece.)

To know more about the Docker rootless mode in detail, read Tõnis Tiigi's Medium post. You can also check out Akihiro Suda's presentation, Hardening Docker daemon with Rootless mode.

Docker announces collaboration with Microsoft's .NET at DockerCon 2019
Announcing Docker Enterprise 3.0 Public Beta!
Are Debian and Docker slowly losing popularity?
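To make the steps above concrete, here is a minimal sketch of what a rootless session might look like from a shell. Only the dockerd-rootless.sh command comes from the article; the socket path and the test image are assumptions (the path shown is the default documented for rootless mode, but it may differ on your system, and the sketch assumes the 19.03 experimental nightly binaries are already installed for an unprivileged user).

    # Start the daemon as an ordinary, non-root user (command from the article)
    $ dockerd-rootless.sh --experimental &
    # Point the Docker client at the rootless daemon's socket (assumed default path)
    $ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
    # Run a throwaway container; nothing in this session has root on the host
    $ docker run --rm hello-world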


What is HCL (Hashicorp Configuration Language), how does it relate to Terraform, and why is it growing in popularity?

Savia Lobo
18 Jul 2019
6 min read
HCL (HashiCorp Configuration Language) is rapidly growing in popularity. Last year's Octoverse report by GitHub showed it to be the second fastest growing language on the platform, more than doubling in contributors since 2017 (Kotlin was top, with GitHub contributors growing 2.6 times). However, despite its growth, it hasn't had the level of attention that other programming languages have had. One of the reasons for this is that HCL is a configuration language. It's also part of a broader ecosystem of tools built by cloud automation company HashiCorp that largely centers around Terraform.

What is Terraform?

Terraform is an infrastructure-as-code tool that makes it easier to define and manage your cloud infrastructure. HCL is simply the syntax that allows you to better leverage its capabilities. It gives you a significant degree of control over your infrastructure in a way that's more 'human-readable' than other configuration languages such as YAML and JSON.

HCL and Terraform are both important parts of the DevOps world. They are not only built for a world that has transitioned to infrastructure-as-code, but also one in which this transition demands more from engineers. By making HCL a more readable, higher-level configuration language, the language can better facilitate collaboration and transparency between cross-functional engineering teams. With all of this in mind, HCL's growing popularity can be taken to indicate broader shifts in the software development world. HashiCorp clearly understands them very well and is eager to help drive them forward. But before we go any further, let's dive a bit deeper into why HCL was created, how it works, and how it sits within the Terraform ecosystem.

Why did HashiCorp create HCL?

The development of HCL was born of HashiCorp's experience of trying multiple different options for configuration languages. "What we learned," the team explains on GitHub, "is that some people wanted human-friendly configuration languages and some people wanted machine-friendly languages." The HashiCorp team needed a compromise - something that could offer a degree of flexibility and accessibility.

As the team outlines their thinking, it's clear to see what the drivers behind HCL actually are. JSON, they say, "is fairly verbose and... doesn't support comments", while YAML is viewed as too complex for beginners to properly parse and use effectively. Traditional programming languages also pose problems. Again, they're too sophisticated and demand too much background knowledge from users to make them a truly useful configuration language. Put together, this underlines the fact that with HCL, HashiCorp wanted to build something that is accessible to engineers of different abilities and skill sets, while also being clear enough to enable appropriate levels of transparency between teams. It is "designed to be written and modified by humans."

Listen: Uber engineer Yuri Shkuro talks distributed tracing and observability on the Packt Podcast

How does the HashiCorp Configuration Language work?

HCL is not a replacement for the likes of YAML or JSON. The team's aim "is not to alienate other configuration languages. It is," they say, "instead to provide HCL as a specialized language for our tools, and JSON as the interoperability layer." Effectively, it builds on some of the things you can get with JSON, but reimagines them in the context of infrastructure and application configuration.
According to the documentation, we should see HCL as a "structured configuration language rather than a data structure serialization language." HCL is "always decoded using an application-defined schema," which gives you a level of flexibility. In effect, it means the application is always at the center of the language; you don't have to work around it. If you want to learn more about the HCL syntax and how it works at a much deeper level, the documentation is a good place to start, as is this page on GitHub. (There is also a short illustrative snippet at the end of this article.)

Read next: Why do IT teams need to transition from DevOps to DevSecOps?

The advantages of HCL and Terraform

You can't really talk about the advantages of HCL without also considering the advantages of Terraform. Indeed, while HCL might well be a well designed configuration language that's accessible and caters to a wide range of users and use cases, it's only in the context of Terraform that its growth really makes sense.

Why is Terraform so popular?

To understand the popularity of Terraform, you need to place it in the context of current trends and today's software marketplace for infrastructure configuration. Terraform is widely seen as a competitor to configuration management tools like Chef, Ansible and Puppet. However, Terraform isn't exactly a configuration management tool - it's more accurate to call it a provisioning tool (config management tools configure software on servers that already exist; provisioning tools set up new ones). This is important because, thanks to Docker and Kubernetes, the need for configuration has radically changed - you might even say that it's no longer there. If a Docker container is effectively self-sufficient, with all the configuration files it needs to run, then the need for 'traditional' configuration management begins to drop.

Of course, this isn't to say that one tool is intrinsically better than any other. There are use cases for all of these types of tools. But the fact remains that Terraform suits use cases that are starting to grow. Part of this is due to the rise of cloud agnosticism. As multi-cloud and hybrid cloud architectures become prevalent, DevOps teams need tools that let them navigate and manage resources across different platforms. Although all the major public cloud vendors have native tools for managing resources, these can sometimes be restrictive, and the templates they offer can be difficult to reuse. Take Azure ARM templates, for example: they can only be used to create Azure resources. In contrast, Terraform allows you to provision and manage resources across different cloud platforms.

Conclusion: Terraform and HCL can make DevOps more accessible

It's not hard to see why ThoughtWorks sees Terraform as such an important emerging technology. (The last edition of the ThoughtWorks Radar claimed that now is the time to adopt it.) But it's also important to understand that HCL is an important element in the success of Terraform. It makes infrastructure-as-code not only something that's accessible to developers that might have previously only dipped their toes in operations, but also something that can be more collaborative, transparent, and observable for team members. The DevOps picture will undoubtedly evolve over the next few years, but it would appear that HashiCorp is going to have a big part to play in it.
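As promised above, here is a minimal, illustrative sketch of the Terraform workflow the article describes, written as a shell session that creates an HCL file and runs the standard Terraform commands. The provider, region, resource names and AMI id are placeholders invented for illustration, not taken from the article, and the sketch assumes Terraform is installed and cloud credentials are already configured.

    # Write a small HCL configuration (declarative and comment-friendly, unlike plain JSON)
    $ cat > main.tf <<'EOF'
    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "web" {
      ami           = "ami-0123456789abcdef0"  # placeholder image id
      instance_type = "t2.micro"
    }
    EOF
    $ terraform init    # download the provider plugins
    $ terraform plan    # preview the changes Terraform would make
    $ terraform apply   # provision the resources described in HCL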


KubeCon + CloudNativeCon EU 2019 highlights: Microsoft’s Service Mesh Interface, Enhancements to GKE, Virtual Kubelet 1.0, and much more!

Savia Lobo
22 May 2019
7 min read
KubeCon + CloudNativeCon 2019 is live (May 21 - May 23) at the Fira Gran Via exhibition center in Barcelona, Spain. The conference has brought a huge array of announcements on topics including Kubernetes, DevOps, and cloud-native applications. There were many exciting announcements from Microsoft, Google, the Cloud Native Computing Foundation, and more! Let's have a brief overview of each of these announcements.

Microsoft Kubernetes announcements: Service Mesh Interface (SMI), Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and Helm 3 alpha

Service Mesh Interface (SMI)

Microsoft launched the Service Mesh Interface (SMI) specification, the company's new community project for collaboration around service mesh infrastructure. SMI defines a set of common, portable APIs that provide developers with interoperability across different service mesh technologies including Istio, Linkerd, and Consul Connect.

The Service Mesh Interface provides:

A standard interface for meshes on Kubernetes
A basic feature set for the most common mesh use cases
Flexibility to support new mesh capabilities over time
Space for the ecosystem to innovate with mesh technology

To know more about the Service Mesh Interface, head over to Microsoft's official blog.

Visual Studio Code Kubernetes extension 1.0, Virtual Kubelet 1.0, and the first alpha of Helm 3

Microsoft released version 1.0 of its open source Kubernetes extension for Visual Studio Code. The extension brings native Kubernetes integration to Visual Studio Code and is fully supported for production management of Kubernetes clusters. Microsoft has also added an extensibility API that makes it possible for anyone to build their own integration experiences on top of Microsoft's baseline Kubernetes integration.

Microsoft also announced Virtual Kubelet 1.0. Brendan Burns, Kubernetes co-founder and Microsoft distinguished engineer, said, "The Virtual Kubelet represents a unique integration of Kubernetes and serverless container technologies, like Azure Container Instances. We developed it in the context of the Cloud Native Computing Foundation, where it's a sandbox project." He further added, "With 1.0, we're saying 'It's ready.' We think we've done all the work that we need in order for people to take production level dependencies on this project."

Microsoft also released the first alpha of Helm 3. Helm is the de facto standard for packaging and deploying Kubernetes applications. Helm 3 is simpler and supports all the modern security, identity, and authorization features of today's Kubernetes. The growing maturity of Kubernetes identity and security features, like role-based access control (RBAC), and advanced features such as custom resource definitions (CRDs), allowed the team to revisit and simplify Helm's architecture. Know more about Helm 3 in detail on Microsoft's official blog post.

Google announces enhancements to Google Kubernetes Engine; Stackdriver Kubernetes Engine Monitoring generally available

On the first day of KubeCon + CloudNativeCon 2019, yesterday, Google announced three release channels for its Google Kubernetes Engine (GKE): Rapid, Regular and Stable.
Google, in its official blog post, states, "Each channel offers different version maturity and freshness, allowing developers to subscribe their cluster to a stream of updates that match risk tolerance and business requirements." This new feature will launch into alpha with the first release in the Rapid channel, which will give developers early access to the latest versions of Kubernetes.

Google also announced the general availability of Stackdriver Kubernetes Engine Monitoring, a tool that gives users GKE observability (metrics, logs, events, and metadata) all in one place, to help provide faster time-to-resolution for issues, no matter the scale. To know more about the three release channels and Stackdriver Kubernetes Engine Monitoring in detail, head over to Google's official blog post.

Cloud Native Computing Foundation announcements: Harbor 1.8, a new online course 'Cloud Native Logging with Fluentd', Intuit Inc. wins the CNCF End User Award, and Kong Inc. becomes a Gold Member

Harbor 1.8

The VMware team released Harbor 1.8 yesterday, with new features and improvements including enhanced automation integration, security, monitoring, and cross-registry replication support. Harbor is an open source trusted cloud native registry project that stores, signs, and scans content. Harbor 1.8 also brings various other capabilities for both administrators and end users:

A health check API, which shows detailed status and health of all Harbor components.
Harbor extends and builds on top of the open source Docker Registry to facilitate registry operations like the pushing and pulling of images. In this release, the bundled Docker Registry was upgraded to version 2.7.1.
Support for defining cron-based scheduled tasks in the Harbor UI. Administrators can now use cron strings to define the schedule of a job. Scan, garbage collection, and replication jobs are all supported.
API explorer integration. End users can now explore and trigger Harbor's API via the Swagger UI nested inside Harbor's UI.
Enhancement of the Job Service engine to include internal webhook events, additional APIs for job management, and numerous bug fixes to improve the stability of the service.

To know more about this release, read the Harbor 1.8 official blog post.

A new online course on 'Cloud Native Logging with Fluentd'

The Cloud Native Computing Foundation and The Linux Foundation have together designed a new, self-paced, hands-on course, Cloud Native Logging with Fluentd. This course will provide users with the necessary skills to deploy Fluentd in a wide range of production settings.

Eduardo Silva, Principal Engineer at Arm Treasure Data, said, "This course will explore the full range of Fluentd features, from installing Fluentd and running it in a container, to using it as a simple log forwarder or a sophisticated log aggregator and processor." "As we see the Fluentd project growing into a full ecosystem of third party integrations and components, we are thrilled that this course will be offered so more people can realize the benefits it provides", he further added. To know more about this course and its benefits in detail, visit the official blog post.

Intuit Inc. won the CNCF End User Award

At the conference yesterday, CNCF announced that Intuit Inc. has won the CNCF End User Award in recognition of its contributions to the cloud native ecosystem. Intuit is an active user, contributor and developer of open source technologies.
As part of its journey to the public cloud, Intuit has advanced the way it leverages cloud native technologies in production, including CNCF projects like Kubernetes and OPA. To know more about this achievement by Intuit in detail, read the official blog post.

Kong Inc. is now a Gold Member of the CNCF

The CNCF announced that Kong Inc., which provides an open source API and service lifecycle management tool, has upgraded its membership to Gold. The company backs the Kong project, a cloud native, fast, scalable and distributed microservice abstraction layer. Kong is focused on building a service control platform that acts as the nervous system for an organization's modern software architectures by intelligently brokering information across all services.

Dan Kohn, Executive Director of the Cloud Native Computing Foundation, said, "With their focus on open source and cloud native, Kong is a strong member of the open source community and their membership provides resources for activities like bug bounties and security audits that help our community continue to thrive." Head over to CNCF's official announcement post for more.

More announcements can be expected from this conference; to stay updated, visit the KubeCon + CloudNativeCon 2019 official website.

F8 Developer Conference Highlights: Redesigned FB5 app, Messenger update, new Oculus Quest and Rift S, Instagram shops, and more
RSA Conference 2019 Highlights: Top 5 cybersecurity products announced
NSA releases Ghidra, a free software reverse engineering (SRE) framework, at the RSA security conference


Does it make sense to talk about DevOps engineers or DevOps tools?

Richard Gall
10 May 2019
6 min read
DevOps engineers are in high demand - the job represents an engineering unicorn, someone who understands both development and operations and can help foster a culture where the relationship between the two is almost frictionless. But there's some debate as to whether it makes sense to talk about a DevOps engineer at all. If DevOps is a culture or a set of practices that improves agility and empowers engineers to take more ownership over their work, should we really be thinking about DevOps as a single job that someone can simply train for?

The quotes in this piece are taken from DevOps Paradox by Viktor Farcic, which will be published in June 2019. The book features interviews with a diverse range of figures drawn from across the DevOps world.

Is DevOps engineer a 'real' job or just recruitment spin?

Nirmal Mehta (@normalfaults), Technology Consultant at Booz Allen Hamilton, says "There's no such thing as a DevOps engineer. There shouldn't even be a DevOps team, because to me DevOps is more of a cultural and philosophical methodology, a process, and a way of thinking about things and communicating within an IT organization..."

Mehta is cynical about organizations that put out job descriptions asking for DevOps engineers. It is, he argues, a way of cutting costs - a way of simply doing more with less. "A DevOps engineer is just a job posting that signals an organization wants to hire one less person to do twice as much work rather than hire both a developer and an operator."

This view is echoed by other figures associated with the DevOps world. Mike Kail (@mdkail), CTO at Everest, says "I certainly don't view DevOps as a tool or a job title. In my view, at the core, it's a cultural approach to leveraging automation and orchestration to streamline both code development, infrastructure, application deployments and subsequently, the managing of those resources." Similarly, Damien Duportal (@DamienDuportal), Træfik's Developer Advocate, says "there is no such thing as a DevOps engineer or even a DevOps team. The main purpose of DevOps is to focus on value, finding the optimal for the organization, and the value it will bring." For both Duportal and Kail, then, DevOps is primarily a cultural thing, something which needs to be embedded inside the practices of an organization.

Is it useful to talk about a DevOps team?

There are big question marks over the concept of a DevOps engineer. But what about a specific team? It's all well and good talking about organizational philosophy, but how do you actually effect change in a practical manner? Julian Simpson (@builddoctor), Neo4j's Global IT Manager, is sceptical about the concept of a DevOps team: "Can we have something called a DevOps team? I don't believe so. You might spin up a team to solve a DevOps problem, but then I wouldn't even say we specifically have a DevOps problem. I'd say you just have a problem."

DevOps consultant Chris Riley (@HoardingInfo) has a similar take, saying: "DevOps Engineer as a title makes sense to me, but I don't think you necessarily have DevOps departments, nor do you seek that out. Instead, I think DevOps is a principle that you spread throughout your entire development organization. Rather, you look to reform your organization in a way that supports those initiatives versus just saying that we need to build this DevOps unit, and there we go, we're done, we're DevOps. Because by doing that you really have to empower that unit and most organizations aren't willing to do that."
However, Red Hat Solutions Architect Wian Vos (@wianvos) has a different take. For Vos, the idea of a DevOps team is actually crucial if you are to cultivate a DevOps mindset inside your organization: "Imagine... you and I were going to start a company. We're going to need a DevOps team because we have a burning desire to put out this awesome application. The questions we have when we're putting together a DevOps team is both 'Who are we hiring?' and 'What are we hiring for?' Are we going to hire DevOps engineers? No. In that team, we want the best application developers, the best tester, and maybe we want a great infrastructure guy and a frontend/backend developer. I want people with specific roles who fit together as a team to be that DevOps team."

For Vos, it's not so much about finding and hiring DevOps engineers - people with a specific set of skills and experience - but rather about building a team that's constructed in such a way that it can put DevOps principles into practice.

Is there such a thing as a DevOps tool?

One of the interesting things about DevOps is that the debate seems to lead you into a bit of a bind. It's almost as if the more concrete we try to make it - turning it into a job, or a team - the less useful it becomes. This is particularly true when we consider tooling. Surely thinking about DevOps technologically, rather than speculatively, makes it more real?

In general, it appears there is a consensus against the idea of DevOps tools. On this point Julian Simpson said "my original thinking about the movement from 2009 onwards, when the name was coined, was that it would be about collaboration and perhaps the tools would sort of come out of that collaboration." James Turnbull (@kartar), CEO of Rethink Robotics, is critical of the notion of DevOps tools. He says "I don't think there are such things as DevOps tools. I believe there are tools that make the process of being a cross-functional team better... Any tool that facilitates building that cross-functionality is probably a DevOps tool to the point where the term is likely meaningless."

When it comes to DevOps, everyone's still learning

With even industry figures disagreeing on what terms mean, or which ones are relevant, it's pretty clear that DevOps will remain a field that's contested and debated. But perhaps this is important - if we expect it simply to be a solution to the engineering challenges we face, it's already failed as a concept. However, if we understand it as a framework or mindset for solving problems, then that is when it acquires greater potency.

Viktor Farcic is a Developer Advocate at CloudBees, a member of the Google Developer Experts and Docker Captains groups, and a published author. His big passions are DevOps, microservices, continuous integration, delivery and deployment (CI/CD) and test-driven development (TDD).


Announcing Docker Enterprise 3.0 Public Beta!

Savia Lobo
02 May 2019
3 min read
Update: On July 22, 2019, the Docker team announced that Docker Enterprise 3.0 will be generally available. The team also noted that more than 2,000 people have tried the Docker Enterprise 3.0 public beta program.

On April 24, the team at Docker announced Docker Enterprise 3.0, an end-to-end container platform that enables developers to quickly build and share any type of application (from legacy to cloud-native) and securely run them anywhere, from hybrid cloud to the edge. It is now available in public beta. Docker Enterprise 3.0 delivers new desktop capabilities, advanced development productivity tools, a simplified and secure Kubernetes stack, and a managed service option to make Docker Enterprise 3.0 the platform for digital transformation.

Jay Lyman, Principal Analyst at 451 Research, said, "Docker's new Enterprise 3.0 promises to automate the 'development to production' experience with new tooling that aims to reduce the friction between dev and ops teams."

What can you do with the new Docker Enterprise 3.0?

Integrated Docker Desktop Enterprise

Docker Desktop Enterprise provides a consistent development-to-production experience with a set of automation tools. This makes it possible to start with the developer desktop, deliver an integrated and secure image registry with access to the Hub ecosystem, and then deploy to an enterprise-ready and Kubernetes-conformant environment.

Docker Kubernetes Services (DKS) can simplify the scaling and deployment of applications

Compatible with Docker Compose, Kubernetes YAML and Helm charts, DKS provides an automated and repeatable way to install, configure, manage and scale Kubernetes-based applications across hybrid and multi-cloud. DKS includes enhanced security, access controls, and automated lifecycle management, bringing a new level of security to Kubernetes that integrates seamlessly with the Docker Enterprise platform. Customers will also have the option to use Docker Swarm Services (DSS) as part of the platform's orchestration services.

Docker Applications for high-velocity innovation

Docker Applications are based on the CNAB open standard. They remove the friction between Dev and Ops by enabling teams to collaborate on an application by defining a group of related containers that work together to form that application. They also eliminate configuration overhead by integrating and automating the creation of the Docker Compose and Kubernetes YAML files, Helm charts, etc. Docker Applications also include Application Templates, Application Designer and Version Packs, which make flexible deployment across different environments possible, delivering on the "code once, deploy anywhere" promise.

With the announcement of Docker Enterprise 3.0, Docker also introduced Docker Enterprise-as-a-Service, a fully-managed service on-premise or in the cloud. To know more about this news in detail, head over to Docker's official announcement.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]

How Visual Studio Code can help bridge the gap between full-stack development and DevOps [Sponsored by Microsoft]

Richard Gall
16 Apr 2019
8 min read
The popularity of Visual Studio Code is an indication of how the developer landscape is changing. It's also a testament to the way the development team behind it has been proactive in responding to these changes and using them to inform iterations of the product. But what are these changes, exactly? Well, one way of understanding them is to focus on the growth of full-stack development as a job role. Thanks to cloud native and increased levels of abstraction across the stack, developers are being asked to do more. They're not so much writing specialized code for the part of the stack that they 'own' in the way they might have been five years ago. Instead, a more holistic approach has become the norm.

This post is part of a series brought to you in conjunction with Microsoft. Download Learning Node.js Development for free from Microsoft here.

This holistic approach is one where developers assemble applications, with an even greater focus - and indeed deliberation - on matters of infrastructure and architecture. Shipping code is important, but there is more to development roles than simply producing the building blocks of software. A good full-stack developer is one who can shift between contexts. This could be context shifting in a literal sense, but it might also more broadly mean shifting between different types of problems. Crucial to this is an in-built DevOps mindset. Siloed compartmentalization is the enemy of the modern developer in just about every respect. This is why Visual Studio Code is so valuable today. It tackles compartmentalization, helping developers move between different problems and contexts seamlessly. In turn, this bridges the gap between what we would typically consider a full-stack engineer's role and responsibilities and those of a DevOps engineer.

Read next: DevOps engineering and full-stack development – 2 sides of the same agile coin

How does Visual Studio Code support a DevOps mindset?

From writing and optimizing code, to deploying to the cloud and managing those resources in the cloud, Visual Studio Code allows developers to do multiple things easily. Let's take a look at some of the features of the editor (or IDE, depending on your perspective) to see what this means in practical terms.

VS Code helps developers assume responsibility for how their code performs

Code optimization becomes important when you really start to take the issue of accountability seriously. Yes, that could be frustrating (especially if you're a bad developer...) but it also makes writing code much more interesting. And Visual Studio Code has tools that help you to do just that. Like a good and friendly DevOps engineer, the text editor has a set of in-built features and extensions that can help you write cleaner and more performant code.

There are, of course, plenty of extensions to help improve code, but before we even get there, Visual Studio Code has out-of-the-box features that can aid productivity. IntelliSense, for example, Microsoft's useful autocomplete feature, takes some of the legwork out of writing code. But it's once you get into the extensions that you can begin to see how much support Visual Studio Code can bring. IntelliSense has several useful extensions - one for npm, one for Node.js modules, for example - but there are plenty of other useful tools, from code snippets to the Chrome Debugger and Prettier, an extension for the popular code formatter that has seen 5 million downloads in a matter of months. Individually these tools might not seem like a big deal.
But it's the fact that you have access to these various features - if you want them, of course - that marks VS Code out as an editor that wants you to be productive, that wants you to focus on code health. That's not to say that Atom, Emacs, or any other editor doesn't do this - it's just that VS Code steps in and does this work in lieu of your brain, which means you can focus your mind elsewhere.

VS Code has features that encourage really effective developer collaboration

DevOps is, fundamentally, a methodology about the way people build software together. Conversely, code editors often feel like a personal decision - something as unique to you as any other piece of technology that allows you to interface with information and with other people. And although VS Code can feel as personal as any code editor (check out the themes - Dracula is particularly lovely...), because it is more than a typical code editor, it offers plenty of ways to enhance project management and collaboration.

Perhaps the most notable example of how Visual Studio Code supports collaboration is the Live Share extension. This nifty extension allows you to work on code with a colleague working on a different machine, as if you were collaborating on a Word document. As well as being pretty novel, the implications of this could be greater than we realise. By making it easier to open up the way we work at a very individual - almost atomized - level, the way we work together to build and deploy code immediately changes. Software engineering has always been a team sport, but coding itself was all too often a solitary activity. With Live Share you can quite deliberately begin to break down silos in a way that you've never been able to do before.

Okay, but what does Live Share really have to do with DevOps? Well, if we can improve collaboration between people, we can immediately cut through any residual silos. But more than that, it also opens up new work practices for the future. Pair programming, for example, might feel like a bit of a flash-in-the-pan programming practice, but its principles become the norm if it is better facilitated. And while an individual agile technique certainly does not make DevOps magically happen, making it normal can go a long way in helping to improve code quality and accountability.

Visual Studio Code brings the cloud to full-stack developers (and vice versa)

Collaboration, productivity - great. Code optimization - also great. But if you really want to think about how Visual Studio Code bridges the gap between full-stack development and DevOps, you have to look at the ways in which it allows developers to leverage Azure. An article published by TechCrunch in 2016 argued that cloud - managed services - had 'killed DevOps'. That might have been a bit of an overstatement, but if you follow the logic you can begin to see that the world is moving towards one where cloud actually supports and facilitates DevOps processes. It almost removes the need for a separate operations engineer to deploy your applications appropriately. But even if the lack of friction here is appealing, you will nevertheless need to be open to a new way of working. And that is never straightforward.

And that's where Visual Studio Code can help. With the Azure App Service or Azure Functions extensions, you can deploy and manage Azure resources directly from your editor. That's a productivity bonus, but it also immediately makes the concept of cloud or serverless more accessible, especially if you're new to it.
In VS Code you're literally working with it in a familiar space. The conclusion of that TechCrunch article is instructive: "DevOps as a team may be gone sooner than later, but its practices will be carried on to whole development teams so that we can continue to build upon what it has brought us in the past years." Arguably, Visual Studio Code is an editor that takes us to that point - one where DevOps practices are embedded in development teams.

Visual Studio Code: built for the needs of individual developers and the future of the industry

Choosing a text editor is a subjective thing. Part of it is about what you're trying to achieve, and what projects you're working on, but often it's not even about that: it's simply about what feels right. This means that there are always going to be some developers for whom Visual Studio Code doesn't work - and that's fine. But unlike many other text editors, Visual Studio Code is so adaptable and flexible to every developer's individual needs that it's actually relatively straightforward to find a way to make it work for you. In turn, this means its advantages in terms of cloud native development and improved collaboration aren't at the expense of that subjective feeling of 'yes... this works for me'. And if you're going to build great software, both things are important.

Try Visual Studio Code for yourself. Download it now. Learn how to use Node.js on Azure by downloading Learning Node.js with Azure for free from Microsoft.


Chef goes open source, ditching the Loose Open Core model

Richard Gall
02 Apr 2019
5 min read
Chef, the infrastructure automation tool, has today revealed that it is going completely open source. In doing so, the project has ditched the loose open core model. The news is particularly intriguing as it comes at a time when the traditional open source model appears to be facing challenges around its future sustainability. However, it would appear that from Chef's perspective the switch to a full open source license is being driven by a crowded marketplace where automation tools are finding it hard to gain a foothold inside organizations trying to automate their infrastructure. A further challenge for this market is what Chef has identified as 'The Coded Enterprise' - essentially technologically progressive organizations driven by an engineering culture where infrastructure is primarily viewed as code.

Read next: Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Why is Chef going open source?

As you might expect, there's actually more to Chef's decision than pure commercialism. To get a good understanding, it's worth picking apart Chef's open core model and how this was limiting the project.

The limitations of Open Core

The Loose Open Core model has open source software at its center but is wrapped in proprietary software. So, it's open at its core, but is largely proprietary in how it is deployed and used by businesses. While at first glance this might make it easier to monetize the project, it also severely limits the project's ability to evolve and develop according to the needs of the people that matter - the people that use it. Indeed, one way of thinking about it is that the open core model positions your software as a product - something that is defined by product managers and lives and dies by its stickiness with customers. By going open source, your software becomes a project, something that is shared and owned by a community of people that believe in it.

Speaking to TechCrunch, Chef co-founder Adam Jacob said "in the open core model, you're saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that's incorrect... the value was always in the totality of the product."

Read next: Chef Language and Style

Removing the friction between product and project

Jacob published an article on Medium expressing his delight at the news. It's an instructive look at how Chef has been thinking about itself and the challenges it faces. "Deciding what's in, and what's out, or where to focus, was the hardest part of the job at Chef," Jacob wrote. "I'm stoked nobody has to do it anymore. I'm stoked we can have the entire company participating in the open source community, rather than burning out a few dedicated heroes. I'm stoked we no longer have to justify the value of what we do in terms of what we hold back from collaborating with people on."

So, what's the deal with the Chef Enterprise Automation Stack?

As well as announcing that Chef will be open sourcing its code, the organization also revealed that it is bringing together Chef Automate, Chef Infra, Chef InSpec, Chef Habitat and Chef Workstation under one single solution: the Chef Enterprise Automation Stack. The point here is to simplify Chef's offering to its customers to make it easier for them to do everything they can to properly build and automate reliable infrastructure.

Corey Scobie, SVP of Product and Engineering, said that "the introduction of the Chef Enterprise Automation Stack builds on [the switch to open source]...
aligning our business model with our customers' stated needs through Chef software distribution, services, assurances and direct engagement. Moving forward, the best, fastest, most reliable way to get Chef products and content will be through our commercial distributions."

So, essentially, the Chef Enterprise Automation Stack will be the primary Chef distribution that's available commercially, sitting alongside the open source project.

What does all this mean for Chef customers and users?

If you're a Chef user or have any questions or concerns, the team has put together a very helpful FAQ. You can read it here.

The key points for Chef users

Existing commercial and non-commercial users don't need to do anything - everything will continue as normal. However, anyone else using current releases should be aware that support will be removed from those releases in 12 months' time. The team has clarified that "customers who choose to use our new software versions will be subject to the new license terms and will have an opportunity to create a commercial relationship with Chef, with all of the accompanying benefits that provides."

A big step for Chef - could it help determine the evolution of open source?

This is a significant step for Chef and it will be of particular interest to its users. But even for those who have no interest in Chef, it's nevertheless a story that indicates that there's a lot of life in open source despite the challenges it faces. It'll certainly be interesting to see whether Chef makes it work and what impact it has on the configuration management marketplace.


6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers and development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them.

Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and no one really understands, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you run tests on. (There's a short sketch of what this looks like in practice at the end of this article.) What this all boils down to is that testing becomes much quicker and easier. In theory, then, the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks.
If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict.

That all being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price, it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers involves much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?
Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability. This takes us back to the issue of the complex codebase. Think of it this way - if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?

Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices more effectively than DevOps proponents were able to in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they will argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates easily. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps - containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you don't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make a change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.

18 people in tech every programmer and software engineer needs to follow in 2019

Richard Gall
02 Jan 2019
9 min read
After a tumultuous 2018 in tech, it's vital that you surround yourself with a variety of opinions and experiences in 2019 if you're to understand what the hell is going on. While there are thousands of incredible people working in tech, I've decided to make life a little easier for you by bringing together 18 of the best people from across the industry to follow on Twitter. From engineers at Microsoft and AWS, to researchers and journalists, this list is by no means comprehensive, but it does give you a wide range of people that have been influential, interesting, and important in 2018.

(A few of) the best people in tech on Twitter

April Wensel (@aprilwensel)

April Wensel is the founder of Compassionate Coding, an organization that aims to bring emotional intelligence and ethics into the tech industry. In April 2018 Wensel wrote an essay arguing that "it's time to retire RTFM" (read the fucking manual). The essay was well received by many in the tech community tired of a culture of ostensible caustic machismo and played a part in making conversations around community accessibility an important part of 2018. Watch her keynote at NodeJS Interactive: https://www.youtube.com/watch?v=HPFuHS6aPhw

Liz Fong-Jones (@lizthegrey)

Liz Fong-Jones is an SRE and Dev Advocate at Google Cloud Platform, but over the last couple of years she has become an important figure within tech activism. First helping to create the NeverAgain pledge in response to the election of Donald Trump in 2016, then helping to bring to light Google's fraught internal struggle over diversity, Fong-Jones has effectively laid the foundations for the mainstream appearance of tech activism in 2018. In an interview with Fast Company, Fong-Jones says she has accepted her role as a spokesperson for the movement that has emerged, but she's committed to "equipping other employees to fight for change in their workplaces–whether at Google or not –so that I'm not a single point of failure."

Ana Medina (@Ana_M_Medina)

Ana Medina is a chaos engineer at Gremlin. Since moving to the chaos engineering platform from Uber (where she was part of the class action lawsuit against the company), Medina has played an important part in explaining what chaos engineering looks like in practice all around the world. But she is also an important voice in discussions around diversity and mental health in the tech industry - if you get a chance to hear her talk, make sure you take it, and if you don't, you've still got Twitter...

Sarah Drasner (@sarah_edo)

Sarah Drasner does everything. She's a Developer Advocate at Microsoft, part of the VueJS core development team, organizer behind Concatenate (a free conference for Nigerian developers), and an author too. https://twitter.com/sarah_edo/status/1079400115196944384 Although Drasner specializes in front end development and JavaScript, she's a great person to follow on Twitter for her broad insights on how we learn and evolve as software developers. Do yourself a favour and follow her.

Mark Imbriaco (@markimbriaco)

Mark Imbriaco is the technical director at Epic Games. Given the company's truly, er, epic year thanks to Fortnite, Imbriaco can offer an insight into how one of the most important and influential technology companies on the planet is thinking.

Corey Quinn (@QuinnyPig)

Corey Quinn is an AWS expert.
As the brain behind the Last Week in AWS newsletter and the voice behind the Screaming in the Cloud podcast (possibly the best cloud computing podcast on the planet), he is without a doubt the go-to person if you want to know what really matters in cloud. The range of guests that Quinn gets on the podcast is really impressive, and sums up his online persona: open, engaged, and always interesting.

Yasmine Evjen (@YasmineEvjen)

Yasmine Evjen is a Design Advocate at Google. That means that she is not only one of the minds behind Material Design, she is also someone that is helping to demonstrate the importance of human centered design around the world, as the presenter of Centered, a web series by the Google Design team about the ways human centered design is used for a range of applications. If you haven't seen it, it's well worth a watch. https://www.youtube.com/watch?v=cPBXjtpGuSA&list=PLJ21zHI2TNh-pgTlTpaW9kbnqAAVJgB0R&index=5&t=0s

Suz Hinton (@noopkat)

Suz Hinton works on IoT programs at Microsoft. That's interesting in itself, but when she's not developing fully connected smart homes (possibly), Hinton also streams code tutorials on Twitch (also as noopkat).

Chris Short (@ChrisShort)

If you want to get the lowdown on all things DevOps, you could do a lot worse than Chris Short. He boasts outstanding credentials - he's a CNCF ambassador and has experience with Red Hat and Ansible - but more important is the quality of his insights. A great place to begin is with DevOpsish, a newsletter Short produces, which features some really valuable discussions on the biggest issues and talking points in the field.

Dan Abramov (@dan_abramov)

Dan Abramov is one of the key figures behind ReactJS. Along with @sophiebits, @jordwalke, and @sebmarkbage, Abramov is quite literally helping to define front end development as we know it. If you're a JavaScript developer, or simply have any kind of passing interest in how we'll be building front ends over the next decade, he is an essential voice to have on your timeline. As you'd expect from someone that has helped put together one of the most popular JavaScript libraries in the world, Dan is very good at articulating some of the biggest challenges we face as developers and can provide useful insights on how to approach problems you might face, whether day to day or career changing.

Emma Wedekind (@EmmaWedekind)

As well as working at GoTo Meeting, Emma Wedekind is the founder of Coding Coach, a platform that connects developers to mentors to help them develop new skills. This experience makes Wedekind an important authority on developer learning. And at a time when deciding what to learn and how to do it can feel like such a challenging and complex process, surrounding yourself with people taking those issues seriously can be immensely valuable.

Jason Lengstorf (@jlengstorf)

Jason Lengstorf is a Developer Advocate at GatsbyJS (a cool project that makes it easier to build projects with React). His writing - on Twitter and elsewhere - is incredibly good at helping you discover new ways of working and approaching problems.

Bridget Kromhout (@bridgetkromhout)

Bridget Kromhout is another essential voice in cloud and DevOps. Currently working at Microsoft as Principal Cloud Advocate, Bridget also organizes DevOps Days and presents the Arrested DevOps podcast with Matty Stratton and Trevor Hess. Follow Bridget for her perspective on DevOps, as well as her experience in DevRel.
Ryan Burgess (@burgessdryan)

Netflix hasn't faced the scrutiny of many of its fellow tech giants this year, which means it's easy to forget the extent to which the company is at the cutting edge of technological innovation. This is why it's well worth following Ryan Burgess - as an engineering manager he's well placed to provide an insight on how the company is evolving from a tech perspective. His talk at Real World React on A/B testing user experiences is well worth watching: https://youtu.be/TmhJN6rdm28

Anil Dash (@anildash)

Okay, so chances are you probably already follow Anil Dash - he does have half a million followers already, after all - but if you don't follow him, you most definitely should. Dash is a key figure in new media and digital culture, but he's not just another thought leader - he's someone that actually understands what it takes to build this stuff. As CEO of Glitch, a platform for building (and 'remixing') cool apps, he's having an impact on the way developers work and collaborate. Six years ago, Dash wrote an essay called 'The Web We Lost'. In it, he laments how the web was becoming colonized by a handful of companies who built the key platforms on which we communicate and engage with one another online. Today, after a year of protest and controversy, Dash's argument is as salient as ever - it's one of the reasons it's vital that we listen to him.

Jessie Frazelle (@jessfraz)

Jessie Frazelle is a bit of a superstar. Which shouldn't really be that surprising - she's someone that seems to have a natural ability to pull things apart and put them back together again, and to have the most fun imaginable while doing it. Formerly part of the core Docker team, Frazelle now works at GitHub, where her knowledge and expertise are helping to develop the next Microsoft-tinged chapter in GitHub's history. I was lucky enough to see Jessie speak at ChaosConf in September - check out her talk: https://youtu.be/1hhVS4pdrrk

Rachel Coldicutt (@rachelcoldicutt)

Rachel Coldicutt is the CEO of Doteveryone, a think tank based in the U.K. that champions responsible tech. If you're interested in how technology interacts with other aspects of society and culture, as well as how it is impacting and being impacted by policymakers, Coldicutt is a vital person to follow.

Kelsey Hightower (@kelseyhightower)

Kelsey Hightower is another superstar in the tech world - when he talks, you need to listen. Hightower currently works at Google Cloud, but he spends a lot of time at conferences evangelizing for more effective cloud native development. https://twitter.com/mattrickard/status/1073285888191258624 If you're interested in anything infrastructure or cloud related, you need to follow Kelsey Hightower.

Who did I miss?

That's just a list of a few people in tech I think you should follow in 2019 - but who did I miss? Which accounts are essential? What podcasts and newsletters should we subscribe to?

Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity

Richard Gall
17 Dec 2018
10 min read
Software infrastructure has, over the last decade or so, become a key concern for developers of all stripes. Long gone are narrowly defined job roles; thanks to DevOps, accountability for code is now shared between teams on both the development and deployment sides. For anyone that's ever been involved in the messy frustration of internal code wars, this has been a welcome change. But as developers who have traditionally sat higher up the software stack dive deeper into the mechanics of deploying and maintaining software, for those of us working in system administration, DevOps, SRE, and security (the list is endless, apologies if I've forgotten you), the rise of distributed systems only brings further challenges.

Increased complexity not only opens up new points of failure and potential vulnerability; at a really basic level it makes understanding what's actually going on difficult. And, essentially, this is what it will mean to work in software delivery and maintenance in 2019. Understanding what's happening, minimizing downtime, taking steps to mitigate security threats - it's a cliche, but finding strategies to become more responsive rather than reactive will be vital. Indeed, many responses to these kinds of questions have emerged this year. Chaos engineering and observability, for example, have both been gaining traction within the SRE world, and are slowly beginning to make an impact beyond that particular job role. But let's take a deeper look at what is really going to matter in the world of software infrastructure and architecture in 2019.

Observability and the rise of the service mesh

Before we decide what to actually do, it's essential to know what's actually going on. That seems obvious, but with increasing architectural complexity, that's getting harder. Observability is a term that's being widely thrown around as a response to this - but it has been met with some cynicism. For some developers, observability is just a sexed up way of talking about good old fashioned monitoring. But although the two concepts have a lot in common, observability is more of an approach, a design pattern maybe, rather than a specific activity.

This post from The New Stack explains the difference between monitoring and observability incredibly well. Observability is "a measure of how well internal states of a system can be inferred from knowledge of its external outputs", which means observability is more a property of a system, rather than an activity.

There are a range of tools available to help you move towards better observability. Application management and logging tools like Splunk, Datadog, New Relic and Honeycomb can all be put to good use and are a good first step towards developing a more observable system - provided your applications emit output those tools can actually work with, as in the sketch below.
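As a small illustration, here is a minimal sketch of the kind of structured, machine-readable logging that makes a system easier to observe with tools like those above. The event and field names are illustrative assumptions, not a prescribed schema.

```python
import json
import logging
import sys
import time

logger = logging.getLogger("checkout")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler(sys.stdout))


def log_event(event, **fields):
    """Emit one JSON object per event so a log pipeline can index every field."""
    logger.info(json.dumps({"event": event, "timestamp": time.time(), **fields}))


log_event("order_placed", order_id="A-1042", latency_ms=87, region="us-east-1")
```

The point isn't the library - it's that observability starts with deliberately exposing the internal state of your system as rich, queryable output.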
Want to learn how to put monitoring tools to work? Check out some of these titles:
- AWS Application Architecture and Management [Video]
- Hands on Microservices Monitoring and Testing
- Software Architecture with Spring 5.0

As well as those tools, if you're working with containers, Kubernetes has some really useful features that can help you more effectively monitor your container deployments. In May, Google announced StackDriver Kubernetes Monitoring, which has seen much popularity across the community.

Master monitoring with Kubernetes. Explore these titles:
- Google Cloud Platform Administration
- Mastering Kubernetes
- Kubernetes in 7 Days [Video]

But there's something else emerging alongside observability which only appears to confirm its importance: the notion of a service mesh. The service mesh is essentially a tool that allows you to monitor all the various facets of your software infrastructure, helping you to manage everything from performance to security to reliability. There are a number of different options when it comes to service meshes - Istio, Linkerd, Conduit and Tetrate being the four definitive tools out there at the moment.

Learn more about service meshes inside these titles:
- Microservices Development Cookbook
- The Ultimate Openshift Bootcamp [Video]
- Cloud Native Application Development with Java EE [Video]

Why is observability important?

Observability is important because it sets the foundations for many aspects of software management and design in various domains. Whether you're an SRE or security engineer, having visibility on the way in which your software is working will be essential in 2019.

Chaos engineering

Observability lays the groundwork for many interesting new developments, chaos engineering being one of them. Based on the principle that modern, distributed software is inherently unreliable, chaos engineering 'stress tests' software systems. Using something called chaos experiments - adding something unexpected into your system, or pulling a piece of it out like a game of Jenga - chaos engineering helps you to better understand the way it will act in various situations. In turn, this allows you to make the necessary changes that can help ensure resiliency.

Chaos engineering is particularly important today simply because so many people, indeed, so many things, depend on software to actually work. From an eCommerce site to a self driving car, if something isn't working properly there could be terrible consequences. It's not hard to see how chaos engineering fits alongside something like observability. To a certain extent, it's really another way of achieving observability. By running chaos experiments, you can draw out issues that may not be visible in usual scenarios.

However, the caveat is that chaos engineering isn't an easy thing to do. It requires a lot of confidence and engineering intelligence. Running experiments shouldn't be done carelessly - in many ways, the word 'chaos' is a bit of a misnomer. All testing and experimentation on your software should follow a rigorous and almost scientific structure.

While chaos engineering isn't straightforward, there are tools and platforms available to make it more manageable. Gremlin is perhaps the best example, offering what they describe as 'resiliency-as-a-service'. But if you're not ready to go in for a fully fledged platform, it's worth looking at open source tools like Chaos Monkey and ChaosToolkit - or even starting with something as simple as the sketch below.
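For illustration only, here is a minimal, Chaos Monkey-style experiment in Python using the Docker SDK: stop one randomly chosen container that has explicitly opted in via a label. The label name is an assumption, and the narrow targeting is the point - a small, controlled blast radius rather than genuine chaos.

```python
import random

import docker

client = docker.from_env()


def stop_random_target(label="chaos.target=true"):
    """Stop one randomly chosen container that has opted in to experiments."""
    targets = client.containers.list(filters={"label": label})
    if not targets:
        return None                      # nothing opted in; the experiment is a no-op
    victim = random.choice(targets)
    victim.stop()
    return victim.name


if __name__ == "__main__":
    stopped = stop_random_target()
    print(f"Stopped: {stopped}. Now watch your dashboards and alerts.")
```

An experiment like this only has value if you state a hypothesis up front (for example, "traffic fails over within 30 seconds") and then observe whether the system actually behaves that way.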
Want to learn how to put the principles of chaos engineering into practice? Check out this title:
- Microservice Patterns and Best Practices

Learn the principles behind resiliency with these SRE titles:
- Real-World SRE
- Practical Site Reliability Engineering

Better integrated security and code testing

Both chaos engineering and observability point towards more testing. And this shouldn't be surprising: testing is to be expected in a world where people are accountable for unpredictable systems. But what's particularly important is how testing is integrated. Whether it's for security or simply performance, we're gradually moving towards a world where testing is part of the build and deploy process, not completely isolated from it.

There is a diverse range of tools that all hint at this move. Archery, for example, is a tool designed for both developers and security testers to better identify and assess security vulnerabilities at various stages of the development lifecycle. With a useful dashboard, it neatly ties into the wider trend of observability. ArchUnit (sounds similar but completely unrelated) is a Java testing library that allows you to test a variety of different architectural components.

Similarly on the testing front, headless browsers continue to dominate. We've seen some of the major browsers bringing out headless versions, which will no doubt delight many developers. Headless browsers allow developers to run front end tests on their code as if it were live and running in the browser. If this sounds a lot like PhantomJS, that's because it is actually quite a bit like PhantomJS. However, headless browsers do make the testing process much faster.

Smarter software purchasing and the move to hybrid cloud

The key trends we've seen in software architecture are about better understanding your software. But this level of insight and understanding doesn't matter if there's no alignment between key decision makers and purchasers. This can manifest itself in various ways. Essentially, it's a symptom of decision makers being disconnected from engineers buried deep in their software. This is by no means a new problem, but with cloud coming to define just about every aspect of software, it's now much easier for confusion to take hold.

The best thing about cloud is also the worst thing - the huge scope of opportunities it opens up. It makes decision making a minefield - which provider should we use? What parts of it do we need? What's going to be most cost effective? Of course, with hybrid cloud, there's a clear way of meeting those issues. But it's by no means a silver bullet. Whatever cloud architecture you have, strong leadership and stakeholder management are essential.

This is something that ThoughtWorks references in its most recent edition of Radar (November 2018). Identifying two trends it calls 'bounded buy' and 'risk commensurate vendor strategy', ThoughtWorks highlights how organizations can find their SaaS of choice shaping their strategy in its own image (bounded buy) or look to outsource business critical applications, functions or services. ThoughtWorks explains: "This trade-off has become apparent as the major cloud providers have expanded their range of service offerings. For example, using AWS Secret Management Service can speed up initial development and has the benefit of ecosystem integration, but it will also add more inertia if you ever need to migrate to a different cloud provider than it would if you had implemented, for example, Vault".

Relatedly, ThoughtWorks also identifies a problem with how organizations manage cost. In the report they discuss what they call 'run cost as architecture fitness function', which is really an elaborate way of saying: make sure you look at how much things cost. So, for example, don't use serverless blindly. While it might look like a cheap option for smaller projects, your costs could quickly spiral and leave you spending more than you would if you ran it on a typical cloud server. The back-of-the-envelope sketch below shows how quickly that can happen.
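Here is a rough, illustrative calculation of that point. The prices are assumptions loosely modelled on typical FaaS pricing at the time of writing - substitute your provider's actual rate card before drawing any conclusions.

```python
def serverless_monthly_cost(invocations, avg_duration_s, memory_gb,
                            per_million_requests=0.20, per_gb_second=0.0000166667):
    """Rough monthly bill for a single function; the default prices are illustrative only."""
    compute = invocations * avg_duration_s * memory_gb * per_gb_second
    requests = (invocations / 1_000_000) * per_million_requests
    return compute + requests


# 50 million invocations a month at 300ms and 1GB comes to roughly $260 -
# already well past, say, a $150/month reserved instance.
print(round(serverless_monthly_cost(50_000_000, 0.3, 1.0), 2))
```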
Get to grips with hybrid cloud:
- Hybrid Cloud for Architects
- Building Hybrid Clouds with Azure Stack

Become an effective software and solutions architect in 2019:
- AWS Certified Solutions Architect - Associate Guide
- Architecting Cloud Computing Solutions
- Hands-On Cloud Solutions with Azure

Software complexity needs are best communicated in a simple language: money

In practice, this takes us all the way back to the beginning - it's simply the financial underbelly of observability. Performance, visibility, resilience - these matter because they directly impact the bottom line. That might sound obvious, but if you're trying to make the case, say, for implementing chaos engineering, or for using any other particular facet of a SaaS offering, communicating to other stakeholders in financial terms can give you buy-in and help to guarantee alignment. If 2019 should be about anything, it's getting closer to this fantasy of alignment. In the end, it will keep everyone happy - engineers and businesses alike.

Kelsey Hightower on Serverless and Security on Kubernetes at KubeCon + CloudNativeCon

Prasad Ramesh
14 Dec 2018
4 min read
In a stream hosted earlier this week by The New Stack, Kelsey Hightower, developer advocate, Google Cloud Platform, talked about the serverless and security aspects of Kubernetes. The stream was from KubeCon + CloudNativeCon 2018.

What are you exploring right now with respect to serverless?

There are many managed services these days. Databases, security and so on are fully managed, i.e., serverless. People have been on this trajectory for a while if you consider DNS, email, and even Salesforce. Now we have serverless since managed services are 'eating that world as well' - that world being the server side world and related workloads.

How are managed services eating the server side world?

If someone has to build and run an API, one approach would be to use Kubernetes: manage the cluster, build the container, run it on Kubernetes and manage that. Even if it is a fully managed cluster, you may still have to manage the things around Kubernetes. Another approach is to deal with a higher form of abstraction. Serverless is often coupled with FaaS (Function as a Service). There are a lot of abstractions in terms of resources, i.e., resources are abstracted more these days. Hightower talks about a test: "If I walk up to a platform and the delta between me and my code is short, you're probably closer to the serverless mindset." This is different from creating a VM, then installing something, configuring something, and then running some code—this is not really serverless.

Serverless in a Kubernetes context

The point of view should be—can we improve the experience on Kubernetes by adopting some things from serverless? You can add a layer that does functions, so developers can stop worrying about containers and focus on the source. The big picture is—who autoscales the whole cluster? Kubernetes plus an additional layer can't really be called serverless, but it is going in that direction. Over time, if you do enough so that people don't have to think about or even know that Kubernetes is there, you're getting closer to being truly serverless.

Security in Kubernetes

Hightower loves the granular controls of serverless technologies.

Comparing the serverless security model to other models

For a long time in the industry, companies have been trying to take a least privilege approach. That is, limiting the access of applications so that each can perform only the specific actions it requires. So if one server is compromised and it does not have access to anything else, then the effects are isolated. The Kubernetes approach can be different. The cloud providers try to make sure that all the credentials needed to do important things are segmented from VMs, cloud functions, App Engine or Kubernetes. Imagine if Kubernetes is where everything lives free. Instead of one machine being taken down, it is now easier for the whole cluster to be taken down in one shot. This is called 'broadening the blast radius'. If you have Kubernetes and you give it keys to everything in your cluster, then everything is compromised when the Kubernetes API is compromised. Having just one cluster trades off on security.

Another approach to serverless security

A different security model is one where you explicitly grant only the credentials that are needed. So there is no scope to ask for any extra credentials; it will not be allowed. You can also go wrong on serverless, but the system is better defined in ways that limit what can be done. It's easier to secure when the attack surface is smaller.
For serverless security, the same engineering principles apply - you just have to apply them to these new platforms. So you need knowledge about what these new platforms are doing. The same principles hold; admins just have a different layer of abstraction to which they may add some additional security. The more people use the system, the more flaws are found. It takes a community to identify flaws and patch them. So as a community matures, dedicated security researchers step up and patch flaws before they can be exploited. To see the complete talk, where Hightower shares his views on what he is working on, go to The New Stack YouTube Channel.

DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support

It's Black Friday: But what's the business (and developer) cost of downtime?

Richard Gall
23 Nov 2018
4 min read
Black Friday is back and, as you've probably already noticed, with a considerable vengeance. According to Adobe Analytics data, online spending is predicted to hit $3.7 billion over this holiday season in the U.S., up from $2.9 billion in 2017. But while consumers clamour for deals and businesses reap the rewards, it's important to remember there's a largely hidden plane of software engineering labour. Without this army of developers, consumers would most likely be hitting their devices in frustration, while business leaders would be missing tough revenue targets - so, as we head into Black Friday, let's pour one out for all those engineers on call, trying their best to keep eCommerce sites on their feet.

Here's to the software engineers keeping things running on Black Friday

Of course, the pain that hits on days like Black Friday and Cyber Monday can be minimised with smart planning and effective decision making long before those sales begin. However, for engineering teams under-resourced and lacking the right tools, that is simply impossible. This means that software engineers are left in a position where they're treading water, knowing that they're going to be sinking once those big days come around. It doesn't have to be like this. With smarter leadership and, indeed, more respect for the intensive work engineers put in to make websites and apps actually work, revenue driving platforms can become more secure, resilient and stable.

Chaos engineering platform Gremlin publishes the 'true cost of downtime'

This is the central argument of chaos engineering platform Gremlin, who we've covered a number of times this year. To coincide with Black Friday, the team has put together what they believe is the 'true cost of downtime'. On the one hand this is a good marketing hook for their chaos engineering platform, but, cynicism aside, it's also a good explanation of why the principles of chaos engineering can be so valuable from both a business and a developer perspective. Estimating the annual revenue of some of the biggest companies in the world, Gremlin has created an interactive table to demonstrate what the cost of downtime for each of those businesses would be, for the length of time you are on the page. For 20 minutes of downtime, Amazon.com would have lost a staggering $4.4 million. For Walgreens it's more than $80,000.

Gremlin provide some context to all this, saying: "Enterprise commerce businesses typically rely on a complex microservices architecture, from fulfillment, to website security, ability to scale with holiday traffic, and payment processing - there is a lot that can go wrong and impact revenue, damage customer trust, and consume engineering time. If an ecommerce site isn't 100% online and performant, it's losing revenue."

"The holiday season is especially demanding for SREs working in ecommerce. Even the most skilled engineering teams can struggle to keep up with the demands of peak holiday traffic (i.e. Black Friday and Cyber Monday). Just going down for a few seconds can mean thousands in lost revenue, but for some sites, downtime can be exponentially more expensive."

For Gremlin, chaos engineering is clearly the answer to many of the problems days like Black Friday pose. While it might not work for every single organization, it's nevertheless true that failing to pay attention to the value of your applications and websites at an hour by hour level could be incredibly damaging.
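To get a feel for the scale of numbers in a table like Gremlin's, here is a deliberately naive back-of-the-envelope estimate in Python. It assumes revenue is spread evenly across the year, which understates the damage on a peak day like Black Friday, and the revenue figure is purely illustrative.

```python
def downtime_cost(annual_revenue, downtime_minutes):
    """Naive estimate: revenue assumed to accrue evenly across every minute of the year."""
    minutes_per_year = 365 * 24 * 60
    return annual_revenue / minutes_per_year * downtime_minutes


# An online business with $120bn in annual revenue loses roughly $4.6m in a 20-minute outage.
print(f"${downtime_cost(120_000_000_000, 20):,.0f}")
```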
With outages on Facebook, WhatsApp, and Instagram happening earlier this week, these problems aren't hidden away - they're in full view of the public. What does remain hidden, however, is the work and stress that goes into tackling these issues and ensuring things are working as they should be. Perhaps it's time to start learning the lessons of Black Friday - business revenues will be that little bit healthier, but engineers will also be that little bit happier.

Observability as code, secrets as a service, and chaos katas: ThoughtWorks outlines key engineering techniques to trial and assess

Richard Gall
14 Nov 2018
5 min read
ThoughtWorks has just published vol. 19 of its essential Radar report. As always, it's a vital insight into what's beginning to emerge in the technology field. In the techniques quadrant of its radar, there were some really interesting new entries. Let's take a look at some of them now, so you can better plan and evaluate your roadmap and skill set for 2019.

8 of the best new techniques you should be trialling (according to ThoughtWorks)

1% canary: a way to build better feedback loops

This sounds like a weird one, but the concept is simple. It's essentially about building a quick feedback loop to a tiny segment of customers - say, 1%. This can allow engineering teams to learn things quickly and make changes to other aspects of the project as it evolves.

Bounded buy: a smarter way to buy out-of-the-box software solutions

Bounded buy mitigates the scope creep that can cause headaches for businesses dealing with out-of-the-box software. It means those responsible for purchasing software focus only on solutions that are modular, with each 'piece' directly connecting into a particular department's needs or workflow.

Crypto shredding: securing sensitive data

Crypto shredding is a method of securing data that might otherwise be easily replicated or copied. Essentially, it encrypts sensitive data and then deliberately deletes or overwrites the encryption keys when that data needs to be destroyed, rendering it unreadable. It adds an extra layer of control over a large data set - a technique that could be particularly useful in a field like healthcare.

Four key metrics - focus on what's most important to build a high performance team

Building a high performance team can be challenging. Accelerate, the team behind the State of DevOps report, highlighted key drivers that engineers and team leaders should focus on: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. According to ThoughtWorks, "each metric creates a virtuous cycle and focuses the teams on continuous improvement."

Observability as code - breaking through the limits of traditional monitoring tools

Observability has emerged as a bit of a buzzword over the last 12 months. But in the context of microservices, and increased complexity in software architecture, it is nevertheless important. However, the means through which you 'do' observability - a range of monitoring tools and dashboards - can be limiting in terms of making adjustments and replicating dashboards. This is why treating observability as code is going to become increasingly important. It makes sense - if infrastructure as code is the dominant way we think about building software, why shouldn't it be the way we monitor it too?

Run cost as architecture fitness function

There's a wide assumption that serverless can save you money. This is true when you're starting out, or want to do something quickly, but it's less true as you scale up. If you're using serverless functions repeatedly, you're likely to be paying a lot - more than if you had a slightly less fashionable cloud or on-premise server. To combat this complacency, you should instead watch how much services cost against the benefit delivered by them. Seems obvious, but easy to miss if you've just got excited about going serverless.

Secrets as a service

Without wishing to dampen what sounds incredibly cool, secrets as a service are ultimately just elaborate password managers. They can help organizations more easily decouple credentials and API keys from their source code, a move which should ensure improved security - and simplicity. By using credential rotation, organizations can be much better prepared to tackle and mitigate any security issues. AWS has 'Secrets Manager' while HashiCorp's Vault offers similar functionality; a minimal sketch of what this looks like from application code is below.
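As a small, hedged illustration, here is roughly what fetching a credential from AWS Secrets Manager looks like with boto3. The secret name is a made-up example; Vault usage would look different but follows the same idea of keeping secrets out of source code.

```python
import boto3


def get_database_password(secret_id="prod/payments/db-password"):
    """Fetch a secret at runtime instead of baking it into code or config files."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return response["SecretString"]
```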
Security chaos engineering

In the last edition of Radar, security chaos engineering was in the assess phase - which means ThoughtWorks thought it was worth looking at, but maybe too early to deploy. With volume 19, security chaos engineering has moved into trial. Clearly, while chaos engineering more broadly has seen slower adoption, it would seem that over the last 12 months the security field has taken chaos engineering to heart.

2 new software engineering techniques to assess

Chaos katas

If chaos engineering is finding it hard to gain mainstream adoption, perhaps chaos katas are the way forward. This is essentially a technique that helps engineers deploy chaos practices in their respective domains using the training approach known as kata - a Japanese word that simply refers to a set of choreographed movements. In this context, the 'katas' are a set of code patterns that implement failures in a structured way, which engineers can then identify and explore. This is essentially a bottom-up way of doing chaos engineering that also gives engineers a deeper insight into their software infrastructure.

Infrastructure configuration scanner

The question of who should manage your infrastructure is still a tricky one, with plenty of conflicting perspectives. However, from a productivity and agility perspective, putting the infrastructure in the hands of engineers makes a lot of sense. Of course, this could feel like an extra burden - but with an infrastructure configuration scanner, like Scout2 or Watchmen, engineers can ensure that everything is configured correctly.

Software engineering techniques need to maintain simplicity as complexity increases

There's clearly a diverse range of techniques on the ThoughtWorks Radar. Ultimately, however, the picture that emerges is one where efficiency and observability are key. A crucial part of software engineering will be managing increased complexity and developing new tools and processes to instil some degree of simplicity and clarity. Was there anything ThoughtWorks missed?

Chaos Conf 2018 Recap: Chaos engineering hits maturity as community moves towards controlled experimentation

Richard Gall
12 Oct 2018
11 min read
Conferences can sometimes be confusing. Even at the most professional and well-planned conferences, you sometimes just take a minute and think what's actually the point of this? Am I learning anything? Am I meant to be networking? Will anyone notice if I steal extra food for the journey home?

Chaos Conf 2018 was different, however. It had a clear purpose: to take the first step in properly forging a chaos engineering community. After almost a decade somewhat hidden in the corners of particularly innovative teams at Netflix and Amazon, chaos engineering might feel that its time has come. As software infrastructure becomes more complex and less monolithic, and as business and consumer demands expect more of the software systems that have become integral to the very functioning of life, resiliency has never been more important - or more challenging to achieve.

But while it feels like the right time for chaos engineering, it hasn't quite established itself in the mainstream. This is something the conference host, Gremlin, a platform that offers chaos engineering as a service, is acutely aware of. On the one hand it's actively helping push chaos engineering into the hands of businesses, but on the other its growth and success, backed by millions in VC cash (and faith), depends upon chaos engineering becoming a mainstream discipline in the DevOps and SRE worlds.

It's perhaps for this reason that the conference felt so important. It was, according to Gremlin, the first ever public chaos engineering conference. And while it was relatively small in the grand scheme of many of today's festival-esque conferences attended by thousands of delegates (Dreamforce, the Salesforce conference, was also running in San Francisco in the same week), the fact that the conference had quickly sold out all 350 of its tickets - with more hopefuls on waiting lists - indicates that this was an event that had been eagerly awaited. And with some big names from the industry - notably Adrian Cockcroft from AWS and Jessie Frazelle from Microsoft - Chaos Conf had the air of an event that had outgrown its insider status before it had even begun. The renovated cinema and bar in San Francisco's Mission District, complete with pinball machines upstairs, was the perfect container for a passionate community that had grown out of the clean corporate environs of Silicon Valley to embrace the chaotic mess that resembles modern software engineering.

Kolton Andrus sets out a vision for the future of Gremlin and chaos engineering

Chaos Conf was quick to deliver big news. The keynote speech, by Gremlin co-founder Kolton Andrus, launched Gremlin's brand new Application Level Fault Injection (ALFI) feature, which makes it possible to run chaos experiments at an application level. Andrus broke the news by building towards it with a story of the progression of chaos engineering. Starting with Chaos Monkey, the tool first developed by Netflix, and moving from infrastructure to network, he showed how, as chaos engineering has evolved, it requires and facilitates different levels of control and insight into how your software works.

"As we're maturing, the host level failures and the network level failures are necessary to building a robust and resilient system, but not sufficient. We need more - we need a finer granularity," Andrus explains. This is where ALFI comes in. By allowing Gremlin users to inject failure at an application level, it gives them more control over the 'blast radius' of their chaos experiments.
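To give a flavour of what application-level fault injection means - and this is purely an illustrative sketch, not Gremlin's ALFI API - here is a Python decorator that fails a configurable fraction of calls to a single code path, keeping the blast radius to one function rather than a whole host.

```python
import functools
import random


class ChaosInjected(Exception):
    """Raised instead of doing the real work when an experiment fires."""


def inject_failure(rate=0.01):
    """Fail roughly `rate` of calls to the wrapped function."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < rate:
                raise ChaosInjected(f"chaos experiment triggered in {fn.__name__}")
            return fn(*args, **kwargs)
        return wrapper
    return decorator


@inject_failure(rate=0.05)
def fetch_recommendations(user_id):
    # The real downstream call would go here; the experiment tests how callers cope when it fails.
    return ["item-1", "item-2"]
```

Scoping failure this narrowly is what makes it safe to experiment on systems people actually depend on.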
The narrative Andrus was setting out was clear, and would ultimately inform the ethos of the day - chaos engineering isn't just about chaos, it's about controlled experimentation to ensure resiliency. To do that requires a level of intelligence - technical and organizational - about how the various components of your software work, and how humans interact with them.

Adrian Cockcroft on the importance of historical context and domain knowledge

Adrian Cockcroft (@adrianco), VP at AWS, followed Andrus' talk. In it he took the opportunity to set the broader context of chaos engineering, highlighting how tackling system failures is often a question of culture - how we approach system failure and think about our software. "Developers love to learn things from first principles," he said. "But some historical context and domain knowledge can help illuminate the path and obstacles." If this sounds like Cockcroft was about to stray into theoretical territory, he certainly didn't. He offered plenty of practical frameworks - notably a taxonomy of failure that helps you think through potential failure at every level.

He also touched on how he sees the future of resiliency evolving, focusing on:

- Observability of systems
- Epidemic failure modes
- Automation and continuous chaos

The crucial point Cockcroft makes is that cloud is the big driver for chaos engineering. "As datacenters migrate to the cloud, fragile and manual disaster recovery will be replaced by chaos engineering," read one of his slides. But more than that, the cloud also paves the way for the future of the discipline, one where 'chaos' is simply an automated part of the test and deployment pipeline.

Selling chaos engineering to your boss

Kriss Rochefolle, DevOps engineer and author of one of the best selling DevOps books in French, delivered a short talk on how engineers can sell chaos to their boss. He challenges the assumption that a rational proposal, informed by ROI, is the best way to sell chaos engineering. He suggests instead that engineers need to play into emotions, presenting chaos engineering as a method for tackling and minimizing the fear of (inevitable) failure. Follow Kriss on Twitter: @crochefolle

Walmart and chaos engineering

Vilas Veraraghavan, Walmart's Director of Engineering, was keen to clarify that Walmart doesn't practice chaos. Rather it practices resiliency - chaos engineering is simply a method the organization uses to achieve that. It was particularly useful to note the entire process that Vilas' team adopts when it comes to resiliency, which has largely developed out of Vilas' own work building his team from scratch. You can learn more about how Walmart is using chaos engineering for software resiliency in this post.

Twitter's Ronnie Chen on diving and planning for failure

Ronnie Chen (@rondoftw) is an engineering manager at Twitter. But she didn't talk about Twitter. In fact, she didn't even talk about engineering. Instead she spoke about her experience as a technical diver. By talking about her experiences, Ronnie was able to make a number of vital points about how to manage and tackle failure as a team. With mortality rates so high in diving, it's a good example of the relationship between complexity and risk. Chen made the point that things don't fail because of a single catalyst. Instead, failures - particularly fatal ones - happen because of a 'failure cascade'.
Chen never makes the link explicit, but the comparison is clear - the ultimate outcome (i.e. success or failure) is impacted by a whole range of situational and behavioral factors that we can't afford to ignore. Chen also made the point that, in diving, inexperienced people should be at the front of an expedition. "If your inexperienced people are leading, they're learning and growing, and being able to operate with a safety net... when you do this, all kinds of hidden dependencies reveal themselves... every undocumented assumption, every piece of ancient team lore that you didn't even know you were relying on, comes to light."

Charity Majors on the importance of observability

Charity Majors (@mipsytipsy), CEO of Honeycomb, talked in detail about the key differences between monitoring and observability. As with other talks, context was important: a world where architectural complexity has grown rapidly in the space of a decade. Majors made the point that this increase in complexity has taken us from having known unknowns in our architectures to many more unknown unknowns in a distributed system. This means that monitoring is dead - it simply isn't sophisticated enough to deal with the complexities and dependencies within a distributed system. Observability, meanwhile, allows you to understand "what's happening in your systems just by observing it from the outside." Put simply, it lets you understand how your software is functioning from your perspective - almost turning it inside out.

Majors then linked the concept of observability to the broader philosophy of chaos engineering - echoing some of the points raised by Adrian Cockcroft in his talk. But this was her key takeaway: "Software engineers spend too much time looking at code in elaborately falsified environments, and not enough time observing it in the real world." This leads to one conclusion - the importance of testing in production. "Accept no substitute."

Tammy Butow and Ana Medina on making an impact

Tammy Butow (@tammybutow) and Ana Medina (@Ana_M_Medina) from Gremlin took us through how to put chaos engineering into practice - from integrating it into your organizational culture to some practical tests you can run. One of the best examples of putting chaos into practice is Gremlin's concept of 'Failure Fridays', in which chaos testing becomes a valuable step in the product development process - a way to dogfood the product and test out how a customer experiences it. Another way Tammy and Ana suggested chaos engineering can be used is as a way of testing out new versions of technologies before you properly upgrade in production. To end their talk, they demo'd a chaos battle between EKS (Kubernetes on AWS) and AKS (Kubernetes on Azure), running an app container attack, a packet loss attack and a region failover attack.

Jessie Frazelle on how containers can empower experimentation

Jessie Frazelle (@jessfraz) didn't actually talk that much about chaos engineering. However, like Ronnie Chen's talk, chaos engineering seeped through what she said about bugs and containers. Bugs, for Frazelle, are a way of exploring how things work, and how different parts of a software infrastructure interact with each other: "Bugs are like my favorite thing... some people really hate when they get one of those bugs that turns out to be a rabbit hole and you're kind of debugging it until the end of time... while debugging those bugs I hate them but afterwards, I'm like, that was crazy!"
This was essentially an endorsement of the core concept of chaos engineering - injecting bugs into your software to understand how it reacts. Jessie then went on to talk about containers, joking that they're NOT REAL. This is because they're made up of numerous different component parts, like cgroups, namespaces, and LSMs. She contrasted containers with virtual machines, zones and jails, which are 'first class concepts' - in other words, real things (Jessie wrote about this in detail last year in this blog post). In practice what this means is that whereas containers are like Lego pieces, VMs, zones, and jails are like a pre-assembled Lego set that you don't need to play around with in the same way. From this perspective, it's easy to see how containers are relevant to chaos engineering - they empower a level of experimentation that you simply don't have with other virtualization technologies. "The box says to build the Death Star. But you can build whatever you want."

The chaos ends...

Chaos Conf was undoubtedly a huge success, and a lot of credit has to go to Gremlin for organizing the conference. It's clear that the team care a lot about the chaos engineering community and want it to expand in a way that transcends the success of the Gremlin platform. While chaos engineering might not feel relevant to a lot of people at the moment, it's only a matter of time before its impact is felt. That doesn't mean that everyone will suddenly become a chaos engineer by July 2019, but the cultural ripples will likely be felt across the software engineering landscape. Without Chaos Conf, though, it would be difficult to see chaos engineering growing as a discipline or set of practices. By sharing ideas and learning how other people work, a more coherent picture of chaos engineering started to emerge - one that can quickly make an impact in ways people wouldn't have expected six months ago.

You can watch videos of all the talks from Chaos Conf 2018 on YouTube.