
How-To Tutorials - Cloud & Networking

768 Articles

Mastering PromQL: A Comprehensive Guide to Prometheus Query Language

Rob Chapman, Peter Holmes
07 Nov 2024
15 min read
This article is an excerpt from the book "Observability with Grafana", by Rob Chapman and Peter Holmes. This book provides a holistic understanding of observability concepts using the Grafana Labs tools, teaching you how to fully leverage the LGTM stack.

Introduction

PromQL, or Prometheus Query Language, is a powerful tool designed to work with Prometheus, an open-source systems monitoring and alerting toolkit. Initially developed by SoundCloud in 2012 and later accepted by the Cloud Native Computing Foundation in 2016, Prometheus has become a crucial component of modern infrastructure monitoring. PromQL allows users to query data stored in Prometheus, enabling the creation of insightful dashboards and setting up alerts based on the performance metrics of applications and systems. This article will explore the core functionalities of PromQL, including how it interacts with metrics data and how it can be used to effectively monitor and analyze system performance.

Introducing PromQL

Prometheus was initially developed by SoundCloud in 2012; the project was accepted by the Cloud Native Computing Foundation in 2016 as the second incubated project (after Kubernetes), and version 1.0 was released shortly after. PromQL is an integral part of Prometheus, which is used to query stored data and produce dashboards and alerts.

Before we delve into the details of the language, let's briefly look at the following ways in which Prometheus-compatible systems interact with metrics data:

Ingesting metrics: Prometheus-compatible systems accept a timestamp, key-value labels, and a sample value. As the details of the Prometheus Time Series Database (TSDB) are quite complicated, the following diagram shows a simplified example of how an individual sample for a metric is stored once it has been ingested:

Figure 5.1 – A simplified view of metric data stored in the TSDB

The labels or dimensions of a metric: Prometheus labels provide metadata to identify data of interest. These labels create metrics, time series, and samples:

* Each unique __name__ value creates a metric. In the preceding figure, the metric is app_frontend_requests.
* Each unique set of labels creates a time series. In the preceding figure, the set of all labels is the time series.
* A time series will contain multiple samples, each with a unique timestamp. The preceding figure shows a single sample, but over time, multiple samples will be collected for each time series.
* The number of unique values for a metric label is referred to as the cardinality of the label. Highly cardinal labels should be avoided, as they significantly increase the storage costs of the metric.

The following diagram shows a single metric containing two time series and five samples:

Figure 5.2 – An example of samples from multiple time series

In Grafana, we can see a representation of the time series and samples from a metric. To do this, follow these steps:

1. In your Grafana instance, select Explore in the menu.
2. Choose your Prometheus data source, which will be labeled as grafanacloud-<team>prom (default).
3. In the Metric dropdown, choose app_frontend_requests_total, and under Options, set Format to Table, and then click on Run query. This will show you all the samples and time series in the metric over the selected time range. You should see data like this:

Figure 5.3 – Visualizing the samples and time series that make up a metric
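As a rough mental model of the structure described above (not how the TSDB is actually implemented), the following short Python sketch groups a few hypothetical samples into time series keyed by their full label set; the label values are invented for illustration, echoing the app_frontend_requests example.

from collections import defaultdict

# Each ingested sample is a label set (including __name__), a timestamp, and a value.
samples = [
    ({"__name__": "app_frontend_requests", "method": "GET", "code": "200"}, 1700000000, 10),
    ({"__name__": "app_frontend_requests", "method": "GET", "code": "200"}, 1700000015, 12),
    ({"__name__": "app_frontend_requests", "method": "POST", "code": "500"}, 1700000015, 3),
]

# Each unique label set identifies one time series; a series collects (timestamp, value) samples.
series = defaultdict(list)
for labels, timestamp, value in samples:
    series[tuple(sorted(labels.items()))].append((timestamp, value))

print(f"1 metric, {len(series)} time series, {sum(len(s) for s in series.values())} samples")
# 1 metric, 2 time series, 3 samples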
Now that we understand the data structure, let's explore PromQL.

An overview of PromQL features

In this section, we will take you through the features that PromQL has. We will start with an explanation of the data types, and then we will look at how to select data, how to work on multiple datasets, and how to use functions. As PromQL is a query language, it's important to know how to manipulate data to produce alerts and dashboards.

Data types

PromQL offers three data types, which are important, as the functions and operators in PromQL will work differently depending on the data types presented:

Instant vectors are a data type that stores a set of time series containing a single sample, all sharing the same timestamp – that is, it presents values at a specific instant in time:

Figure 5.4 – An instant vector

Range vectors store a set of time series, each containing a range of samples with different timestamps:

Figure 5.5 – Range vectors

Scalars are simple numeric values, with no labels or timestamps involved.

Selecting data

PromQL offers several tools for you to select data to show in a dashboard or a list, or just to understand a system's state. Some of these are described in the following table:

Table 5.1 – The selection operators available in PromQL

In addition to the operators that allow us to select data, PromQL offers a selection of operators to compare multiple sets of data.

Operators between two datasets

Some data is easily provided by a single metric, while other useful information needs to be created from multiple metrics. The following operators allow you to combine datasets:

Table 5.2 – The comparison operators available in PromQL

Vector matching is an initially confusing topic; to clarify it, let's consider examples for the three cases of vector matching – one-to-one, one-to-many/many-to-one, and many-to-many.

By default, when combining vectors, all label names and values are matched. This means that for each element of the vector, the operator will try to find a single matching element from the second vector. Let's consider a simple example:

Vector A:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}

Vector B:
19{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} + B{}:
29{color=blue,smell=ocean}
39{color=red,smell=cinnamon}

A{} + on (color) B{} or A{} + ignoring (smell) B{}:
29{color=blue}
39{color=red}
41{color=green}

When color=blue and smell=ocean, A{} + B{} gives 10 + 19 = 29, and when color=red and smell=cinnamon, A{} + B{} gives 31 + 8 = 39. The other elements do not match between the two vectors, so they are ignored. When we sum the vectors using on (color), we only match on the color label; so now, the two green elements match and are summed.
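The one-to-one matching semantics described above can also be sketched in a few lines of Python. This is only an illustration, not how Prometheus implements matching; the vectors reuse the hypothetical color and smell labels from the example.

# Hypothetical instant vectors, mirroring the example above: label values -> sample value.
vector_a = {("blue", "ocean"): 10, ("red", "cinnamon"): 31, ("green", "grass"): 27}
vector_b = {("blue", "ocean"): 19, ("red", "cinnamon"): 8, ("green", "jungle"): 14}

# A{} + B{}: match on the full label set; unmatched elements are dropped.
full_match = {k: vector_a[k] + vector_b[k] for k in vector_a if k in vector_b}

# A{} + on (color) B{}: match on the color label only.
b_by_color = {color: value for (color, _smell), value in vector_b.items()}
on_color = {color: value + b_by_color[color]
            for (color, _smell), value in vector_a.items() if color in b_by_color}

print(full_match)  # {('blue', 'ocean'): 29, ('red', 'cinnamon'): 39}
print(on_color)    # {'blue': 29, 'red': 39, 'green': 41}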
This example works when there is a one-to-one relationship of labels between vector A and vector B. However, sometimes there may be a many-to-one or one-to-many relationship – that is, vector A or vector B may have more than one element that matches the other vector. In these cases, Prometheus will give an error, and grouping syntax must be used. Let's look at another example to illustrate this:

Vector A:
7{color=blue,smell=ocean}
5{color=red,smell=cinnamon}
2{color=blue,smell=powder}

Vector B:
20{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} + on (color) group_left B{}:
27{color=blue,smell=ocean}
13{color=red,smell=cinnamon}
22{color=blue,smell=powder}

Now, we have two different elements in vector A with color=blue. The group_left modifier will use the labels from vector A but only match on color. This leads to the third element of the combined vector having a value of 22, even though the matching item in vector B has a different smell. The group_right modifier behaves in the opposite direction.

The final option is a many-to-many vector match. These matches use the logical operators and, unless, and or to combine parts of vectors A and B. Let's see some examples:

Vector A:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}

Vector B:
19{color=blue,smell=ocean}
8{color=red,smell=cinnamon}
14{color=green,smell=jungle}

A{} and B{}:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}

A{} unless B{}:
27{color=green,smell=grass}

A{} or B{}:
10{color=blue,smell=ocean}
31{color=red,smell=cinnamon}
27{color=green,smell=grass}
14{color=green,smell=jungle}

Unlike the previous examples, mathematical operators are not being used here, so the values of the elements are the values from vector A, but only the elements of A that match the logical condition in B are returned.

Conclusion

PromQL is an essential component of Prometheus, offering users a flexible and powerful means of querying and analyzing time-series data. By understanding its data types and operators, users can craft complex queries that provide deep insights into system performance. The language supports a variety of data selection and comparison operations, allowing for precise monitoring and alerting. Whether working with instant vectors, range vectors, or scalars, PromQL enables developers and operators to optimize their use of Prometheus for monitoring and alerting, ensuring systems remain performant and reliable. As organizations continue to embrace cloud-native architectures, mastering PromQL becomes increasingly vital for maintaining robust and efficient systems.

Author Bio

Rob Chapman is a creative IT engineer and founder at The Melt Cafe, with two decades of experience in the full application life cycle. Working over the years for companies such as the Environment Agency, BT Global Services, Microsoft, and Grafana, Rob has built a wealth of experience on large complex systems. More than anything, Rob loves saving energy, time, and money and has a track record for bringing production-related concerns forward so that they are addressed earlier in the development cycle, when they are cheaper and easier to solve. In his spare time, Rob is a Scout leader, and he enjoys hiking, climbing, and, most of all, spending time with his family and six children.

Peter Holmes is a senior engineer with a deep interest in digital systems and how to use them to solve problems. With over 16 years of experience, he has worked in various roles in operations. Working at organizations such as Boots UK, Fujitsu Services, Anaplan, Thomson Reuters, and the NHS, he has experience in complex transformational projects, site reliability engineering, platform engineering, and leadership. Peter has a history of taking time to understand the customer and ensuring Day-2+ operations are as smooth and cost-effective as possible.

Automating OCR and Translation with Google Cloud Functions: A Step-by-Step Guide

Agnieszka Koziorowska, Wojciech Marusiak
05 Nov 2024
15 min read
This article is an excerpt from the book "Google Cloud Associate Cloud Engineer Certification and Implementation Guide", by Agnieszka Koziorowska and Wojciech Marusiak. This book serves as a guide for students preparing for ACE certification, offering invaluable practical knowledge and hands-on experience in implementing various Google Cloud Platform services. By actively engaging with the content, you'll gain the confidence and expertise needed to excel in your certification journey.

Introduction

In this article, we will walk you through an example of implementing Google Cloud Functions for optical character recognition (OCR) on Google Cloud Platform. This tutorial will demonstrate how to automate the process of extracting text from an image, translating the text, and storing the results using Cloud Functions, Pub/Sub, and Cloud Storage. By leveraging the Google Cloud Vision and Translation APIs, we can create a workflow that efficiently handles image processing and text translation. The article provides detailed steps to set up and deploy Cloud Functions using Golang, covering everything from creating storage buckets to deploying and running your function to translate text.

Google Cloud Functions Example

Now that you've learned what Cloud Functions is, I'd like to show you how to implement a sample Cloud Function. We will guide you through optical character recognition (OCR) on Google Cloud Platform with Cloud Functions. Our use case is as follows:

1. An image with text is uploaded to Cloud Storage.
2. A triggered Cloud Function utilizes the Google Cloud Vision API to extract the text and identify the source language.
3. The text is queued for translation by publishing a message to a Pub/Sub topic.
4. A Cloud Function employs the Translation API to translate the text and stores the result in the translation queue.
5. Another Cloud Function saves the translated text from the translation queue to Cloud Storage.
6. The translated results are available in Cloud Storage as individual text files for each translation.

We need to download the samples first; we will use Golang as the programming language. Source files can be downloaded from https://github.com/GoogleCloudPlatform/golang-samples. Before working with the OCR function sample, we recommend enabling the Cloud Translation API and the Cloud Vision API. If they are not enabled, your function will throw errors, and the process will not be completed.

Let's start with deploying the function:

1. We need to create a Cloud Storage bucket. Create your own bucket with a unique name; please refer to the documentation on bucket naming at https://cloud.google.com/storage/docs/buckets. We will use the following code: gsutil mb gs://wojciech_image_ocr_bucket
2. We also need to create a second bucket to store the results: gsutil mb gs://wojciech_image_ocr_bucket_results
3. We must create a Pub/Sub topic to publish the finished translation results. We can do so with the following code: gcloud pubsub topics create YOUR_TOPIC_NAME. We used the following command to create it: gcloud pubsub topics create wojciech_translate_topic
4. Creating a second Pub/Sub topic to publish translation results is necessary. We can use the following code to do so: gcloud pubsub topics create wojciech_translate_topic_results
5. Next, we will clone the Google Cloud GitHub repository with the Go sample code: git clone https://github.com/GoogleCloudPlatform/golang-samples
6. From the repository, we need to go to the golang-samples/functions/ocr/app/ folder to be able to deploy the desired Cloud Function.
7. We recommend reviewing the included Go files to review the code and understand it in more detail. Please change the values of your storage buckets and Pub/Sub topic names.
8. We will deploy the first function to process images. We will use the following command:
gcloud functions deploy ocr-extract-go --runtime go119 --trigger-bucket wojciech_image_ocr_bucket --entry-point ProcessImage --set-env-vars "^:^GCP_PROJECT=wmarusiak-book-351718:TRANSLATE_TOPIC=wojciech_translate_topic:RESULT_TOPIC=wojciech_translate_topic_results:TO_LANG=es,en,fr,ja"
9. After deploying the first Cloud Function, we must deploy the second one to translate the text. We can use the following code snippet:
gcloud functions deploy ocr-translate-go --runtime go119 --trigger-topic wojciech_translate_topic --entry-point TranslateText --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_TOPIC=wojciech_translate_topic_results"
10. The last part of the complete solution is a third Cloud Function that saves results to Cloud Storage. We will use the following snippet of code to do so:
gcloud functions deploy ocr-save-go --runtime go119 --trigger-topic wojciech_translate_topic_results --entry-point SaveResult --set-env-vars "GCP_PROJECT=wmarusiak-book-351718,RESULT_BUCKET=wojciech_image_ocr_bucket_results"
11. We are now free to upload any image containing text. It will be processed first, then translated and saved into our Cloud Storage bucket.
12. We uploaded four sample images that we downloaded from the Internet that contain some text. We can see many entries in the ocr-extract-go Cloud Function's logs. Some Cloud Function log entries show us the detected language in the image and the extracted text:

Figure 7.22 – Cloud Function logs from the ocr-extract-go function

13. ocr-translate-go translates the text detected by the previous function:

Figure 7.23 – Cloud Function logs from the ocr-translate-go function

14. Finally, ocr-save-go saves the translated text into the Cloud Storage bucket:

Figure 7.24 – Cloud Function logs from the ocr-save-go function

15. If we go to the Cloud Storage bucket, we'll see the saved translated files:

Figure 7.25 – Translated images saved in the Cloud Storage bucket

16. We can view the content directly from the Cloud Storage bucket by clicking Download next to the file, as shown in the following screenshot:

Figure 7.26 – Translated text from Polish to English stored in the Cloud Storage bucket

Cloud Functions is a powerful and fast way to code, deploy, and use advanced features. We encourage you to try out and deploy Cloud Functions to understand the process of using them better. At the time of writing, the Google Cloud Free Tier offers a generous number of free resources we can use. Cloud Functions offers the following with its free tier:

* 2 million invocations per month (this includes both background and HTTP invocations)
* 400,000 GB-seconds and 200,000 GHz-seconds of compute time
* 5 GB of network egress per month

Google Cloud has comprehensive tutorials that you can try to deploy. Go to https://cloud.google.com/functions/docs/tutorials to follow one.
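The sample functions in the repository are written in Go, but the first stage of the pipeline is easy to picture in outline. Purely as a conceptual illustration of what the deployed ocr-extract-go function does (detect text in an uploaded image, then queue it for translation), here is a hedged Python sketch using the Cloud Vision and Pub/Sub client libraries; the bucket, topic, and project names are the ones used above, the target language is an arbitrary example, and the repository's actual Go code is structured differently.

import json
from google.cloud import pubsub_v1, vision

def process_image(bucket: str, filename: str) -> None:
    # Detect text in the uploaded image via the Cloud Vision API.
    client = vision.ImageAnnotatorClient()
    image = vision.Image(source=vision.ImageSource(image_uri=f"gs://{bucket}/{filename}"))
    response = client.text_detection(image=image)
    if not response.text_annotations:
        return  # nothing to translate
    text = response.text_annotations[0].description

    # Queue the extracted text for translation by publishing to the Pub/Sub topic.
    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("wmarusiak-book-351718", "wojciech_translate_topic")
    message = {"text": text, "filename": filename, "lang": "es"}  # "es" is a hypothetical target
    publisher.publish(topic_path, json.dumps(message).encode("utf-8")).result()

The translate and save functions follow the same pattern, each triggered by the Pub/Sub topic the previous step publishes to.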
Conclusion

In conclusion, Google Cloud Functions offer a powerful and scalable solution for automating tasks like optical character recognition and translation. Through this example, we have demonstrated how to use Cloud Functions, Pub/Sub, and the Google Cloud Vision and Translation APIs to build an end-to-end OCR and translation pipeline. By following the provided steps and code snippets, you can easily replicate this process for your own use cases. Google Cloud's generous Free Tier resources make it accessible to get started with Cloud Functions. We encourage you to explore more by deploying your own Cloud Functions and leveraging the full potential of Google Cloud Platform for serverless computing.

Author Bio

Agnieszka is an experienced Systems Engineer who has been in the IT industry for 15 years. She is dedicated to supporting enterprise customers in the EMEA region with their transition to the cloud and hybrid cloud infrastructure by designing and architecting solutions that meet both business and technical requirements. Agnieszka is highly skilled in AWS, Google Cloud, and VMware solutions and holds certifications as a specialist in all three platforms. She strongly believes in the importance of knowledge sharing and learning from others to keep up with the ever-changing IT industry.

With over 16 years in the IT industry, Wojciech is a seasoned and innovative IT professional with a proven track record of success. Leveraging extensive work experience in large and complex enterprise environments, Wojciech brings valuable knowledge to help customers and businesses achieve their goals with precision, professionalism, and cost-effectiveness. Holding leading certifications from AWS, Alibaba Cloud, Google Cloud, VMware, and Microsoft, Wojciech is dedicated to continuous learning and sharing knowledge, staying abreast of the latest industry trends and developments.

Mastering Prometheus Sharding: Boost Scalability with Efficient Data Management

William Hegedus
28 Oct 2024
15 min read
This article is an excerpt from the book Mastering Prometheus, by William Hegedus. Become a Prometheus master with this guide that takes you from the fundamentals to advanced deployment in no time. Equipped with practical knowledge of Prometheus and its ecosystem, you'll learn when, why, and how to scale it to meet your needs.

Introduction

In this article, readers will dive into techniques for optimizing Prometheus, a powerful open-source monitoring tool, by implementing sharding. As data volumes increase, so do the challenges associated with high cardinality, often resulting in strained single-instance setups. Instead of purging data to reduce load, sharding offers a viable solution by distributing scrape jobs across multiple Prometheus instances. This article explores two primary sharding methods: by service, which segments data by use case or team, and by dynamic relabeling, which provides a more flexible, albeit complex, approach to distributing data. By examining each method's setup and trade-offs, the article offers practical insights for scaling Prometheus while maintaining efficient access to critical metrics across instances.

Sharding Prometheus

Chances are that if you're looking to improve your Prometheus architecture through sharding, you're hitting one of the limitations we talked about, and it's probably cardinality. You have a Prometheus instance that's just got too much data in it, but… you don't want to get rid of any data. So, the logical answer is… run another Prometheus instance! When you split data across Prometheus instances like this, it's referred to as sharding. If you're familiar with other database designs, it probably isn't sharding in the traditional sense. As previously established, Prometheus TSDBs do not talk to each other, so it's not as if they're coordinating to shard data across instances. Instead, you predetermine where data will be placed by how you configure the scrape jobs on each instance. So, it's more like sharding scrape jobs than sharding the data. There are two main ways to accomplish this: sharding by service and sharding via relabeling.

Sharding by service

This is arguably the simpler of the two ways to shard data across your Prometheus instances. Essentially, you just separate your Prometheus instances by use case. This could be a Prometheus instance per team, where you have multiple Prometheus instances and each one covers services owned by a specific team so that each team still has a centralized location to see most of the data they care about. Or, you could arbitrarily shard it by some other criteria, such as one Prometheus instance for virtualized infrastructure, one for bare metal, and one for containerized infrastructure. Regardless of the criteria, the idea is that you segment your Prometheus instances based on use case so that there is at least some unification and consistency in which Prometheus gets which scrape targets. This makes it at least a little easier for other engineers and developers to reason about where the metrics they care about are located. From there, it's fairly self-explanatory to get set up. It only entails setting up your scrape jobs in different locations. So, let's take a look at the other, slightly more involved way of sharding your Prometheus instances.

Sharding with relabeling

Sharding via relabeling is a much more dynamic way of handling the sharding of your Prometheus scrape targets. However, it does have some tradeoffs.
The biggest one is the added complexity of not necessarily knowing which Prometheus instance your scrape targets will end up on. As opposed to the sharding by service/team/domain example we already discussed, sharding via relabeling does not shard scrape jobs in a way that is predictable to users. Now, just because sharding is unpredictable to humans does not mean that it is not deterministic. It is consistent, but just not in a way that makes it clear to users which Prometheus they need to go to to find the metrics they want to see. There are ways to work around this with tools such as Thanos (which we'll discuss later in this book) or federation (which we'll discuss later in this chapter).

The key to sharding via relabeling is the hashmod function, which is available during relabeling in Prometheus. The hashmod function works by taking a list of one or more source labels, concatenating them, producing an MD5 hash of it, and then applying a modulus to it. Then, you store the output of that, and in your next step of relabeling, you keep or drop targets that have a specific hashmod value output.

What's relabeling again? For a refresher on relabeling in Prometheus, consult Chapter 4's section on it. For this chapter, the type of relabeling we're doing is standard relabeling (as opposed to metric relabeling) – it happens before a scrape occurs.

Let's look at an example of how this works logically before diving into implementing it in our kube-prometheus stack. We'll just use the Python REPL to keep it quick:

>>> from hashlib import md5
>>> SEPARATOR = ";"
>>> MOD = 2
>>> targetA = ["app=nginx", "instance=node2"]
>>> targetB = ["app=nginx", "instance=node23"]
>>> hashA = int(md5(SEPARATOR.join(targetA).encode("utf-8")).hexdigest(), 16)
>>> hashA
286540756315414729800303363796300532374
>>> hashB = int(md5(SEPARATOR.join(targetB).encode("utf-8")).hexdigest(), 16)
>>> hashB
139861250730998106692854767707986305935
>>> print(f"{targetA} % {MOD} = ", hashA % MOD)
['app=nginx', 'instance=node2'] % 2 = 0
>>> print(f"{targetB} % {MOD} = ", hashB % MOD)
['app=nginx', 'instance=node23'] % 2 = 1

As you can see, the hash of the app and instance labels has a modulus of 2 applied to it. For node2, the result is 0. For node23, the result is 1. Since the modulus is 2, those are the only possible values. Therefore, if we had two Prometheus instances, we would configure one to only keep targets where the result is 0, and the other would only keep targets where the result is 1 – that's how we would shard our scrape jobs. The modulus value that you choose should generally correspond to the number of Prometheus instances that you wish to shard your scrape jobs across.

Let's look at how we can accomplish this type of sharding across two Prometheus instances using kube-prometheus. Luckily for us, kube-prometheus has built-in support for sharding Prometheus instances using relabeling by way of support via the Prometheus Operator. It's a built-in option on Prometheus CRD objects. Enabling it is as simple as updating our prometheusSpec in our Helm values to specify the number of shards. Additionally, we'll need to clean up the names of our Prometheus instances; otherwise, Kubernetes won't allow the new Pod to start due to character constraints. We can tell kube-prometheus to stop including kube-prometheus in the names of our resources, which will shorten the names. To do this, we'll set cleanPrometheusOperatorObjectNames: true.
The new values being added to our Helm values file from Chapter 2 look like this:

prometheus:
  prometheusSpec:
    shards: 2

cleanPrometheusOperatorObjectNames: true

The full values file is available in this GitHub repository, which was linked at the beginning of this chapter. With that out of the way, we can apply these new values to get an additional Prometheus instance running to shard our scrape jobs across the two. The helm command to accomplish this is as follows:

$ helm upgrade --namespace prometheus \
    --version 47.0.0 \
    --values ch6/values.yaml \
    mastering-prometheus \
    prometheus-community/kube-prometheus-stack

Once that command completes, you should see a new pod named prometheus-mastering-prometheus-kube-shard-1-0 in the output of kubectl get pods.

Now, we can see the relabeling that's taking place behind the scenes so that we can understand how it works and how to implement it in Prometheus instances not running via the Prometheus Operator. Port-forward to either of the two Prometheus instances (I chose the new one) and we can examine the configuration in our browsers at http://localhost:9090/config:

$ kubectl port-forward \
    pod/prometheus-mastering-prometheus-kube-shard-1-0 \
    9090

The relevant section we're looking for is the sequential parts of relabel_configs, where hashmod is applied and then a keep action is applied based on the output of hashmod and the shard number of the Prometheus instance. It should look like this:

relabel_configs:
[ . . . ]
- source_labels: [__address__]
  separator: ;
  regex: (.*)
  modulus: 2
  target_label: __tmp_hash
  replacement: $1
  action: hashmod
- source_labels: [__tmp_hash]
  separator: ;
  regex: "1"
  replacement: $1
  action: keep

As we can see, for each scrape job, a modulus of 2 is taken from the hash of the __address__ label, and its result is stored in a new label called __tmp_hash. You can store the result in whatever you want to name your label – there's nothing special about __tmp_hash. Additionally, you can choose any one or more source labels you wish – it doesn't have to be __address__. However, it's recommended that you choose labels that will be unique per target – so instance and __address__ tend to be your best options.

After calculating the modulus of the hash, the next step is the crucial one that determines which scrape targets the Prometheus shard will scrape. It takes the value of the __tmp_hash label and matches it against its shard number (shard numbers start at 0), and keeps only targets that match. The Prometheus Operator does the heavy lifting of automatically applying these two relabeling steps to all configured scrape jobs, but if you're managing your own Prometheus configuration directly, then you will need to add them to every scrape job that you want to shard across Prometheus instances – there is currently no way to do it globally.

It's worth mentioning that sharding in this way does not guarantee that your scrape jobs are going to be evenly spread out across your number of shards. We can port-forward to the other Prometheus instance and run a quick PromQL query to easily see that they're not evenly distributed across my two shards. I'll port-forward to port 9091 on my local host so that I can open both instances simultaneously:

$ kubectl port-forward \
    pod/prometheus-mastering-prometheus-kube-0 \
    9091:9090

Then, we can run this simple query to see how many scrape targets are assigned to each Prometheus instance: count(up)

In my setup, there are eight scrape targets on shard 0 and 16 on shard 1.
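To tie this configuration back to the earlier Python REPL experiment, the keep decision that each shard makes can be sketched as follows. This is only an illustration with hypothetical target addresses; the real relabeling happens inside Prometheus itself.

from hashlib import md5

def shard_for(address: str, modulus: int = 2) -> int:
    # Mirrors the hashmod relabeling step: MD5 of the source label value, then a modulus.
    return int(md5(address.encode("utf-8")).hexdigest(), 16) % modulus

targets = ["10.0.0.1:9100", "10.0.0.2:9100", "10.0.0.3:8080"]  # hypothetical __address__ values
for shard in (0, 1):
    kept = [t for t in targets if shard_for(t) == shard]  # each shard's keep action
    print(f"shard {shard} scrapes: {kept}")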
You can attempt to micro-optimize scrape target sharding by including more unique labels in the source_label values for the hashmod operation, but it may not be worth the effort – as you add more unique scrape targets, they'll begin to even out.

One of the practical pain points you may have noticed already with sharding is that it's honestly kind of a pain to have to navigate to multiple Prometheus instances to run queries. One of the ways we can try to make this easier is through federating our Prometheus instances.

Conclusion

In conclusion, sharding Prometheus is an effective way to manage the challenges posed by data volume and cardinality in your system. Whether you opt for sharding by service or through dynamic relabeling, both approaches offer ways to distribute scrape jobs across multiple Prometheus instances. While sharding via relabeling introduces more complexity, it also provides flexibility and scalability. However, it is important to consider the trade-offs, such as uneven distribution of scrape jobs and the need for tools like Thanos or federation to simplify querying across instances. By applying these strategies, you can ensure a more efficient and scalable Prometheus architecture.

Author Bio

Will Hegedus has worked in tech for over a decade in a variety of roles, most recently in Site Reliability Engineering. After becoming the first SRE at Linode, an independent cloud provider, he came to Akamai Technologies by way of an acquisition.

Now, Will manages a team of SREs focused on building an internal observability platform for Akamai's Connected Cloud. His team's responsibilities include managing a global fleet of Prometheus servers ingesting millions of data points every second.

Will is an open-source advocate with contributions to Prometheus, Thanos, and other CNCF projects related to Kubernetes and observability. He lives in central Virginia with his wonderful wife, 4 kids, 3 cats, 2 dogs, and a bearded dragon.

Rookout and AppDynamics team up to help enterprise engineering teams debug at speed with Deep Code Insights

Richard Gall
20 Feb 2020
3 min read
It's not acknowledged enough that the real headache when it comes to software faults and performance problems isn't so much the problems themselves, but instead the process of actually identifying those problems. Sure, problems might slow you down, but wading through your application code to actually understand what's happened can sometimes grind engineering teams to a halt. For enterprise engineering teams, this can be particularly fatal. Agility is hard enough when you're dealing with complex applications and the burden of legacy software; but when things go wrong, any notion of velocity can be summarily discarded to the trashcan.

However, a new partnership between debugging platform Rookout and APM company AppDynamics, announced at AppDynamics' Transform 2020 event, might just change that. The two organizations have teamed up, with Rookout's impressive debugging capabilities now available to AppDynamics customers in the form of a new product called Deep Code Insights.

Figure – Live debugging of an application in production in Deep Code Insights

What is Deep Code Insights?

Deep Code Insights is a new product for AppDynamics customers that combines the live-code debugging capabilities offered by Rookout with AppDynamics' APM platform. The advantage for developers could be substantial. Jerrie Pineda, Enterprise Software Architect at Maverik, says that "Rookout helps me get the debugging data I need in seconds instead of waiting for several hours." This means, he explains, "[Maverik's] mean time to resolution (MTTR) for most issues is slashed up to 80%."

What does Deep Code Insights mean for AppDynamics?

For AppDynamics, Deep Code Insights allows the organization to go one step further in its mission to "make it easier for businesses to understand their own software." At least that's how AppDynamics' VP of corporate development and strategy, Kevin Wagner, puts it. "Together [with Rookout], we are narrowing the gaps between indicating a code-related problem impacting performance, pinpointing the direct issue within the line of code, and deploying a solution quickly for a seamless customer experience," he says.

What does Deep Code Insights mean for Rookout?

For Rookout, meanwhile, the partnership with AppDynamics is a great way for the company to reach out to a wider audience of users working at large enterprise organizations. The company received $8,000,000 in Series A funding back in August. This has provided a solid platform on which it is clearly looking to build and grow. Rookout's Co-Founder and CEO Or Weis describes the partnership as "obvious." "We want to bring the next-gen developer workflow to enterprise customers and help them increase product velocity," he says.

Learn more about Rookout: www.rookout.com
Learn more about AppDynamics: www.appdynamics.com

5 reasons why you should use an open-source data analytics stack in 2020

Amey Varangaonkar
28 Jan 2020
7 min read
Today, almost every company is trying to be data-driven in some sense or the other. Businesses across all the major verticals such as healthcare, telecommunications, banking, insurance, retail, education, etc. make use of data to better understand their customers, optimize their business processes and, ultimately, maximize their profits. This is a guest post sponsored by our friends at RudderStack.

When it comes to using data for analytics, companies face two major challenges:

Data tracking: Tracking the required data from a multitude of sources in order to get insights out of it. As an example, tracking customer activity data such as logins, signups, purchases, and even clicks such as bookmarks from platforms such as mobile apps and websites becomes an issue for many eCommerce businesses.

Building a link between the data and business intelligence: Once data is acquired, transforming it and making it compatible for a BI tool can often prove to be a substantial challenge.

A well-designed data analytics stack is essential in combating these challenges. It will ensure you're well-placed to use the data at your disposal in more intelligent ways. It will help you drive more value.

What does a data analytics stack do?

A data analytics stack is a combination of tools which, when put together, allows you to bring together all of your data in one platform, and use it to get actionable insights that help in better decision-making. As the diagram above illustrates, a data analytics stack is built upon three fundamental steps:

Data Integration: This step involves collecting and blending data from multiple sources and transforming them into a compatible format for storage. The sources could be as varied as a database (e.g. MySQL), an organization's log files, or event data such as clicks, logins, bookmarks, etc. from mobile apps or websites. A data analytics stack allows you to use all of such data together and use it to perform meaningful analytics.

Data Warehousing: This next step involves storing the data for the purpose of analytics. As the complexity of data grows, it is feasible to consolidate all the data in a single data warehouse. Some of the popular modern data warehouses include Amazon's Redshift, Google BigQuery and platforms such as Snowflake and MarkLogic.

Data Analytics: In this final step, we use a visualization tool to load the data from the warehouse and use it to extract meaningful insights and patterns from the data, in the form of charts, graphs and reports.

Choosing a data analytics stack – proprietary or open-source?

When it comes to choosing a data analytics stack, businesses are often left with two choices: buy it or build it. On one hand, there are proprietary tools such as Google Analytics, Amplitude, Mixpanel, etc., where the vendors alone are responsible for their configuration and management to suit your needs. With the best-in-class features and services that come along with the tools, your primary focus can just be project management, rather than technology management. While using proprietary tools has its advantages, there are also some major cons to them that revolve mainly around cost, data sharing, privacy concerns, and more. As a result, businesses today are increasingly exploring the open-source alternatives to build their data analytics stack.

The advantages of open source analytics tools

Let's now look at the 5 main advantages that open-source tools have over these proprietary tools.
Open source analytics tools are cost effective

Proprietary analytics products can cost hundreds of thousands of dollars beyond their free tier. For small to medium-sized businesses, the return on investment does not often justify these costs. Open-source tools are free to use, and even their enterprise versions are reasonably priced compared to their proprietary counterparts. So, with lower up-front costs, reasonable expenses for training, maintenance and support, and no cost for licensing, open-source analytics tools are much more affordable. More importantly, they're better value for money.

Open source analytics tools provide flexibility

Proprietary SaaS analytics products will invariably set restrictions on the ways in which they can be used. This is especially the case with the trial or the lite versions of the tools, which are free. For example, full SQL is not supported by some tools. This makes it hard to combine and query external data alongside internal data. You'll also often find that warehouse dumps provide no support either. And when they do, they'll probably cost more and still have limited functionality. Data dumps from Google Analytics, for instance, can only be loaded into Google BigQuery. Also, these dumps are time-delayed. That means the loading process can be very slow. With open-source software, you get complete flexibility: from the way you use your tools, how you combine them to build your stack, and even how you use your data. If your requirements change - which, let's face it, they probably will - you can make the necessary changes without paying extra for customized solutions.

Avoid vendor lock-in

Vendor lock-in, also known as proprietary lock-in, is essentially a state where a customer becomes completely dependent on the vendor for their products and services. The customer is unable to switch to another vendor without paying a significant switching cost. Some organizations spend a considerable amount of money on proprietary tools and services that they heavily rely on. If these tools aren't updated and properly maintained, the organization using them is putting itself at a real competitive disadvantage. This is almost never the case with open-source tools. Constant innovation and change is the norm. Even if the individual or the organization handling the tool moves on, the community can take over the project and maintain it. With open-source, you can rest assured that your tools will always be up-to-date without heavy reliance on anyone.

Improved data security and privacy

Privacy has become a talking point in many data-related discussions of late. This is thanks, in part, to data protection laws such as the GDPR and CCPA coming into force. High-profile data leaks have also kept the issue high on the agenda. An open-source analytics stack running inside your cloud or on-prem environment gives complete control of your data. This lets you decide which data is to be used when, and how. It lets you dictate how third parties can access and use your data, if at all.

Open-source is the present

It's hard to counter the fact that open-source is now mainstream. Companies like Microsoft, Apple, and IBM are now not only actively participating in the open-source community, they're also contributing to it. Open-source puts you on the front foot when it comes to innovation. With it, you'll be able to leverage the power of a vibrant developer community to develop better products in more efficient ways.
How RudderStack helps you build an ideal open-source data analytics stack

RudderStack is a completely open-source, enterprise-ready platform to simplify data management in the most secure and reliable way. It works as a perfect data integration platform by routing your event data from data sources such as websites, mobile apps and servers, to multiple destinations of your choice - thus helping you save time and effort. RudderStack integrates effortlessly with a multitude of destinations such as Google Analytics, Amplitude, MixPanel, Salesforce, HubSpot, Facebook Ads, and more, as well as popular data warehouses such as Amazon Redshift or S3. If performing efficient clickstream analytics is your goal, RudderStack offers you the perfect data pipeline to collect and route your data securely.

Learn more about RudderStack by visiting the RudderStack website, or check out its GitHub page to find out how it works.

10 tech startups for 2020 that will help the world build more resilient, secure, and observable software

Richard Gall
30 Dec 2019
10 min read
The Datadog IPO in September marked an important moment for the tech industry. This wasn't just because the company was the fourth tech startup to reach a $10 billion market cap in 2019, but also because it announced something that many people, particularly those in and around Silicon Valley, have been aware of for some time: the most valuable software products in the world aren't just those that offer speed and efficiency, they're those that provide visibility and security across our software systems. It shouldn't come as a surprise. As software infrastructure becomes more complex, constantly shifting and changing according to the needs of users and businesses, the ability to assume some degree of control emerges as particularly precious.

Indeed, the idea of control and stability might feel at odds with a decade or so that has prized innovation at speed. The mantra 'move fast and break things' is arguably one of the defining ones of the last decade. And while that lust for change might never disappear, it's nevertheless the case that we're starting to see a mindset shift in how business leaders think about technology. If everyone really is a tech company now, there's now a growing acceptance that software needs to be treated with more respect and care.

The Datadog IPO, then, is just the tip of an iceberg in which monitoring, observability, security, and resiliency tools have started to capture the imagination of technology leaders. While what follows is far from exhaustive, it does underline some of the key players in a growing field. Whether you're an investor or technology decision maker, here are ten tech startups you should watch out for in 2020 from across the cloud and DevOps space.

Honeycomb

Honeycomb has been at the center of the growing conversation around observability. Designed to help you "own production in hi-res," what makes it unique in the market is that it allows you to understand and visualize your systems through high-cardinality dimensions (e.g. at a user-by-user level, rather than, say, browser type or continent). The driving force behind Honeycomb is Charity Majors, its co-founder and former CEO. I was lucky enough to speak to her at the start of the year, and it was clear that she has an acute understanding of the challenges facing engineering teams. What was particularly striking in our conversation is how she sees Honeycomb as a tool for empowering developers. It gives them ownership over the code they write and the systems they build. "Ownership gives you the power to fix the thing you know you need to fix and the power to do a good job…" she told me. "People who find ownership is something to be avoided – that's a terrible sign of a toxic culture."

Honeycomb's investment status

At the time of writing, Honeycomb has received $26.9 million in funding, with an $11.4 million series A back in September.

FireHydrant

"You just got paged. Now what?" That's the first line that greets you on the FireHydrant website. We think it sums up many of the companies on this list pretty well; many of the best tools in the DevOps space are designed to help tackle the challenges on-call developers face. FireHydrant isn't a tech startup with the profile of Honeycomb. However, as an incident management tool that integrates very neatly into a massive range of workflow tools, we're likely to see it gain traction in 2020.
We particularly like the one-click post mortem feature - it's clear the product has been built in a way that allows developers to focus on the hard stuff and minimize the things that can just suck up time.

FireHydrant's investment status

FireHydrant has raised $1.5 million in seed funding.

NS1

Managing application traffic can be business-critical. That's why NS1 exists; with DNS, DHCP and IP address management capabilities, it's arguably one of the leading tools on the planet for dealing with the diverse and extensive challenges that come with managing massive amounts of traffic across complex interlocking software applications and systems. The company boasts an impressive roster of clients, including DropBox, The Guardian and LinkedIn, which makes it hard to bet against NS1 going from strength to strength in 2020. Like all software adoption, it might take some time to move beyond the realms of the largest and most technically forward-thinking organizations, but it's surely only a matter of time until the importance of smarter and more efficient traffic management becomes clear to even the smallest businesses.

NS1's investment status

NS1 has raised an impressive $78.4 million in funding from investors (although it's important to note that it's one of the oldest companies on this list, founded all the way back in 2013). It received $33 million in series C funding at the beginning of October.

Rookout

"It's time to liberate your data," Rookout implores us. For too long, the startup's argument goes, data has been buried inside our applications where it's useless for developers and engineers. Once it has been freed, it can help inform how we go about debugging and monitoring our systems. Designed to work for modern architectural and deployment patterns such as Kubernetes and serverless, Rookout is a tool that not only brings simplicity in the midst of complexity, it can also save engineering teams a serious amount of time when it comes to debugging and logging - the company claims by 80%. Like FireHydrant, this means engineers can focus on other areas of application performance and resilience.

Rookout's investment status

Back in August, Rookout raised $8 million in Series A funding, taking its total funding amount to $12.2 million.

LaunchDarkly

Feature flags or toggles are a concept that has started to gain traction in engineering teams in the last couple of years or so. They allow engineering teams to "modify system behavior without changing code" (thank you, Martin Fowler). LaunchDarkly is a platform specifically built to allow engineers to use feature flags. At a fundamental level, the product allows DevOps teams to deploy code (i.e. change features) quickly and with minimal risk. This allows for testing in production and experimentation on a large scale. With support for just about every programming language, it's not surprising to see LaunchDarkly boast a wealth of global enterprises on its list of customers. This includes IBM and NBC.

LaunchDarkly's investment status

LaunchDarkly raised $44 million in series C funding early in 2019. To date, it has raised $76.3 million. It's certainly one to watch closely in 2020; its ability to help teams walk the delicate line between innovation and instability is well-suited to the reality of engineering today.

Gremlin

Gremlin is a chaos engineering platform designed to help engineers to 'stress test' their software systems. This is important in today's technology landscape.
With system complexity making unpredictability a day-to-day reality, Gremlin lets you identify weaknesses before they impact customers and revenue. Gremlin's mission is to "help build a more reliable internet." That's not just a noble aim, it's an urgent one too. What's more, you can see that the business is really living out its mission. With Gremlin Free launching at the start of 2019, and the second ChaosConf taking place in the fall, it's clear that the company is thinking beyond the core product: they want to make chaos engineering more accessible to a world where resilience can feel impossible in the face of increasing complexity.

Gremlin's investment status

Since being founded back in 2016 by CTO Matt Fornaciari and CEO Kolton Andrus, Gremlin has raised $26.8 million in funding from Redpoint Ventures, Index Ventures, and Amplify Partners.

Cockroach Labs

Cockroach Labs is the organization behind CockroachDB, the cloud-native distributed SQL database. CockroachDB's popularity comes from two things: its ability to scale from a single instance to thousands, and its impressive resilience. Indeed, its resilience is where it takes its name from. Like a cockroach, CockroachDB is built to keep going even after everything else has burned to the ground. It's been an interesting year for Cockroach Labs and CockroachDB - in June the company changed the CockroachDB core licence from the open source Apache license to the Business Source License (BSL), developed by the MariaDB team. The reason for this was ultimately to protect the product as it seeks to grow. The BSL still means the source code is accessible for any use other than for a DBaaS (you'll need an enterprise license for that). A few months later, the company took another step in pushing forward in the market with $55 million series C funding. Both stories were evidence that Cockroach Labs is setting itself up for a big 2020. Although the database market will always be seriously competitive, with resilience as a core USP it's hard to bet against Cockroach Labs. Cockroaches find a way, right?

Cockroach Labs' investment status

Cockroach Labs' total investment, following on from that impressive round of series C funding, is now $108.5 million.

Logz.io

Logz.io is another platform in the observability space that you really need to watch out for in 2020. Built on the ELK stack (Elasticsearch, Logstash, and Kibana), what makes Logz.io really stand out is the use of machine learning to help identify issues across thousands and thousands of logs. Logz.io has been on 'ones to watch' lists for a number of years now. This was, we think, largely down to the rising wave of AI hype. And while we wouldn't want to underplay its machine learning capabilities, it's perhaps with the increasing awareness of the need for more observable software systems that we'll see it really pack a punch across the tech industry.

Logz.io's investment status

To date, Logz.io has raised $98.9 million.

FaunaDB

Fauna is the organization behind FaunaDB. It describes itself as "a global serverless database that gives you ubiquitous, low latency access to app data, without sacrificing data correctness and scale." The database could be big in 2020. With serverless likely to go from strength to strength, and JAMstack increasing as a dominant approach for web developers, everything the Fauna team has been doing looks as though it will be a great fit for the shape of the engineering landscape in the future.
Fauna's investment status

In total, Fauna has raised $32.6 million in funding from investors.

Clubhouse

One thing that gets overlooked when talking about DevOps and other issues in software development processes is simple project management. That's why Clubhouse is such a welcome entry on this list. Of course, there is a massive range of project management tools available at the moment. But one of the reasons Clubhouse is such an interesting product is that it's very deliberately built with engineers in mind. And more importantly, it appears it's been built with an acute sense of the importance of enjoyment in a project management product.

Clubhouse's investment status

Clubhouse has, to date, raised $16 million. As we see a continuing emphasis on developer experience, the tool is definitely one to watch in a tough marketplace.

Conclusion: embrace the unpredictable

The tech industry feels as unpredictable as the software systems we're building and managing. But while there will undoubtedly be some surprises in 2020, the need for greater security and resilience are themes that no one should overlook. Similarly, the need to gain more transparency and build for observability is critical. Whether you're an investor, business leader, or even an engineer, exploring the products that are shaping and defining the space is vital.

Beyond Kubernetes: Key skills for infrastructure and ops engineers in 2020

Richard Gall
20 Dec 2019
5 min read
For systems engineers and those working in operations, the move to cloud and the rise of containers in recent years has drastically changed working practices and even the nature of job roles. But that doesn't mean you can just learn Kubernetes and then rest on your laurels. To a certain extent, the broad industry changes we've seen haven't stabilised into some sort of consensus but rather created a field where change is only more likely - and where things are arguably even less stable. This isn't to say that you have anything to fear as an engineer. But you should be open minded about the skills you learn in 2020. Here's a list of 5 skills you should consider spending some time developing in the new year.

Scripting and scripting languages

Scripting is a well-established part of many engineers' skill set. The reasons for this are obvious: scripts allow you to automate tasks and get things done quickly. If you don't know scripting, then of course you should learn it. But even if you do, it's worth thinking about exploring some new programming languages. You might find that a fresh approach - like learning, for example, Go if you mainly use Python - will make you more productive, or will help you to tackle problems with greater ease than you have in the past.

Learn Linux shell scripting with Learn Linux Shell Scripting: the Fundamentals of Bash 4.4. Find out how to script with Python in Mastering Python Scripting for System Administrators.

Infrastructure automation tools and platforms

With the rise of hybrid and multi-cloud, infrastructure automation platforms like Ansible and Puppet have been growing more and more important to many companies. While Kubernetes has perhaps dented their position in the wider DevOps tooling marketplace (if, indeed, that's such a thing), they nevertheless remain relevant in a world where managing complexity appears to be a key engineering theme. With Puppet looking to continually evolve and Ansible retaining a strong position on the market, they remain two of the most important platforms to explore and learn. However, there are a wealth of other options too - Terraform in particular appears to be growing at an alarming pace even if it hasn't reached critical mass, but Salt and Chef are also well worth learning too.

Get started with Ansible, fast - learn with Ansible Quick Start Guide.

Cloud architecture and design

Gone are the days when cloud was just a rented server. Gone are the days when it offered a simple (or relatively simple, at least) solution to storage and compute problems. With trends like multi and hybrid cloud becoming the norm, and serverless starting to gain traction at the cutting edge of software development, being able to piece together various different elements is absolutely crucial. Indeed, this isn't a straightforward skill that you can just learn with some documentation and training materials. Of course those help, but it also requires sensitivity to business needs, an awareness of how developers work, as well as an eye for financial management. However, if you can develop the broad range of skills needed to architect cloud solutions, you will be a very valuable asset to a business.

Become a certified cloud architect with Packt's new Professional Cloud Architect – Google Cloud Certification Guide.

Security and resilience

With the increase in architectural complexity, the ability to ensure security and resilience is now both vital and incredibly challenging.
Fortunately, there are many different tools and techniques available for doing this, each one relevant to different job roles - from service meshes to monitoring platforms, to chaos engineering, there are many ways that engineers can take on stability and security challenges head on. Whatever platforms you’re working with, make it your mission to learn what you need to improve the security and resilience of your systems. Learn how to automate cloud security with Cloud Security Automation. Pushing DevOps forward No one wants to hear about the growth of DevOps - we get that. It’s been growing for almost a decade now; it certainly doesn’t need to be treated to another wave of platitudes as the year ends. So, instead of telling you to simply embrace DevOps, a smarter thing to do would be to think about how you can do DevOps better. What do your development teams need in terms of support? And how could they help you? In theory the divide between dev and ops should now be well and truly broken - the question that remains is that how should things evolve once that silo has been broken down? Okay, so maybe this isn’t necessarily a single skill you can learn. But it’s something that starts with conversation - so make sure you and those around you are having better conversations in 2020. Search the latest DevOps eBooks and videos on the Packt store.

New for 2020 in operations and infrastructure engineering

Richard Gall
19 Dec 2019
5 min read
It's an exciting time if you work in operations and software infrastructure. Indeed, you could even say that as the pace of change and innovation increases, your role only becomes more important. Operations and systems engineers, solution architects, everyone - your jobs are all about bringing stability, order and control into what can sometimes feel like chaos. As anyone that's been working in the industry knows, managing change, from a personal perspective, requires a lot of effort. To keep on top of what's happening in the industry - what tools are being released and updated, what approaches are gaining traction - you need to have one eye on the future and the wider industry. To help you with that challenge and get you ready for 2020, we've put together a list of what's new for 2020 - and what you should start learning.

Learn how to make Kubernetes work for you

It goes without saying that Kubernetes was huge in 2019. But there are plenty of murmurs and grumblings that it's too complicated and adds an additional burden for engineering and operations teams. To a certain extent there's some truth in this - and arguably now would be a good time to accept that just because it seems like everyone is using Kubernetes, it doesn't mean it's the right solution for you. However, having said that, 2020 will be all about understanding how to make Kubernetes relevant to you. This doesn't mean you should just drop the way you work and start using Kubernetes, but it does mean that spending some time with the platform and getting a better sense of how it could be used in the future is a useful way to spend your learning time in 2020. Explore Packt's extensive range of Kubernetes eBooks and videos on the Packt store.

Learn how to architect

If software has eaten the world, then by the same token perhaps complexity has well and truly eaten software as we know it. Indeed, Kubernetes is arguably just one of the symptoms and causes of this complexity. Another is the growing demand for architects in engineering and IT teams. There are a number of different 'architecture' job roles circulating across the industry, from solutions architect to application architect. While they each have their own subtle differences, and will even vary from company to company, they're all roles that are about organizing and managing different pieces into something that is both stable and value-driving. Cloud has been particularly instrumental in making architect roles more prominent in the industry. As organizations look to resist the pitfalls of lock-in and better manage resources (financial and otherwise), it will be down to architects to balance business and technology concerns carefully. Learn how to architect cloud native applications. Read Architecting Cloud Computing Solutions. Get to grips with everything you need to know to be a software architect. Pick up Software Architect's Handbook.

Artificial intelligence

It's strange that the hype around AI doesn't seem to have reached the world of ops. Perhaps this is because the area is more resistant to the spin that comes with AI, preferring instead to focus on the technical capabilities of tools and platforms. Whatever the case, it's nevertheless true that AI will play an important part in how we manage and secure infrastructure. From monitoring system health, to automating infrastructure deployments and configuration, and even identifying security threats, artificial intelligence is already an important component for operations engineers and others. Indeed, artificial intelligence is being embedded inside the products and platforms that ops teams are using - this means the need to 'learn' artificial intelligence is somewhat reduced. But it would be wrong to think it's something that can just be managed from a dashboard. In 2020 it will be essential to better understand where and how artificial intelligence can fit into your operations and architectural toolchain. Find artificial intelligence eBooks and videos in Packt's collection of curated data science bundles.

Observability, monitoring, tracing, and logging

One of the challenges of software complexity is understanding exactly what's going on under the hood. Yes, the network might be unreliable, as the saying goes, but what makes things even worse is that we're not even sure why. This is where observability and the next generation of monitoring, logging and tracing all come into play. Having detailed insights into how applications and infrastructures are performing, how resources are being managed, and what things are actually causing problems is vitally important from a team perspective. Without the ability to understand these things, it can put pressure on teams as knowledge becomes siloed inside the brains of specific engineers. It makes you vulnerable to failure as you start to have points of failure at a personnel level. There are, of course, a wide range of tools and products available that can make monitoring and tracing easy (or easier, at least). But understanding which ones are right for your needs still requires some time learning and exploring the options out there. Make sure you do exactly that in 2020. Learn how to monitor distributed systems with Learn Centralized Logging and Monitoring with Kubernetes.

Making serverless a reality

We've talked about serverless a lot this year. But as a concept there's still considerable confusion about what role it should play in modern DevOps processes. Indeed, even the nomenclature is a little confusing. Platforms using their own terminology, such as 'lambdas' and 'functions', only add to the sense that serverless is something amorphous and hard to pin down. So, in 2020, we need to work out how to make serverless work for us. Just as we need to consider how Kubernetes might be relevant to our needs, we need to consider in what ways serverless represents both a technical and business opportunity. Search Packt's library for the latest serverless eBooks and videos. Explore more technology eBooks and videos on the Packt store.

Operations and infrastructure engineering in 2019: what really mattered

Richard Gall
18 Dec 2019
6 min read
Everything is unreliable, right? If we didn't realise it before, 2019 was the year when we fully had to accept the reality of the systems we're building and managing. That was scary, sure, but it was also liberating. But we shouldn't get carried away: given how highly distributed software systems are now part and parcel of a range of different industries, the issue of reliability and resilience isn't purely academic - in many instances, it's urgent and critical. That makes the work of managing and building software infrastructure an incredibly vital role. Back in 2015 I wrote that Docker had turned us all into SysAdmins, but on reflection it may be more accurate to say that we've now entered a world where cloud and the infrastructure-as-code revolution have turned everyone into a software developer.

Kubernetes is everywhere

Kubernetes is arguably the definitive technology of 2019. With the move to containers now fully mainstream, Kubernetes is integral to helping engineers deploy and manage containers at scale. The other important element to Kubernetes is that it all but kills off dreaded infrastructure lock-in. It gives you the freedom to build across different environments, and inside a more heterogeneous software infrastructure. From a tooling and skill set perspective that's a massive win. Although conversations about flexibility and agility have been ongoing in the tech industry for years, with Kubernetes we are finally getting to a place where that's a reality. This isn't to say it's all plain sailing - Kubernetes' complexity is a point of complaint for many, with many people suggesting that, compared to, say, Docker, the developer experience leaves a lot to be desired. But insofar as DevOps and cloud-native have almost become the norm for many engineering teams, Kubernetes casts a huge shadow. Indeed, even if it's not the right option for you right now, it's hard to escape the fact that understanding it, and being open to using it in the future, is crucial. Find an extensive range of Kubernetes content in our new cloud bundles.

Serverless and NoOps

This year serverless has really come into its own. Although it was certainly gaining traction in 2018, the last 12 months have demonstrated its value as more and more teams have been opting to forgo servers completely. There have been a few arguments about whether serverless is going to kill off containers. It's not hard to see where this comes from, but in reality there's no chance that this is going to happen. The way to think of serverless is to see it as an additional option that can be used when speed and agility are particularly important. For large-scale application development and deployment, containers running on 'traditional' cloud servers will remain the dominant architectural approach. The companion trend to serverless is NoOps. Given the level of automation and abstraction that serverless can give you, the need to configure environments to ensure code runs properly all but disappears - code runs through 'functions' that get fired when needed. So, the thinking goes, the need for operations becomes very small indeed. But before anyone starts worrying about their jobs, the death of operations is greatly exaggerated. As noted above, serverless is just one option - it's not redefining the architectural landscape. It might mean that the way we understand 'ops' evolves (just as 'dev' has), but it certainly won't kill it off. Discover and search serverless eBooks and videos on the Packt store.

Chaos engineering

In the introduction I mentioned that one of the strange quandaries of our contemporary distributed software world is that we've essentially made things more unreliable at a time when software systems are being used in ever more critical applications. From healthcare to self-driving cars, we're entering a world where unreliability is both more common and potentially more damaging. This is where chaos engineering comes in. Although it first appeared on the ThoughtWorks Radar back in November 2017 and hasn't yet moved out of 'Trial', in reality chaos engineering has been manifesting itself in a whole host of ways in 2019. Indeed, it's possible that the term itself is misleading. While it suggests a wholesale methodology, in truth the core principles behind it - essentially stress-testing your software in order to manage unpredictability and improve resilience - are being applied in different ways for both testing and security purposes. Tools like Gremlin have done a lot to promote chaos engineering and make it more accessible to organizations that maybe wouldn't see themselves as having the resources to perform cutting-edge approaches. It appears the groundwork has been done, which means it will be interesting to see how it evolves in 2020.

Observability: service meshes and tracing

One of the biggest challenges when dealing with complex software systems - and one of the reasons why they are necessarily unreliable - is that it can be difficult (sometimes impossible) to get an understanding of what's actually going on. This is why the debate around observability and monitoring has moved on. It's no longer enough to have a set of discrete logs and metrics. Chances are that they won't capture the subtleties of what's happening, or won't be able to provide you with context that helps you to actually understand where errors are coming from. What's more, a lack of observability and the wrong monitoring setup can cause all sorts of issues inside a team. At a time when the role of the on-call developer has never been more discussed and, indeed, important, ensuring there's a level of transparency is the only way to guarantee that all developers are able to support each other and solve problems as they emerge. From this perspective, then, observability has a cultural impact as much as a technical one. Learn distributed tracing with Yuri Shkuro from Uber's observability engineering team: find Mastering Distributed Tracing on the Packt store. Not sure what to learn for 2020? Start exploring thousands of tech eBooks and videos on the Packt store.

Was 2019 the year the world caught the Kubernetes fever?

Guest Contributor
17 Dec 2019
8 min read
In the current IT landscape, phrases such as "containerized applications" and "container deployment" are thrown around so often that the meanings and connotations behind them often get blurred, and ultimately forgotten. In the case of Kubernetes, however, the opposite seems to be coming true. Although it might seem hyperbolic to describe modern software management as being shaped by an "Age of Kubernetes", the accelerating growth of Kubernetes into one of the most widely adopted open-source projects - with over 2,300 active contributors to its repository on GitHub - bears witness to the massive influence the orchestration platform has had. Originally developed by Google and launched in 2014, Kubernetes has come a really long way since its advent. Although there are other, similar container orchestration platforms available on the market, the most notable being Docker Swarm and Apache Mesos, Kubernetes has established itself as the de facto orchestration platform in use today. As a quick Google search might reveal - with a whopping 26,400,000 results - Kubernetes has risen to the top of the totem pole over the course of the year. However, before we can get into rationalizing the reasons that drive the world's obsession with the container orchestration platform, we'd like to provide our readers with a quick snapshot of everything Kubernetes is and everything that it is not.

Kubernetes: A Brief Overview

The industry has moved from the traditional deployment era, where organizations relied on applications running on physical servers, to the virtual deployment era, in which the highly popular concept of virtualization was introduced, and on to the container deployment era, which saw the adoption of 'containers' that are significantly lighter in weight than virtual machines (VMs). These changes ultimately led to the creation of a container orchestration market, which is a huge contributing factor to the growing popularity of Kubernetes and other similar platforms. Having said that, as we've already mentioned above, the features that Kubernetes offers give it a certain edge over its competition. Originally developed by Google in 2014, and descended from Google's internal container orchestration platform 'Borg', Kubernetes is an open-source container orchestration platform that reduces the workload for both large and small companies by automating the deployment, scaling and management of containerized applications. Bearing witness to the effectiveness and reliability of the platform is the fact that it is backed by gigantic digital entities such as Google, Microsoft, Cisco, Intel, and Red Hat. Furthermore, on its website, Kubernetes cites testimonials from colossal corporations such as Spotify, Nav, Capital One and Comcast, which further demonstrates the reliability of the benefits offered by the container orchestration platform.

What functions does Kubernetes perform?

Taking into consideration the fact that most organizations, regardless of how large or small they might be, are deploying hundreds or thousands of containerized instances daily, the complexity of the situation requires platforms such as Kubernetes to step in and help organizations manage and automate containerized processes, while taking the context of the microservice architecture into account as well. Kubernetes aids development teams by deploying applications and helping in the management of containerized applications through the following functions:

Deployment: Perhaps the most significant function that Kubernetes performs is the deployment of a specified number of containers to a host, along with ensuring that the containers are functioning as they are supposed to, that is, without any malfunctions.

Rollouts: A rollout refers to a change in the original deployment of a container. Kubernetes allows development teams to take the management of their containerized tasks to the next level by automating the initiation of container deployments, along with offering them the option of pausing, resuming or rolling back any rollouts.

Discovery of service: Kubernetes automates the exposure of a specified container to the internet, or to other containers, by allotting a DNS name or an IP address to containers. Given the increasing threat of cyber-attacks, it has also become essential to protect your IP address; a VPN can help here, as it not only hides the IP address but also provides protection against IP spoofing.

Managing storage: A monumental advantage that Kubernetes offers organizations is the liberty to allocate persistent local or cloud storage to specified containers as needed.

Load scaling and balancing: Kubernetes allows organizations to maintain stability across the network by automatically load balancing and scaling in the event that traffic to a certain container increases.

Self-healing: A feature unique to Kubernetes, the widely popular container orchestration platform seeks to improve availability by restarting or replacing a failed container. Moreover, Kubernetes can also automate the removal of containers that appear to be damaged, or that fail to meet health-check requirements.

Are there any limitations to Kubernetes's power?

Up till now, we've done nothing but present facts regarding Kubernetes. Often, however, organizations tend to overlook the limitations of an effective management tool. Despite the numerous advantages that organizations get to reap by integrating Kubernetes, it should always be kept in mind that Kubernetes is not traditional software and functions at the container level rather than at the hardware level. In order to make the most effective use of the container orchestration platform, it is essential that companies take into account the limitations of Kubernetes, which consist of the following: Kubernetes does not build applications, nor does it deploy source code. Kubernetes is not responsible for providing organizations with application-centric services; examples of these application-level services include middleware (message buses), data-processing frameworks such as Spark, and caches, amongst many others. Kubernetes does not offer organizations logging, monitoring, and alerting solutions; instead it provides integrations and mechanisms which then enable organizations to collect and export metrics. In addition to these limitations, it should also be mentioned that despite Kubernetes constantly being referred to as an orchestration tool, it is not just that. Instead of simply orchestrating or managing containerized applications by propagating a defined workflow, Kubernetes eliminates the need for orchestration altogether and consists of components that constantly drive the current state of the cluster towards the desired state.
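To make this declarative, desired-state model a little more concrete, here is a minimal sketch of a Kubernetes Deployment manifest. It is purely illustrative and not taken from the article or any vendor documentation quoted here; the names, container image, and replica count are hypothetical placeholders.

```yaml
# deployment.yaml - hypothetical example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
  labels:
    app: web-frontend
spec:
  replicas: 3                    # desired state: keep three identical Pods running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:         # lets Kubernetes restart (self-heal) unhealthy containers
            httpGet:
              path: /healthz
              port: 8080
```

Applying this with kubectl apply -f deployment.yaml hands the desired state to the control plane, which then continuously reconciles the cluster towards it - replacing failed containers and rolling out new versions whenever the manifest changes.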
Furthermore, Kubernetes also gives rise to a system without any centralized control, which makes it much easier to use.

Explaining Kubernetes's popularity

Now that we've hopefully jogged our readers' memories by providing a rundown of everything Kubernetes, let's get down to business. Taking into consideration the ever-increasing growth and popularity of the container orchestration platform, particularly its spike in 2019, readers might be left wondering: why is Kubernetes so popular? Well, the short explanation behind Kubernetes's popularity is simple - it's highly effective. The longer explanation can be broken down into the following main reasons:

1. Kubernetes saves time: In the digital age, time is more crucial than ever. As more and more organizations get digitized, time plays a monumental role in routine operations, especially where development teams are concerned. The staggering popularity of Kubernetes is deeply rooted in how time-effective the platform is, since it allows organizations to handle all facets of container orchestration without having to fill out forms or send emails to request new machines to run applications.

2. Kubernetes is highly cost-effective: For most enterprises, the driving force behind their operations is the knowledge that their business goals are being fulfilled. Kubernetes can contribute to that, since it allows organizations to achieve better resource utilization. As we've already mentioned above, Kubernetes is a much improved alternative to VMs, since it focuses solely on containers, which are lightweight and thus require less CPU and memory.

3. Kubernetes can run on the cloud, as well as on-premise: An unprecedented but widely welcomed feature that Kubernetes offers is that it is cloud-agnostic. The term 'cloud-agnostic' implies that Kubernetes can run on cloud-based services as well as on-premise. This offers organizations the luxury of not having to redesign or alter their infrastructure or applications to accommodate Kubernetes. Additionally, companies are also providing software that helps organizations manage the running of Kubernetes, whether it is on a cloud-based server or on-premise.

Final Words

We hope that we've made it clear what Kubernetes does, and the reasons that led to its rise in popularity. Having said that, it is still equally important that organizations take into consideration the limitations of the container orchestration system and integrate it within their companies smartly, which ultimately enables organizations to leverage better benefits!

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist. A creative team leader, editor of PrivacyCrypts.

DevOps mistakes which developers should avoid!
Chaos engineering comes to Kubernetes thanks to Gremlin
Understanding the role AIOps plays in the present-day IT environment

DevOps mistakes which developers should avoid!

Guest Contributor
16 Dec 2019
9 min read
DevOps is becoming recognized as a vital pillar of digital transformation. Because of this, CIOs are becoming enthusiastic about how DevOps and open source can completely transform the enterprise culture. All organizations want to succeed and reach their development goals across all projects. However, in reality, the journey is not at all as easy as it seems, and it often requires collective effort and time. Along the way, there are some common failures which teams are likely to come across. In this post, we'll discuss DevOps mistakes which everyone should know about and must avoid. Before that, it is necessary to understand the importance of DevOps in today's world.

Importance of DevOps in today's world

DevOps describes a culture and a set of processes which bring development and operations together to complete software development. It enables organizations not just to create but also to improve products at a faster pace than they can with some other approaches to software development. DevOps adoption is increasing with each passing day: according to Statista, many business organizations are shifting towards a DevOps culture, with adoption up 17% in 2018 on the previous year. DevOps culture is instrumental in today's world, and the following points briefly highlight why: it reduces costs and IT overhead; it results in greater competencies; it provides better communication and cooperation opportunities; the development cycle is fast and innovative; and deployment failures are reduced to a great extent.

Eight DevOps Mistakes

Many people still don't fully understand what DevOps means. Without prior knowledge and understanding, many DevOps initiatives fail to get off the ground successfully. Following is a brief description of common DevOps mistakes, and how they can be avoided to start a successful DevOps journey.

1. Rigid DevOps Process

Compliance with core DevOps tenets is vital for DevOps success, but organizations have to make adjustments in active response to meet organizational demands. Enterprises have to make sure that, while the main DevOps pillars remain stable during implementation, they make the internal adjustments needed to benchmark the expected outcomes. Instrumenting codebases in a granular manner and partitioning them further results in more flexibility and gives the DevOps team the power to backtrack and recognize the root cause of divergence in the event of failed outcomes. But all adjustments have to be made while staying within the boundaries defined by DevOps.

2. Oversimplification of the process

DevOps is a complex process. To implement DevOps, enterprises often go on a DevOps engineer hiring spree or, at times, create a new and isolated DevOps department. That department is then responsible for managing the DevOps framework and strategy, and it needlessly adds new processes that are often lengthy and complicated. Instead of creating an isolated DevOps department, organizations should focus on optimizing their processes to make operational products that leverage the right set of resources. For successful implementation of DevOps, organizations must be capable of managing the DevOps framework and leveraging functional experts and other resources that can manage DevOps-related tasks like budgeting goals, resource management, and process tracking. DevOps requires a cultural overhaul.
Organizations must consider a phased and measured transition to DevOps implementation by educating and training employees on these new processes. They should also have the right frameworks in place to enable careful collaboration.

3. Not preparing for a cultural change

When you have the right tools for DevOps practices, you will likely come across a new challenge: getting your teams to actually use those tools for fast development, continuous delivery, automated testing, and monitoring. Is your DevOps culture ready for all of this? For example, agile methodologies usually mandate that you ship new code once a week, or once a day; if the team isn't ready for that cadence, the agile approach fails. You might face the same conceptual issues with DevOps - it can be like trying to drive down a smooth road in a car with no fuel. To prevent this situation, plan for a transition period. Leave enough time for the development and operations teams to get used to new practices, and make sure that they have a chance to gain experience with the new processes and tools. Ensure that before adopting DevOps, you've got a mature dev and ops culture.

4. Creating a single DevOps team

The most common mistake which organizations and enterprises make is to create a brand-new team and task them with addressing all the burdens of a DevOps initiative. It is challenging and complicated for both development and operations to deal with a new group that coordinates with everyone. DevOps started with the idea of enhancing collaboration between the teams involved in the development of software, including security, database administration, and QA; it is not only about development and operations. If you create a new silo to address DevOps, you're making things more complicated. The secret ingredient here is simplicity. Focus on culture by encouraging a mindset of automation, quality, and stability. For instance, you might involve everyone in a conversation about your architecture, or about problems found in production environments, where all the relevant players need to be aware of how their work influences others. "DevOps is not about a single dedicated team but about organizations that progress together as a DevOps team."

5. Not including the security team

DevOps is about more than merely putting the development and operations teams together. It is a continuous process of automation and software development that includes audit, compliance, and security. Many organizations make the mistake of not bringing their security practices in early. According to a CA Technologies survey, security concerns were the number-one obstacle to DevOps, as cited by 38% of the respondents. Similarly, the Puppet survey found that high-performing DevOps teams spend 50% less time remediating security issues than low performers. These high-performing teams found ways to communicate their security objectives and to establish security in the early phases of their development process. All DevOps practitioners should evaluate the controls, recognize the risks, and understand the processes. In the end, security is always an integral part of DevOps practices, as in DevSecOps (a practice in which development and operations are integrated with security). For example, if you have security issues in production, you can address them within your DevOps pipeline through the tools which the security team already uses. DevOps and security practices should be followed strictly, and there should be no compromises.
Moreover, other measures should be adopted to prevent cyber-criminals from invading the DevOps culture. Investing in cybersecurity has become a necessity to avoid situations where attackers can carry out attacks such as phishing and spear phishing; it is found that 95% of all attacks on organizations were the result of spear phishing.

6. Incorrect use of incident management

DevOps teams must have a robust incident management process in place. Incident management needs to be proactive and ongoing, which means that having a documented incident management process is imperative for defining incident responses. For example, a total downtime event will have a different response workflow to a minor latency problem. Failure to do this can often lead to missed timelines and preventable project delays.

7. Not utilizing purposeful automation

DevOps needs organizations to adopt and implement purposeful automation across the complete development lifecycle, including continuous integration, continuous delivery, and deployment, for velocity and quality outcomes. Purposeful end-to-end automation is crucial to a successful DevOps implementation, so organizations should look at complete automation of the CI and CD pipeline. At the same time, organizations need to identify opportunities for automation across functions and processes; this helps reduce the need for manual handoffs in complicated integrations and multi-format deployments.

Editor's Note: Do you use or plan to use Azure for DevOps? If you want to know all about Azure DevOps services, we recommend our latest cookbook, 'Azure DevOps Server 2019 Cookbook - Second Edition' written by Tarun Arora and Utkarsh Shigihalli. The recipes in this book will help you achieve the skills you need to break down the invisible silos between your software development teams and transform them into a modern cross-functional software development team.

8. The wrong way to measure project success

DevOps promises faster delivery. But if that acceleration comes at the cost of quality, then the DevOps program is a failure. Enterprises looking at deploying DevOps should use the right metrics to understand project growth and success. For this reason, it is imperative to consider metrics that align velocity with success, and to focus on the right parameters, as this is essential for driving intelligent automation decisions.

Conclusion

Organizations are now rapidly moving towards DevOps to keep up with the competition and become successful, but they often make big mistakes along the way. All of these mistakes are avoidable, and hopefully the points mentioned above have cleared your vision to a great extent. Once you overcome these mistakes and adopt DevOps practices, your organization will enjoy improved client satisfaction, employee morale, productivity, and agility - all of which help grow your business. If you plan to accelerate deployment of high-quality software by automating build and releases using CI/CD pipelines in Azure, we suggest you check out Azure DevOps Server 2019 Cookbook - Second Edition, which will help you create and release extensions to the Azure DevOps marketplace and reach the million-strong developer ecosystem for feedback.

Author Bio

Rebecca James is an enthusiastic cybersecurity journalist. A creative team leader, editor of PrivacyCrypts.

Abel Wang explains the relationship between DevOps and Cloud-Native
Can DevOps promote empathy in software engineering?
7 crucial DevOps metrics that you need to track

Ansible role patterns and anti-patterns by Lee Garrett, its Debian maintainer

Vincy Davis
16 Dec 2019
6 min read
At DebConf held last year, Lee Garrett, the Debian maintainer for Ansible, talked about some of the best practices in the open-source configuration management tool. Ansible runs on Unix-like systems and configures both Unix-like and Microsoft Windows machines. It uses a simple syntax written in YAML, a human-readable data serialization language, and uses SSH to connect to the node machines. Ansible is a helpful tool for defining a group of machines and describing how they should be configured and what actions should be taken on them. It is used to implement software provisioning, application deployment, security, compliance, and orchestration solutions. When compared to other configuration management tools like Puppet, Chef and SaltStack, Ansible is very easy to set up. Garrett says that, thanks to its agentless nature, users can control any machine with an SSH daemon using Ansible, which means any Debian-installed machine can be managed this way. It also supports the configuration of many other things, like networking equipment and Windows machines.

Interested in more of Ansible? Get an insightful understanding of the design and development of Ansible from our book 'Mastering Ansible' written by James Freeman and Jesse Keating. This book will help you grasp the true power of the Ansible automation engine by tackling complex, real-world actions with ease. The book also presents fully automated Ansible playbook executions with encrypted data.

What are Ansible role patterns?

Ansible uses a playbook as an entry point for provisioning and defines automation through the YAML format. A playbook requires a predefined pattern to organize it, and also needs other files to facilitate the sharing and reuse of provisioning. This is where a 'role' comes into the picture. An Ansible role, which is an independent component, allows the reuse of common configuration steps. It contains a set of tasks that can be used to configure a host so that it will serve a certain function, such as running a service. Roles are defined using YAML files with a predefined directory structure, containing directories like defaults, vars, tasks, files, templates, meta, and handlers.

Some tips for creating good Ansible role patterns

An ideal role follows the 'roles/<role>/tasks/main.yml' layout, specifying the name of the role, its tasks, and main.yml. At the beginning of each role, users are advised to check for necessary conditions, for example with 'assert' tasks that inspect whether required variables are defined. Typical early steps also include installing packages, for example with yum (the default package manager on CentOS machines), or checking out code with git. Templating of files is another important factor: variables are defined and rendered into templates to create the actual config file. Garrett points out that the template module has a validate parameter, which lets the user check the generated config file for syntax errors - a syntax error can then fail the playbook before the broken config file is ever deployed. For example, he suggests running Apache's own syntax check against the rendered file so that you never end up in a state where a broken configuration has been pushed out. Garrett also recommends putting sensible defaults in 'roles/<role>/defaults/main.yml', which can then be overridden for specific cases, and he adds that a role should ideally run cleanly in check mode.
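To make these tips more tangible, here is a minimal sketch of a role's tasks file along the lines Garrett describes. It is an illustrative example rather than anything shown in the talk: the role name, variable names, package, and paths are hypothetical, and the validate command assumes Apache's httpd binary is available on the target host.

```yaml
# roles/webserver/tasks/main.yml - hypothetical sketch of the pattern described above
- name: Assert that required variables are defined
  ansible.builtin.assert:
    that:
      - webserver_port is defined
      - webserver_docroot is defined

- name: Install the web server package
  ansible.builtin.yum:
    name: httpd
    state: present

- name: Template the config file, validating its syntax before deploying it
  ansible.builtin.template:
    src: httpd.conf.j2
    dest: /etc/httpd/conf/httpd.conf
    validate: httpd -t -f %s    # fail the play before a broken config ever lands
```

The matching sensible defaults (for example, webserver_port: 80) would live in roles/webserver/defaults/main.yml, where callers can override them per host or group, and the whole role remains safe to exercise with --check and --diff.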
ansible-playbook has a --check flag, which is essentially a dry run of the complete playbook, and a --diff flag, which will display file content or file mode changes. Garrett further adds that a variable can be defined both in the defaults folder and in the vars folder; however, the latter is hard to override and should generally be avoided, he warns.

What are some typical anti-patterns in Ansible?

The shell and command modules are used in Ansible for executing commands on remote servers. Both modules take a command name followed by a list of arguments. The shell module is used when a command needs to be executed on the remote servers through a shell. Garrett says that new Ansible users often end up using the shell or command module the way they would use a tool like wget. According to him, this practice is wrong: with thousands of different modules in Ansible, there is a good chance that a module already exists for whatever you want to do. He also points out that these two modules have several problems: the command string gets interpreted by the actual shell, so special characters and variables in the shell string can cause surprises, and if the playbook is running in check mode, the shell and command modules won't run at all. Another drawback is that they always report a 'changed' state when the command exits successfully, which means the user will probably have to register the output and then check whether any standard error is present in it.

Next, Garrett explored some examples of alternatives to the shell/command modules, starting with the 'slurp' module. The slurp module fetches the whole file, base64-encoded, and gives access to the actual content from the registered result. The best thing about this module is that it never reports a change and works great in check mode. In another example, Garrett showed that fetching a URL with the shell module means the file gets downloaded every time the playbook runs, reporting a change each time; this can be avoided by using the 'uri' module instead, which lets the user declare the URL to retrieve and control the behaviour through its parameters (a small sketch contrasting these approaches appears at the end of this article). At the end of the talk, Garrett also shed light on the problems with using the set_fact module and shared his templates. Watch the full video on YouTube.

You can also learn all about custom modules, plugins, and dynamic inventory sources in our book 'Mastering Ansible' written by James Freeman and Jesse Keating.

Read More
Ansible 2 for automating networking tasks on Google Cloud Platform [Tutorial]
Automating OpenStack Networking and Security with Ansible 2 [Tutorial]
Why choose Ansible for your automation and configuration management needs?
Ten tips to successfully migrate from on-premise to Microsoft Azure
Why should you consider becoming 'AWS Developer Associate' certified?
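To round off the anti-pattern discussion above, here is a hedged sketch contrasting the shell-based approach with the slurp and uri modules. The file path, URL, and variable names are hypothetical, and the tasks are shown as a stand-alone task-list excerpt rather than a full playbook.

```yaml
# Anti-pattern: shelling out always reports "changed" and is skipped in check mode
- name: Read a config file (anti-pattern)
  ansible.builtin.shell: cat /etc/example/app.conf
  register: app_conf_raw

# Better: slurp fetches the file base64-encoded, never reports a change,
# and works in check mode
- name: Read a config file with slurp
  ansible.builtin.slurp:
    src: /etc/example/app.conf
  register: app_conf

- name: Show the decoded content
  ansible.builtin.debug:
    msg: "{{ app_conf.content | b64decode }}"

# Better than wget/curl via shell: uri declares what to fetch and returns a structured result
- name: Fetch a status endpoint with uri
  ansible.builtin.uri:
    url: https://example.com/health
    return_content: true
  register: health
```

These tasks are deliberately simple; in a real playbook you would branch on the registered results or write the fetched content somewhere, but the point stands: a purpose-built module usually exists before you need to reach for shell.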

Ten tips to successfully migrate from on-premise to Microsoft Azure 

Savia Lobo
13 Dec 2019
11 min read
The decision to start using Azure Cloud Services for your IT infrastructure seems simple. However, to succeed, a cloud migration requires hard work and good planning. At Microsoft Ignite 2018, Eric Berg, an Azure Lead Architect at COMPAREX and a Microsoft MVP for Azure and Cloud and Data Center Management, shared 'Ten tips for a successful migration from on-premises to Azure', based on his day-to-day learnings. Eric shares known issues, common pitfalls, and best practices to get started.

Further Reading

To gain a deep understanding of various Azure services related to infrastructure, applications, and environments, you can check out our book Microsoft Azure Administrator – Exam Guide AZ-103 by Sjoukje Zaal. This book is also an effective guide for acquiring the skills needed to pass Exam AZ-103, with effective mock tests and solutions so that you can confidently crack this exam.

Tip #1: Have your Azure Governance set

You need a basic plan for what you are going to do with Azure; consider Azure governance the basis for cloud adoption. Berg says, "if you don't have a plan for what you do with Azure, it will hurt you." Running something on Azure is good, but keeping it secure is the key thing, and governance rule sets help users audit and figure out whether everything is running as expected. One of the key parts of Azure governance is networking, so you should settle on a networking concept that suits both the company and the business. Microsoft is moving really fast: in 2018, to connect the US and Europe you had to use a VPN, then came global VNet peering, and now there is Azure Virtual WAN. Such advancements allow a concept to keep growing and always use cutting-edge technologies, while adopting such a rule set enables customers to try a lot of things on their own.

Tip #2: Think about different requirements

From an IT perspective, every organization wants control, wants to focus on its IT, and wants to ensure that everything is compliant, and many organizations also want to put policies in place. On the other hand, the human resources department wants to be totally agile and innovative, and wants to consume services and self-service without feeling the need to communicate with IT. "I've seen so many human resource departments doing their own contracts with external partners building some fancy new hiring platforms and IT didn't know anything about it," Berg points out. When it comes to cloud, each and every member of the company should be aware and should be involved. It is simply not an IT-only decision; it is a company-wide one.

Tip #3: Assess your infrastructure

Berg says organizations should assess their environment. Migrating your servers to Azure exactly as they are is not the right thing to do, because in Azure the decision between 8 and 16 gigabytes of RAM is a decision between 100 and 200 percent of the cost. Hence, right-sizing based on a good assessment is extremely important, and this cannot be achieved by running a script once for 10 minutes and assuming you know what your VMs are doing. Instead, you should run an assessment for at least one month, or even three months, to see the peaks and the quiet periods; that is a good assessment, where you know what you really need in order to migrate your systems well. Keep a check on your inventory and on your contracts, to confirm whether you are allowed to migrate your ERP or CRM system to Azure. Some contracts state that deploying the solution outside the premises of the company needs an extra contract and extra cost, Berg warns; migrating to Azure is technically easy but can be difficult from a contract perspective. You should also define your needs for migrating to a cloud platform: if you don't get value out of your migration, don't do it. Berg advises not to migrate to Azure just because everybody does, or because it's cool or fancy.

Tip #4: Do not rebuild your on-premises structures on Cloud

Cloud needs trust. Organizations often try to bring in old structures from their on-premises infrastructure, such as the external DMZ, the internal DMZ, and 15 security layers. Berg said they use Intune - a cloud-based service in the enterprise mobility management (EMM) space that helps enable your workforce to be productive while keeping your corporate data protected - along with Office 365 in the cloud. Intune doesn't sit behind a DMZ, and if you want to deploy your application or use the latest tech such as bots, cognitive services and so on, it may not fit neatly into a structured network design in the cloud. On the other hand, there will be disconnected subscriptions, that is, subscriptions with no connection to your on-premises network, and this has to be dealt with at the security level. New services need new ways. If you are not agile, your IT won't be agile: if you need 16 days or six weeks to deploy a server and you want to stick to those rules and processes, then Azure won't be beneficial for you, as there will be no value in it for you.

Tip #5: Azure consumption is billed

If you spin up a VM that costs $25,000 a month, you have to pay for it. The M-series VMs have 128 cores and 4 terabytes of RAM and are simply amazing; if you deploy one with Windows Server and SQL Server Enterprise, the cost goes up to $58,000 a month for just one VM. When you migrate to Azure and start integrating new things, you probably have to change your own business model, and to implement tech such as facial recognition you have to set up a cost management tool for usage tracking. There are many usage APIs and third-party tools available, and proper cost management of the Azure infrastructure helps to divide costs. If you put everything into one subscription and one resource group where everyone is an owner, things will still function, but you will not be able to figure out who's responsible for what. Instead, a good structure of subscriptions, good role-based access control, and a good tagging policy will help you figure out costs better.

Tip #6: Identity is the new perimeter

Azure AD is the center of everything. Accessing an on-premises data center is not easy, as it requires access to the premises, then into the data center, and then a login to the local infrastructure. In the cloud, however, if anyone has a user's login credentials, they are inside that user's Azure AD, their VPN, and potentially the on-premises data center too. Hence identity is a key part of security. "So, don't think about using MFA, use MFA. Don't think about using Privileged Identity Management, use it, because that's the only way to secure your infrastructure properly and get an insight into who is using what in my infrastructure and how it is going," Berg warns. In the modern workplace, one can work from anywhere, but one needs to have proper security levels in place: secure devices, secure identity, secure access via MFA, and so on. Stay cautious.
Tip #7: Include your users

Users are the most important part of any ecosystem, so when you migrate servers or an entire on-premises architecture, inform them. What if you have a CRM system fully in the cloud and there's no local cache on the system anymore? This won't fit the needs of your customers or internal customers, and this is why organizations should inform them of their plans, ask them what they really need, and let that shape the migration. Berg illustrated this point with a project in Germany for a customer who wanted to decrease their response times. The client needed up to two days to answer a customer's email because the product is very complex and the documentation is spread across a large library; their internal goal was to bring response times down from two days to ten minutes. Berg said they considered using a bot, some cognitive services, Azure Search, and an Outlook plug-in: when the mail arrives, you simply search for the product and everything is pulled together - the documentation, the fact sheets, and a standard email template for the answer. The proposed solution was good; both Berg and IT liked it. However, when the sales team was asked, they said such a solution would steal their jobs. The mistake here was that Sales was not included in the process of finding the solution. To rectify this, organizations should include all stakeholders. Focus on benefits, and have some key users, because they will help you spread the word. In the above case, explain and evangelize to the sales teams: they are afraid because they don't understand what happens when a bot and some cognitive services figure out which document is right. This won't steal their job; instead, it will help them do their job better, with improved efficiency. Train and educate so they are able to use it, check processes, and consider changes. Managed services can also help you focus: backup, monitoring, and patching are things somebody else can do for you, so that after the migration your organization can focus on integrating new services, improving right-sizing, optimizing cost and performance, and staying up to date with all the changes in Azure.

Tip #8: Consider Transformation instead of Migration

Consider a transformation instead of a migration. Build some logical blocks: don't move an ERP system without its database, or the other way around. Berg suggests identifying technical and licensing showstoppers, defining your infrastructure requirements, checking your compatibility to migrate, updating the helpdesk about SLAs, and asking whether Azure is really helping you - whether it covers your assets, and whether things are getting better or maybe worse.

Tip #9: Keep up to date

Continuous learning and continuous knowledge are key to growth. As Azure releases a lot of changes very often, users are notified of the latest updates via email or via Azure news. Organizations should review their architecture on a regular basis, Berg says - moving from VPN to global VNet peering to Virtual WAN, for example, means you can change your infrastructure quite fast. Audit your governance not on a yearly basis but maybe monthly or quarterly. Consider changes fast: don't spend two years thinking about a change, because by then it will no longer be interesting. If there's a new opportunity, grab it, use it, and three weeks later perhaps drop it again - but avoid deliberating for two months or more, or it will be too late.

Tip #10: Plan for the future

Do some end-to-end planning and think about the end-to-end solution: who's using it, what's the back end, and so on. Save money and forecast your costs. Keep an eye on resources that sprawl because someone ran a script without knowing what they were doing. Simply migrating an IIS server with a static website to Azure is not an actual cloud migration; instead, customers should consider moving to a static website in Azure Storage, to a web app, and so on, rather than keeping everything in a Windows VM. Berg concludes by saying that an important migration step is to move beyond infrastructure. Everybody migrates infrastructure to Azure because that's easy - it's just moving from one VM to another. Customers should not 'only' migrate; they should also start optimizing, move towards platform services, be more agile, think about new ways of working and, most importantly, get rid of all the old on-premises baggage. Berg adds, "In five years probably nobody will talk about infrastructure as a service anymore because everybody has migrated and optimized it already."

To stay more compliant with corporate standards and SLAs, learn how to configure Azure subscription policies with "Microsoft Azure Administrator – Exam Guide AZ-103" by Packt Publishing.

5 reasons Node.js developers might actually love using Azure [Sponsored by Microsoft]
Azure Functions 3.0 released with support for .NET Core 3.1!
Microsoft announces Azure Quantum, an open cloud ecosystem to learn and build scalable quantum solutions

Why should you consider becoming ‘AWS Developer Associate’ certified?

Savia Lobo
12 Dec 2019
5 min read
Organizations both large and small are looking to automating their day-to-day processes and the best option they consider is moving to the cloud. However, they also fear certain challenges that can make cloud adoption difficult. The biggest challenge is the lack of resources or expertise to understand how different cloud services function or how they are built, to leverage its advantages to the fullest. Many developers use cloud computing services--either through the companies they work with or simply subscribe to it--without really knowing the intricacies. Their knowledge of how the internal processes work remains limited. Certifications, can, in fact, help you understand how cloud functions and what goes on within these gigantic data holders. To start with, enroll yourself into a basic certification by any of the popular cloud service providers. Once you know the basics, you can go ahead to master the other certifications available based on your job role or career aspirations. Why choose an AWS certification Amazon Web Services (AWS) is considered one of the top cloud services providers in the cloud computing market currently. According to Gartner’s Magic Quadrant 2019, AWS continues to lead in public cloud adoption. AWS also offers eleven certifications that include foundational and specialty cloud computing topics. If you are a developer or a professional who wants to pursue a career in Cloud computing, you should consider taking the ‘AWS Certified Developer - Associate’ certification. Do you wish to learn from the AWS subject-matter experts, explore real-world scenarios, and pass the AWS Certified Developer – Associate exam? We recommend you to explore the book, AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar. Many organizations use AWS services and being certified can open various options for improved learning. Along with being popular among companies, AWS includes a host of cloud service options compared to other cloud service providers. While having a hands-on experience holds great value for developers, getting certified by one of the most popular cloud services will only have greater advantages for their better future. Starting with web developers, database admins, IoT or an AI developer, etc., AWS includes various certification options that delve into almost every aspect of technology. It is also constantly adding more offerings and innovating in a way that keeps one updated with cutting-edge technologies. Getting an AWS certification is definitely a difficult task but you do not have to quit your current job for this one. Unlike other vendors, Amazon offers a realistic certification path that does not require highly specialized (and expensive) training to start. AWS certifications validate a candidate’s familiarity and knowledge of best practices in cloud architecture, management, and security. Prerequisites for this certification The AWS Developer Associate certification will help you enhance your skills impacting your career growth. However, one needs to keep certain prerequisites in mind. A developer should have: Attended the AWS Essentials course or should have an equivalent experience Knowledge in developing applications with API interfaces Basic understanding of relational and non-relational databases. 
How the AWS Certified Developer - Associate level certification course helps a developer

AWS Certified Developer - Associate certification training gives you hands-on exposure to core AWS services through guided lectures, videos, labs, and quizzes. You'll get trained in compute and storage fundamentals, and in the architecture and security best practices that are relevant to the AWS Certified Developer exam. This associate-level course helps developers identify the appropriate AWS architecture and also learn to design, develop, and deploy optimal AWS cloud solutions. If you already have some knowledge of AWS, the course will help you identify and deploy secure procedures for optimal cloud deployment and maintenance. Developers will also learn to develop and maintain applications written for Amazon S3, DynamoDB, SQS, SNS, SWF, AWS Elastic Beanstalk, and AWS CloudFormation; a short messaging sketch using one of these services appears at the end of this section.

After achieving this certification, you will be an asset to any organization. You can help them leverage best practices around advanced cloud-based solutions and migrate existing workloads to the cloud. This often translates into a rise in your annual income and career growth. However, getting certified alone is not enough; other factors such as skills, experience, and geographic location also matter.

This certification will make you competent in using Amazon's cloud services. The course is part of the first tier (Associate level) of certifications that AWS offers. You can further improve your cloud computing skills by taking up certifications from the professional tier and later from the specialty tiers, whichever suits you best.

New AWS services and features are added every year, and certification alone is not enough; staying relevant is the key. To continually demonstrate expertise and knowledge of best practices for the most up-to-date AWS services, certification holders are required to re-certify every two years. You can either take a professional-level exam in the same track or pass the re-certification exam for your existing certification.

To gain further insights on how to design, develop, and deploy cloud-based solutions using AWS, and to get familiar with Identity and Access Management (IAM) and Virtual Private Cloud (VPC), you can check out the book, AWS Certified Developer - Associate Guide - Second Edition by Vipul Tankariya and Bhavin Parmar.
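The course material itself is not reproduced here; as a hedged, illustrative sketch, the following Python snippet shows the kind of SQS interaction the exam's application-development objectives cover: sending a message to a queue and polling for it. The queue name is a hypothetical placeholder, and boto3 plus configured AWS credentials are assumed.

```python
# A minimal sketch, assuming boto3 is installed and AWS credentials are configured.
# "orders-queue" is a hypothetical queue name used only for illustration.
import boto3

sqs = boto3.client("sqs")

def send_order(queue_url: str, order_id: str) -> None:
    """Publish a message to the queue."""
    sqs.send_message(QueueUrl=queue_url, MessageBody=order_id)

def drain_orders(queue_url: str) -> None:
    """Poll the queue, process each message, and delete it once handled."""
    response = sqs.receive_message(
        QueueUrl=queue_url, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        print("processing order", message["Body"])
        sqs.delete_message(
            QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"]
        )

if __name__ == "__main__":
    url = sqs.get_queue_url(QueueName="orders-queue")["QueueUrl"]
    send_order(url, "order-12345")
    drain_orders(url)
```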
Read more:

* How do AWS developers manage Web apps?
* Why AWS is the preferred cloud platform for developers working with big data
* How do you become a developer advocate?


Abel Wang explains the relationship between DevOps and Cloud-Native

Savia Lobo
12 Dec 2019
5 min read
Cloud-native means microservices, containers, and serverless apps that run in multi-cloud environments and are managed by DevOps processes. However, the relationship between these concepts is not always clearly defined. Shayne Boyer, Principal Cloud Advocate, in a conversation with Abel Wang, Principal Cloud Advocate and DevOps Lead, discussed the relationship between DevOps and cloud-native on the Microsoft developer channel.

If you wish to learn how to implement DevOps using Azure DevOps, and also want to learn the entire serverless stack available in Azure, including Azure Event Grid, Azure Functions, and Azure Logic Apps, you should check out the book Azure for Architects - Second Edition to know more.

What is DevOps?

Abel starts off by saying that DevOps at Microsoft is the union of people, processes, and products to enable the continuous delivery of value to end users. The reason you should care about DevOps when it comes to cloud-native apps is that the key here is continuously delivering value. One of the strengths of cloud-native is that, because all of your infrastructure is out in the cloud, you are able to iterate on it very quickly. This is what DevOps helps you do as well: continuously deliver value to your end users. These days the speed of business is so quick that if we can't iterate quickly and deliver value quickly, our competitors will, and once they do, everyone else becomes obsolete. Hence, it is extremely important to iterate quickly, which cloud-native helps enable.

The concept of continuously delivering value remains similar to what happens on a local machine during a standard deployment. Where cloud-native becomes unique is that all of your infrastructure is out in the cloud, so deploying to the cloud is easier than, say, deploying to a mobile app. One of the most powerful things about cloud-native is its microservice-based architecture. With these advantages, we are able to iterate quickly because, instead of redeploying one massive application when we make one tiny change, we can deploy just that one service. This simplifies and speeds up the process. With this, every developer check-in can go through our gates, go through our pipeline, and reach production at a quicker rate, so we are able to deliver value even faster.

Key DevOps and Cloud-Native Apps concepts

Wang says to aim for a CI/CD pipeline that can process code as soon as somebody checks it in. The pipeline should then make it easy to build the code and finally deploy it to the infrastructure in place. Wang demonstrates a cloud-native application with a slightly involved infrastructure: a static website held in Azure Storage, a back end written in .NET Core that runs in an Azure Function, and both connecting to a Cosmos DB instance. If microservices are deployed independently, each service needs to be smart enough to know which version the other services are on, so that the entire application is not disturbed when an additional service is rolled out; a small version-check sketch along these lines follows this section. He also demonstrates deploying the entire infrastructure all at once. You can check out the video for the demonstration in detail.
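The version-awareness Wang describes is not shown as code in the talk; the following Python sketch is purely illustrative of the idea, with a hypothetical /version endpoint and compatibility rule. One microservice asks a dependency which API version it is running before using a newer code path, so an independently deployed service does not break callers that have not caught up yet.

```python
# A minimal, hypothetical sketch of a service checking a dependency's API version
# before opting into a newer code path. The URL and version scheme are assumptions.
import json
import urllib.request

MIN_COMPATIBLE_MAJOR = 2  # this service needs at least v2 of the orders API

def get_dependency_version(base_url: str) -> int:
    """Ask the dependency which major API version it is currently serving."""
    with urllib.request.urlopen(f"{base_url}/version", timeout=2) as response:
        payload = json.loads(response.read().decode("utf-8"))
    return int(payload["major"])

def fetch_orders(base_url: str) -> list:
    """Use the new endpoint when the dependency is new enough, else fall back."""
    if get_dependency_version(base_url) >= MIN_COMPATIBLE_MAJOR:
        path = "/v2/orders"   # newer, richer response shape
    else:
        path = "/v1/orders"   # older deployment still in production
    with urllib.request.urlopen(f"{base_url}{path}", timeout=2) as response:
        return json.loads(response.read().decode("utf-8"))

if __name__ == "__main__":
    print(fetch_orders("https://orders.example.internal"))
```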
How to ensure quality across environments in a DevOps practice?

In a cloud-native application, we need not worry about recreating similar infrastructure while moving from one environment to the next. This is because the dev environment will be exactly the same as the QA environment, and the same all the way out into production. It is also cost-effective, because you can spin an environment up to run whatever you need and tear it down as soon as it is done, so you are not paying for idle resources.

Automating cloud-native processes can still include a few manual steps, such as approving a release by email. Wang says you could technically automate everything; however, he prefers having manual approvers. Within Azure DevOps, there is also the concept of an automated approval gate, so you can use automation to help decide whether an approval should be postponed. Wang says he uses an automated approval gate to run DNS checks that tell him whether or not a DNS change has propagated; a sketch of that kind of check appears below.

Wang says that trying to keep quality in your pipelines is really difficult to do. You can do things like running all of the automated UI tests for a particular environment: "so by the time, let's say, I deployed this into a QA environment, by the time my QA testers even look at it, it could have run through like hundreds of thousands of automated UI tests already. So there's a lot less that a human needs to do," Wang adds.

To learn comprehensively how to develop Azure cloud architecture and a pipeline management system, and to learn some security best practices for your Azure deployment, you can check out the book, Azure for Architects - Second Edition by Ritesh Modi.
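Wang does not show the DNS gate itself; the following Python sketch is a hypothetical stand-in for that kind of automated approval check. It repeatedly resolves a hostname and only exits successfully once the record points at the expected address, so a pipeline gate wired to the script would hold the release until DNS has propagated. The hostname and IP below are placeholders.

```python
# A hypothetical sketch of a DNS propagation check that a pipeline gate could run.
# The hostname and expected address below are placeholders, not real values.
import socket
import sys
import time

def dns_points_at(hostname: str, expected_ip: str) -> bool:
    """Return True if the hostname currently resolves to the expected address."""
    try:
        _, _, addresses = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return False
    return expected_ip in addresses

def wait_for_propagation(hostname: str, expected_ip: str,
                         attempts: int = 30, delay_seconds: int = 60) -> bool:
    """Poll until the record propagates or we run out of attempts."""
    for attempt in range(1, attempts + 1):
        if dns_points_at(hostname, expected_ip):
            print(f"attempt {attempt}: {hostname} -> {expected_ip}, gate can pass")
            return True
        print(f"attempt {attempt}: not propagated yet, retrying in {delay_seconds}s")
        time.sleep(delay_seconds)
    return False

if __name__ == "__main__":
    ok = wait_for_propagation("app.example.com", "203.0.113.10")
    sys.exit(0 if ok else 1)  # a non-zero exit keeps the approval gate closed
```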
Read more:

* Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
* Can DevOps promote empathy in software engineering?
* Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]