
Tech News - DevOps

82 Articles

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!

Melisha Dsouza
22 Nov 2018
3 min read
On 19th November, Microsoft announced the first release candidate (RC) of Azure DevOps Server 2019. Azure DevOps Server 2019 includes the new, fast, and clean Azure DevOps user interface and delivers the codebase of Microsoft Azure DevOps while being optimized for customers who prefer to self-host.

Features of Azure DevOps Server 2019
In addition to existing SQL Server support, Azure DevOps Server now also supports Azure SQL. This means customers can self-host Azure DevOps in their own datacenter using an on-premises SQL Server, or self-host Azure DevOps in the cloud and take advantage of Azure SQL capabilities and performance, like backup features and scaling options, while reducing the administrative overhead of running the service. Customers can also use the globally available Microsoft-hosted service to take advantage of automatic updates and automatic scaling of Azure DevOps Server.

Azure DevOps Server 2019 includes a new release management interface. Customers can easily understand how their deployment is taking place: it gives them better visibility of which bits are deployed to which environments, and why. Customers can also mix and match agents self-hosted on-premises and in any cloud on Windows, Mac, or Linux while easily deploying to IaaS or PaaS in Azure as well as on-premises infrastructure.

A new navigation and improved user experience in Azure DevOps
A newly introduced feature is the 'my work flyout'. This feature was developed in response to feedback that customers working in one part of the product who need information from another part did not want to lose the context of their current task. Customers can access this flyout from anywhere in the product, giving them a quick glance at crucial information like work items, pull requests, and all favorites. For teams that use pull requests (PRs) and branch policies, there may be occasions when members need to override and bypass those policies. For teams to verify that those policy overrides are being used in the right situations, a new notification filter has been added that lets users and teams receive email alerts any time a policy is bypassed. The Tests tab now gives rich, in-context test information for Pipelines: an in-progress test view, a full-page debugging experience, in-context test history, reporting of aborted test executions, and a run-level summary.

The UI has undergone significant testing, and the team suggests that for self-hosting customers the new navigation model may require updates to internal documentation and training. A direct upgrade to Azure DevOps Server is supported from Team Foundation Server 2012 and newer. Previous versions of Team Foundation Server will stay on the old user interface. Check the Azure DevOps Server requirements and compatibility page to understand the dependencies required for a self-hosted installation. Head over to Microsoft's blog for more information on this news. You can download Azure DevOps Server 2019 RC1 and check out the release notes for all the features and information for this release.

A multi-factor authentication outage strikes Microsoft Office 365 and Azure users
Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report


Codefresh’s Fixvember, a Devops hackathon to encourage developers to contribute to open source

Sugandha Lahoti
30 Oct 2018
2 min read
Open source is getting a lot of attention these days, and to incentivize people to contribute to open source, Codefresh has launched "Fixvember", a do-it-from-home DevOps hackathon. Codefresh is a Kubernetes-native CI/CD platform which allows for creating powerful pipelines based on DinD (Docker-in-Docker) as a service and provides self-service test environments, release management, and a Docker and Helm registry.

Codefresh's Fixvember is a DevOps hackathon in which Codefresh will reward DevOps professionals who contribute to open source with a limited-edition t-shirt. The event encourages developers (and not just Codefresh users) to make at least three contributions to open source projects, including building automation, adding better testing, and fixing bugs. The focus is on making engineers more successful by following DevOps best practices. Adding a Codefresh YAML to an open-source repo may also earn developers additional prizes or recognition.

Codefresh debuts Fixvember in sync with the launch of public-facing builds on the Codefresh platform. Codefresh is offering 120 builds/month, a private Docker registry, a Helm repository, and Kubernetes/Helm release management for free to increase the adoption of CI/CD processes. It is also offering a huge free tier within Codefresh with everything needed to help teams.

Developers can participate by following these steps:
Step 1: Sign up at codefresh.io/fixvember
Step 2: Make 3 open source contributions that improve DevOps. This could be adding/updating a Codefresh pipeline to a repo, adding tests or validation to a repo, or just fixing bugs.
Step 3: Submit your results using your special email link

"I can't promise the limited-edition t-shirt will increase in value, but if it does, I bet it will be worth $1,000 by next year. The FDA prevents me from promising any health benefits, but it's possible this t-shirt will actually make you smarter," joked Dan Garfield, Chief Technology Evangelist at Codefresh. "Software engineers sometimes have a hero complex that adding cool new features is the most valuable thing. But being 'Super Fresh' means you do the dirty work that makes new features deploy successfully. Adding automated pipelines, writing tests, or even fixing bugs are the lifeblood of these projects."

Read more about Fixvember on the Codefresh blog.

Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding
Is your Enterprise Measuring the Right DevOps Metrics?
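One of the suggested contributions is adding a Codefresh pipeline YAML to an open-source repo. As a rough sketch of what that looks like (the repo path, image, and commands below are illustrative assumptions, not taken from the announcement), a minimal codefresh.yml might be:

```yaml
# codefresh.yml - hypothetical minimal pipeline; names are illustrative
version: '1.0'
steps:
  clone:
    title: Cloning the repository
    type: git-clone
    repo: my-org/my-repo       # assumed repository path
    revision: master
  run_tests:
    title: Running the test suite
    image: node:10             # any toolchain image could be used here
    working_directory: ${{clone}}
    commands:
      - npm install
      - npm test
```

Each step runs in its own container image, which is what lets a single pipeline mix toolchains without polluting the repo with CI-specific tooling.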


Azure DevOps outage root cause analysis starring greedy threads and rogue scale units

Prasad Ramesh
19 Oct 2018
4 min read
Azure DevOps suffered several outages earlier this month, and Microsoft has done a root cause analysis to find the causes. This follows last month's weather-related Azure cloud outage.

Incidents on October 3, 4 and 8
It started on October 3 with a networking issue in the North Central US region lasting over an hour. It happened again the following day, again lasting about an hour. On following up with the Azure networking team, it was found that there were no networking issues when the outages happened. Another incident happened on October 8. The team realized that something was fundamentally wrong and analyzed telemetry, but the issue was still not found. After the third incident, it was observed that the thread count on the machine continued to rise, an indication that some activity was going on even with no load coming to the machine. It was found that all 1202 threads had the same call stack, with the following key call:

Server.DistributedTaskResourceService.SetAgentOnline

Agent machines send a heartbeat signal every minute to the service to notify that they are online. If no signal arrives from an agent for over a minute, it is marked offline and the agent needs to reconnect. The agent machines were marked offline in this case and eventually reconnected successfully after retries. On success, the agent was stored in an in-memory list. Potentially thousands of agents were reconnecting at a time. In addition, the thread pool could fill up with messages, since asynchronous call patterns had been adopted recently. The .NET message queue stores a queue of messages to process and maintains a thread pool; as a thread becomes available, it services the next message in the queue. The thread pool, in this case, was smaller than the queue. For N threads, N messages are processed simultaneously.

When an async call is made, the same message queue is used: a new message is queued to complete the async call and read its value. This message sits at the end of the queue while all the threads are occupied processing other messages, so the call cannot complete until the previous messages have completed, tying up one thread. The process comes to a standstill once the number of in-flight messages equals the number of threads. In this state, a machine can no longer process requests, causing the load balancer to take it out of rotation. Hence the outage. An immediate fix was to conditionalize this code so that no more async calls were made; this was possible because the pool providers feature isn't in effect yet.

Incident on October 10
On October 10, an incident with a 15-minute impact took place. The initial problem was a spike in slow response times from SPS, ultimately caused by problems in one of the databases. A Team Foundation Server (TFS) deployment put pressure on SPS, their authentication service. When TFS is deployed, sets of scale units called deployment rings are also deployed. When the deployment for a scale unit completes, it puts extra pressure on SPS, and there are built-in delays between scale units to accommodate the extra load. There is also sharding going on in SPS to break it into multiple scale units. Together, these factors tripped the circuit breakers in the database, leading to slow response times and failed calls. This was mitigated by manually recycling the unhealthy scale units. For more details and the complete analysis, visit the Microsoft website.

Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Why did last week's Azure cloud outage happen? Here's Microsoft's Root Cause Analysis Summary
Is your Enterprise Measuring the Right DevOps Metrics?


Announcing the early release of Travis CI on Windows

Savia Lobo
12 Oct 2018
2 min read
Yesterday, Travis CI announced that its service is now available on Windows. Travis CI is a distributed continuous integration service used to test and deploy projects hosted on GitHub. This is an early release, and a stable version is planned for Q2 next year. With this update, teams can run their tests on Linux, Mac, and Windows, all in the same build. At present, users can use Windows with open source and private projects on either travis-ci.org or travis-ci.com; Travis CI plans to bring this to Enterprise soon. The company says, "this is our very first full approach to Windows-support, so the tooling is light."

Laurie Voss, Chief Operating Officer of npm, Inc., says: "Adding Windows support to Travis CI will provide a more stable development experience for a huge segment of the JavaScript community—32% of projects in the npm Registry use Travis CI. We look forward to continuing to work with Travis CI to reduce developer friction and empower over 10 million developers worldwide to build amazing things."

Travis Windows CI environment
The Windows build environment for Travis CI launches with support for the Node.js, Rust, and Bash languages. Travis Windows CI runs a Git Bash shell, to maintain consistency with the other Bash-based environments, while also allowing users to shell out to PowerShell as needed. In addition, Docker is available for Windows builds. Travis CI uses Chocolatey as a package manager and ships with Visual Studio 2017 Build Tools pre-installed. The Windows build environment is currently based on Windows Server 1803, with Windows Server 2016 as the OS version for containers. Travis CI mentions in its blog post that it hosts its Windows virtual machines on Google Compute Engine and has consequently seen some variation in boot times, which it plans to improve alongside other infrastructure-related work.
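Opting a project into the early Windows support is done from .travis.yml. A minimal sketch for one of the launch languages (the Node.js version and script line are illustrative assumptions, not from the announcement):

```yaml
# .travis.yml - illustrative; assumes a Node.js project
os: windows           # opt in to the Windows build environment
language: node_js
node_js:
  - "10"
script:
  - npm test          # runs in Git Bash, the default shell on Windows builds
```

Combining `os: windows` with `linux` and `osx` entries in a build matrix is what lets a single build cover all three platforms.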
The company expects to release Windows build environments for Enterprise before the stable version ships. To know more about Travis CI on Windows, visit the official Travis CI blog.

Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies
Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner


Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12

Melisha Dsouza
10 Oct 2018
3 min read
Kubernetes v1.12 now offers alpha support for volume snapshotting. This allows users to create or delete volume snapshots, and natively create new volumes from a snapshot, using the Kubernetes API. A snapshot represents a copy of a volume at a particular instant in time. It can be used to provision a new volume pre-populated with the snapshot data, or to restore an existing volume to a previous state.

Importance of adding snapshots to Kubernetes
The main goal of the Kubernetes team is to create an abstraction layer between distributed systems applications and underlying clusters, ensuring that application deployment requires no "cluster specific" knowledge. Snapshot operations are a critical functionality for many stateful workloads; for instance, a database administrator may want to snapshot a database volume before starting a database operation. By providing a standard way to trigger snapshot operations in the Kubernetes API, users don't have to manually execute storage-system-specific operations around the Kubernetes API. They can instead incorporate snapshot operations into their tooling and policy in a cluster-agnostic way, assured that it will work against arbitrary Kubernetes clusters regardless of the underlying storage. These snapshot primitives also help build advanced, enterprise-grade storage administration features for Kubernetes, including data protection, data replication, and data migration.

3 new API objects introduced by Kubernetes volume snapshots
#1 VolumeSnapshot
Creating or deleting this object indicates that a user wants to create or delete a cluster resource (a snapshot). It is used to request the creation of a snapshot for a specified volume, and gives the user information about snapshot operations, such as the timestamp at which the snapshot was taken and whether the snapshot is ready to use.

#2 VolumeSnapshotContent
This object is created by the CSI volume driver once a snapshot has been successfully created. It contains information about the snapshot, including its ID, and represents a provisioned resource on the cluster (a snapshot). Once a snapshot is created, the VolumeSnapshotContent object binds to the VolumeSnapshot for which it was created, with a one-to-one mapping.

#3 VolumeSnapshotClass
This object, created by cluster administrators, describes how snapshots should be created: the driver information, how to access the snapshot, and so on.

These snapshot objects are defined as CustomResourceDefinitions (CRDs). End users need to verify that a CSI driver that supports snapshots is deployed on their Kubernetes cluster; CSI drivers that support snapshots will automatically install the required CRDs.

Limitations of the alpha implementation of snapshots
The alpha implementation does not support reverting an existing volume to an earlier state represented by a snapshot. It also does not support "in-place restore" of an existing PersistentVolumeClaim from a snapshot: users can provision a new volume from a snapshot, but updating an existing PVC to a new volume and reverting it back to an earlier state is not allowed. Finally, no snapshot consistency guarantees are given beyond those provided by the storage system.

An example of creating new snapshots and importing existing snapshots is explained well on the Kubernetes Blog. Head over to the team's Concepts page or GitHub to find more official documentation of the snapshot feature.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
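To make the flow concrete, here is a sketch of requesting a snapshot of an existing PVC and then provisioning a new volume from it. The object names and the snapshot class are illustrative, and the v1alpha1 fields follow the Kubernetes announcement as best understood, so treat this as indicative rather than authoritative:

```yaml
# Request a snapshot of an existing PVC (alpha API)
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: demo-snapshot
spec:
  snapshotClassName: csi-snapclass   # a VolumeSnapshotClass set up by an admin
  source:
    kind: PersistentVolumeClaim
    name: demo-pvc                   # the volume to snapshot
---
# Provision a new volume pre-populated with the snapshot data
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-restore
spec:
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: demo-snapshot
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```

Note that the second object creates a new PVC from the snapshot; per the alpha limitations above, it does not restore `demo-pvc` in place.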


‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl

Melisha Dsouza
08 Oct 2018
3 min read
On the 5th of October, the Amazon team announced the availability of 'The AWS Service Operator'. This is an open source project, currently in an alpha state, which allows users to manage their AWS resources directly from Kubernetes using the standard Kubernetes CLI, kubectl.

What is an Operator?
Kubernetes is built on top of a 'controller pattern', which allows applications and tools to listen to a central state manager (etcd) and take action when something happens. The controller pattern allows users to create decoupled experiences without having to worry about how other components are integrated. An operator is a purpose-built application that manages a specific type of component using this same pattern. You can check the entire list of operators at Awesome Operators.

All about the AWS Service Operator
Generally, users who need to integrate Amazon DynamoDB with an application running in Kubernetes, or deploy an S3 bucket for their application to use, would reach for tools such as AWS CloudFormation or Hashicorp Terraform and then have to create a way to deploy those resources. This requires the user to behave as an operator to manage and maintain the entire service lifecycle. Users can now skip all of the above steps and rely on Kubernetes' built-in control loop, which stores a desired state within the API server for both the Kubernetes components and the AWS services needed. The AWS Service Operator models AWS services as Custom Resource Definitions (CRDs) in Kubernetes and applies those definitions to a user's cluster. A developer can model their entire application architecture, from the container to ingress to AWS services, backing it with a single YAML manifest. This reduces the time it takes to create new applications and helps keep applications in the desired state.
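As a sketch of what such a CRD-based manifest looks like, a DynamoDB table declared as a Kubernetes object might resemble the following. The field names are recalled from the project's announcement examples and should be treated as illustrative rather than authoritative:

```yaml
# Hypothetical sketch of an AWS Service Operator resource; field names
# are illustrative - check the project's GitHub examples for the real schema.
apiVersion: service-operator.aws/v1alpha1
kind: DynamoDB
metadata:
  name: demo-table
spec:
  hashAttribute:
    name: id            # partition key
    type: S
  readCapacityUnits: 5
  writeCapacityUnits: 5
```

Applying this with `kubectl apply -f table.yaml` would hand the provisioning work to the operator's control loop instead of a separate CloudFormation or Terraform run.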
The AWS Service Operator exposes a way to manage DynamoDB Tables, S3 Buckets, Amazon Elastic Container Registry (Amazon ECR) Repositories, SNS Topics, SQS Queues, and SNS Subscriptions, with many more integrations coming soon. Users seem pretty excited about this update, judging by the reactions on Hacker News. You can learn more about this announcement on the AWS Service Operator project on GitHub, and head over to the official blog to explore how to use the AWS Service Operator to create a DynamoDB table and deploy an application that uses the table after it has been created.

Limited Availability of DigitalOcean Kubernetes announced!
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

JFrog, a DevOps based artifact management platform, bags a $165 million Series D funding

Sugandha Lahoti
05 Oct 2018
2 min read
JFrog, the DevOps-based artifact management platform, announced a $165 million Series D funding round yesterday, led by Insight Venture Partners. The secured funding is expected to drive JFrog product innovation, support rapid expansion into new markets, and accelerate both organic and inorganic growth. Other new investors included Spark Capital and Geodesic Capital, as well as existing investors Battery Ventures, Sapphire Ventures, Scale Venture Partners, Dell Technologies Capital and Vintage Investment Partners. Additional JFrog investors include Gemini VC Israel, Qumra Capital and VMware.

JFrog transforms the way software is updated by offering an end-to-end, universal, highly available software release platform. The platform is used for storing, securing, monitoring and distributing binaries for all technologies, including Docker, Go, Helm, Maven, npm, NuGet, PyPI, and more. As of now, according to the company, more than 5 million developers use JFrog Artifactory as their system of record when they build and release software. It also supports multiple deployment options, with its products available in a hybrid model, on-premise, and across major cloud platforms: Amazon Web Services, Google Cloud Platform, and Microsoft Azure.

The announcement comes on the heels of Microsoft's $7.5 billion purchase of the coding-collaboration site GitHub earlier this year. Since its Series C funding round in 2016, the company has seen more than 500% sales growth and expanded its reach to over 4,500 customers, including more than 70% of the Fortune 100. It continues to add 100 new commercial logos per month and supports the world's open source communities with its Bintray binary hub. Bintray powers 700K community projects distributing over 5.5M unique software releases that generate over 3 billion downloads a month.

Read more about the announcement in JFrog's official press release.

OmniSci, formerly MapD, gets $55 million in series C funding
Microsoft's GitHub acquisition is good for the open source community
Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"


Limited Availability of DigitalOcean Kubernetes announced!

Melisha Dsouza
03 Oct 2018
3 min read
On Monday, DigitalOcean announced that DigitalOcean Kubernetes, which was available in Early Access, is now accessible as Limited Availability. DigitalOcean Kubernetes simplifies the container deployment process that accompanies plain Kubernetes and offers Kubernetes container hosting services. Incorporating DigitalOcean's trademark simplicity and ease of use, it aims to reduce the headache involved in setting up, managing and securing Kubernetes clusters. DigitalOcean, incidentally, is also behind Hacktoberfest, which runs all of October in partnership with GitHub to promote open source contribution.

The Early Access availability was well received by users, who commented on the simplicity of configuring and provisioning a cluster and appreciated that deploying and running containerized services consumed hardly any time. Users also surfaced issues and feedback that was used to increase reliability and resolve a number of bugs, improving the user experience in the Limited Availability of DigitalOcean Kubernetes. The team also notes that during Early Access it had a limited set of free hardware resources for users to deploy to, which restricted the total number of users it could grant access. In the Limited Availability phase, the team hopes to open up access to anyone who requests it. That being said, Limited Availability will be a paid product.

Why should users consider DigitalOcean Kubernetes?
Each customer has their own dedicated managed cluster, which provides security and isolation for their containerized applications with access to the full Kubernetes API. DigitalOcean products provide storage for any amount of data. Cloud Firewalls make it easy to manage network traffic in and out of the Kubernetes cluster, and DigitalOcean provides cluster security scanning capabilities to alert users of flaws and vulnerabilities. In typical Kubernetes environments, metrics, logs, and events can be lost if nodes are spun down; to help developers learn from the performance of past environments, DigitalOcean stores this information separately from the node indefinitely. To know more about these features, head over to the official blog page.

Some benefits for users of Limited Availability: users will be able to provision Droplet workers in many more regions, with full support. To test out their containers in an orchestrated environment, they can start with a single-node cluster using a $5/mo Droplet. As they scale their applications, users can add worker pools of various Droplet sizes, attach persistent storage using DigitalOcean Block Storage for $0.10/GB per month, and expose Kubernetes services with a public IP using $10/mo Load Balancers, a highly available service designed to protect against application or hardware failures while spreading traffic across available resources.

Users seem really excited about this upgrade, judging by the reactions on the DigitalOcean blog. Users who have already signed up for Early Access will receive an email shortly with details about how to get started. To know more about this news, head over to DigitalOcean's blog post.

Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads


Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference, Chaos Conf. Not only has the company raised $18 million in its Series B funding round, it has also launched a brand new feature. Application Level Fault Injection (ALFI) brings a whole new dimension to the Gremlin platform, as it allows engineering teams to run resiliency tests - or 'chaos experiments' - at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (and containers are only a recent addition).

Bringing chaos engineering to serverless applications
One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue its broader mission to help engineering teams improve the resiliency of their software in a manageable and accessible way. Matt Fornaciari, Gremlin CTO and co-founder, said: “With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It’s a tough problem to solve because the host is abstracted and it’s a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn’t possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services.”

One of the great benefits of ALFI is that it should help engineers tackle types of threats that might be missed by focusing on infrastructure alone. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering
It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend that is still in an emergent phase. If Gremlin can develop a product that makes chaos engineering not only relatively accessible but also palatable for those making technical decisions, we might start to see things changing. It's clear that Redpoint Ventures, the VC firm leading Gremlin's Series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tunguz said: "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We're thrilled to join them on this journey."


Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS

Melisha Dsouza
28 Sep 2018
3 min read
As promised by the Kubernetes team earlier this month, Kubernetes 1.12 now stands released! With a focus on internal improvements,  the release includes two highly-anticipated features- general availability of Kubelet TLS Bootstrap and Support for Azure Virtual Machine Scale Sets (VMSS). This promises to provide better security, availability, resiliency, and ease of use for faster delivery of production based applications. Let’s dive into the features of Kubernetes 1.12 #1 General Availability of Kubelet TLS Bootstrap The team has made the Kubelet TLS Bootstrap generally available. This feature significantly streamlines Kubernetes’ ability to add and remove nodes to the cluster. Cluster operators are responsible for ensuring the TLS assets they manage remain up-to-date and can be rotated in the face of security events. Kubelet server certificate bootstrap and rotation (beta) will introduce a process for generating a key locally and then issuing a Certificate Signing Request to the cluster API server to get an associated certificate signed by the cluster’s root certificate authority. As certificates approach expiration, the same mechanism will be used to request an updated certificate. #2 Stable Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler Azure Virtual Machine Scale Sets (VMSS) allows users to create and manage a homogenous VM pool. This pool can automatically increase or decrease based on demand or a set schedule. Users can easily manage, scale, and load balance multiple VMs to provide high availability and application resiliency which will be ideal for large-scale applications that can run as Kubernetes workloads. The stable support will allow Kubernetes to manage the scaling of containerized applications with Azure VMSS. Users will have the ability to integrate the applications with cluster-autoscaler to automatically adjust the size of the Kubernetes clusters. 
#3 Additional Feature Updates
Encryption at rest via KMS is now in beta. It adds multiple encryption providers, including Google Cloud KMS, Azure Key Vault, AWS KMS, and Hashicorp Vault, which encrypt data as it is stored to etcd.
RuntimeClass is a new cluster-scoped resource that surfaces container runtime properties to the control plane.
Topology aware dynamic provisioning is now in beta. Storage resources can now understand where they live.
Configurable pod process namespace sharing enables users to configure containers within a pod to share a common PID namespace by setting an option in the PodSpec.
Vertical scaling of pods will help vary the resource limits on a pod over its lifetime.
Snapshot/restore functionality for Kubernetes and CSI will provide a standardized API design and add PV snapshot/restore support for CSI volume drivers.

To explore these features in depth, the team will be hosting a 5 Days of Kubernetes series next week. Users will be given a walkthrough of the following features:
Day 1 - Kubelet TLS Bootstrap
Day 2 - Support for Azure Virtual Machine Scale Sets (VMSS) and Cluster-Autoscaler
Day 3 - Snapshots Functionality
Day 4 - RuntimeClass
Day 5 - Topology Resources

Additionally, users can join the members of the release team on November 6th at 10 am PDT in a webinar that will cover the major features in this release. You can check out the release on GitHub. If you would like to know more about this release, head over to the official Kubernetes blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
GNU Shepherd 0.5.0 releases

Savia Lobo
27 Sep 2018
1 min read
Yesterday, the GNU Daemon Shepherd community announced the release of GNU Shepherd 0.5.0. GNU Shepherd, formerly known as GNU dmd, is a service manager written in Guile that looks after the herd of system services. It provides a replacement for the service-managing capabilities of SysV-init (or any other init) with both a powerful and beautiful dependency-based system and a convenient interface.

GNU Shepherd 0.5.0 contains new features and bug fixes and was bootstrapped with tools including:
Autoconf 2.69
Automake 1.16.1
Makeinfo 6.5
Help2man 1.47.6

Changes in GNU Shepherd 0.5.0
Services now have a 'replacement' slot.
In this version, restarting a service will also restart its dependent services.
When running as PID 1 on GNU/Linux, Shepherd halts upon ctrl-alt-del.
Actions can now be invoked on services which are not in the current running state.
This version supports Guile 3.0; users need to have Guile version >= 2.0.13.
Unused runlevel code has been removed.

Updated translations in this version include es, fr, pt_BR, and sv. To know more about this release in detail, visit the official GNU website.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Network programming 101 with GAWK (GNU AWK)
GNU Octave: data analysis examples
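One of the behavioural changes above is that restarting a service now also restarts its dependents. Shepherd itself is written in Guile, but the propagation it performs can be sketched in a few lines of Python (the service names and data model here are hypothetical, purely for illustration):

```python
def restart_set(service, dependents):
    """Return the set of services to restart when `service` restarts:
    the service itself plus everything that transitively depends on it."""
    to_restart, stack = set(), [service]
    while stack:
        s = stack.pop()
        if s not in to_restart:
            to_restart.add(s)
            stack.extend(dependents.get(s, []))  # follow reverse dependencies
    return to_restart

# sshd depends on networking; web depends on sshd
deps = {"networking": ["sshd"], "sshd": ["web"]}
assert restart_set("networking", deps) == {"networking", "sshd", "web"}
assert restart_set("web", deps) == {"web"}
```

Restarting a leaf service touches only itself, while restarting something low in the dependency graph, like networking, cascades to everything built on top of it.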

Kubernetes 1.12 is releasing next week with updates to its storage, security and much more!

Melisha Dsouza
21 Sep 2018
4 min read
Kubernetes 1.12 will be released on Tuesday, the 25th of September 2018. The updated release comes with improvements to security and storage, cloud provider support, and other internal changes. Let's take a look at the four domains that will be majorly impacted by this update.

#1 Security
Stability for Kubelet TLS bootstrap: The Kubelet TLS bootstrap will now have a stable version. This was also covered in the blog post Kubernetes Security: RBAC and TLS. The kubelet can generate a private key and a certificate signing request (CSR) to get the corresponding certificate.
Kubelet server TLS certificate automatic rotation (beta): Kubelets are able to rotate both client and server certificates. Rotation is controlled through the RotateKubeletClientCertificate and RotateKubeletServerCertificate feature flags in the kubelet, which are now enabled by default.
Egress and IPBlock support for NetworkPolicy: NetworkPolicy objects support an egress or "to" section to allow or deny traffic based on IP ranges or Kubernetes metadata. NetworkPolicy objects also support CIDR IP blocks in rule definitions, so users can combine Kubernetes-specific selectors with IP-based ones for both ingress and egress policies.
Encryption at rest: Data encryption at rest can be achieved using Google Key Management Service as an encryption provider. Read more about this in KMS providers for data encryption.

#2 Storage
Snapshot/restore volume support for Kubernetes: VolumeSnapshotContent and VolumeSnapshot API resources can be used to create volume snapshots for users and administrators.
Topology aware dynamic provisioning, Kubernetes CSI topology support (beta): Topology aware dynamic provisioning allows a Pod to request one or more Persistent Volumes (PVs) with topology that is compatible with the Pod's other scheduling constraints, such as resource requirements and affinity/anti-affinity policies.
When using multi-zone clusters, pods can be spread across zones in a specific region. The volume binding mode controls the point at which volume binding and dynamic provisioning happen.
Automatic detection of node type: When the dynamic volume limits feature is enabled, Kubernetes automatically determines the node type and supports the appropriate number of attachable volumes for the node and vendor.

#3 Support for cloud providers
Support for Azure Availability Zones: Kubernetes 1.12 brings support for Azure availability zones. Nodes within each availability zone will be added with the label failure-domain.beta.kubernetes.io/zone=<region>-<AZ>, and the Azure managed disks storage class will be provisioned taking this into account.
Stable support for Azure Virtual Machine Scale Sets: This feature adds support for Azure Virtual Machine Scale Sets, a technology that lets users create and manage a group of identical, load-balanced virtual machines.
Azure support in cluster-autoscaler (stable): This feature adds support for the Azure Cluster Autoscaler. The cluster autoscaler allows clusters to grow as resource demands increase, scaling based on pending pods.

#4 Better support for Kubernetes internals
Easier installation and upgrades through ComponentConfig: In earlier Kubernetes versions, modifying the base configuration of the core cluster components was not easily automatable. ComponentConfig is an ongoing effort to make component configuration more dynamic and directly reachable through the Kubernetes API.
Improved multi-platform compatibility: Kubernetes aims to support multiple architectures, including arm, arm64, ppc64le, s390x, and Windows platforms. Automated CI e2e conformance tests have been deployed to ensure compatibility going forward.
Quota by priority: scopeSelector can be used to create pods at a specific priority, and users can control a pod's consumption of system resources based on the pod's priority.
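Returning to the security section above: an IPBlock rule names a CIDR with optional "except" sub-ranges carved out of it. The allow/deny decision can be sketched with Python's standard ipaddress module (the rule shape mirrors, but is not, the actual NetworkPolicy schema):

```python
import ipaddress

def ip_allowed(ip: str, cidr: str, except_cidrs=()) -> bool:
    """True if `ip` falls inside `cidr` but outside all `except_cidrs`."""
    addr = ipaddress.ip_address(ip)
    if addr not in ipaddress.ip_network(cidr):
        return False
    return not any(addr in ipaddress.ip_network(e) for e in except_cidrs)

# Allow 172.17.0.0/16 except the 172.17.1.0/24 sub-range
assert ip_allowed("172.17.2.5", "172.17.0.0/16", ["172.17.1.0/24"])
assert not ip_allowed("172.17.1.9", "172.17.0.0/16", ["172.17.1.0/24"])
assert not ip_allowed("10.0.0.1", "172.17.0.0/16")
```

A real NetworkPolicy combines several such ipBlock entries with podSelector and namespaceSelector rules, but each CIDR rule evaluates exactly like this: in the block, and not in any exception.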
Apart from these four major areas that will be upgraded in Kubernetes 1.12, additional features to look out for are arbitrary/custom metrics in the Horizontal Pod Autoscaler, pod vertical scaling, mount namespace propagation, and much more! To know about all the upgrades in Kubernetes 1.12, head over to Sysdig's blog.

Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Kubernetes 1.11 is here!
VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
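Before moving on, the cluster-autoscaler behaviour from the cloud-provider section, growing the cluster based on pending pods, reduces to a simple calculation: how many extra nodes are needed to fit the pods that cannot currently be scheduled? A deliberately simplified sketch (the uniform pods-per-node capacity model is an assumption; the real autoscaler simulates actual scheduling):

```python
import math

def desired_node_count(current_nodes: int, pending_pods: int,
                       pods_per_node: int, max_nodes: int) -> int:
    """Grow the cluster just enough to schedule all pending pods,
    capped at `max_nodes` (toy uniform-capacity model)."""
    if pending_pods <= 0:
        return current_nodes
    extra = math.ceil(pending_pods / pods_per_node)
    return min(current_nodes + extra, max_nodes)

assert desired_node_count(3, 0, 10, 10) == 3      # nothing pending: no change
assert desired_node_count(3, 25, 10, 10) == 6     # 25 pods need 3 more nodes
assert desired_node_count(3, 200, 10, 10) == 10   # capped at the maximum
```

With Azure VMSS support, the "add a node" step becomes a scale-set resize rather than manual VM provisioning.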

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform': a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation.

CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?
According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stresses that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov.

Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "As Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours.
Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0
Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs: everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of site reliability engineers (SREs). It's designed to manage the relationships between various services and to ensure that they not only interact with each other in the way they should, but do so with minimum downtime. The Kong team appears to have a huge amount of confidence in the launch of the platform; the extent to which they can grow their customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.

Read next:
How Gremlin is making chaos engineering accessible [Interview]
Is the 'commons clause' a threat to open source?
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace

Melisha Dsouza
11 Sep 2018
4 min read
Microsoft is rebranding Visual Studio Team Services (VSTS) to Azure DevOps, along with Azure DevOps Server, the successor to Team Foundation Server (TFS). Microsoft understands that DevOps has become increasingly critical to a team's success, and the rebranding is aimed at helping teams ship higher-quality software in a shorter span of time.

Azure DevOps supports both public and private cloud configurations. The services are open and extensible and designed to work with any type of application, framework, platform, or cloud. Since the Azure DevOps services work well together, users can gain more control over their projects. Azure DevOps is free for open source projects and small projects with up to five users. For larger teams, the cost ranges from $30 per month to $6,150 per month, depending upon the number of users.

VSTS users will be upgraded to Azure DevOps projects automatically without any loss of functionality. URLs will change from abc.visualstudio.com to dev.azure.com/abc, and redirects from visualstudio.com URLs will be supported to avoid broken links. New users will get the update starting 10th September 2018, and existing users can expect the update in the coming months.

Key features in Azure DevOps:

#1 Azure Boards
Users can keep track of their work at every development stage with Kanban boards, backlogs, team dashboards, and custom reporting. Built-in scrum boards and planning tools help in planning meetings, while powerful analytics tools offer new insights into the health and status of projects.

#2 Azure Artifacts
Users can easily manage Maven, npm, and NuGet package feeds from public and private sources. Storing and sharing code across small teams and large enterprises is now efficient thanks to Azure Artifacts. Users can share packages and use built-in CI/CD, versioning, and testing, and they can easily access all their artifacts in builds and releases.

#3 Azure Repos
Users can enjoy unlimited cloud-hosted private Git repos for their projects.
They can securely connect to their Git repos and push code from any IDE, editor, or Git client. Code-aware search helps them find what they are looking for. They can perform effective Git code reviews and use forks to promote collaboration with inner source workflows. Azure Repos helps users maintain high code quality by requiring code reviewer sign-off, successful builds, and passing tests before pull requests can be merged.

#4 Azure Test Plans
Users can improve their code quality using planned and exploratory testing services for their apps. Test plans help users capture rich scenario data, test their applications, and take advantage of end-to-end traceability.

#5 Azure Pipelines
There's more in store for VSTS users. For a seamless developer experience, Azure Pipelines is now also available in the GitHub Marketplace. Users can easily configure a CI/CD pipeline for any Azure application using their preferred language and framework. These pipelines can be built and deployed with ease, and they provide users with status reports, annotated code, and detailed information on changes to the repo within the GitHub interface. Pipelines work with any platform, including Azure, Amazon Web Services, and Google Cloud Platform, and can run on apps across operating systems including Android, iOS, Linux, macOS, and Windows. Pipelines are free for open source projects.

Microsoft has tried to improve the user experience by introducing these upgrades. Are you excited yet? You can learn more at the Microsoft live Azure DevOps keynote today at 8:00 a.m. Pacific and a workshop with Q&A on September 17 at 8:30 a.m. Pacific on Microsoft's events page. You can read all the details of the announcement on Microsoft's official blog.
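An Azure Pipelines build is driven by a YAML file checked into the repository root. As a rough, hedged illustration of what such a file looked like around launch (the agent image name and the npm steps are assumptions for a Node.js project; consult the current documentation for supported values):

```yaml
# azure-pipelines.yml -- minimal illustrative CI pipeline
trigger:
- master                    # run CI on pushes to master

pool:
  vmImage: 'ubuntu-16.04'   # hosted Linux agent (image names change over time)

steps:
- script: echo "Hello, Azure Pipelines"
  displayName: 'Run a one-line script'

- script: |
    npm install
    npm test
  displayName: 'Install dependencies and run tests'
```

Because the pipeline definition lives next to the code, pull requests that change the build process are reviewed and versioned like any other change.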
Real clouds take out Microsoft's Azure Cloud; users, developers suffer indefinite Azure outage
Machine Learning as a Service (MLaaS): How Google Cloud Platform, Microsoft Azure, and AWS are democratizing Artificial Intelligence
8 ways Artificial Intelligence can improve DevOps

Atlassian acquires OpsGenie, launches Jira Ops to make incident response more powerful

Bhagyashree R
05 Sep 2018
2 min read
Yesterday, Atlassian made two major announcements: the acquisition of OpsGenie and the release of Jira Ops. Both products aim to help IT operations teams resolve downtime quickly and reduce the occurrence of these incidents over time. Atlassian is an Australian enterprise software company that develops collaboration software for teams, with products including Jira, Confluence, HipChat, Bitbucket, and Stash.

OpsGenie: Alert the right people at the right time
(Image source: Atlassian)
OpsGenie is an IT alert and notification management tool that helps deliver critical alerts to all the right people (operations and software development teams). It uses a sophisticated combination of scheduling, escalation paths, and notifications that take things like time zones and holidays into account. OpsGenie is a prompt and reliable alerting system that comes with the following features:
It integrates with monitoring, ticketing, and chat tools to notify the team through multiple channels, providing the information needed for your team to immediately begin resolution.
It provides various notification methods, such as email, SMS, push, phone call, and group chat, to ensure alerts are seen by users.
You can build and modify schedules and define escalation rules within one interface.
It tracks everything related to alerts and incidents, which helps you gain insight into areas of success and opportunities for improvement.
You can define escalation policies and on-call schedules with rotations to notify the right people and escalate when necessary.

Jira Ops: Resolve incidents faster
(Image source: Atlassian)
Jira Ops is a unified incident command center that provides the response team with a single place for response coordination. It is integrated with OpsGenie, Slack, Statuspage, PagerDuty, and xMatters. It guides the response team through the response workflow and automates common steps, such as creating a new Slack room for each incident.
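The on-call schedules with rotations that OpsGenie manages can be sketched as a simple round-robin over fixed-length shifts. A toy Python version (the week-long shift, anchor date, and team names are assumptions for illustration, not OpsGenie's actual API or model):

```python
from datetime import datetime

def on_call(team, start: datetime, now: datetime, shift_days: int = 7) -> str:
    """Return who is on call at `now`, rotating through `team`
    every `shift_days` days from the `start` anchor."""
    elapsed_days = (now - start).days
    return team[(elapsed_days // shift_days) % len(team)]

team = ["alice", "bob", "carol"]
start = datetime(2018, 9, 3)  # rotation anchor (a Monday)

assert on_call(team, start, datetime(2018, 9, 5)) == "alice"   # week 1
assert on_call(team, start, datetime(2018, 9, 12)) == "bob"    # week 2
assert on_call(team, start, datetime(2018, 9, 26)) == "alice"  # wraps around
```

A production scheduler layers overrides, time zones, and holiday handling on top of this basic rotation, which is exactly the complexity OpsGenie is built to absorb.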
Jira Ops is available through Atlassian's early access program. It enables you to resolve downtime quickly by providing the following functionality:
It quickly alerts you about what is affected and what the associated impacts are.
You can check the status, severity level, and duration of the incident.
You can see real-time response activities.
You can also find the associated Slack channel, current incident manager, and technical lead.

You can find more details on OpsGenie and Jira Ops on Atlassian's official website.

Atlassian sells Hipchat IP to Slack
Atlassian open sources Escalator, a Kubernetes autoscaler project
Docker isn't going anywhere