
Tech News - DevOps

82 Articles

The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes

Vincy Davis
11 Sep 2019
4 min read
Today, Sumo Logic revealed the fourth edition of their “Continuous Intelligence Report: The State of Modern Applications and DevSecOps in the Cloud.” The primary goal of this report is to present data-driven insights, best practices and the latest trends by analyzing technology adoption among Sumo Logic customers. The data in the report is derived from 2,000+ Sumo Logic customers running applications on cloud platforms like AWS, Azure and Google Cloud Platform, as well as on-premise environments.

This year, the Continuous Intelligence report finds that, with a 50% increase in enterprise adoption and deployments, multi-cloud is growing faster than any other modern infrastructure category.

In a statement, Kalyan Ramanathan, vice president of product marketing for Sumo Logic, says, “the increased adoption of services to enable and secure a multi-cloud strategy are adding more complexity and noise, which current legacy analytics solutions can’t handle. To address this complexity, companies will need a continuous intelligence strategy that consolidates all of their data into a single pane of glass to close the intelligence gap. Sumo Logic provides this strategy as a cloud-native, continuous intelligence platform, delivered as a service.”

Key findings of the Modern App Report 2019

Kubernetes highly prevalent in multi-cloud environments

Kubernetes offers broad multi-cloud support and can be used by many organizations to run applications across cloud environments. The 2019 Modern App survey reveals that 1 in 5 AWS customers use Kubernetes.

Image Source: The Continuous Intelligence Report

The report states, “Enterprises are betting on Kubernetes to drive their multi-cloud strategies. It is imperative that enterprises deploy apps on Kubernetes to easily orchestrate/manage/scale apps and also retain the flexibility to port apps across different clouds.”

Open source has disrupted the modern application stack

Open source has disrupted the modern application stack, with open source solutions leading in container orchestration, infrastructure and application services. Four out of six application infrastructure platforms are now dominated by open source. Orchestration technologies, one of these open source categories, are used not only to automate the deployment and scaling of containers, but also to ensure the reliability of the applications and workloads running on them.

Image Source: The Continuous Intelligence Report

Adoption of individual IaaS services suggests enterprises are trying to avoid vendor lock-in

The Modern App 2019 survey finds that typical enterprises use only 15 of the 150+ discrete services marketed and available for consumption in AWS. The adoption of AWS services shows that basic compute, storage, database, network, and identity services make up the top 10 adopted services in AWS. Services like management, tooling, and advanced security are adopted at a lower rate than the core infrastructure services (50% or less).

Image Source: The Continuous Intelligence Report

Serverless technology, mainly AWS Lambda, continues to rise

Serverless technologies like AWS Lambda continue to grow steeply, as they are a cost-effective option to speed cloud and DevOps deployment automation. The Modern App Report 2019 reveals that AWS Lambda adoption grew to 36% in 2019, up 24% from 2017. It is also being used in several non-production use cases.

As enterprises increase their cloud migration and digital transformation efforts, AWS Lambda has become one of the top 10 AWS services by adoption. “Lambda usage for application or deployment automation technology should be considered for every production application,” reads the report.

Image Source: The Continuous Intelligence Report

The 2019 Continuous Intelligence Report is the first industry report to quantitatively define the state of the modern application stack and its implications for the technologies growing around it. Professionals like cloud architects, Site Reliability Engineers (SREs), data engineers, operations teams, DevOps practitioners and Chief Information Security Officers (CISOs) can learn how to build, run and secure modern applications and cloud infrastructures by leveraging information from this report. If you are interested to know more, you can check out the full report on the Sumo Logic blog.

Other news in Cloud and Networking

- Containous introduces Maesh, a lightweight and simple Service Mesh to ease microservices adoption
- Amazon announces improved VPC networking for AWS Lambda functions
- Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more


VMware signs definitive agreement to acquire Pivotal Software and Carbon Black

Vincy Davis
23 Aug 2019
3 min read
Yesterday, VMware announced in a press release that it has entered a definitive agreement to acquire Carbon Black, a cloud-native endpoint security software developer. According to the agreement, “VMware will acquire Carbon Black in an all cash transaction for $26 per share, representing an enterprise value of $2.1 billion.”

VMware intends to use Carbon Black’s big data and behavioral analytics to offer customers advanced threat detection and behavioral insight to defend against sophisticated attacks. Consequently, it aspires to protect clients through big data, behavioral analytics, and AI.

Pat Gelsinger, the CEO of VMware, says, “By bringing Carbon Black into the VMware family, we are now taking a huge step forward in security and delivering an enterprise-grade platform to administer and protect workloads, applications, and networks.” He adds, “With this acquisition, we will also take a significant leadership position in security for the new age of modern applications delivered from any cloud to any device.”

Yesterday, after much speculation, VMware also announced that it has acquired Pivotal Software, a cloud-native platform provider, for an enterprise value of $2.7 billion. Dell Technologies is a major stakeholder in both companies.

Lately, VMware has been heavily investing in Kubernetes. Last year, it launched VMware Kubernetes Engine (VKE) to offer Kubernetes-as-a-Service. This year, Pivotal teamed up with the Heroku team to create Cloud Native Buildpacks for Kubernetes and recently also launched Pivotal Spring Runtime for Kubernetes. With Pivotal, VMware plans to “deliver a comprehensive portfolio of products, tools and services necessary to build, run and manage modern applications on Kubernetes infrastructure with velocity and efficiency.”

Read More: VMware’s plan to acquire Pivotal Software reflects a rise in Pivotal’s shares

Gelsinger told ZDNet that both these “acquisitions address two critical technology priorities of all businesses today — building modern, enterprise-grade applications and protecting enterprise workloads and clients.” Gelsinger also pointed out that multi-cloud, digital transformation, and the increasing trend of moving “applications to the cloud and access it over distributed networks and from a diversity of endpoints” are significant reasons for placing high stakes on security.

It is clear that by acquiring Carbon Black and Pivotal Software, the cloud computing and virtualization software company is seeking to expand its range of products and services with an ultimate focus on security and Kubernetes.

A user on Hacker News comments, “I'm not surprised at the Pivotal acquisition. VMware is determined to succeed at Kubernetes. There is already a lot of integration with Pivotal's Kubernetes distribution both at a technical as well as a business level.” Developers around the world are also excited to see what the future holds for VMware, Carbon Black, and Pivotal Software.

https://twitter.com/rkagal1/status/1164852719594680321
https://twitter.com/CyberFavourite/status/1164656928913596417
https://twitter.com/arashg_/status/1164785525120618498
https://twitter.com/jambay/status/1164683358128857088
https://twitter.com/AnnoyedMerican/status/1164646153389875200

Per the press release, both transactions are expected to close in the second half of VMware’s fiscal year, which ends January 31, 2020. Interested readers can read the VMware press releases on acquiring Carbon Black and Pivotal Software for more information.

- VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
- VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform
- VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service


Pivotal open sources kpack, a Kubernetes-native image build service

Sugandha Lahoti
23 Aug 2019
2 min read
In April, Pivotal and Heroku teamed up to create Cloud Native Buildpacks for Kubernetes. Cloud Native Buildpacks turn source code into production-ready, OCI-compatible Docker images and are based on the popular buildpack model. Yesterday, they open-sourced kpack, a set of experimental build-service Kubernetes resource controllers. Basically, kpack is Kubernetes’ native way to build and update containers. It automates the creation and update of container images that can be run anywhere.

Pivotal’s commercial implementation of kpack comes via Pivotal Build Service. Users can run it atop Kubernetes to boost developer productivity. The Build Service integrates kpack with buildpacks and the Kubernetes permissions model. kpack presents a CRD as its interface, and users can interact with it through all Kubernetes API tooling, including kubectl.

Pivotal has open-sourced kpack for two reasons, as mentioned in their blog post: “First, to provide Build Service’s container building functionality and declarative logic as a consumable component that can be used by the community in other great products. Second, to provide a first-class interface, to create and modify image resources for those who desire more granular control.”

Many companies and communities have announced that they will be using kpack in their projects. Project riff will use kpack to build functions to handle events. The Cloud Foundry community plans to feature kpack as the new app staging mechanism in the Cloud Foundry Application Runtime.

Check out the kpack repo for more details. You can also request alpha access to Build Service.

In other news, Pivotal and VMware, the former’s parent company, are negotiating a deal for VMware to acquire Pivotal, as per a recent regulatory filing from Dell. VMware, Pivotal, and Dell have jointly filed the document informing government regulators about the potential transaction.

- Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
- VMware’s plan to acquire Pivotal Software reflects a rise in Pivotal’s shares
- Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
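As a side note on the CRD interface mentioned above: because kpack exposes its builds as custom resources, any Kubernetes client can query them. Below is a minimal, illustrative sketch using the official Kubernetes Python client. The CRD group and version shown (build.pivotal.io/v1alpha1) are an assumption based on kpack's early releases, and the status field name may differ; check the CRDs actually installed in your cluster.

```python
# Illustrative sketch: list kpack Image resources and their latest built
# images. The group/version below are assumptions based on kpack's early
# releases; verify them with the CRDs installed in your own cluster.
from kubernetes import client, config

def list_kpack_images(namespace: str = "default") -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    api = client.CustomObjectsApi()
    images = api.list_namespaced_custom_object(
        group="build.pivotal.io",
        version="v1alpha1",
        namespace=namespace,
        plural="images",
    )
    for item in images.get("items", []):
        name = item["metadata"]["name"]
        latest = item.get("status", {}).get("latestImage", "<not built yet>")
        print(f"{name} -> {latest}")

if __name__ == "__main__":
    list_kpack_images()
```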


Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops

Vincy Davis
22 Aug 2019
3 min read
Yesterday, Puppet announced a vulnerability remediation solution called Puppet Remediate, which aims to reduce the time IT teams take to identify, prioritize and rectify mission-critical vulnerabilities.

Matt Waxman, head of product at Puppet, said, “There is a major gap between sophisticated scanning tools that identify vulnerabilities and the fragmented and manual, error-prone approach of fixing these vulnerabilities.” He adds, “Puppet Remediate closes this gap giving IT the insight they need to end the current soul-crushing work associated with vulnerability remediation to ensure they are keeping their organization safe.”

Puppet Remediate aims to speed up remediation by working with security partners who have access to potentially sensitive vulnerability data. It will discover vulnerabilities depending on the type of infrastructure resources affected by them. Next, Puppet Remediate will take instant action “to remediate vulnerable packages without requiring any agent technology on the vulnerable systems on both Linux and Windows through SSH and WinRM”, says Puppet.

Key features in Puppet Remediate

Shared vulnerability data between security and IT Ops

Puppet Remediate unifies infrastructure data and vulnerability data to help IT Ops access vulnerability data in real time, reducing delays and eliminating risks associated with manual handover of data.

Risk-based prioritization

It will assist IT teams to prioritize critical systems and identify vulnerabilities within the organization's systems based on infrastructure context, giving IT teams more clarity on what to fix first.

Agentless remediation

IT teams will be able to take immediate action to rectify a vulnerability without leaving the application and without needing any agent technology on the vulnerable systems.

Channel partners will provide Puppet with established infrastructure and InfoSec practices

Puppet has selected initial channel partners based on their established infrastructure and InfoSec practices. The channel partners will help Puppet Remediate bridge the gap between security and IT practices in enterprises. Fishtech, a cybersecurity solutions provider, and Bitbone, a Germany-based computer software company, are the initial channel partners for Puppet Remediate.

Sebastian Scheuring, CEO of Bitbone AG, says, “Puppet Remediate offers real added value with its new functions to our customers. It drastically automates the workflow of vulnerability remediation through taking out the manual, mundane and error-prone steps that are required to remediate vulnerabilities. Continuous scans, remediation tasks and short cycles of update processes significantly increase the security level of IT environments.”

Check out the website to know more about Puppet Remediate.

- Listen: Puppet’s VP of Ecosystem Engineering Nigel Kersten talks about key DevOps challenges [Podcast]
- Puppet announces updates in a bid to help organizations manage their “automation footprint”
- “This is John. He literally wrote the book on Puppet” – An Interview with John Arundel


Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, “the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems.” kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform and Ansible to get you going with your virtualization, bare metal or cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and allows the project to be used as a demo framework for these Ansible roles and Terraform modules.

There are three parts to the long-term ideals for kdevops:

- Provisioning the required virtual hosts/cloud environment
- Provisioning your requirements
- Running whatever you want

Ansible is used to fetch all the required Ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two Ansible roles: one to update ~/.ssh/config and one to update the systems with basic development preference files, such as .gitconfig or bashrc hacks; this last part is handled by the devconfig Ansible role. Since ~/.ssh/config is updated, you can then run further Ansible roles manually when using Vagrant. If using Terraform for cloud environments, it updates ~/.ssh/config directly without Ansible; however, since access to hosts on cloud environments can vary in time, running all Ansible roles is expected to be done manually (a short illustrative sketch of this provision-then-configure flow appears at the end of this post).

What you can do with kdevops

- Full Vagrant provisioning, including updating your ~/.ssh/config
- Terraform provisioning on different cloud providers
- Running Ansible to install dependencies on Debian
- Using Ansible to clone, compile and boot into any random kernel git tree with a supplied config
- Updating ~/.ssh/config for Terraform, first tested with the OpenStack provider, with both generic and special minicloud support. Other Terraform providers just require making use of the newly published Terraform module add-host-ssh-config

On Hacker News, this release has gained positive reviews, but the only concern for users is whether it has anything to do with DevOps, as it appears to be automated test environment provisioning. One of them comments, “This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?”

On Reddit as well, Linux users are happy with this setup and find it really promising. One of the comments reads, “I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you.”

To know more about this release, check out the official announcement page as well as the GitHub page.

- Why do IT teams need to transition from DevOps to DevSecOps?
- Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
- Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images
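The provision-then-configure flow described above (bring up hosts with Vagrant or Terraform, then apply Ansible roles) can be pictured with a small wrapper script. This is a hedged sketch of the general workflow only, not kdevops's actual entry point; the inventory and playbook file names are hypothetical.

```python
# Illustrative only: a thin wrapper around the Vagrant + Ansible workflow
# the announcement describes. The playbook/inventory names are hypothetical;
# kdevops itself drives these tools through its own roles and tooling.
import subprocess
import sys

def run(cmd):
    print(f"$ {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

def provision_and_configure():
    # 1. Provision virtual hosts (Vagrant for local virtualization;
    #    Terraform would play this role for cloud environments).
    run(["vagrant", "up"])
    # 2. Apply Ansible roles against the provisioned hosts, e.g. a
    #    development-preferences role like the devconfig role mentioned above.
    run(["ansible-playbook", "-i", "hosts.ini", "playbooks/devconfig.yml"])

if __name__ == "__main__":
    try:
        provision_and_configure()
    except subprocess.CalledProcessError as err:
        sys.exit(err.returncode)
```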


MacStadium announces ‘Orka’ (Orchestration with Kubernetes on Apple)

Savia Lobo
13 Aug 2019
2 min read
Today, MacStadium, an enterprise-class cloud solution for Apple Mac infrastructure, announced ‘Orka’ (Orchestration with Kubernetes on Apple). Orka is a new virtualization layer for Mac build infrastructure based on Docker and Kubernetes technology. It offers a solution for orchestrating macOS in a cloud environment using Kubernetes on genuine Apple Mac hardware. With Orka, users can apply native Kubernetes commands to macOS virtual machines (VMs) on genuine Apple hardware.

“While Kubernetes and Docker are not new to full-stack developers, a solution like this has not existed in the Apple ecosystem before,” MacStadium wrote in an email statement to us.

“The reality is that most enterprises need to develop applications for Apple platforms, but these enterprises prefer to use nimble, software-defined build environments,” said Greg McGraw, Chief Executive Officer, MacStadium. “With Orka, MacStadium’s flagship orchestration platform, developers and DevOps teams now have access to a software-defined Mac cloud experience that treats infrastructure-as-code, similar to what they are accustomed to using everywhere else.”

Developers creating apps for Mac or iOS must build on genuine Apple hardware. Until now, however, popular orchestration and container technologies like Kubernetes and Docker have been unable to leverage Mac operating systems. With Orka, Apple OS development teams can use container technology features in a Mac cloud, the same way they build on other cloud platforms like AWS, Azure or GCP.

As part of its initial release, Orka will ship with a plugin for Jenkins, an open-source automation tool that enables developers to build, test and deploy their software using continuous integration techniques. MacStadium will also present a session at DevOps World | Jenkins World in San Francisco (August 12-15) demonstrating how Orka integrates with Jenkins build pipelines and how it leverages the capability and power of Docker/Kubernetes in a Mac development environment.

To know more about Orka in detail, visit MacStadium’s official website.

- CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
- Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
- Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]

LXD 3.15 releases with a switch to dqlite 1.0 branch, new hardware VLAN and MAC filtering on SR-IOV and more!

Vincy Davis
17 Jul 2019
5 min read
A few days ago, the LXD team announced the release of LXD 3.15. The major highlight of the release is the transition of LXD to the dqlite 1.0 branch, which will bring better performance and reliability to cluster users and standalone installations.

LXD is a next-generation system container manager built on Linux containers (LXC). It is free software, written in Go and developed under the Apache 2 license.

LXD 3.15 brings new features including hardware VLAN and MAC filtering on SR-IOV, a new storage-size option for lxd-p2c, a Ceph FS storage backend for custom volumes and more. It also includes major improvements to DHCP lease handling and cluster heartbeat handling, as well as bug fixes.

What’s new in LXD 3.15?

Hardware VLAN and MAC filtering on SR-IOV

The security.mac_filtering and vlan properties are now available to SR-IOV devices. This prevents MAC spoofing from the container, as LXD directly controls the matching SR-IOV options on the virtual function. It will also perform hardware filtering at the VF level in the case of VLANs.

New storage-size option for lxd-p2c

A new --storage-size option has been added in LXD 3.15. When used along with --storage, it allows specifying the desired volume size to use for the container.

Ceph FS storage backend for custom volumes

Ceph FS can now be used as a storage driver for LXD, with support limited to custom storage volumes. Its support includes size restrictions and native snapshots when the server, server configuration, and client kernel support those features. Ceph FS also allows attaching the same custom volume to multiple containers at the same time, even if they’re located on different hosts.

IPv4 and IPv6 filtering

IPv4 and IPv6 filtering (spoof protection) enables multiple containers to share the same underlying bridge without worrying about containers spoofing each other's addresses, hijacking traffic or causing connectivity issues.

Read Also: Internet governance project (IGP) survey on IPV6 adoption, initial reports

Major improvements in LXD 3.15

Switch to dqlite 1.0

After a year of running all LXD servers on the original implementation of the distributed sqlite database, LXD 3.15 has finally switched to its 1.0 branch. This transition reduces the number of external dependencies, CPU usage and memory usage for the database. It also makes it easier to debug issues and integrate better with more complex database operations when running clusters.

Reworked DHCP lease handling

In previous versions, LXD’s handling of DHCP was pretty limited. With LXD 3.15, LXD itself is able to issue DHCP requests to the dnsmasq server based on what’s currently in the DHCP lease table. This allows a lease to be released when a container’s configuration is altered or a container is deleted, all without ever needing to restart dnsmasq.

Reworked cluster heartbeat handling

With LXD 3.15, the internal heartbeat (the list of database nodes) is extended to include the most recent version information from the cluster as well as the status of all cluster members. This means that only the cluster leader has to retrieve the data, and the remaining members get a consistent view of everything within 10 seconds.

Some of the bug fixes in LXD 3.15

- Linker flags have been updated.
- Path to the host’s communication socket has been fixed: doc/devlxd
- Basic install instructions have been added: doc/README
- Translations from Weblate have been updated: i18n
- Unused arg from setNetworkRoutes has been removed: lxd/containers
- Unit tests have been updated: lxd/db

Developers are happy with the new features and improvements included in LXD 3.15. A user on Reddit says, “The IPv4 and IPv6 spoof protection filters is going to make a few people very happy. As well as ceph FS support as RBD doesn't like sharing volumes with multiple host.”

Some users compared LXD with Docker, with most preferring the former. A Redditor gave a detailed comparison of the two platforms. The comment read, “The high-level difference is that Docker is for "application containers" and LXD is for "system containers". For Docker that means things like, say, your application process being PID 1, and generally being forced to do things the "Docker way".

“LXD, on the other hand, provides flexibility to use containers the way you want to. This means containers end up being closer to your development environment, e.g. by using systemd if you want it; they can be ephemeral like Docker, but only if you want to”, the user further added.

“So, LXD provides containers that are closer in feel to a regular installation or VM, but with the performance benefit of containers. You can even use LXD containers as Docker hosts, which is what I often do.”

For the complete list of updates, head over to the LXD 3.15 release notes.

- LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more
- LXD 3.8 released with automated container snapshots, ZFS compression support and more!
- Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port
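To make the new SR-IOV filtering property a little more concrete, here is a small, hedged sketch of switching it on for an existing NIC device by shelling out to the lxc CLI from Python. The container name ("c1") and device name ("eth0") are hypothetical placeholders, not values from the release notes, and you should confirm the device-set syntax against your LXD version.

```python
# Illustrative only: enable the security.mac_filtering property (new for
# SR-IOV NICs in LXD 3.15) on an existing container device via the lxc CLI.
# Container and device names below are hypothetical; adjust to your setup.
import subprocess

def enable_mac_filtering(container: str = "c1", device: str = "eth0") -> None:
    # security.mac_filtering prevents MAC spoofing from inside the container
    # by controlling the matching SR-IOV options on the virtual function.
    subprocess.run(
        ["lxc", "config", "device", "set",
         container, device, "security.mac_filtering", "true"],
        check=True,
    )

if __name__ == "__main__":
    enable_mac_filtering()
```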


Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images

Vincy Davis
03 Jul 2019
3 min read
Yesterday, Youhana Naseim, the Group Engineering Manager at Azure Pipelines, provided a post-mortem of the bug due to which the sqlite3 module for Python went missing from the Ubuntu 16.04 image from May 14th. The Azure DevOps team identified the bug on May 31st and fixed it on June 26th. Naseim apologized to all the affected customers for the delay in detecting and fixing the issue.

https://twitter.com/hawl01475954/status/1134053763608530945
https://twitter.com/ProCode1/status/1134325517891411968

How the Azure DevOps team detected and fixed the issue

The Azure DevOps team upgraded the versions of Python included in the Ubuntu 16.04 image with the M151 payload. These versions of Python’s build scripts consider sqlite3 an optional module, hence the builds were carried out successfully despite the missing sqlite3 module. Naseim says, “While we have test coverage to check for the inclusion of several modules, we did not have coverage for sqlite3 which was the only missing module.”

The issue was first reported via the Azure Developer Community on May 20th by a user who received the M151 deployment containing the bug, but the Azure support team escalated it only after receiving more reports during the M152 deployment on May 31st. The support team then proceeded with the M153 deployment after posting a workaround for the issue, as the M152 deployment would take at least 10 days. Further, due to an internal miscommunication, the support team didn’t start the M153 deployment to Ring 0 until June 13th.

To safeguard the production environment, Azure DevOps rolls out changes in a progressive and controlled manner via the ring model of deployments. The team resumed deployment to Ring 1 on June 17th and reached Ring 2 by June 20th. Finally, after a few failures, the team fully completed the M153 deployment by June 26th.

Azure’s future workarounds to deliver timely fixes

The Azure team has set out plans to improve their deployment and hotfix processes with an aim to deliver timely fixes. Their long-term plan is to provide customers with the ability to revert to the previous image as a quick workaround for issues introduced in new images. The medium-term and short-term plans are given below.

Medium-term plans

- Add the ability to better compare what changed on the images to catch any unexpected discrepancies that the test suite might miss.
- Increase the speed and reliability of the deployment process.

Short-term plans

- Build a full CI pipeline for image generation to verify images daily.
- Add test coverage for all modules in the Python standard library, including sqlite3 (a minimal sketch of such a check appears at the end of this article).
- Improve communication channels so the support team can escalate issues more quickly.
- Add telemetry, so it is possible to detect and diagnose issues more quickly.
- Implement measures that enable reverting to prior image versions quickly to mitigate issues faster.

Visit the Azure DevOps status site for more details.

Read More

- Ubuntu has decided to drop i386 (32-bit) architecture from Ubuntu 19.10 onwards
- Xubuntu 19.04 releases with latest Xfce package releases, new wallpapers and more
- Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32
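One of the short-term plans above calls for test coverage of every Python standard-library module, including sqlite3. A minimal sketch of such an image-validation check is shown below; it makes no assumptions about the actual test suite Azure Pipelines added, and the module list is only an example.

```python
# Illustrative sketch of an image-validation check for optional stdlib
# modules such as sqlite3. It simply verifies that each listed module can
# be imported by the Python interpreter baked into the image.
import importlib
import sys

REQUIRED_STDLIB_MODULES = ["sqlite3", "ssl", "zlib", "lzma", "bz2", "ctypes"]

def check_modules(modules):
    """Return a list of (module, error) pairs for modules that fail to import."""
    missing = []
    for name in modules:
        try:
            importlib.import_module(name)
        except ImportError as err:
            missing.append((name, str(err)))
    return missing

if __name__ == "__main__":
    missing = check_modules(REQUIRED_STDLIB_MODULES)
    if missing:
        for name, reason in missing:
            print(f"MISSING: {name} ({reason})", file=sys.stderr)
        sys.exit(1)
    print("All required standard-library modules are importable.")
```

Run as part of an image build, a check like this fails the build loudly instead of letting an optional module slip away unnoticed.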


Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can be run both in the cloud and on-premise. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks. Google’s ML containers don’t support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come along with various tools used for running deep learning algorithms. These tools include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text. Google Kubernetes Engine clusters are also among these tools, used for orchestrating multiple container deployments. The containers also come with access to packages and tools such as Nvidia’s CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work on cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.

Mike Cheng, software engineer at Google Cloud, said in a blog post, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).”

For more information, visit the AI Platform Deep Learning Containers documentation.

- Do Google Ads secretly track Stack Overflow users?
- CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
- Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
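To picture the local-prototyping workflow described above, here is a small, hedged sketch that launches one of the containers with Docker and exposes its bundled notebook environment. The image path and port below are assumptions made for illustration, not values from the announcement; check the Deep Learning Containers documentation for the actual image names.

```python
# Illustrative only: run a Deep Learning Container locally and expose its
# bundled notebook server. The image name and port are assumptions for this
# sketch -- consult the official documentation for real values.
import subprocess

IMAGE = "gcr.io/deeplearning-platform-release/tf-cpu"  # hypothetical image tag
PORT = 8080                                            # assumed notebook port

def run_container(workdir: str = "/tmp/notebooks") -> None:
    subprocess.run(
        [
            "docker", "run", "-d",
            "-p", f"{PORT}:{PORT}",
            "-v", f"{workdir}:/home",   # mount local notebooks into the container
            IMAGE,
        ],
        check=True,
    )
    print(f"The notebook UI should become available at http://localhost:{PORT}")

if __name__ == "__main__":
    run_container()
```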


Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data, and typically comprise tens to thousands of interconnected servers consuming a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity.

At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The “Low Carbon Kubernetes Scheduler” can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity.

In their paper the researchers highlight, “All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy”. Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy. Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment. For example, Oracle Cloud is currently 100% carbon neutral in Europe, but not in other regions.

The Kubernetes scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. In order to effectively demonstrate the scheduler's ability to perform global load balancing, the researchers evaluated it against the metric of solar irradiance.

“While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres”, the paper mentions. Demand-side management (DSM) refers to any initiative that affects how and when electricity is required by consumers.

Image Source: CEUR-WS.org

Existing schedulers work with consideration to singular data centres rather than taking a more global view. The Low Carbon Scheduler, on the other hand, considers carbon intensity across regions, since scaling a large number of containers up or down can be done in a matter of seconds.

Each national electric grid contains electricity generated from a variable mix of sources. The carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity fed into the grid at regular intervals to the organizations operating the grid (for example the National Grid in the UK), in real time, via APIs. These APIs typically allow retrieval of the production volumes and thus make it possible to calculate the carbon intensity in real time. The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks regions to identify the one with the lowest carbon intensity.

Note: For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity (www.entsoe.eu), and for the UK this is the Balancing Mechanism Reporting Service (www.elexon.co.uk).

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has, to a large extent, won the container orchestration war. It has support for extendability and plugins, which makes it the “most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction”.

Kubernetes allows schedulers to run in parallel, which means the scheduler does not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes. It need only apply a scheduling layer to complement the existing capabilities offered by Kubernetes. According to the researchers, “Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres”.

The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler):

- adding scheduling rules to the scheduler source code and recompiling,
- implementing one’s own scheduler process that runs instead of, or alongside, kube-scheduler, or
- implementing a scheduler extender.

Evaluating the performance of the low carbon Kubernetes scheduler

The researchers recorded the carbon intensities of the countries in which the major cloud providers operate data centres between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC. The paper includes a table of the countries where the largest public cloud providers operated data centres as of April 2019.

Table Source: CEUR-WS.org

They further ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the intervals, Norway in 0.31%, France in 0.11% and Sweden in 0.01%. However, the list of the least carbon-intense countries only contains locations in central Europe.

To demonstrate Kubernetes' ability to handle globally distributed deployments, the researchers chose to optimize placement to regions with the greatest degree of solar irradiance, termed a Heliotropic Scheduler. This scheduler is termed ‘heliotropic’ in order to differentiate it from a ‘follow-the-sun’ application management policy, which relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day). A ‘heliotropic’ policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant.

They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research. Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported projects.

The researchers say: “Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity”.

While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres’ infrastructure, the Low Carbon Scheduler provides a proof-of-concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.

- Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
- VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
- Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
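The ranking step described in this article (poll grid APIs, rank regions, place work where intensity is lowest) is easy to picture in a few lines of code. Below is a hedged, illustrative sketch only: the API endpoint, response field and region names are hypothetical placeholders and not from the paper, which relies on operators' own APIs such as ENTSO-E for the EU and Elexon's BMRS for the UK, polled on a 30-minute cadence.

```python
# Illustrative only: the core ranking step of a carbon-aware scheduler.
# The endpoint, response field and region list are hypothetical; a real
# deployment would query grid operators' APIs for live carbon intensity.
import requests

CANDIDATE_REGIONS = ["europe-west1", "europe-north1", "uk-south"]  # hypothetical
INTENSITY_API = "https://example.org/carbon-intensity"             # hypothetical

def carbon_intensity(region: str) -> float:
    """Return grams of CO2 per kWh for the grid serving `region`."""
    resp = requests.get(INTENSITY_API, params={"region": region}, timeout=10)
    resp.raise_for_status()
    return float(resp.json()["gCO2_per_kWh"])

def pick_lowest_carbon_region(regions) -> str:
    # Rank regions by current carbon intensity and return the cleanest one.
    return sorted(regions, key=carbon_intensity)[0]

if __name__ == "__main__":
    best = pick_lowest_carbon_region(CANDIDATE_REGIONS)
    print(f"Schedule the next batch of pods in: {best}")
```

In a full implementation this ranking would sit behind one of the extension points listed above, for example a scheduler extender that filters and prioritises nodes according to the region label they carry, leaving node-level bin-packing to kube-scheduler itself.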

Puppet announces updates in a bid to help organizations manage their "automation footprint"

Richard Gall
03 May 2019
3 min read
There are murmurs on the internet that tools like Puppet are being killed off by Kubernetes. The reality is a little more complex. True, Kubernetes poses some challenges to various players in the infrastructure automation market, but they nevertheless remain important tools for engineers charged with managing infrastructure. Kubernetes is forcing this market to adapt, and with Puppet announcing new tools and features in Puppet Enterprise 2019.1 yesterday, it's clear that the team is making the necessary strides to remain a key part of the infrastructure automation landscape.

Update: This article was amended to highlight that Puppet Enterprise is a distinct product separate from Continuous Delivery for Puppet Enterprise.

What's new for Puppet Enterprise 2019.1?

There are two key elements to the Puppet announcement: enhanced integration with Puppet Bolt, an open source, agentless task runner, and improved capabilities with Continuous Delivery for Puppet Enterprise.

Puppet Bolt

Puppet Bolt, the Puppet team argue, offers a really simple way to get started with infrastructure automation "without requiring an agent installed on a remote target." The Puppet team explain that Puppet Bolt essentially allows users to expand the scope of what they can automate without losing the consistency and control that you'd expect when using a tool like Puppet. This has some significant benefits in the context of Kubernetes.

Bryan Belanger, Principal Consultant at Autostructure, said, "We love using Puppet Bolt because it leverages our existing Puppet roles and classifications allowing us to easily make changes to large groups of servers and upgrade Kubernetes clusters quicker, which is often a pain if done manually." Belanger continues, "with the help of Puppet Bolt, we were also able to fix more than 1,000 servers within five minutes and upgrade our Kubernetes clusters within four hours, which included coding and tasks."

Continuous Delivery for Puppet Enterprise

Updates to the Continuous Delivery product aim to make DevOps practices easier. The Puppet team are clearly trying to make it easier for organizations to empower their colleagues and continue to build a culture where engineers are not simply encouraged to be responsible for code deployment, but also able to do it with minimal fuss. Module Delivery Pipelines now mean modules can be independently deployed without blocking others, while Simplified Puppet Deployments aims to make it easier for engineers that aren't familiar with Puppet to "push simple infrastructure changes immediately and easily perform complex rolling deployments to a group of nodes in batches in one step."

There is also another dimension that aims to help engineers take proactive steps to tackle resiliency and security issues: with Impact Analysis, teams will be able to look at the potential impact of a deployment before it's done.

Read next: “This is John. He literally wrote the book on Puppet” – An Interview with John Arundel

What's the big idea behind this announcement?

The over-arching narrative coming from the top is about supporting teams to scale their DevOps processes. It's about making organizations' 'automation footprint' more manageable. "IT teams need a simple way to get started with automation and a solution that grows with them as their automation footprint grows," Matt Waxman, Head of Product at Puppet, explains. "You shouldn’t have to throw away your existing scripts or tools to scale automation across your organization. Organizations need a solution that is extensible — one that complements their current automation efforts and helps them scale beyond individuals to multiple teams."

Puppet Enterprise 2019.1 will be generally available on May 7, 2019. Learn more here.


Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year’s DockerCon featured multiple .NET demos showing how Docker can be used both for modern applications and for older applications with traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker.

The team said that “most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0.” “This is the first release in which we’ve made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak”, Microsoft writes in one of their blog posts.

The team also mentions that they are invested in making .NET Core a true container runtime. They look forward to hardening .NET’s runtime to make it container-aware and function efficiently in low-memory environments.

Let’s have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement.

The team also describes a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit is set on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both these changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.

Added PowerShell to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. There are two main scenarios that having PowerShell inside the .NET Core SDK container image enables, which were not otherwise possible:

- Write .NET Core application Dockerfiles with PowerShell syntax, for any OS.
- Write .NET Core application/library build logic that can be easily containerized.

Note: PowerShell Core is now available as part of .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:

- Syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
- Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:

- Alpine: support tip and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip and the team will stop producing 3.8 images in a month or two.
- Debian: support one Debian version per latest .NET Core version. This is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.
- Ubuntu: support one Ubuntu version per latest .NET Core version (18.04). As new Ubuntu LTS versions appear, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.
- Windows: support the cross-product of Nano Server and .NET Core versions.

ARM architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments.

Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft’s official blog post.

- DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
- Are Debian and Docker slowly losing popularity?
- Creating a Continuous Integration commit pipeline using Docker [Tutorial]


You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for much of the last decade it has largely been the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today, however, marks a particularly important step for chaos engineering, as Gremlin has partnered with Spinnaker, the Netflix-built continuous deployment platform, to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately it means DevOps teams can think differently about chaos engineering. Gradually, this could shift the way we think about the practice, as it moves from localized experiments that require an in-depth understanding of one's infrastructure to something that is built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software.

At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It's a platform that has been specifically developed for highly distributed and hybrid systems. This makes it a great fit for Gremlin, and also highlights that the growth of chaos engineering is being driven by the move to cloud.

Adam Jordens, a core contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it’s more important than ever to understand how your cloud infrastructure behaves under stress.” Jordens continued, "by integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems.”

Kolton Andrus, Gremlin CEO and Co-Founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.

Google to be the founding member of CDF (Continuous Delivery Foundation)

Bhagyashree R
15 Mar 2019
3 min read
On Tuesday, Google announced that it is one of the founding members of the newly-formed Continuous Delivery Foundation (CDF). As a part of its membership, Google will be contributing to two projects, namely Spinnaker and Tekton.

About the Continuous Delivery Foundation

The formation of the CDF was announced at the Linux Foundation Open Source Leadership Summit on Tuesday. The CDF will act as a “vendor-neutral home” for some of the most important open source projects for continuous delivery and for specifications to speed up the release pipeline process.

https://twitter.com/linuxfoundation/status/1105515314899492864

The existing CI/CD ecosystem is heavily fragmented, which makes it difficult for developers and companies to decide on particular tooling for their projects. Also, DevOps practitioners often find it very challenging to gather guidance on software delivery best practices. The CDF was formed to make CI/CD tooling easier and to define the best practices and guidelines that will enable application developers to deliver better and more secure software at speed.

The CDF currently hosts some of the most popularly used CI/CD tools, including Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is backed by 20+ founding members, which include Alauda, Alibaba, Anchore, Armory.io, Atos, Autodesk, Capital One, CircleCI, CloudBees, DeployHub, GitLab, Google, HSBC, Huawei, IBM, JFrog, Netflix, Puppet, Rancher, Red Hat, SAP, Snyk, and SumoLogic.

Why did Google join the CDF?

As a part of this foundation, Google will be working on Spinnaker and Tekton. Originally created by Netflix and jointly led by Netflix and Google, Spinnaker is an open source, multi-cloud delivery platform. It comes with various features for making continuous delivery reliable, including support for advanced deployment strategies, an open source canary analysis service named Kayenta, and more. The Spinnaker user community has great experience in the continuous delivery domain, and by joining the CDF Google aims to share that expertise with the broader community.

Tekton is a set of shared, open source components for building CI/CD systems. It allows you to build, test, and deploy applications across multiple environments such as virtual machines, serverless, Kubernetes, or Firebase. In the next few months, we can expect to see support for results and event triggering in Tekton. Google is also planning to work with CI/CD vendors to build an ecosystem of components that will allow users to use Tekton with existing tools like Jenkins X, Kubernetes native, and others.

Dan Lorenc, Staff Software Engineer at Google Cloud, shared Google’s motivation behind joining the CDF: “Continuous Delivery is a critical part of modern software development, but today space is heavily fragmented. The Tekton project addresses this problem by working with the open source community and other leading vendors to collaborate on the modernization of CI/CD infrastructure.”

Kim Lewandowski, Product Manager at Google Cloud, said, “The ability to deploy code securely and as fast as possible is top of mind for developers across the industry. Only through best practices and industry-led specifications will developers realize a reliable and portable way to take advantage of continuous delivery solutions. Google is excited to be a founding member of the CDF and to work with the community to foster innovation for deploying software anywhere.”

To know more, check out the official announcement on the Google Open Source blog.

- Google Cloud Console Incident Resolved!
- Cloudflare takes a step towards transparency by expanding its government warrant canaries
- Google to acquire cloud data migration start-up ‘Alooma’


LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more

Natasha Mathur
08 Mar 2019
2 min read
The LXD team released version 3.11 of LXD, its open source container management extension for Linux Containers (LXC), earlier this week. LXD 3.11 brings new features, minor improvements, and bug fixes. The LXD (‘Linux Daemon’) system container manager provides users with an experience similar to virtual machines. It is written in Go and builds on existing LXC features to create and manage Linux containers.

New features in LXD 3.11

Configurable snapshot expiry at creation time: LXD 3.11 allows users to set an expiry at snapshot creation time. Earlier, it was a hassle to manually create snapshots and then edit them to modify their expiry. At the API level, you can set the exact expiry timestamp, or set it to null to make a persistent snapshot despite any configured auto-expiry.

Progress reporting for publish operations: Progress information is now displayed when running lxc publish against a container or snapshot, similar to image transfers and container migrations.

Improvements

Minor improvements have been made to how the Candid authentication feature is handled by the CLI in LXD 3.11:

- Per-remote authentication cookies: Every remote now has its own “cookie jar”, and LXD’s behavior is now always identical when adding remotes. In prior releases, a shared “cookie jar” was used for all remotes, which could lead to inconsistent behavior.
- Candid preferred over TLS for new remotes: When using lxc remote add to add a new remote, Candid will be preferred over TLS authentication if the remote supports Candid. The authentication type can always be overridden using --auth-type.
- Remote list can now show the Candid domain: The remote list can now indicate what Candid domain is used.

Bug fixes

- A goroutine leak in ExecContainer has been fixed.
- The “client: fix goroutine leak in ExecContainer” change has been reverted.
- rest-api.md formatting has been updated.
- Translations from Weblate have been updated.
- Error handling in execIfAliases has been improved.
- Duplicate scheduled snapshots have been fixed.
- Failing backup import has been fixed.
- The test case covering the image sync scenario for the joined node has been updated.

For a complete list of changes, check out the official LXD 3.11 release notes.

- LXD 3.8 released with automated container snapshots, ZFS compression support and more!
- Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
- An update on Bcachefs, the “next generation Linux filesystem”