Tech News - DevOps

82 Articles

The future of Jenkins is cloud native and a faster development pace with increased stability

Prasad Ramesh
04 Sep 2018
4 min read
Jenkins has been a success for more than a decade now, mainly due to its extensibility, its community, and its general-purpose nature. But some challenges and problems have become more pronounced over time. Kohsuke Kawaguchi, the creator of Jenkins, is now planning steps to solve these problems and make the platform better.

Challenges in Jenkins

With growing competition in continuous integration (CI), the following limitations in Jenkins get in the way of teams. Some of them discourage admins from installing and using plugins.

Service instability: CI is a critical service nowadays. People are running bigger workloads, needing more plugins and high availability. Like instant messaging platforms, a CI service is expected to be online all the time. Jenkins is unable to keep up with this expectation, and a large instance requires a lot of overhead to keep it running. It is common for someone to restart Jenkins every day, and that delays processes. Errors need to be contained to a specific area without impacting the whole service.

Brittle configuration: Installing or upgrading plugins and tweaking job settings have caused side effects. This makes admins lose confidence that they can make these changes safely. There is a fear that the next upgrade might break something, cause problems for other teams, and affect delivery.

Assembly required: Jenkins requires an assembly of service blocks to make it work as a whole. As CI has become mainstream, users want something that can be deployed in a few clicks. Having too many choices is confusing and leads to uncertainty when assembling. This is not something that can be solved by creating more plugins.

Reduced development velocity: It is difficult for a contributor to make a change that spans multiple plugins. The tests do not give enough confidence to ship code; many of them do not run automatically and the coverage is not deep.
Changes and steps to make Jenkins better

There are two key efforts here: Cloud Native Jenkins and Jolt. Cloud Native Jenkins is a CI engine that runs on Kubernetes and has a different architecture; Jolt will continue Jenkins 2 with a faster development pace and increased stability.

Cloud Native Jenkins

Cloud Native Jenkins is a sub-project in the context of the Cloud Native SIG. It will use Kubernetes as its runtime and will have a new extensibility mechanism, to retain what works and to continue the development of the automation platform's ecosystem. Data will live on cloud-managed data services to achieve high availability and horizontal scalability, relieving admins of additional responsibilities. Configuration as Code and Jenkins Evergreen help with the brittleness. There are also plans to make Jenkins secure by default and to continue with Jenkins X, which has been received very well. The aim is to get things going in five clicks through easy integration with key services.

Jolt in Jenkins

Cloud Native Jenkins is not usable for everyone and targets only a particular set of functionalities. It also requires a platform that has limited adoption today, so Jenkins 2 will be continued at a faster pace. For this, Jolt in Jenkins is introduced. It is inspired by what happened to the development of Java SE: a change in the release model, shedding off parts to move faster. There will be a major version number change every couple of months. The platform needs to remain largely compatible, and the pace needs to justify any inconvenience put on users.

For more, visit the official Jenkins blog.

How to build and enable the Jenkins Mesos plugin
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
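Configuration as Code, mentioned above as one of the answers to Jenkins' brittleness, lets admins describe an instance declaratively and keep the file in version control instead of clicking through the UI. A minimal sketch of such a file is shown below; the exact keys available depend on the plugins installed, so treat this as an illustration rather than a reference:

```yaml
# jenkins.yaml - a minimal Configuration as Code sketch (illustrative values)
jenkins:
  systemMessage: "Configured as code - no manual UI tweaks"
  numExecutors: 2
  securityRealm:
    local:
      allowsSignup: false
      users:
        - id: "admin"
          password: "${ADMIN_PASSWORD}"   # injected from the environment, never hard-coded
```

The Configuration as Code plugin typically picks this file up from the path in the CASC_JENKINS_CONFIG environment variable at startup, so an upgrade or rebuild reproduces the same instance.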


Russian censorship board threatens to block search giant Yandex due to pirated content

Sugandha Lahoti
30 Aug 2018
3 min read
Update, 31st August 2018: Yandex has refused to remove the pirated content. According to a statement from the company, Yandex believes the law is being misinterpreted: while pirated content must be removed from the sites hosting it, the removal of links to such content from search engines falls outside the scope of the current legislation. "In accordance with the Federal Law On Information, Information Technologies, and Information Protection, the mechanics are as follows: pirated content should be blocked by site owners and on the so-called mirrors of these sites," Yandex says. A Yandex spokesperson said that the company works in "full compliance" with the law: "We will work with market participants to find a solution within the existing legal framework." Check out more on Interfax.

Roskomnadzor has found Russian search giant Yandex guilty of linking to pirated content. The Federal Service for Supervision of Communications, Information Technology and Mass Media, or Roskomnadzor, is the Russian federal executive body responsible for censorship in media and telecommunications. The Moscow City Court found the website guilty of including links to pirated content last week. The search giant was asked to remove those links, and the mandate was reiterated by Roskomnadzor this week. Per the authorities, if Yandex does not take action by the end of today, its video platform will be blocked by the country's ISPs.

Last week, major Russian broadcasters Gazprom-Media, National Media Group (NMG), and others protested against pirated content by removing their TV channels from Yandex's 'TV Online' service. They said they would allow their content to appear again only if Yandex removed pirated content completely. Following this, Gazprom-Media filed a copyright infringement complaint with the Moscow City Court. Subsequently, the Moscow City Court compelled Yandex to remove links to pirated TV shows belonging to Gazprom-Media.
Pirated content has been a long-standing challenge for the telecom sector and is yet to be eradicated. It leads to lost revenue, and watching illegally distributed movies violates copyright and intellectual property laws. The Yandex website is heavily populated with pirated content, especially TV shows and movies.

In a statement to Interfax, Deputy Head of Roskomnadzor Vadim Subbotin warned that Yandex.video will be blocked Thursday night (August 30) if the pirate links aren't removed. "If the company does not take measures, then according to the law, the Yandex.Video service must be blocked. There's nowhere to go," Subbotin said. The search giant has not yet responded to the accusation. You can check out the detailed coverage of the news on Interfax.

Adblocking and the Future of the Web
Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran
YouTube has a $25 million plan to counter fake news and misinformation


Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti
30 Aug 2018
3 min read
Google today announced that it is stepping back from managing the Kubernetes infrastructure and is granting the Cloud Native Computing Foundation (CNCF) $9M in GCP credits to fund a successful transition. The credits are split over three years to cover infrastructure costs. Google is also handing over operational control of the Kubernetes project to the CNCF community, which will now own day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure.

Kubernetes was created by Google in 2014. Since then, Google has provided the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. In passing the reins to the CNCF, Google's goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the worldwide network and storage capacity required to serve container downloads. A large part of the grant will also fund scalability testing, which runs 150,000 containers across 5,000 virtual machines.

"Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption," said Sarah Novotny, Head of Open Source Strategy for Google Cloud.

The CNCF includes a large number of companies, among them Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP.
All of these stand to benefit from the work of the CNCF and the Kubernetes community. With this move, Google is perhaps also transferring the load of running the Kubernetes infrastructure to these members. As mentioned in its blog post, Google looks forward to seeing the new ideas and efficiencies that Kubernetes contributors bring to the project's operations. To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.

Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Kubernetes Containerd 1.1 Integration is now generally available


Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering experiments alongside Docker.

Chaos engineering and containers have always been closely related: arguably, the loosely coupled architectural style of modern, container-driven software has in turn increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery does two things: it makes it easier for engineers to identify specific Docker containers, and, more importantly, it allows them to simulate attacks or errors within those containerized environments. The real benefit is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a chaos test on can ordinarily be very challenging and time-consuming.

Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." The new feature could save engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly."

Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?
As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way in containerization, its market share growing healthily, making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves.

Chaos engineering is still in its infancy: this year's Skill Up report found that it remains on the periphery of many developers' awareness. That could quickly change, however, and Gremlin is clearly working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.
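Gremlin's attacks run against real infrastructure, but the core loop of a chaos experiment can be sketched in plain Python: inject a fault into a dependency and assert that the system still meets its steady-state expectation. The names below are invented for illustration and are not Gremlin's API:

```python
import random

def flaky_service(fail_rate, rng):
    """Simulates a containerized dependency that sometimes fails (the injected fault)."""
    if rng.random() < fail_rate:
        raise ConnectionError("injected fault")
    return "ok"

def resilient_call(fail_rate, rng, retries=3):
    """The system under test: retries, then degrades gracefully instead of crashing."""
    for _ in range(retries):
        try:
            return flaky_service(fail_rate, rng)
        except ConnectionError:
            continue
    return "fallback"

def chaos_experiment(fail_rate, trials=1000, seed=42):
    """Steady-state hypothesis: every request gets *some* answer, never an unhandled error."""
    rng = random.Random(seed)
    results = [resilient_call(fail_rate, rng) for _ in range(trials)]
    return all(r in ("ok", "fallback") for r in results)
```

Even with a 100% injected failure rate, the experiment passes because the retry-plus-fallback path absorbs the faults; remove the fallback and the same experiment would surface the weakness immediately.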


Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to run a number of apps on the same old servers, and it made packaging and shipping programs easy. The same cannot be said about Docker now, as the company faces public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only by users logged into the Docker Store. Its quest for "improving the user experience" is clearly facing major roadblocks.

Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch." The issue affected ALL systems worldwide that were configured with the Docker repository. All Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown: it became impossible to run a system update or upgrade on an existing system. This seven-hour, worldwide outage received little tech news coverage; all that appeared was a few messages on a GitHub issue. You would expect Docker to be a little more careful after that controversy, but lo and behold, here comes yet another badly managed change.

The current matter in question

On June 20th, 2018, GitHub and Reddit were abuzz with comments from confused Docker users who couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. To this, a Docker spokesperson responded that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward.
This led to a string of accusations from dedicated Docker users on GitHub. The issue is still ongoing, and with no further statements from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently?

It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, some of the major tools in configuration management. Specifically for CI/CD, Docker makes it achievable to:

Set up local development environments that are exactly like a live server.
Run multiple development environments from the same host, each with unique software, operating systems, and configurations.
Test projects on new or different servers.
Allow multiple users to work on the same project with the exact same settings, regardless of the local host environment.

It also ensures that applications running in containers are completely segregated and isolated from each other, which means you get complete control over traffic flow and management.

So, what's the verdict?

Most users saw Docker's move as manipulative, since the company is literally asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of the move. One Reddit user noted that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux.
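The reproducible-environment benefits listed above come down to one artifact: a Dockerfile that pins the base image, dependencies, and entry point, so every developer and CI machine builds the same stack. A minimal, illustrative sketch (the image tag, file names, and app are placeholders, not from any specific project):

```dockerfile
# Hypothetical dev environment: the same image runs locally and on the live server
FROM python:3.6-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
CMD ["python", "app.py"]
```

Built with `docker build -t myapp .` and started with `docker run myapp`, this behaves identically on any host with Docker installed, which is exactly the "exactly like a live server" property described above.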
Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It will be interesting to see how many companies stick around and use Docker despite the rollercoaster ride its users are put through. You can find further opinions on this matter on Reddit.

Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?


IBM launches Nabla containers: A sandbox more secure than Docker containers

Savia Lobo
17 Jul 2018
4 min read
Docker, and container technology in general, have gotten a buzzing response from developers around the globe. Container technology, with enticing features such as being lightweight and DevOps-focused, has gradually been taking over virtual machines. However, many developers and organizations still prefer virtual machines because they fear containers are less secure than VMs.

Enter IBM's Nabla containers. IBM recently launched its brand new container tech, claiming it is more secure than Docker or any other containers on the market. It is a sandbox designed for strong isolation on a host: these specialized containers cut OS system calls down to a bare minimum, with as little code as possible, which is expected to decrease the surface area available for an attack.

What are the leading causes of security breaches in containers?

IBM Research's distinguished engineer, James Bottomley, highlights two fundamental kinds of security problems affecting containers and virtual machines (VMs): the Vertical Attack Profile (VAP) and the Horizontal Attack Profile (HAP).

The Vertical Attack Profile, or VAP, comprises the code traversed in the stack to provide a service, from input to database update to output. Like all other code, this VAP code is prone to bugs: the more code one traverses, the greater the chance of exposure to a security loophole, though the density of these bugs varies. This profile is relatively benign, as the primary actors in hostile security attacks are cloud tenants and cloud service providers (CSPs), which come much more into the picture in the HAP.

The Horizontal Attack Profile, or HAP, covers stack security hole exploits that can jump into the physical server host or other VMs. These exploits cause what is called a failure of containment.
Here, one part of the Vertical Attack Profile belongs to the tenants (the guest kernel, guest OS, and application) while the other part (the hypervisor and host OS) belongs to the CSPs. The CSP vertical part has an additional problem: any exploit in this piece of the stack can be used to jump onto either the host itself or any other tenant VMs running on the host. James states that any horizontal security failure, or HAP, is a potential business-destroying event for a CSP, so one has to take care to prevent such failures. On the other hand, an exploit occurring in the VAP owned by the tenant is seen as a tenant-only problem, to be located and fixed by the tenant. This tells us that the larger the profile (for instance, a CSP's), the greater the probability of being exploited. HAP breaches are not that common, but whenever they occur, they ruin the system; James calls HAPs "potentially business destroying events."

IBM Nabla containers can ease the threat of HAP attacks

Nabla containers achieve isolation by reducing the surface for an attack on the host. These containers make use of a library OS, also known as unikernel techniques, adapted from the Solo5 project. These techniques help Nabla containers avoid system calls and thereby reduce the attack surface: the containers use only 9 system calls, with the rest blocked through a Linux seccomp policy.

Per IBM Research, Nabla containers are more secure than other container technologies, including Docker, Google's gVisor (a container runtime sandbox), and even Kata Containers (an open-source lightweight VM to secure containers). Read more about IBM Nabla containers on the official GitHub website.

Docker isn't going anywhere
AWS Fargate makes container infrastructure management a piece of cake
Create a TeamCity project [Tutorial]
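Nabla enforces its tiny syscall allow-list with seccomp, and the same kernel mechanism is available to ordinary Docker users through a custom profile passed at run time. A deliberately minimal sketch follows; the syscall names here are illustrative, not Nabla's actual allow-list, and most real workloads need many more calls than this:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "exit_group", "clock_gettime"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Applied with `docker run --security-opt seccomp=profile.json ...`, every syscall outside the allow-list fails with an error, which is the same "deny by default, permit a handful" posture Nabla takes to shrink the HAP.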

Kubernetes 1.11 is here!

Vijin Boricha
28 Jun 2018
3 min read
This is the second release of Kubernetes in 2018. Kubernetes 1.11 comes with significant updates to features that revolve around the maturity, scalability, and flexibility of Kubernetes. This newest version brings storage and networking enhancements that make it possible to plug any kind of infrastructure, cloud or on-premise, into the Kubernetes system. Now let's dive into the key aspects of this release:

IPVS-based in-cluster service load balancing promotes to general availability

IPVS has a simpler programming interface than iptables and delivers high-performance in-kernel load balancing. In this release it has moved to general availability, where it provides better network throughput, better programming latency, and higher scalability limits. It is not yet the default option, but clusters can use it for production traffic.

CoreDNS graduates to general availability

CoreDNS has moved to general availability and is now the default option when using kubeadm. It is a flexible DNS server that directly integrates with the Kubernetes API. In comparison to the previous DNS server, CoreDNS has fewer moving parts: it is a single process that creates custom DNS entries to support flexible use cases. CoreDNS is also memory-safe, as it is written in Go.

Dynamic kubelet configuration moves to beta

It has always been difficult to update kubelet configurations in a running cluster, as kubelets are configured through command-line flags. With this feature moving to beta, one can configure kubelets in a live cluster through the API server.

CSI enhancements

Over the past few releases, CSI (Container Storage Interface) has been a major focus area. This service was moved to beta in version 1.10.
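The "single process, fewer moving parts" point about CoreDNS is visible in its configuration: one Corefile drives the whole server. A sketch resembling the kubeadm-era default is shown below; plugin names and options vary across CoreDNS versions, so treat this as illustrative rather than a drop-in config:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    proxy . /etc/resolv.conf
    cache 30
    reload
}
```

The kubernetes plugin serves records straight from the Kubernetes API, prometheus exposes metrics, and everything else (caching, upstream forwarding, health checks) lives in the same process.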
In this version, the Kubernetes team continues to enhance CSI with a number of new features, such as:

Alpha support for raw block volumes in CSI
Integration of CSI with the new kubelet plugin registration mechanism
An easier way to pass secrets to CSI plugins

Enhanced storage features

This release introduces online resizing of Persistent Volumes as an alpha feature. With it, users can increase the size of a PV without terminating pods or unmounting the volume. A user updates the PVC to request a new size, and the kubelet resizes the file system for the PVC.

Dynamic maximum volume count is introduced as an alpha feature. It enables in-tree volume plugins to specify the maximum number of volumes that can be attached to a node, allowing the limit to vary based on the node type. In earlier versions, the limit was configured through an environment variable.

The StorageObjectInUseProtection feature is now stable and prevents issues arising from deleting a Persistent Volume or a Persistent Volume Claim that is bound to an active pod.

You can learn more about Kubernetes 1.11 on the Kubernetes blog, and this version is available for download on GitHub. To get started with Kubernetes, check out our following books:

Learning Kubernetes [Video]
Kubernetes Cookbook - Second Edition
Mastering Kubernetes - Second Edition

Related Links

VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service
Rackspace now supports Kubernetes-as-a-Service
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
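The online volume-resize alpha feature described above works by editing the storage request on an existing PersistentVolumeClaim. A hedged sketch follows; the claim name and storage class are placeholders, and the cluster must have the resize feature gate enabled and a storage class with allowVolumeExpansion set to true:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc                     # existing claim, placeholder name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: resizable-sc     # placeholder; must allow volume expansion
  resources:
    requests:
      storage: 20Gi                  # bumped from a smaller size; no pod restart needed
```

Applying the updated claim triggers the resize, and the kubelet grows the file system while the volume stays mounted.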


VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently announced its move into Kubernetes-as-a-Service by launching VMware Kubernetes Engine (VKE), which provides a multi-cloud experience. VKE is a fully managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters.

Kubernetes lets users manage clusters of containers while also making it easier to move applications between public hosted clouds. By adding Kubernetes on cloud, VMware offers a managed service that reduces the complexity of running Kubernetes containers. VMware's Kubernetes engine will face stiff competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering.

VMware Kubernetes Engine (VKE) features include:

VMware Smart Cluster
VMware Smart Cluster is the selection of compute resources to constantly optimize resource usage, provide high availability, and reduce cost. It enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. With the smart cluster, users also get role-based access and visibility only into their predefined environment.

Fully managed by VMware
VKE is fully managed by VMware, which ensures that clusters always run efficiently, with multi-tenancy, seamless Kubernetes upgrades, high availability, and security.

Security by default
VKE is highly secure, with features such as multi-tenancy, deep policy control, dedicated AWS accounts per organization, logical network isolation, integrated identity, and access management with single sign-on.

Global availability
VKE has a region-agnostic user interface and is available across three AWS regions (US-East1, US-West2, and EU-West1), giving users a choice of region to run clusters in.
Read full coverage about the VMware Kubernetes Engine (VKE) on the official website. Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads Hortonworks partner with Google Cloud to enhance their Big Data strategy  


GitLab 11.0 released!

Savia Lobo
25 Jun 2018
2 min read
GitLab recently announced the release of GitLab 11.0, which includes major features such as Auto DevOps and License Management, among others.

The Auto DevOps feature is generally available in GitLab 11.0. It is a pre-built, fully featured CI/CD pipeline that automates the entire delivery process. With this feature, one simply commits code and Auto DevOps does the rest: building and testing the app; performing code quality, security, and license scans; and packaging, deploying, and monitoring the application.

Chris Hill, head of systems engineering for infotainment at Jaguar Land Rover, said, "We're excited about Auto DevOps, because it will allow us to focus on writing code and business value. GitLab can then handle the rest; automatically building, testing, deploying, and even monitoring our application."

Other highlights of the release:

License Management: automatically detects the licenses of a project's dependencies.
Enhanced security testing of code, containers, and dependencies: GitLab 11.0 extends the coverage of Static Application Security Testing (SAST) to include Scala and .NET.
Kubernetes integration features: if one needs to debug or check on a pod, one can review the Kubernetes pod logs directly from GitLab's deploy board.
Improved Web IDE: one can view CI/CD pipelines from the IDE and get immediate feedback if a pipeline fails. Switching tasks can be disruptive, so the updated Web IDE makes it easy to quickly switch to the next merge request to create, improve, or review without leaving the Web IDE.
Enhanced Epic and Roadmap views: GitLab 11.0 has an updated Epic/Roadmap navigation interface to make it easier to see the big picture and make planning easier.

Read more about GitLab 11.0 on GitLab's official website.

GitLab's new DevOps solution
GitLab open sources its Web IDE in GitLab 10.7
The Microsoft-GitHub deal has set into motion an exodus of GitHub projects to GitLab
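Auto DevOps essentially replaces the pipeline definition that teams previously had to write and maintain by hand. For comparison, a minimal hand-written .gitlab-ci.yml might look like the following; the job scripts are placeholders, and Auto DevOps generates equivalent (and richer) stages automatically:

```yaml
stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .

test:
  stage: test
  script:
    - ./run_tests.sh     # placeholder test entry point

deploy:
  stage: deploy
  script:
    - ./deploy.sh        # placeholder; Auto DevOps infers all of this
  only:
    - master
```

With Auto DevOps enabled, committing code with no pipeline file at all yields build, test, quality/security scan, and deploy stages out of the box.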


Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads

Savia Lobo
21 Jun 2018
2 min read
Nvidia recently announced at the Computer Vision and Pattern Recognition (CVPR) conference that it will make Kubernetes available on its GPUs. Although it is not yet generally available, developers can use the technology to test the software and provide feedback.

Kubernetes on NVIDIA GPUs will allow developers and DevOps engineers to build and deploy scalable GPU-accelerated deep learning training, and to create inference applications on multi-cloud GPU clusters. Using this technology, developers can handle the growing number of AI applications and services by automating processes such as deployment, maintenance, scheduling, and operation of GPU-accelerated application containers. One can orchestrate deep learning and HPC applications on heterogeneous GPU clusters, with easy-to-specify attributes such as GPU type and memory requirement, plus integrated metrics and monitoring capabilities for analyzing and improving GPU utilization on clusters.

Interesting features of Kubernetes on Nvidia GPUs include:

GPU support in Kubernetes via the NVIDIA device plugin
Easily specified GPU attributes, such as GPU type and memory requirements, for deployment in heterogeneous GPU clusters
Visualization and monitoring of GPU metrics and health with an integrated GPU monitoring stack of NVIDIA DCGM, Prometheus, and Grafana
Support for multiple underlying container runtimes such as Docker and CRI-O
Official support on all NVIDIA DGX systems (DGX-1 Pascal, DGX-1 Volta, and DGX Station)

Read more about this news on the Nvidia Developer blog.

NVIDIA brings new deep learning updates at CVPR conference
Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!
Distributed TensorFlow: Working with multiple GPUs and servers
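The NVIDIA device plugin mentioned in the feature list exposes GPUs to the scheduler as an extended resource named nvidia.com/gpu, so a pod requests GPUs the same way it requests CPU or memory. A rough sketch of such a pod spec follows; the pod name and container image are placeholders, not from Nvidia's documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job           # placeholder name
spec:
  containers:
    - name: trainer
      image: example/dl-trainer    # placeholder GPU-enabled image
      resources:
        limits:
          nvidia.com/gpu: 1        # one GPU, allocated via the device plugin
```

The scheduler then places the pod only on nodes where the device plugin has advertised a free GPU, which is what makes heterogeneous GPU clusters manageable.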
Atlassian open sources Escalator, a Kubernetes autoscaler project

Savia Lobo
07 Jun 2018
2 min read
Atlassian recently announced the release of its open source Kubernetes autoscaler project, Escalator. The project aims to resolve autoscaling issues where clusters were not fast enough in scaling up or down.

Atlassian explained the problem with scaling up: when clusters hit capacity, users would have to wait a long time for additional Kubernetes workers to boot up and help with the additional load. Many builds cannot tolerate extended delays and would fail. The issue with scaling down, on the other hand, was that once load had subsided, the autoscaler would not scale down fast enough. This is not really an issue when the node count is low, but it becomes a problem when that number reaches hundreds or more.

Escalator, written in Go, is the solution

To address the problem with cluster scalability, Atlassian created Escalator, a batch-job optimized autoscaler for Kubernetes. Escalator has two main goals:

- Provide preemptive scale-up with a buffer capacity feature to prevent users from experiencing the 'cluster full' situation
- Support aggressive scale-down of machines when they are no longer required

Atlassian also wanted to build Prometheus metrics for the Ops team, to gauge how well the clusters were working. With Escalator, one need not wait for EC2 instances to boot and join the cluster. It also saves money by letting one pay only for the number of machines actually needed; it has saved Atlassian nearly thousands of dollars per day, based on the workloads they run.

At present, Escalator is released as open source to the Kubernetes community, so others can use it too. The company will be extending the tool to its external Bitbucket Pipelines users, and will explore ways to manage more service-based workloads.

Read more about Escalator on the Atlassian blog. You can also check out its GitHub repo.
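The buffer-capacity idea behind Escalator's preemptive scale-up can be sketched in a few lines. This is an illustrative model with hypothetical names, not Escalator's actual code (Escalator itself is written in Go): keep a configured percentage of headroom above current usage, so new jobs never hit a 'cluster full' wall while fresh nodes boot.

```python
import math

def desired_nodes(used_capacity, node_capacity, buffer_percent):
    """Illustrative scale-up rule: maintain `buffer_percent` headroom
    on top of current usage so incoming jobs can schedule immediately.
    """
    target = used_capacity * (1 + buffer_percent / 100)
    return max(1, math.ceil(target / node_capacity))

# With 90 CPU cores in use, 16-core nodes, and a 20% buffer,
# the target is 108 cores, i.e. ceil(108 / 16) = 7 nodes.
print(desired_nodes(90, 16, 20))  # → 7
```

Aggressive scale-down is then the mirror image: when the same formula yields fewer nodes than are running, surplus nodes are drained and terminated promptly instead of lingering.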
The key differences between Kubernetes and Docker Swarm
Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)
Kubernetes Containerd 1.1 Integration is now generally available

5 things you shouldn’t miss in DockerCon 2018 next week

Vijin Boricha
07 Jun 2018
5 min read
DockerCon 2018 is around the corner, taking place at the Moscone Center in San Francisco next week, from 12th to 15th June. More than 6,000 developers, architects, system admins, and other IT professionals are expected to get their hands on the latest enhancements in the container ecosystem.

DockerCon is where people from the Docker community come together to learn, share, and collaborate. You will find attendees ranging from beginners to intermediate and advanced experts, all interested in learning something new and enhancing their skill set. So, if you are interested in learning the modern ways of working with Docker, this is your perfect chance. There will be two full days of training, over a hundred sessions and hands-on labs, free workshops, and more. If you haven’t yet planned your agenda, the DockerCon Agenda Builder will help you browse and search the sessions you are looking forward to at DockerCon 2018.

With that said, here are some interesting sessions you should not miss on your trip to DockerCon 2018.

Automated Hardware Testing Using Docker for Space

We already know how hard it is to cope with space, but that is not keeping Docker from thinking beyond web content. Space software development is difficult because the software runs on highly constrained embedded hardware. Docker and its DevOps mentality helped DART create a scalable and rapidly deployable test infrastructure for NASA’s mission to hit an asteroid at 6 km/s. This presentation will cover how Docker can be used for both an embedded development environment and a scalable test environment. You will also learn how Docker has evolved testing from human-based to automated. Lastly, the presentation will summarize the do’s and don’ts of automated hardware testing, how you can play a key role in making a difference, and what Docker wishes to achieve in the near future.

Democratizing Machine Learning on Kubernetes

One of the biggest challenges today is understanding how to build a platform that runs common open-source ML libraries such as TensorFlow. This session covers deploying a distributed TensorFlow training cluster with GPU scheduling on Kubernetes. It will also teach you how distributed training works, its various options, and which options to choose when. Lastly, the session covers best practices for using distributed TensorFlow on top of Kubernetes. In the end, you will be given a public GitHub repository of the entire work presented in this session.

Serverless Panel (Gloo function gateway)

DockerCon 2018 is based on your journey to containerization, where you will learn about modernizing traditional applications, adding microservices, and then serverless environments. One of the interesting development areas in 2018 is Gloo, which is designed for microservice, monolithic, and serverless applications. It is a high-performance, plugin-extendable, platform-agnostic function gateway that enables enterprise application developers to modernize a traditional application. Gloo containerizes a traditional application and uses microservices to add functions to it. Developers can then leverage orchestrated and routed portable serverless frameworks on top of Docker EE, or AWS Lambda, to create hybrid cloud applications.

Don’t Have A Meltdown! Practical Steps For Defending Your Apps

With recent security events such as Meltdown and Spectre, security has become one of the major concerns for application developers and operations teams. This session will demonstrate best practices, configuration, and tools to effectively defend your container deployments from common attacks, covering risks and preventive measures around authentication, injection, sensitive data, and more. The events discussed in this session are inspired by the OWASP Top 10 and other well-known, large-scale attacks. By the end of the session, you will understand the important security risks in your application and how to go about mitigating them.

Tips & Tricks of the Docker Captains

This session focuses on tips and tricks for making the most out of Docker. Docker Captains will share best practices that make common operations easier, address common misunderstandings, and help avoid common pitfalls. Topics will revolve around build processes, security, orchestration, maintenance, and more. The session will not only make life easier for new and intermediate Docker users but will also provide new and valuable information to advanced users.

DockerCon is considered the number one container conference for IT professionals interested in learning and creating scalable solutions with innovative technologies. So, what are you waiting for? Start planning for DockerCon 2018 now, and if you haven’t yet, register for DockerCon 2018 and get your container journey started.

Related Links

What’s new in Docker Enterprise Edition 2.0?
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Microsoft’s Azure Container Service (ACS) is now Azure Kubernetes Services (AKS)

Kublr 1.9.2 for Kubernetes cluster deployment in isolated environments released!

Savia Lobo
30 May 2018
2 min read
Kublr, a comprehensive Kubernetes platform for the enterprise, announced the release of Kublr 1.9.2 at DevOpsCon, Berlin. Kublr provides a Kubernetes platform that makes it easy for Operations to deploy, run, and manage containerized applications, while allowing developers to use the development tools and environment of their choice.

Kublr 1.9.2 allows developers to deploy the complete Kublr platform and Kubernetes clusters in isolated environments, without requiring access to the Internet. This is an advantage for organizations with sensitive data that must remain secure. While isolated, such deployments still benefit from features such as auto-scaling, backup and disaster recovery, and centralized monitoring and log collection.

Slava Koltovich, CEO of Kublr, stated, “We’ve learned from several financial institutions that there is a vital need for cloud-like capabilities in completely isolated environments. It became increasingly clear that, to be truly enterprise grade, Kublr needed to work in even the most secure environments. We are proud to now offer that capability out-of-the-box.”

The Kublr 1.9.2 changelog includes the following key updates:

- Ability to deploy Kublr without access to the Internet
- Support for Docker EE on RHEL
- Support for CentOS 7.4
- Deletion of on-prem clusters
- Additional kubelet monitoring

The changelog also includes fixes for some known issues. Kublr further announced that it is now Certified Kubernetes for Kubernetes v1.10. To learn more about Kublr 1.9.2 in detail, check the release notes.

Why Agile, DevOps and Continuous Integration are here to stay: Interview with Nikhil Pathania, DevOps practitioner
Kubernetes Containerd 1.1 Integration is now generally available
Introducing OpenStack Foundation’s Kata Containers 1.0
Kubernetes Containerd 1.1 Integration is now generally available

Savia Lobo
25 May 2018
3 min read
Just six months after releasing the alpha version of the Kubernetes containerd integration, the community has declared that the upgraded containerd 1.1 is now generally available. Containerd 1.1 can be used as the container runtime for production Kubernetes clusters. It works well with Kubernetes 1.10 and supports all Kubernetes features.

Let’s look at the key upgrades in the new Kubernetes containerd 1.1:

Architecture upgrade

[Figure: Containerd 1.1 architecture with the CRI plugin]

In version 1.1, the cri-containerd daemon has been changed to a containerd CRI plugin. The CRI plugin is built into containerd 1.1 and enabled by default, and it interacts with containerd through direct function calls. Kubernetes can now use containerd directly; this new architecture makes the integration more stable and efficient, and eliminates another gRPC hop in the stack. The cri-containerd daemon is no longer needed.

Performance upgrades

Performance optimization has been the major focus of containerd 1.1, in terms of both pod startup latency and daemon resource usage, discussed in detail below.

Pod Startup Latency

The containerd 1.1 integration has lower pod startup latency than the Docker 18.03 CE integration with dockershim. The following graph is based on results from the ‘105 pod batch startup benchmark’ (lower is better).

[Figure: Pod startup latency graph]

CPU and Memory Usage

At steady state with 105 pods, the containerd 1.1 integration consumes less CPU and memory overall than the Docker 18.03 CE integration with dockershim. The results differ with the number of pods running on the node; 105 is the current default for the maximum number of user pods per node.

[Figure: CPU usage and memory usage graphs]

Compared with the Docker 18.03 CE integration with dockershim, the containerd 1.1 integration has:

- 30.89% lower kubelet CPU usage
- 68.13% lower container runtime CPU usage
- 11.30% lower kubelet resident set size (RSS) memory usage
- 12.78% lower container runtime RSS memory usage

What would happen to Docker Engine?

Switching to containerd does not mean one can no longer use Docker Engine; in fact, Docker Engine is built on top of containerd. The next release of Docker Community Edition (Docker CE) will use containerd version 1.1.

[Figure: Docker Engine built on top of containerd]

Containerd is used by both the kubelet and Docker Engine. This means users choosing the containerd integration will not only get new Kubernetes features, performance, and stability improvements, but can also keep Docker Engine around for other use cases.

Read more interesting details on containerd 1.1 on the official Kubernetes blog post.

Top 7 DevOps tools in 2018
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes
What’s new in Docker Enterprise Edition 2.0?

Introducing OpenStack Foundation’s Kata Containers 1.0

Savia Lobo
24 May 2018
2 min read
OpenStack Foundation successfully launched version 1.0 of its first non-OpenStack project, Kata Containers. Kata Containers is the result of combining two leading open source virtualized container projects: Intel’s Clear Containers and Hyper’s runV technology.

Kata Containers gives developers a lighter, faster, and more agile container management technology across stacks and platforms: a container-like experience with the security and isolation features of virtual machines. Kata Containers delivers an OCI-compatible runtime with seamless integration for Docker and Kubernetes. It executes a lightweight VM for every container, so that each container gets hardware isolation similar to what is expected from a virtual machine. Although hosted by the OpenStack Foundation, Kata Containers is designed to be platform and architecture agnostic.

Kata Containers 1.0 components include:

- Kata Containers runtime 1.0.0 (in the /runtime repo)
- Kata Containers proxy 1.0.0 (in the /proxy repo)
- Kata Containers shim 1.0.0 (in the /shim repo)
- Kata Containers agent 1.0.0 (in the /agent repo)
- KSM throttler 1.0.0 (in the /ksm-throttler repo)
- Guest operating system building scripts (in the /osbuilder repo)

Intel, Red Hat, Canonical, and cloud vendors such as Google, Huawei, and NetApp, among others, have offered financial support for the Kata Containers project.

Read more about Kata Containers on the official website and the GitHub repo.

Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
What to expect from vSphere 6.7
Red Hat Enterprise Linux 7.5 (RHEL 7.5) now generally available
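The Docker integration mentioned above typically works by registering the Kata runtime as an additional runtime in Docker's daemon.json. As a hedged sketch, the following Python snippet generates such a config fragment; the binary path is an assumption about where kata-runtime is installed, and your distribution may differ:

```python
# Sketch: generate a Docker daemon.json fragment that registers
# kata-runtime as an alternative runtime. The install path is an
# assumed default, not a guaranteed location.
import json

def docker_daemon_config(runtime_path="/usr/bin/kata-runtime"):
    """Build a daemon.json fragment registering Kata as a Docker runtime."""
    return {
        "runtimes": {
            "kata-runtime": {"path": runtime_path}
        }
    }

# After writing this to /etc/docker/daemon.json and restarting the
# daemon, containers started with `docker run --runtime kata-runtime ...`
# each get their own lightweight VM.
print(json.dumps(docker_daemon_config(), indent=2))
```

This is what "seamless integration for Docker" looks like in practice: the existing Docker workflow is unchanged except for selecting the runtime.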