
Tech News - DevOps

82 Articles

Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login

Melisha Dsouza
23 Aug 2018
4 min read
Five years ago, Docker was the talk of the town because it made it possible to get a number of apps running on the same old servers, and it also made packaging and shipping programs easy. The same cannot be said about Docker now: the company is facing public disapproval over its decision to allow Docker for Mac and Windows to be downloaded only by users logged into the Docker Store. Its quest to "improve the user experience" is clearly facing major roadblocks. Two years ago, every bug report and reasonable feature request was "hard" or "something you don't want" and would result in endless back and forth for users. On 02 June 2016, new repository keys were pushed to the Docker public repository. As a direct consequence, any run of "apt-get update" (or equivalent) on a system configured with the broken repo failed with the error "Error https://apt.dockerproject.org/ Hash Sum mismatch." The issue affected ALL systems worldwide that were configured with the Docker repository: all Debian and Ubuntu versions, independent of OS and Docker versions, faced the meltdown, and it became impossible to run a system update or upgrade on an existing system. This seven-hour worldwide outage caused by Docker received little tech news coverage; all that was done was a few messages on a GitHub issue. You would have expected Docker to be a little more careful after the above controversy, but lo and behold, here comes yet another badly managed change.

The current matter in question

On June 20th 2018, GitHub and Reddit were abuzz with comments from confused Docker users on how they couldn't download Docker for Mac or Windows without logging into the Docker Store. The following URLs were spotted with the problem: Install Docker for Mac and Install Docker for Windows. To this, a Docker spokesperson responded saying that the change was incorporated to improve the Docker for Mac and Windows experience for users moving forward. This led to a string of accusations from dedicated Docker users, with complaints shared on GitHub. The issue is still ongoing and, with no further statements released from the Docker team, users are left in the dark.

In spite of all the hullabaloo, why choose Docker?

A report by DZone indicates that Docker adoption by companies was up 30% in the last year. Its annual revenue is expected to increase by 4x, growing from $749 million in 2016 to more than $3.4 billion by 2021, representing a compound annual growth rate (CAGR) of 35 percent. So what is this company doing differently? It's no secret that Docker containers are easy to deploy in a cloud. Docker can be incorporated into most DevOps toolchains, including Puppet, Chef, Vagrant, and Ansible, which are some of the major tools in configuration management. Specifically for CI/CD, Docker makes it possible to set up local development environments that are exactly like a live server. It can run multiple development environments from the same host, each with unique software, operating systems, and configurations; it helps to test projects on new or different servers; and it allows multiple users to work on the same project with the exact same settings, regardless of the local host environment. It also ensures that applications running in containers are completely segregated and isolated from each other, which means you get complete control over traffic flow and management.

So, what's the verdict?
Most users called Docker's move manipulative, arguing that the company is effectively asking people to log in with their information so it can target them with ad campaigns and spam emails to make money. However, there were also some in support of the move. One Reddit user said that while there is no direct solution to this issue, you can use https://github.com/moby/moby/releases as a workaround, or a proper package manager if you're on Linux. Hopefully, Docker takes this as a cue before releasing any more updates that could spark public outcry. It would be interesting to see how many companies stick around and use Docker irrespective of the rollercoaster ride that users are put through. You can find further opinions on this matter at reddit.com.
Docker isn't going anywhere
Zeit releases Serverless Docker in beta
What's new in Docker Enterprise Edition 2.0?
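To make the local-development point above concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package). The image name, command and port mapping are illustrative assumptions, not details from the article.

```python
# Minimal sketch, assuming the Docker SDK for Python (pip install docker) and a
# running local Docker daemon. Image, command and ports are placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pin an image so every developer gets the exact same environment.
image = client.images.pull("python", tag="3.7-slim")

# Run a throwaway container that mirrors the "live server" configuration.
container = client.containers.run(
    image.id,
    command="python -m http.server 8000",
    ports={"8000/tcp": 8000},  # container port -> host port
    detach=True,
)

print(container.short_id, container.status)
container.stop()
container.remove()
```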


Limited Availability of DigitalOcean Kubernetes announced!

Melisha Dsouza
03 Oct 2018
3 min read
On Monday, the team announced that DigitalOcean Kubernetes, which was available in Early Access, is now accessible in Limited Availability. DigitalOcean simplifies the container deployment process that accompanies plain Kubernetes and offers Kubernetes container hosting services. Incorporating DigitalOcean's trademark simplicity and ease of use, the team aims to reduce the headache involved in setting up, managing and securing Kubernetes clusters. DigitalOcean, incidentally, is also behind Hacktoberfest, which runs all of October in partnership with GitHub to promote open source contribution. The Early Access release was well received by users, who commented on the simplicity of configuring and provisioning a cluster and appreciated that deploying and running containerized services consumed hardly any time. Users also brought to light issues and feedback that were used to increase reliability and resolve a number of bugs, improving the user experience for the Limited Availability release of DigitalOcean Kubernetes. The team also notes that during Early Access they had a limited set of free hardware resources for users to deploy to, which restricted the total number of users they could give access to. In the Limited Availability phase, the team hopes to open up access to anyone who requests it. That being said, Limited Availability will be a paid product.

Why should users consider DigitalOcean Kubernetes?

Each customer has their own dedicated managed cluster, which provides security and isolation for their containerized applications along with access to the full Kubernetes API. DigitalOcean products provide storage for any amount of data. Cloud Firewalls make it easy to manage network traffic in and out of the Kubernetes cluster. DigitalOcean provides cluster security scanning capabilities to alert users of flaws and vulnerabilities. In typical Kubernetes environments, metrics, logs, and events can be lost if nodes are spun down; to help developers learn from the performance of past environments, DigitalOcean stores this information separately from the node indefinitely. To know more about these features, head over to their official blog page.

Some benefits for users of Limited Availability: users will be able to provision Droplet workers in many more regions with full support. To test out their containers in an orchestrated environment, they can start with a single-node cluster using a $5/mo Droplet. As they scale their applications, users can add worker pools of various Droplet sizes, attach persistent storage using DigitalOcean Block Storage for $0.10/GB per month, and expose Kubernetes services with a public IP using $10/mo Load Balancers. This is a highly available service designed to protect against application or hardware failures while spreading traffic across available resources. It looks like users are really excited about this upgrade. Users that have already signed up for Early Access will receive an email shortly with details about how to get started. To know more about this news, head over to DigitalOcean's blog post.
Kubernetes 1.12 released with general availability of Kubelet TLS Bootstrap, support for Azure VMSS
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
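Once a cluster is provisioned and its kubeconfig downloaded from the control panel, inspecting the worker Droplets takes only a few lines with the official Kubernetes Python client. This is a hedged sketch; the kubeconfig path is a placeholder, not taken from DigitalOcean's documentation.

```python
# Minimal sketch using the official Kubernetes Python client (pip install kubernetes).
# The kubeconfig path below is a placeholder for the file DigitalOcean lets you download.
from kubernetes import client, config

config.load_kube_config(config_file="~/Downloads/do-cluster-kubeconfig.yaml")

v1 = client.CoreV1Api()
for node in v1.list_node().items:
    # Capacity gives a rough view of the Droplet size backing each worker node.
    cpu = node.status.capacity.get("cpu")
    mem = node.status.capacity.get("memory")
    print(f"{node.metadata.name}: {cpu} vCPU, {mem}")
```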


Google’s kaniko: an open-source build tool for Docker images in Kubernetes, without root access

Savia Lobo
27 Apr 2018
2 min read
Google recently introduced kaniko, an open-source tool for building container images from a Dockerfile even without privileged root access. Prior to kaniko, building images from a standard Dockerfile typically depended on interactive access to a Docker daemon, which requires root access on the machine to run. This makes it difficult to build container images in environments that can't easily or securely expose their Docker daemons, such as Kubernetes clusters. To combat these challenges, kaniko was created. With kaniko, one can build an image from a Dockerfile and push it to a registry. Since it doesn't require any special privileges or permissions, kaniko can run in a standard Kubernetes cluster, in Google Kubernetes Engine, or in any environment that can't grant access to privileges or a Docker daemon.

How does the kaniko build tool work?

kaniko runs as a container image that takes in three arguments: a Dockerfile, a build context, and the name of the registry to which it should push the final image. The image is built from scratch and contains only a static Go binary plus the configuration files needed for pushing and pulling images. The kaniko executor extracts the base image file system into the root. It executes each command in order and takes a snapshot of the file system after each command. The snapshot is created in the user area where the file system is running and compared to the previous state held in memory. All changes in the file system are appended to the base image, along with the relevant changes to the image metadata. After every command in the Dockerfile has executed successfully, the executor pushes the newly built image to the desired registry. In short, kaniko unpacks the filesystem, executes commands and takes snapshots of the filesystem entirely in user space within the executor image; this is how it avoids requiring privileged access on your machine, and neither the Docker daemon nor the CLI is involved. To know more about how to run kaniko in a Kubernetes cluster and in the Google Cloud Container Builder, read the documentation on the GitHub repo.
The key differences between Kubernetes and Docker Swarm
Building Docker images using Dockerfiles
What's new in Docker Enterprise Edition 2.0?
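As an illustration of the three arguments described above, the sketch below launches the kaniko executor as a Kubernetes Pod using the official Python client. The build-context URL and destination registry are placeholder assumptions, and a real build would also need push credentials mounted into the Pod.

```python
# Illustrative sketch only: run the kaniko executor as a Pod via the Kubernetes
# Python client. Context URL and destination registry are placeholders.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kaniko-build"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="kaniko",
                image="gcr.io/kaniko-project/executor:latest",
                args=[
                    "--dockerfile=Dockerfile",
                    "--context=git://github.com/example/app.git",   # build context
                    "--destination=gcr.io/example-project/app:v1",  # target registry
                ],
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```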


Idera acquires Travis CI, the open source Continuous Integration solution

Sugandha Lahoti
24 Jan 2019
2 min read
Travis CI, the popular open source continuous integration service, has been acquired by Idera. Idera offers a number of B2B software solutions ranging from database administration to application development to test management. Travis CI will be joining Idera's Testing Tools division, which also includes TestRail, Ranorex, and Kiuwan. Travis CI assured its users that the product will continue to be open source and a stand-alone solution under an MIT license. "We will continue to offer the same services to our hosted and on-premises users. With the support from our new partners, we will be able to invest in expanding and improving our core product", said Konstantin Haase, a founder of Travis CI, in a blog post. Idera will also keep the Travis Foundation running, which runs projects like Rails Girls Summer of Code, Diversity Tickets, Speakerinnen, and Prompt. It's not just a happy day for Travis CI: the company will also bring its 700,000 users to Idera, along with high-profile customers like IBM and Zendesk. Users are quick to note that this acquisition comes at a time when Travis CI's competitors, like CircleCI, seem to be taking market share away from it. A comment on Hacker News reads, "In a past few month I started to see Circle CI badges popping here and there for opensource repositories and anecdotally many internal projects at companies are moving to GitLab and their built-in CI offering. Probably a good time to sell Travis CI, though I'd prefer if they would find a better buyer." Another user says, "Honestly, for enterprise users that is a good thing. In the hands of a company like Idera we can be reasonably confident that Travis will not disappear anytime soon."
Announcing Cloud Build, Google's new continuous integration and delivery (CI/CD) platform
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies


Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At KubeCon + CloudNativeCon, happening in Seattle this week, Elastic N.V., the pioneer behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm charts will make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing and sharing Kubernetes applications. The ease of using Kubernetes to manage containerized workloads has also led to Elastic users deploying their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade and run their applications on Kubernetes. With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas and spaces. According to the blog post, Helm charts will serve as a "way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies."

Why should developers consider Helm charts?

Helm charts give users the ability to leverage Kubernetes packages through the click of a button or a single CLI command. Kubernetes is sometimes complex to use, which impairs developer productivity; Helm charts improve that productivity in several ways. With Helm charts, developers can focus on developing applications rather than deploying dev-test environments: they can author their own chart, which in turn automates deployment of their dev-test environment. Helm comes with "push button" deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience. Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm charts allow software vendors and developers to preconfigure their applications with sensible defaults, enabling users and deployers to change parameters of the application or chart using a consistent interface. Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors due to incorrect configuration file entries or mangled deployment recipes. Deploying and maintaining Kubernetes applications can otherwise be tedious and error prone; Helm charts reduce the complexity of maintaining an app catalog in a Kubernetes environment, and a central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts. To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub. In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud native technologies and companies. This is another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news, or check out MarketWatch for more insights.
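As a rough sketch of the "single CLI command" workflow, the snippet below drives the helm CLI from Python. The chart repository URL and Helm 2-style flags reflect what Elastic and Helm documented around the time of this announcement; treat them as assumptions to verify against the charts' README files.

```python
# Hedged sketch: install the Elasticsearch and Kibana charts by shelling out to
# the helm CLI (Helm 2-era syntax; Tiller must already be initialised with `helm init`).
# Repository URL and release names are assumptions to double-check.
import subprocess

def helm(*args):
    subprocess.run(["helm", *args], check=True)

helm("repo", "add", "elastic", "https://helm.elastic.co")
helm("repo", "update")

# Deploy both charts with their default values.
helm("install", "elastic/elasticsearch", "--name", "elasticsearch")
helm("install", "elastic/kibana", "--name", "kibana")
```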
Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
How to perform Numeric Metric Aggregations with Elasticsearch


Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon showed multiple .NET demos of how one can use Docker for modern applications and for older applications that use traditional architectures, which made it easier for users to containerize .NET applications using tools from both Microsoft and Docker. The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak", Microsoft writes in one of their blog posts. The team also mentions that they are invested in making .NET Core a true container runtime, and they look forward to hardening .NET's runtime to make it container-aware and function efficiently in low-memory environments. Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default: with .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in tens of percentage points of improvement. The team also mentions a new policy for determining how many GC heaps to create. This is most important on machines where a low memory limit is set but no CPU limit is set on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both these changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.

PowerShell added to .NET Core SDK container images: PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. There are two main scenarios that having PowerShell inside the .NET Core SDK container image enables, which were not otherwise possible: writing .NET Core application Dockerfiles with PowerShell syntax, for any OS, and writing .NET Core application/library build logic that can be easily containerized. Note: PowerShell Core is now available as part of the .NET Core 3.0 SDK container images; it is not part of the .NET Core 3.0 SDK itself.

.NET Core images now available via Microsoft Container Registry: Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change: to syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat, and to use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support: with .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows. Alpine: support tip and retain support for one quarter (3 months) after a new version is released; currently 3.9 is tip, and the team will stop producing 3.8 images in a month or two. Debian: support one Debian version per latest .NET Core version; this is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0 the team may publish Debian 10 based images; however, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions. Ubuntu: support one Ubuntu version per latest .NET Core version (18.04); also, as the team learns of new Ubuntu LTS versions, they will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions. For Windows, they support the cross-product of Nano Server and .NET Core versions.

ARM architecture: the team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments. Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
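To see the resource-limit support from the container side, here is a hedged sketch that runs a .NET Core image under explicit Docker memory and CPU caps via the Docker SDK for Python. The MCR image tag is an assumption based on Microsoft's published repositories, not a detail from this article.

```python
# Sketch only: run a .NET Core container under explicit memory and CPU limits,
# which the CoreCLR GC in .NET Core 3.0 is designed to honor. Image tag is an
# assumption; verify against the MCR repository listings.
import docker

client = docker.from_env()

container = client.containers.run(
    "mcr.microsoft.com/dotnet/core/runtime:3.0",
    command="dotnet --info",
    mem_limit="256m",          # container memory cap the runtime should respect
    nano_cpus=1_000_000_000,   # 1 CPU, in units of 1e-9 CPUs
    detach=True,
)

container.wait()                   # let the short-lived command finish
print(container.logs().decode())   # runtime/host information
container.remove()
```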

Low Carbon Kubernetes Scheduler: A demand side management solution that consumes electricity in low grid carbon intensity areas

Savia Lobo
27 Jun 2019
7 min read
Machine learning experts are increasingly interested in researching how machine learning can be used to reduce greenhouse gas emissions and help society adapt to a changing climate. For example, machine learning can be used to regulate cloud data centres, which manage an important asset, data, and typically comprise tens to thousands of interconnected servers consuming a substantial amount of electrical energy. Researchers from Huawei published a paper in April 2015 estimating that by 2030 data centres will use anywhere between 3% and 13% of global electricity. At the ICT4S 2019 conference held in Lappeenranta, Finland, from June 10-15, researchers from the University of Bristol, UK, introduced their research on a low carbon scheduling policy for the open-source Kubernetes container orchestrator. The "Low Carbon Kubernetes Scheduler" can provide demand-side management (DSM) by migrating consumption of electric energy in cloud data centres to countries with the lowest carbon intensity of electricity. In their paper the researchers highlight, "All major cloud computing companies acknowledge the need to run their data centres as efficiently as possible in order to address economic and environmental concerns, and recognize that ICT consumes an increasing amount of energy". Since the end of 2017, Google Cloud Platform has run its data centres entirely on renewable energy, and Microsoft has announced that its global operations have been carbon neutral since 2012. However, not all cloud providers have been able to make such an extensive commitment; Oracle Cloud, for example, is currently 100% carbon neutral in Europe, but not in other regions.

The Low Carbon Kubernetes Scheduler selects compute nodes based on the real-time carbon intensity of the electric grid in the region they are in. Real-time APIs that report grid carbon intensity are available for an increasing number of regions, but not exhaustively around the planet. In order to effectively demonstrate the scheduler's ability to perform global load balancing, the researchers also evaluated it against the metric of solar irradiation. "While much of the research on DSM focusses on domestic energy consumption there has also been work investigating DSM by cloud data centres", the paper mentions. Demand-side management (DSM) refers to any initiative that affects how and when electricity is required by consumers. Existing schedulers consider individual data centres rather than taking a more global view; the Low Carbon Scheduler, by contrast, considers carbon intensity across regions, since scaling a large number of containers up and down can be done in a matter of seconds. Each national electric grid contains electricity generated from a variable mix of sources, and the carbon intensity of the electricity provided by the grid anywhere in the world is a measure of the amount of greenhouse gas released into the atmosphere from the combustion of fossil fuels for the generation of electricity. Significant generation sites report the volume of electricity fed into the grid at regular intervals to the organizations operating the grid (for example the National Grid in the UK) in real time via APIs. These APIs typically allow retrieval of the production volumes and thus make it possible to calculate the carbon intensity in real time. The Low Carbon Scheduler collects the carbon intensity from the available APIs and ranks the regions to identify the one with the lowest carbon intensity. For the European Union, such an API is provided by the European Network of Transmission System Operators for Electricity (www.entsoe.eu), and for the UK this is the Balancing Mechanism Reporting Service (www.elexon.co.uk).

Why Kubernetes for building a low carbon scheduler

Kubernetes can make use of GPUs and has also been ported to run on the ARM architecture. The researchers also note that Kubernetes has to a large extent won the container orchestration war. It supports extensibility and plugins, which makes it the "most suitable for which to develop a global scheduler and bring about the widest adoption, thereby producing the greatest impact on carbon emission reduction". Kubernetes allows schedulers to run in parallel, which means the scheduler does not need to re-implement the pre-existing, and sophisticated, bin-packing strategies present in Kubernetes; it need only apply a scheduling layer to complement the existing capabilities proffered by Kubernetes. According to the researchers, "Our design, as it operates at a higher level of abstraction, assures that Kubernetes continues to deal with bin-packing at the node level, while the scheduler performs global-level scheduling between data centres". The official Kubernetes documentation describes three possible ways of extending the default scheduler (kube-scheduler): adding rules to the scheduler source code and recompiling; implementing one's own scheduler process that runs instead of, or alongside, kube-scheduler; or implementing a scheduler extender.

Evaluating the performance of the Low Carbon Kubernetes Scheduler

The researchers recorded the carbon intensities for the countries in which the major cloud providers operate data centres between 18.2.2019 13:00 UTC and 21.4.2019 9:00 UTC (the paper includes a table of these countries as of April 2019; source: CEUR-WS.org). They further ranked all countries by the carbon intensity of their electricity in 30-minute intervals. Among the total set of 30-minute values, Switzerland had the lowest carbon intensity (ranked first) in 0.57% of the 30-minute intervals, Norway in 0.31%, France in 0.11% and Sweden in 0.01%. However, the list of the least carbon-intense countries only contains locations in central Europe. To demonstrate Kubernetes' suitability for globally distributed deployments, the researchers chose to optimize placement to regions with the greatest degree of solar irradiance, in what they term a Heliotropic Scheduler. This scheduler is termed "heliotropic" to differentiate it from a "follow-the-sun" application management policy, which relates to meeting customer demand around the world by placing staff and resources in proximity to those locations (thereby making them available to clients at lower latency and at a suitable time of day); a "heliotropic" policy, on the other hand, goes to where sunlight, and by extension solar irradiance, is abundant. They further evaluated the Heliotropic Scheduler implementation by running BOINC jobs on Kubernetes. BOINC (Berkeley Open Infrastructure for Network Computing) is a software platform for volunteer computing that allows users to contribute computational capacity from their home PCs towards scientific research; Einstein@Home, SETI@home and IBM World Community Grid are some of the most widely supported projects. The researchers say: "Even though many cloud providers are contracting for renewable energy with their energy providers, the electricity these data centres take from the grid is generated with release of a varying amount of greenhouse gas emissions into the atmosphere. Our scheduler can contribute to moving demand for more carbon intense electricity to less carbon intense electricity". While the paper concludes that a wind-dominant, solar-complementary strategy is superior for the integration of renewable energy sources into cloud data centres' infrastructure, the Low Carbon Scheduler provides a proof of concept demonstrating how to reduce carbon intensity in cloud computing. To know more about this implementation for lowering carbon emissions, read the research paper.
Machine learning experts on how we can use machine learning to mitigate and adapt to the changing climate
VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision
Deep learning models have massive carbon footprints, can photonic chips help reduce power consumption?
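To make the ranking step described above concrete, here is a toy sketch of choosing the least carbon-intense region. The per-region figures are made-up placeholders; a real implementation would pull them from grid APIs such as ENTSO-E or Elexon BMRS, as the paper describes.

```python
# Toy sketch of the scheduler's ranking step; all intensity values are
# illustrative placeholders, not measurements from the paper.
CARBON_INTENSITY = {          # gCO2eq per kWh (made-up numbers)
    "europe-west6": 30,       # Switzerland
    "europe-north1": 60,      # Finland
    "europe-west1": 170,      # Belgium
    "us-east1": 450,          # South Carolina
}

def rank_regions(intensities):
    """Return regions ordered from lowest to highest carbon intensity."""
    return sorted(intensities, key=intensities.get)

ranking = rank_regions(CARBON_INTENSITY)
print("Schedule the next workload in:", ranking[0])
```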


Chaos engineering platform Gremlin announces $18 million series B funding and new feature for "full-stack resiliency"

Richard Gall
28 Sep 2018
3 min read
Gremlin, the chaos engineering platform, has revealed some exciting news today to coincide with the very first chaos engineering conference, Chaos Conf. Not only has the company raised $18 million in its series B funding round, it has also launched a brand new feature. Application Level Fault Injection (ALFI) brings a whole new dimension to the Gremlin platform, as it allows engineering teams to run resiliency tests, or 'chaos experiments', at the application level. Up until now, tests could only be run at the infrastructure level, targeting a specific host or container (and containers are only a recent addition).

Bringing chaos engineering to serverless applications

One of the benefits of ALFI is that it makes it possible to run 'attacks' on serverless applications. Citing Cloudability's State of the Cloud 2018 report, the press release highlights that serverless adoption is growing rapidly. This means that Gremlin will now be able to expand its use cases and continue to move forward in its broader mission to help engineering teams improve the resiliency of their software in a manageable and accessible way. Matt Fornaciari, Gremlin CTO and co-founder, said: "With ALFI one of the first problems we wanted to address was improving the reliability and understanding of serverless providers like AWS Lambda and Azure Functions. It's a tough problem to solve because the host is abstracted and it's a very new technology -- but now we can run attacks at the application level, and with a level of precision that isn't possible at the infrastructure level. We are giving them a scalpel to very specifically target particular communications between different systems and services." One of the great benefits of ALFI is that it should help engineers tackle the types of threat that might be missed if you focus solely on infrastructure. Yan Cui, Principal Engineer at DAZN, the sports streaming service, explained: "AWS Lambda protects you against some infrastructure failures, but you still need to defend against weakness in your own code. Application-level fault injection is a great way to uncover these weaknesses."

A new chapter for Gremlin and a big step forward for chaos engineering

It would seem that Gremlin is about to embark on a new chapter. But what will be even more interesting is the wider impact chaos engineering has on the industry. Research, such as this year's Packt Skill Up survey, indicates that chaos engineering is a trend that is still in an emergent phase. If Gremlin can develop a product that makes chaos engineering not only relatively accessible but also palatable for those making technical decisions, we might start to see things changing. It's clear that Redpoint Ventures, the VC firm leading Gremlin's series B funding, sees a lot of potential in what the platform can offer the software landscape. Managing Director Tomasz Tuguz said, "In a world where nearly every business is an online business, Gremlin makes companies more resilient and saves millions of dollars in unnecessary disasters and outages. We're thrilled to join them on this journey."


Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless

Richard Gall
18 Sep 2018
3 min read
The API is the building block of much modern software. With Kong 1.0, launching today at Kong Summit, Kong believes it has cemented its position as the go-to platform for developing APIs on modern infrastructures like cloud-native, microservices, and serverless. The release of the first stable version of Kong marks an important milestone for the company as it looks to develop what it calls a 'service control platform'. This is essentially a tool that will allow developers, DevOps engineers, and architects to manage their infrastructure at every point, however they choose to build it. It should, in theory, offer a fully integrated solution that lets you handle APIs, manage security permissions, and even leverage the latest in cutting-edge artificial intelligence for analytics and automation. CEO Augusto Marietti said that "API management is rapidly evolving with the industry, and technology must evolve with it. We built Kong from the ground up to meet these needs -- Kong is the only API platform designed to manage and broker the demands that in-flight data increasingly place on modern software architectures."

How widely used is Kong?

According to the press release, Kong has been downloaded 45 million times, making it the most widely used open source API platform. The team stresses that reaching Kong 1.0 has taken three years of intensive development work, done alongside customers from a wide range of organizations, including Yahoo! Japan and Healthcare.gov. Kanaderu Fukuda, senior manager of the Computing Platform Department at Yahoo! Japan, said: "as Yahoo! Japan shifts to microservices, we needed more than just an API gateway – we needed a high-performance platform to manage all APIs across a modern architecture... With Kong as a single point for proxying and routing traffic across all of our API endpoints, we eliminated redundant code writing for authentication and authorization, saving hundreds of hours. Kong positions us well to take advantage of future innovations, and we're excited to expand our use of Kong for service mesh deployments next."

New features in Kong 1.0

Kong 1.0, according to the release materials, "combines sub-millisecond low latency, linear scalability and unparalleled flexibility." Put simply, it's fast but also easy to adapt and manipulate according to your needs - everything a DevOps engineer or solutions architect would want. Although it isn't mentioned specifically, Kong is a tool that exemplifies the work of SREs - site reliability engineers. It's a tool designed to manage the relationships between various services, and to ensure they not only interact with each other in the way they should, but that they do so with minimum downtime. The Kong team appears to have a huge amount of confidence in the launch of the platform - the extent to which it can grow its customer base depends a lot on how the marketplace evolves, and how much the demand for forward-thinking software architecture grows over the next couple of years.
Read next: How Gremlin is making chaos engineering accessible [Interview]
Is the 'commons clause' a threat to open source?
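For a flavour of what managing the relationships between services looks like in practice, the hedged sketch below registers a service and a route through Kong's Admin API, which listens on port 8001 by default. The service name, upstream URL and route path are placeholders, not details from the release.

```python
# Hedged sketch: register an upstream service and expose it on a route via
# Kong's Admin API (default port 8001). Names and URLs are placeholders.
import requests

ADMIN = "http://localhost:8001"

# Register the upstream service Kong should proxy to.
requests.post(f"{ADMIN}/services", data={
    "name": "orders",
    "url": "http://orders.internal:8080",
}).raise_for_status()

# Expose it to consumers on a public path.
requests.post(f"{ADMIN}/services/orders/routes", data={
    "paths[]": "/orders",
}).raise_for_status()
```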


Gremlin makes chaos engineering with Docker easier with new container discovery feature

Richard Gall
28 Aug 2018
3 min read
Gremlin, the product that's bringing chaos engineering to a huge range of organizations, announced today that it has added a new feature to its product: container discovery. Container discovery will make it easier to run chaos engineering tests alongside Docker. Chaos engineering and containers have always been closely related - arguably the loosely coupled architectural style of modern software driven by containers has, in turn, led to an increased demand for chaos engineering to improve software resiliency. Matt Fornaciari, Gremlin CTO, explains that "with today's updates to the Gremlin platform, DevOps teams will be able to drastically improve the reliability of Docker in production." Read next: How Gremlin is making chaos engineering accessible [Interview]

What does Gremlin's new container discovery feature do?

Container discovery does two things: it makes it easier for engineers to identify specific Docker containers, and, more importantly, it allows them to simulate attacks or errors within those containerized environments. The real benefit of this is that it makes the testing process much easier for engineers. Containers are, the press release notes, "often highly dynamic, ephemeral, and difficult to pinpoint at a given moment," which means identifying and isolating a particular container to run a 'chaos test' on can ordinarily be very challenging and time-consuming. Gremlin has been working with the engineering team at Under Armour. Paul Osman, Senior Engineering Manager, says that "we use Gremlin to test various failure scenarios and build confidence in the resiliency of our microservices." This new feature could save the engineers a lot of time, as he explains: "the ability to target containerized services with an easy-to-use UI has reduced the amount of time it takes us to do fault injection significantly." Read next: Chaos Engineering: managing complexity by breaking things

Why is Docker such a big deal for Gremlin?

As noted above, chaos engineering and containers are part of the same wider shift in software architectural styles. With Docker leading the way when it comes to containerization - its market share growing healthily - making it easier to perform resiliency tests on containers is incredibly important for the product. It's not a stretch to say that Gremlin has probably been working on this feature for some time, with users placing it high on their list of must-haves. Chaos engineering is still in its infancy - this year's Skill Up report found that it remains on the periphery of many developers' awareness. However, that could quickly change, and it appears that Gremlin is working hard to make chaos engineering not only more accessible but also more appealing to companies for whom software resiliency is essential.
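Gremlin's attack API is part of its hosted product, so the sketch below is not Gremlin code; it is a deliberately crude illustration of the underlying idea of container-level fault injection, using the Docker SDK for Python to find a container by label and kill it. The label name is an assumption.

```python
# NOT Gremlin's API: a bare-bones illustration of container-level fault
# injection. Find candidate containers by a hypothetical label, kill one,
# then observe whether the service recovers.
import random
import docker

client = docker.from_env()

victims = client.containers.list(filters={"label": "chaos.target=true"})

if victims:
    victim = random.choice(victims)
    print(f"Injecting failure into {victim.name} ({victim.short_id})")
    victim.kill()  # the orchestrator should restart it; watch what breaks
```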

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and then quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can be run both in the cloud and on-premises. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS's Deep Learning Containers support the TensorFlow and Apache MXNet frameworks, whereas Google's ML containers don't support Apache MXNet but come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools used for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text, and Google Kubernetes Engine clusters, used for orchestrating multiple container deployments. They also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm. Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)." For more information, visit the AI Platform Deep Learning Containers documentation.
Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
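As a hedged sketch of the local-prototyping workflow these images enable, the snippet below runs one of the Deep Learning Container images with the Docker SDK for Python. Both the image path and the JupyterLab port are assumptions; check the AI Platform Deep Learning Containers documentation for the current image list.

```python
# Sketch only: run a Deep Learning Container image locally. The image name and
# the port JupyterLab listens on are assumptions to verify against Google's docs.
import docker

client = docker.from_env()

IMAGE = "gcr.io/deeplearning-platform-release/tf2-cpu"  # assumed image path

container = client.containers.run(
    IMAGE,
    ports={"8080/tcp": 8080},  # assumed JupyterLab port
    detach=True,
)
print(f"Container {container.short_id} started; try http://localhost:8080")
```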


Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits

Sugandha Lahoti
30 Aug 2018
3 min read
Google today announced that it is stepping back from managing the Kubernetes project's infrastructure and is granting the Cloud Native Computing Foundation (CNCF) $9M in GCP credits for a successful transition. These credits are split over a period of three years to cover infrastructure costs. Google is also handing over operational control of the Kubernetes project to the CNCF community, which will now take ownership of day-to-day operational tasks such as testing and builds, as well as maintaining and operating the image repository and download infrastructure. Kubernetes was first created by Google in 2014, and since then Google has been providing Kubernetes with the cloud resources that support the project's development, including CI/CD testing infrastructure, container downloads, and other services like DNS, all running on Google Cloud Platform. With Google passing the reins to the CNCF, its goal is to make sure "Kubernetes is ready to scale when your enterprise needs it to". The $9M grant will be dedicated to building the worldwide network and storage capacity required to serve container downloads. In addition, a large part of this grant will also fund scalability testing, which runs 150,000 containers across 5,000 virtual machines. "Since releasing Kubernetes in 2014, Google has remained heavily involved in the project and actively contributes to its vibrant community. We also believe that for an open source project to truly thrive, all aspects of a mature project should be maintained by the people developing it. In passing the baton of operational responsibilities to Kubernetes contributors with the stewardship of the CNCF, we look forward to seeing how the project continues to evolve and experience breakneck adoption", said Sarah Novotny, Head of Open Source Strategy for Google Cloud. The CNCF includes a large number of companies, the likes of Alibaba Cloud, AWS, Microsoft Azure, IBM Cloud, Oracle, and SAP, all of which will profit from the work of the CNCF and the Kubernetes community. With this move, Google is perhaps also transferring the load of running the Kubernetes infrastructure to these members. As mentioned in its blog post, Google looks forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project's operations. To learn more, check out the CNCF announcement post and the Google Cloud Platform blog.
Kubernetes 1.11 is here!
Google Kubernetes Engine 1.10 is now generally available and ready for enterprise use
Kubernetes Container 1.1 Integration is now generally available


Google to be the founding member of CDF (Continuous Delivery Foundation)

Bhagyashree R
15 Mar 2019
3 min read
On Tuesday, Google announced that it is one of the founding members of the newly formed Continuous Delivery Foundation (CDF). As a part of its membership, Google will be contributing to two projects, namely Spinnaker and Tekton.

About the Continuous Delivery Foundation

The formation of the CDF was announced at the Linux Foundation Open Source Leadership Summit on Tuesday. The CDF will act as a "vendor-neutral home" for some of the most important open source projects for continuous delivery and for specifications to speed up the release pipeline process. https://twitter.com/linuxfoundation/status/1105515314899492864 The existing CI/CD ecosystem is heavily fragmented, which makes it difficult for developers and companies to decide on particular tooling for their projects. DevOps practitioners also often find it very challenging to gather guidance on software delivery best practices. The CDF was formed to make CI/CD tooling easier and to define the best practices and guidelines that will enable application developers to deliver better and more secure software at speed. The CDF currently hosts some of the most popular CI/CD tools, including Jenkins, Jenkins X, Spinnaker, and Tekton. The foundation is backed by 20+ founding members, which include Alauda, Alibaba, Anchore, Armory.io, Atos, Autodesk, Capital One, CircleCI, CloudBees, DeployHub, GitLab, Google, HSBC, Huawei, IBM, JFrog, Netflix, Puppet, Rancher, Red Hat, SAP, Snyk, and SumoLogic.

Why did Google join the CDF?

As a part of this foundation, Google will be working on Spinnaker and Tekton. Originally created by Netflix and jointly led by Netflix and Google, Spinnaker is an open source, multi-cloud delivery platform. It comes with various features for making continuous delivery reliable, including support for advanced deployment strategies, an open source canary analysis service named Kayenta, and more. Spinnaker's user community has great experience in the continuous delivery domain, and by joining the CDF Google aims to share that expertise with the broader community. Tekton is a set of shared, open source components for building CI/CD systems. It allows you to build, test, and deploy applications across multiple environments such as virtual machines, serverless, Kubernetes, or Firebase. In the next few months, we can expect to see support for results and event triggering in Tekton. Google is also planning to work with CI/CD vendors to build an ecosystem of components that will allow users to use Tekton with existing tools like Jenkins X, Kubernetes native, and others. Dan Lorenc, Staff Software Engineer at Google Cloud, shared Google's motivation behind joining the CDF: "Continuous Delivery is a critical part of modern software development, but today space is heavily fragmented. The Tekton project addresses this problem by working with the open source community and other leading vendors to collaborate on the modernization of CI/CD infrastructure." Kim Lewandowski, Product Manager at Google Cloud, said, "The ability to deploy code securely and as fast as possible is top of mind for developers across the industry. Only through best practices and industry-led specifications will developers realize a reliable and portable way to take advantage of continuous delivery solutions. Google is excited to be a founding member of the CDF and to work with the community to foster innovation for deploying software anywhere." To know more, check out the official announcement on the Google Open Source blog.
Google Cloud Console Incident Resolved!
Cloudflare takes a step towards transparency by expanding its government warrant canaries
Google to acquire cloud data migration start-up 'Alooma'

VMware Kubernetes Engine (VKE) launched to offer Kubernetes-as-a-Service

Savia Lobo
27 Jun 2018
2 min read
VMware recently announced its move into Kubernetes-as-a-Service by launching VMware Kubernetes Engine (VKE), which provides a multi-cloud experience. VKE is a fully managed service offered through a SaaS model. It allows customers to use Kubernetes easily without having to worry about the deployment and operation of Kubernetes clusters. Kubernetes lets users manage clusters of containers while also making it easier to move applications between hosted public clouds. By offering Kubernetes in the cloud, VMware provides a managed service that uses Kubernetes containers with reduced complexity. VMware's Kubernetes engine will face stiff competition from Google Cloud and Microsoft Azure, among others. Recently, Rackspace also announced its partnership with HPE to develop a new Kubernetes-based cloud offering. VMware Kubernetes Engine (VKE) features include:

VMware Smart Cluster

VMware Smart Cluster is the selection of compute resources to constantly optimize resource usage, provide high availability, and reduce cost. It also enables the management of cost-effective, scalable Kubernetes clusters optimized to application requirements. With Smart Cluster, users can also have role-based access and visibility limited to their predefined environment.

Fully managed by VMware

VMware Kubernetes Engine (VKE) is fully managed by VMware. It ensures that clusters always run in an efficient manner with multi-tenancy, seamless Kubernetes upgrades, high availability, and security.

Security by default in VKE

VMware Kubernetes Engine is highly secure, with features like multi-tenancy, deep policy control, dedicated AWS accounts per organization, logical network isolation, and integrated identity and access management with single sign-on.

Global availability

VKE has a region-agnostic user interface and is available across three AWS regions, US-East1, US-West2, and EU-West1, giving users the choice of which region to run clusters in. Read the full coverage of VMware Kubernetes Engine (VKE) on the official website.
Introducing VMware Integrated OpenStack (VIO) 5.0, a new Infrastructure-as-a-Service (IaaS) cloud
Nvidia GPUs offer Kubernetes for accelerated deployments of Artificial Intelligence workloads
Hortonworks partner with Google Cloud to enhance their Big Data strategy


The future of Jenkins is cloud native and a faster development pace with increased stability

Prasad Ramesh
04 Sep 2018
4 min read
Jenkins has been a success for more than a decade now, mainly due to its extensibility, its community, and its general-purpose nature. But there are some challenges and problems with it that have become more pronounced. Kohsuke Kawaguchi, the creator of Jenkins, is now planning to take steps to solve these problems and make the platform better.

Challenges in Jenkins

With growing competition in continuous integration (CI), the following limitations in Jenkins get in the way of teams; some of them discourage admins from installing and using plugins. Service instability: CI is a critical service nowadays. People are running bigger workloads, needing more plugins and high availability, and services like instant messaging platforms need to be online all the time. Jenkins is unable to keep up with this expectation, and a large instance requires a lot of overhead to keep it running. It is common for someone to restart Jenkins every day, and that delays processes; errors need to be contained to a specific area without impacting the whole service. Brittle configuration: installing or upgrading plugins and tweaking job settings have caused side effects, which makes admins lose confidence that they can make these changes safely. There is a fear that the next upgrade might break something, cause problems for other teams, and affect delivery. Assembly required: Jenkins requires an assembly of service blocks to make it work as a whole. As CI has become mainstream, users want something that can be deployed in a few clicks. Having too many choices is confusing and leads to uncertainty when assembling; this is not something that can be solved by creating more plugins. Reduced development velocity: it is difficult for a contributor to make a change that spans multiple plugins, and the tests do not give enough confidence to ship code; many of them do not run automatically and the coverage is not deep.

Changes and steps to make Jenkins better

There are two key efforts here, Cloud Native Jenkins and Jolt. Cloud Native Jenkins is a CI engine that runs on Kubernetes and has a different architecture; Jolt will continue with Jenkins 2 and add a faster development pace with increased stability.

Cloud Native Jenkins

It is a sub-project in the context of the Cloud Native SIG and will use Kubernetes as its runtime. It will have a new extensibility mechanism to retain what works and to continue the development of the automation platform's ecosystem. Data will live on cloud managed data services to achieve high availability and horizontal scalability, relieving admins of additional responsibilities. Configuration as Code and Jenkins Evergreen help with the brittleness. There are also plans to make Jenkins secure by default and to continue with Jenkins X, which has been received very well. The aim is to get things going in 5 clicks through easy integration with key services.

Jolt in Jenkins

Cloud Native Jenkins is not usable for everyone and targets only a particular set of functionalities. It also requires a platform that has limited adoption today, so Jenkins 2 will be continued at a faster pace. For this, Jolt in Jenkins is introduced. It is inspired by what happened with the development of Java SE: a change in the release model, shedding off parts to move faster. There will be a major version number change every couple of months. The platform needs to remain largely compatible, and the pace needs to justify any inconvenience put on the users. For more, visit the official Jenkins blog.
How to build and enable the Jenkins Mesos plugin
Google Compute Engine Plugin makes it easy to use Jenkins on Google Cloud Platform
Everything you need to know about Jenkins X, the new cloud native CI/CD solution on Kubernetes