
Tech News - DevOps

82 Articles

‘2019 Upskilling: Enterprise DevOps Skills’ report gives an insight into the DevOps skill set required for enterprise growth

Melisha Dsouza
05 Mar 2019
3 min read
DevOps Institute has announced the results of the "2019 Upskilling: Enterprise DevOps Skills Report". The research and analysis for this report were conducted by Eveline Oehrlich, former vice president and research director at Forrester Research. The project was supported by founding Platinum Sponsor Electric Cloud, Gold Sponsor CloudBees and Silver Sponsor Lenovo. The report outlines the most valued and in-demand skills needed to achieve DevOps transformation within enterprise IT organizations of all sizes. It also gives an insight into the skills DevOps professionals should develop to help build a DevOps mindset and culture for their organizations and colleagues.

According to Jayne Groll, CEO of DevOps Institute, "DevOps Institute is thrilled to share the research findings that will help businesses and the IT community understand the requisite skills IT practitioners need to meet the growing demand for T-shaped professionals. By identifying skill sets needed to advance the human side of DevOps, we can nurture the development of the T-shaped professional that is being driven by the requirement for speed, agility and quality software from the business."

Key findings from the report

- 55% of the survey respondents said that they first look for internal candidates when searching for DevOps team members, turning to external candidates only if no suitable internal candidate is identified.
- Respondents agreed that automation skills (57%), process skills (55%) and soft skills (53%) are the most important must-have skills.
- Asked which job titles their companies recently hired (or are planning to hire), respondents answered: DevOps Engineer/Manager, 39%; Software Engineer, 29%; DevOps Consultant, 22%; Test Engineer, 18%; Automation Architect, 17%; Infrastructure Engineer, 17%. Other recruits included CI/CD Engineers, 16%; System Administrators, 15%; Release Engineers/Managers, 13%; and Site Reliability Engineers, 10%.
- Functional skills and key technical skills, when combined, complement the soft skills required to create qualified DevOps engineers.
- Automation, process and soft skills are the "must-have" skills for a DevOps engineer. Process skills are needed for intelligent automation.
- IT operations is another key functional skill, with security coming in second.
- Business skills are most important to leaders, but less so to individual contributors.
- Cloud and analytical knowledge are the top technical skills.
- Recruiting for DevOps is on the rise.

[Figure: summary of key findings. Source: press release, DevOps Institute's "2019 Upskilling: Enterprise DevOps Skills Report"]

[Figure: priorities across the top skill categories relative to the key roles surveyed. Source: press release, DevOps Institute's "2019 Upskilling: Enterprise DevOps Skills Report"]

Oehrlich also said in a statement that hiring managers see a DevOps professional as a creative, knowledge-sharing, eager-to-learn individual with shapeable skill sets. Andre Pino, vice president of marketing at CloudBees, said in a statement that "The survey results show the importance for developers and managers to have the right skills that empower them to meet business objectives and have a rewarding career in our fast-paced industry."

You can check out the entire report for more insights on this news.

Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
Microsoft announces Azure DevOps, makes Azure pipelines available on GitHub Marketplace
JFrog acquires DevOps startup 'Shippable' for an end-to-end DevOps solution


Chaos engineering platform Gremlin launches Gremlin Free

Richard Gall
27 Feb 2019
3 min read
Chaos engineering has been a trend to watch for the last 12 months, but it is yet to really capture the imagination of the global software industry. It remains a fairly specialized discipline, confined to the most forward-thinking companies that depend on extensive distributed systems. However, that could all be about to change thanks to Gremlin, which has today announced the launch of Gremlin Free.

Gremlin Free is a tool that allows software, infrastructure and DevOps engineers to perform shutdown and CPU attacks on their infrastructure in a safe and controlled way, using a neat and easy-to-use UI. In a blog post published on the Gremlin site today, Lorne Kligerman, Director of Product, said "we believe the industry has answered why do chaos engineering, and has begun asking how do I begin practicing Chaos Engineering in order to significantly increase the reliability and resiliency of our systems to provide the best user experience possible."

Read next: How Gremlin is making chaos engineering accessible [Interview]

What is Gremlin Free?

Gremlin Free is based on Netflix's Chaos Monkey tool. Chaos Monkey is the tool that gave rise to chaos engineering back in 2011, when the streaming platform first moved to AWS. It let Netflix engineers "randomly shut down compute instances," which became a useful tactic for stress testing the reliability and resilience of its new microservices architecture.

What can you do with Gremlin Free?

There are two attacks you can run with Gremlin Free: Shutdown and CPU. As the name indicates, Shutdown lets you take down (or reboot) multiple hosts or containers. CPU attacks let you cause spikes in CPU usage and monitor their impact on your infrastructure. Both attacks can help teams identify pain points within their infrastructure, and ultimately form the foundations of an engineering strategy that relies heavily on the principles of chaos engineering.

Why Gremlin Free now?

Gremlin cites data from Gartner that underlines just how expensive downtime can be: according to Gartner, eCommerce companies can lose an average of $5,600 per minute, with that figure climbing even higher for the planet's leading eCommerce businesses. However, despite the cost of downtime making a clear argument for chaos engineering's value, its adoption isn't widespread - certainly not as widespread as Gremlin believes it should be. Kligerman said "It's still a new concept to most engineering teams, so we wanted to offer a free version of our software that helps them become more familiar with chaos engineering - from both a tooling and culture perspective."

If you're interested in trying chaos engineering, sign up for Gremlin Free here.
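To give a rough feel for the two attack types described above, here is a hypothetical pair of Gremlin CLI invocations; the attack names match the announcement, but the flag names are illustrative assumptions rather than confirmed syntax.

```sh
# Hypothetical Gremlin CLI usage; flag names are assumptions for illustration.
gremlin attack cpu --length 60 --cores 1   # spike one CPU core for 60 seconds
gremlin attack shutdown --delay 5          # shut down the host after a 5-minute delay
```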


JFrog acquires DevOps startup ‘Shippable’ for an end-to-end DevOps solution

Melisha Dsouza
22 Feb 2019
2 min read
JFrog, a leading company in DevOps, has acquired Shippable, a cloud-based startup that focuses on Kubernetes-ready continuous integration and delivery (CI/CD), helping developers ship code and deliver app and microservices updates. This strategic acquisition - JFrog's fifth - aims to provide customers with a "complete, integrated DevOps pipeline solution". The collaboration between JFrog and Shippable will allow users to automate their development processes from the moment code is committed all the way to production.

Shlomi Ben Haim, co-founder and CEO of JFrog, says in the official press release: "The modern DevOps landscape requires ever-faster delivery with more and more automation. Shippable's outstanding hybrid and cloud native technologies will incorporate yet another best-of-breed solution into the JFrog platform. Coupled with our commitments to universality and freedom of choice, developers can expect a superior out-of-the-box DevOps platform with the greatest flexibility to meet their DevOps needs."

According to an email sent to Packt Hub, JFrog will now allow developers to have a completely integrated DevOps pipeline, while still retaining the full freedom to choose their own solutions in JFrog's universal DevOps model. The plan is to release the first technology integrations with JFrog Enterprise+ this coming summer, and a full integration by Q3 of this year. According to JFrog, this acquisition will result in a more automated, complete, open and secure DevOps solution in the market.

This is just another victory for JFrog. The company has previously announced a $165 million Series D funding round. Last year, it also launched JFrog Xray, a binary analysis tool that performs recursive security scans and dependency analyses on all standard software package and container types.

Avi Cavale, founder and CEO of Shippable, says that Shippable users and customers will now "have access to leading security, binary management and other high-powered enterprise tools in the end-to-end JFrog Platform", and that the combined forces of JFrog and Shippable can make full DevOps automation from code to production a reality.

Spotify acquires Gimlet and Anchor to expand its podcast services
Microsoft acquires Citus Data with plans to create a 'Best Postgres Experience'
Adobe Acquires Allegorithmic, a popular 3D editing and authoring company


Idera acquires Travis CI, the open source Continuous Integration solution

Sugandha Lahoti
24 Jan 2019
2 min read
Travis CI, the popular open source continuous integration service, has been acquired by Idera. Idera offers a number of B2B software solutions ranging from database administration to application development to test management. Travis CI will be joining Idera's Testing Tools division, which also includes TestRail, Ranorex, and Kiuwan.

Travis CI assured its users that the product will continue to be open source and a stand-alone solution under an MIT license. "We will continue to offer the same services to our hosted and on-premises users. With the support from our new partners, we will be able to invest in expanding and improving our core product", said Konstantin Haase, a founder of Travis CI, in a blog post. Idera will also keep the Travis Foundation running, which operates projects like Rails Girls Summer of Code, Diversity Tickets, Speakerinnen, and Prompt.

It's not just a happy day for Travis CI: the acquisition also brings Travis CI's 700,000 users to Idera, along with high-profile customers like IBM and Zendesk.

Users are quick to note that this acquisition comes at a time when Travis CI's competitors, like Circle CI, seem to be taking market share away from it. A comment on Hacker News reads, "In a past few month I started to see Circle CI badges popping here and there for opensource repositories and anecdotally many internal projects at companies are moving to GitLab and their built-in CI offering. Probably a good time to sell Travis CI, though I'd prefer if they would find a better buyer." Another user says, "Honestly, for enterprise users that is a good thing. In the hands of a company like Idera we can be reasonably confident that Travis will not disappear anytime soon."

Announcing Cloud Build, Google's new continuous integration and delivery (CI/CD) platform
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
How to master Continuous Integration: Tools and Strategies


Microsoft announces Azure DevOps bounty program

Prasad Ramesh
18 Jan 2019
2 min read
Yesterday, the Microsoft Security Response Center (MSRC) announced the launch of the Azure DevOps Bounty program, a program launched to strengthen the security provided to Azure DevOps customers. Microsoft is offering rewards of up to US$20,000 for eligible vulnerabilities found in Azure DevOps online services and Azure DevOps Server.

The bounty rewards range from $500 to $20,000. The reward depends on Microsoft's assessment of the severity and impact of a vulnerability, as well as on the quality of the submission, subject to the program's bounty terms and conditions. The products in scope are Azure DevOps services (previously known as Visual Studio Team Services) and the latest versions of Azure DevOps Server and Team Foundation Server. The goal of the program is to find any eligible vulnerabilities that may have a direct security impact on the customer base.

To be eligible, a submission should fulfil the following criteria:

- It identifies a previously unreported vulnerability in one of the in-scope services or products.
- Any web application vulnerability must impact supported browsers for Azure DevOps server, services, or plug-ins.
- It documents clear, reproducible steps, either as text or video. Including any information needed to quickly reproduce and understand the issue can result in a faster response and higher rewards.

Submissions that Microsoft deems ineligible under these criteria may be rejected. You can send your submissions to secure@microsoft.com, following the bug submission guidelines. Participants are asked to follow Coordinated Vulnerability Disclosure when reporting vulnerabilities. Note that there are no restrictions on how many vulnerabilities you can report or be rewarded for; when the same vulnerability is reported more than once, only the first submission is chosen for the reward.

For more details about the eligible vulnerabilities and the Microsoft Azure DevOps Bounty program, visit the Microsoft website.

8 ways Artificial Intelligence can improve DevOps
Azure DevOps outage root cause analysis starring greedy threads and rogue scale units
Microsoft open sources Trill, a streaming engine that employs algorithms to process "a trillion events per day"


Google and Waze share their best practices for canary deployment using Spinnaker

Bhagyashree R
18 Jan 2019
3 min read
On Monday, Eran Davidovich, a System Operations Engineer at Waze, and Théo Chamley, Solutions Architect at Google Cloud, shared their experience of using Spinnaker for canary deployments. Waze estimates that canary deployment has helped it prevent a quarter of all incidents on its services.

What is Spinnaker?

Developed at Netflix, Spinnaker is an open source, multi-cloud continuous delivery platform that helps developers manage app deployments on different computing platforms, including Google App Engine, Google Kubernetes Engine, AWS, Azure, and more. The platform also lets you implement advanced deployment methods like canary deployment. In this type of deployment, developers roll out changes to a subset of users to analyze whether or not the code release provides the desired outcome. If the new code poses any risks, you can mitigate them before releasing the update to all users.

In April 2018, Google and Netflix introduced a new feature for Spinnaker called Kayenta, with which you can create automated canary analyses for your project. Though you can build your own canary deployment or other advanced deployment patterns, Spinnaker and Kayenta together aim to make the process much easier and more reliable. The tasks that Kayenta automates include fetching user-configured metrics from their sources, running statistical tests, and providing an aggregated score for the canary. On the basis of the aggregated score and the set limits for success, Kayenta automatically promotes or fails the canary, or triggers a human approval path.

Canary best practices

Follow these best practices to ensure that your canary analyses are reliable and relevant:

- Instead of comparing the canary against production, compare it against a baseline. Many differences can otherwise skew the results of the analysis, such as cache warmup time, heap size, load-balancing algorithms, and so on.
- Run the canary for enough time - at least 50 pieces of time-series data per metric - to ensure that the statistical analysis is relevant.
- Choose metrics that represent different aspects of your applications' health. Three aspects are critical as per the SRE book: latency, errors, and saturation.
- Put a standard set of reusable canary configs in place. This gives anyone on your team a starting point and also keeps the canary configurations maintainable.

Thunderbird welcomes the new year with better UI, Gmail support and more
Introducing Azure DevOps Server 2019 RC1 with better UI, Azure SQL support and more!
AIOps – Trick or Treat?
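For readers who manage Spinnaker with Halyard, switching on the Kayenta-based canary analysis described above is roughly the following; a minimal sketch, assuming a Halyard-managed installation using Google/Stackdriver as the metrics store (subcommand names follow Halyard's `hal config canary` group and may differ by Spinnaker version).

```sh
# A minimal sketch, assuming a Halyard-managed Spinnaker installation.
hal config canary enable          # turn on Kayenta canary analysis
hal config canary google enable   # use Google/Stackdriver as a metrics provider
hal deploy apply                  # redeploy Spinnaker with the new configuration
```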

Kong 1.0 is now generally available with gRPC support, updated Database abstraction object and more

Amrata Joshi
21 Dec 2018
4 min read
Yesterday, the team at Kong announced the general availability of Kong 1.0, a scalable, fast, open source microservice API gateway that manages hybrid and cloud-native architectures. Kong can be extended through plugins, including authentication, traffic control, observability and more. Kong 1.0 was first announced earlier this year at the Kong Summit in September. Kong's Admin API can create a certificate authority that Kong nodes use to establish mutual TLS authentication with each other, and Kong can now balance traffic from mail servers and other TCP-based applications, extending its reach from L7 down to L4.

What's new in Kong 1.0?

gRPC
This release supports the gRPC protocol along with REST. gRPC is built on top of HTTP/2 and provides an option for Kong users looking to connect east-west traffic with low overhead and latency. This helps Kong users open up more mesh deployments in hybrid environments.

New migrations framework
This version of Kong introduces a new Database Abstraction Object (DAO), a framework that allows migrations from one database schema to another with nearly zero downtime. The new DAO lets users upgrade their Kong cluster all at once, without any manual intervention to upgrade each node.

Plugin Development Kit (PDK)
The PDK, a set of Lua functions and variables, can be used by custom plugins to implement logic on Kong. Plugins built with the PDK will be compatible with Kong versions 1.0 and above. The PDK's interfaces are much easier to use than the bare-bones ngx_lua API. It allows users to isolate plugin operations such as logging or caching, and it is semantically versioned, which helps maintain backward compatibility.

Service mesh support
Users can now easily deploy Kong as a standalone service mesh. A service mesh can help address the security challenges of microservices: it secures services by integrating multiple layers of security with Kong plugins, and it provides secure communication at every step of the request lifecycle.

Seamless connections
This release connects services in the mesh to services across all environments, platforms, and vendors. Kong 1.0 can be used to bridge the gap between cloud-native design and traditional architecture patterns.

Robust plugin architecture
This release comes with a robust plugin architecture that offers users unparalleled flexibility. Kong plugins provide key functionality and support integrations with other cloud-native technologies, including Prometheus, Zipkin, and many others. Kong's plugins can now execute code in the new preread phase, which improves performance.

AWS Lambda and Azure FaaS
Kong 1.0 comes with improvements to interactions with AWS Lambda and Azure FaaS, including Lambda Proxy Integration. The Azure Functions plugin can be used to filter out headers disallowed by HTTP/2 when proxying HTTP/1.1 responses to HTTP/2 clients.

Deprecations in Kong 1.0

Core
- The API entity and related concepts, such as the /apis endpoint, have been removed from this release. Routes and Services are used instead.
- The old DAO implementation and the old schema validation library have been removed.

New Admin API
- Filtering now happens with URL path changes (/consumers/x/plugins) instead of query-string fields (/plugins?consumer_id=x), as illustrated below.
- Error messages have been reworked in this release to be more consistent, precise and informative.
- The PUT method has been reimplemented.
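The Admin API filtering change is easiest to see side by side; a sketch with a hypothetical consumer named "x", assuming Kong's default local Admin API address.

```sh
# Pre-1.0 query-string style (removed):
curl http://localhost:8001/plugins?consumer_id=x
# Kong 1.0 path-based style:
curl http://localhost:8001/consumers/x/plugins
```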
Plugins
- The galileo plugin has been removed.
- Some internal modules that were used by plugin authors before the introduction of the Plugin Development Kit (PDK) in 0.14.0 have now been removed, including the kong.tools.ip, kong.tools.public and kong.tools.responses modules.

Major bug fixes
- SNIs (Server Name Indication) are now correctly paginated.
- Null and default values are now handled better.
- Datastax Enterprise 6.X doesn't throw errors anymore.
- Several typo, style and grammar fixes have been made.
- The router no longer injects an extra / in certain cases.

Read more about this release on Kong's blog post.

Kong 1.0 launches: the only open source API platform specifically built for microservices, cloud, and serverless
Eclipse 4.10.0 released with major improvements to colors, fonts preference page and more
Windows Sandbox, an environment to safely test EXE files is coming to Windows 10 next year


LXD 3.8 released with automated container snapshots, ZFS compression support and more!

Melisha Dsouza
14 Dec 2018
5 min read
Yesterday, the LXD team announced the release of LXD 3.8, the last update for 2018, improving existing features as well as adding new ones. LXD is a system container manager that offers a user experience similar to virtual machines, but using Linux containers instead. LXD is free software, written in Go and developed under the Apache 2 license. It is secure by design, with unprivileged containers, resource restrictions and much more, and it scales from containers on a laptop to thousands of compute nodes. With advanced resource control and support for multiple storage backends, storage pools and storage volumes, LXD has been well received by the community.

Features of LXD 3.8

#1 Automated container snapshots
The new release includes three configuration keys to control automated snapshots and their naming convention:
- snapshots.schedule uses a CRON pattern to determine when to perform the snapshot.
- snapshots.schedule.stopped is a boolean that controls whether to snapshot stopped containers.
- snapshots.pattern is a format string with pongo2 templating support, used to set the name of snapshots that are not given one explicitly. This applies both to automated snapshots and to unnamed, manually created snapshots.

#2 Support for copy/move between projects
Users can now copy or move containers between projects using the newly available --target-project option added to both lxc copy and lxc move.

#3 cluster.https_address server option
LXD 3.8 includes a new cluster.https_address option, which facilitates internal cluster communication and makes it easy to prioritize and filter cluster traffic. Until now, clustered LXD servers had to be configured to listen on a single IPv4 or IPv6 address, and both the internal cluster traffic and regular client traffic used the same address. The new write-once key holds the address used for cluster communication and cannot currently be changed without removing the node from the cluster. Users can now change the regular core.https_address on clustered nodes to any address they want, making it possible to use a completely different network for internal cluster communication.

#4 Cluster image replication
LXD 3.8 introduces automatic image replication. Prior to this update, images would only get copied to other cluster members as containers on those systems requested them. The downside of this approach was that if an image was present on only a single system and that system went offline, the image could not be used until the system recovered. In LXD 3.8, all manually created or imported images are replicated on at least three systems. Images that are stored in the image store only as a cache entry do not get replicated.

#5 security.protection.shift container option
In previous versions, LXD had to rely on slow rewriting of all uids/gids on the filesystem whenever the container's idmap changed. This can be dangerous on systems prone to sudden shutdowns, as the operation cannot be safely resumed if interrupted partway. The newly introduced security.protection.shift configuration option prevents any such remapping, instead making any action that would result in one fail until the key is unset.

#6 Support for passing all USB devices
All USB devices can now be passed to a container by not specifying any vendorid or productid filter. Every USB device will be made visible to the container, including any device hotplugged after the fact.

#7 CLI override of default project
After reports from users that interacting with multiple projects can be tedious because of having to constantly run lxc project switch to move the client between projects, LXD 3.8 makes a --project option available throughout the command line client, letting users override the project for a particular operation.

#8 Bi-directional rsync negotiation
Recent LXD releases use rsync feature negotiation, where the source tells the server what rsync features it is using so the server can match them on the receiving end. LXD 3.8 introduces the reverse of that: the LXD server indicates what it supports as part of the migration protocol, allowing the source to restrict the features it uses. This makes migration more robust, as a newer LXD can migrate containers out to an older LXD without running into rsync feature mismatches.

#9 ZFS compression support
The LXD migration protocol now detects and uses ZFS compression support when available. Combined with zpool compression, this can very significantly reduce the size of the migration stream.

Hacker News was buzzing with positive remarks for this release, with users requesting more documentation on how to use LXD containers. Some users also compared LXD containers to Docker and Kubernetes, preferring the former over the latter. In addition to these new upgrades, the release also fixes multiple bugs from the previous version. You can head over to Linuxcontainers.org for more insights on this news.

Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
An update on Bcachefs - the "next generation Linux filesystem"
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
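As a rough illustration of the snapshot keys from feature #1 and the --target-project option from feature #2 above: a minimal sketch, in which the container name "c1" and the project name "staging" are placeholders.

```sh
lxc config set c1 snapshots.schedule "0 6 * * *"    # CRON pattern: snapshot daily at 06:00
lxc config set c1 snapshots.schedule.stopped true   # snapshot even while the container is stopped
lxc copy c1 c1-copy --target-project staging        # new cross-project copy in 3.8
```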


DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Melisha Dsouza
12 Dec 2018
2 min read
At KubeCon+CloudNativeCon this week, DigitalOcean announced the launch of its Kubernetes-as-a-Service offering to all developers. This is a limited release, with full general availability planned for early 2019. DigitalOcean first announced its container offering through an early access program in May this year, followed by limited availability in October.

Building on the simplicity that customers appreciated most, DigitalOcean Kubernetes (DOK8s) claims to be a powerfully simple managed Kubernetes service. Once customers define the size and location of their worker nodes, DigitalOcean will provision, manage, and optimize the services needed to run a Kubernetes cluster. DOK8s is easy to set up as well. During the announcement, DigitalOcean VP of Product Shiven Ramji said, "Kubernetes promises to be one of the leading technologies in a developer's arsenal to gain the scalability, portability and availability needed to build modern apps. Unfortunately, for many, it's extremely complex to manage and deploy. With DigitalOcean Kubernetes, we make running containerized apps consumable for any developer, regardless of their skills or resources."

The new release builds on the early access release of the service, including capabilities like node provisioning, durable storage, firewalls, load balancing and similar tools. The newly added features include:

- Guided configuration experiences to assist users in provisioning, configuring and deploying clusters
- Open APIs to enable easy integrations with developer tools
- The ability to programmatically create and update cluster and node settings
- Expanded version support, including Kubernetes version 1.12.1, with support for 1.13.1 coming shortly
- Support for DOK8s in the DigitalOcean API, making it easy for users to create and manage their clusters through DigitalOcean's API
- Simple pricing for DigitalOcean Kubernetes: customers pay only for the underlying resources they use (Droplets, Block Storage, and Load Balancers)

Head over to DigitalOcean's blog to know more about this announcement.

'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Stripe open sources 'Skycfg', a configuration builder for Kubernetes
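As an illustration of the API support mentioned in the feature list above, here is a hypothetical cluster-creation request against DigitalOcean's v2 API; the endpoint path, payload fields and the "1.12.1-do.1" version slug are assumptions for illustration rather than values taken from the announcement.

```sh
# Hypothetical request; endpoint, fields and version slug are assumptions.
curl -X POST "https://api.digitalocean.com/v2/kubernetes/clusters" \
  -H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "demo", "region": "nyc1", "version": "1.12.1-do.1",
       "node_pools": [{"size": "s-1vcpu-2gb", "count": 3, "name": "pool-1"}]}'
```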


Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At the KubeCon+CloudNativeCon happening in Seattle this week, Elastic N.V., the pioneer behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm Charts make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing and sharing Kubernetes applications.

The ease of using Kubernetes to manage containerized workloads has also led Elastic users to deploy their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade and run their applications on Kubernetes. With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas and spaces. According to the blog post, Helm charts will serve as a "way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies."

Why should developers consider Helm charts?

Helm charts give users the ability to leverage Kubernetes packages with the click of a button or a single CLI command. Kubernetes can be complex to use, impairing developer productivity. Helm charts improve productivity as follows:

- With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which in turn automates deployment of their dev-test environment.
- Helm comes with "push button" deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
- Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm Charts allow software vendors and developers to preconfigure their applications with sensible defaults, enabling users to change parameters of the application/chart through a consistent interface.
- Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors due to incorrect configuration file entries or mangled deployment recipes.
- Deploying and maintaining Kubernetes applications can otherwise be tedious and error-prone. Helm Charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub. In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud native technologies and companies - another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news, or check out MarketWatch for more insights.
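Assuming the alpha charts are served from Elastic's Helm repository at helm.elastic.co (and using the Helm 2 syntax current at the time of the announcement), installation looks roughly like this; the chart names match the GitHub charts mentioned above.

```sh
# A minimal sketch, assuming Elastic's chart repository and Helm 2 syntax.
helm repo add elastic https://helm.elastic.co
helm install --name elasticsearch elastic/elasticsearch
helm install --name kibana elastic/kibana
```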
Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
How to perform Numeric Metric Aggregations with Elasticsearch

‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
The KubeCon+CloudNativeCon happening in Seattle this week has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss advancements on the cloud front.

At this year's conference, Google Cloud announced the beta availability of Istio for its Google Kubernetes Engine. Istio was launched in the middle of 2017 as a result of a collaboration between Google, IBM and Lyft. According to Google, this open source "service mesh", used to connect, manage and secure microservices on a variety of platforms like Kubernetes, will play a vital role in helping developers make the most of their microservices.

Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide "more granular visibility, security and resilience for Kubernetes-based apps". The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president of Constellation Research Inc., compared software containers to "cars": "Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running."

Additional features of Istio

- Istio allows developers and operators to manage applications as services and not as lots of different infrastructure components.
- Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application. Users need not embed any client libraries in their code to use this functionality.
- Istio on GKE comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.
- Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit, and provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction while providing non-replayable identity protection.

Istio on GKE is yet another step that will make it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

Google's Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What's new in Google Cloud Functions serverless platform
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
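In practice, the beta described above is exposed as a GKE add-on at cluster-creation time; a hypothetical invocation, in which the --addons=Istio and --istio-config flag names are assumptions based on the beta add-on model rather than values quoted in the announcement.

```sh
# Hypothetical beta-era invocation; flag names are assumptions.
gcloud beta container clusters create istio-demo \
    --addons=Istio --istio-config=auth=MTLS_STRICT \
    --zone=us-central1-a --num-nodes=4
```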


NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support

Sugandha Lahoti
11 Dec 2018
2 min read
At the ongoing KubeCon + CloudNativeCon North America 2018, NeuVector has upgraded its line of container network security products with the release of containerd and CRI-O run-time support. Attendees of the conference are invited to learn how customers use NeuVector and get 1:1 demos of the platform's new capabilities.

Containerd is a Cloud Native Computing Foundation incubating project. It is a container run-time built to emphasize simplicity, robustness, and portability while managing the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments and more. NeuVector is testing the containerd version on the latest IBM Cloud Kubernetes Service version, which uses the containerd run-time.

CRI-O is an implementation of the Kubernetes container run-time interface enabling OCI-compatible run-times; it is a lightweight alternative to Docker as a run-time for Kubernetes. CRI-O is made up of several components, including:

- an OCI-compatible runtime
- containers/storage
- containers/image
- networking (CNI)
- container monitoring (conmon)
- security provided by several core Linux capabilities

With this newly added support, organizations using containerd or CRI-O can deploy NeuVector to secure their container environments.

Stripe open sources 'Skycfg', a configuration builder for Kubernetes
Kubernetes 1.13 released with new features and fixes to a major security flaw
Introducing 'Pivotal Function Service' (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads


Docker announces Docker Desktop Enterprise

Savia Lobo
05 Dec 2018
3 min read
Yesterday, at DockerCon Europe 2018, Docker announced Docker Desktop Enterprise, an easy, fast, and secure way to build production-ready containerized applications.

Docker Desktop Enterprise is a new addition to Docker's desktop product portfolio, which currently includes the free Docker Desktop Community products for macOS and Windows. Docker Desktop Enterprise lets developers work with the frameworks and languages they are comfortable with, and helps IT teams safely configure, deploy, and manage development environments while adhering to corporate standards. The enterprise version thus enables organizations to quickly move containerized applications from development to production and reduce their time to market.

Features of Docker Desktop Enterprise

Enterprise manageability
With Docker Desktop Enterprise, IT teams and application architects can present developers with application templates designed specifically for their team, to bootstrap and standardize the development process and provide a consistent environment all the way to production. For the IT team, Docker Desktop Enterprise is packaged as standard MSI (Windows) and PKG (Mac) distribution files, which work with existing endpoint management tools with lockable settings via policy files. This edition also provides developers with ready-to-code, customized and approved application templates.

Enterprise deployment and configuration packaging
IT desktop admins can deploy and manage Docker Desktop Enterprise across distributed developer teams with their preferred endpoint management tools, using standard MSI and PKG files. Desktop administrators can also enable or disable particular settings within Docker Desktop Enterprise to meet corporate standards and provide the best developer experience. Application architects provide developers with trusted, customized application templates through the Application Designer interface in Docker Desktop Enterprise, helping to improve reliability and security by ensuring developers start from approved designs.

Increased developer productivity, shipping production-ready containerized applications
Developers can quickly use company-provided application templates that instantly replicate production-approved application configurations on the local desktop, using configurable version packs. With these version packs, developers can synchronize their desktop development environment with the same Docker API and Kubernetes versions used in production with Docker Enterprise. No Docker CLI commands are required to get started with configurable version packs. Developers can also use the Application Designer interface's template-based workflows for creating containerized applications. If you have never launched a container before, the Application Designer interface provides the foundational container artifacts and your organization's skeleton code to help you get started with containers in minutes.

Read more about Docker Desktop Enterprise here.

Gremlin makes chaos engineering with Docker easier with new container discovery feature
Docker faces public outcry as Docker for Mac and Windows can be downloaded only via Docker Store login
Zeit releases Serverless Docker in beta

Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes

Melisha Dsouza
05 Dec 2018
2 min read
On 3rd December, Stripe announced the open sourcing of Skycfg, a configuration builder for Kubernetes. Skycfg was developed by Stripe as an extension library for the Starlark language, adding support for constructing Protocol Buffer messages. The team states that as the implementation of Skycfg stabilizes, the public API surface will be expanded so that Skycfg can be combined with other Starlark extensions.

Benefits of Skycfg

- Skycfg ensures type safety. It uses protobuf, which has a statically-typed data model, so the type of every field is known to Skycfg when it is building a configuration. Users are freed from the risk of accidentally assigning a string to a number, a struct to a different struct, or forgetting to quote a YAML value.
- Users can reduce duplicated typing and share logic by defining helper functions. Starlark supports importing modules from other files, which can be used to share common code between configurations; these modules can shield service owners from complex Kubernetes logic.
- Skycfg supports limited dynamic behavior through the use of context variables, which let the Go caller pass arbitrary key:value pairs in the ctx parameter.
- Skycfg simplifies the configuration of Kubernetes services, Envoy routes, Terraform resources, and other complex configuration data.

Head over to GitHub for all the code and supporting files.

Google Kubernetes Engine was down last Friday, users left clueless of outage status and RCA
Introducing Alpha Support for Volume Snapshotting in Kubernetes 1.12
'AWS Service Operator' for Kubernetes now available allowing the creation of AWS resources using kubectl
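To make the type-safety point above concrete: Skycfg configs are Starlark files (a Python dialect) whose main() returns a list of protobuf messages. A minimal hypothetical sketch follows; the proto.package() helper follows the project's README, but the exact package paths and field names are assumptions for illustration.

```python
# Hypothetical Skycfg configuration (Starlark); package paths are assumptions.
corev1 = proto.package("k8s.io.api.core.v1")
metav1 = proto.package("k8s.io.apimachinery.pkg.apis.meta.v1")

def main(ctx):
    # Fields are type-checked against the protobuf schema, so assigning
    # a string where an integer is expected fails at build time.
    ns = corev1.Namespace(
        metadata = metav1.ObjectMeta(name = "example"),
    )
    return [ns]
```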


Aviatrix introduces Aviatrix Orchestrator to provide powerful orchestration for AWS Transit Network Gateway at re:Invent 2018

Bhagyashree R
30 Nov 2018
2 min read
Yesterday, at Amazon re:Invent, Aviatrix, whose software helps users manage cloud deployments, announced and demonstrated Aviatrix Orchestrator. The new feature makes connecting multiple networks much easier: essentially, it unifies the management of both AWS native networking services and Aviatrix services via a single management console.

How does Aviatrix Orchestrator support AWS Transit Gateway?

AWS Transit Gateway lets customers interconnect their virtual private clouds and on-premises networks through a single gateway. Users only need to create and manage a single connection from the central gateway to each Amazon VPC, on-premises data center, or remote office across their network. It acts as a hub that controls how traffic is routed among all the connected networks, which act like spokes.

Aviatrix Orchestrator adds an automation layer to AWS Transit Gateway that allows users to provision and implement route domains securely and accurately. Users can automatically configure and propagate segmentation policies and leverage built-in troubleshooting and visualization tools for monitoring the entire environment. Some of the advantages of combining Aviatrix Orchestrator and AWS Transit Gateway include:

- Ensuring your AWS network follows virtual private cloud segmentation best practices
- Limiting lateral movement in the event of a security breach
- Reducing the impact of human error by removing the need for potentially tedious manual configuration
- Minimizing the blast radius that can result from misconfigurations
- Replacing a flat architecture with a transit architecture

Aviatrix Orchestrator is now available as an optional feature of the Aviatrix AVX Controller. New customers can launch the Aviatrix Secure Networking Platform AMI from AWS Marketplace to get access to this functionality; existing customers can upgrade to the latest version of the AVX software to use this feature. For more detail, visit the Aviatrix website.

cstar: Spotify's Cassandra orchestration tool is now open source!
Amazon re:Invent announces Amazon DynamoDB Transactions, CloudWatch Logs Insights and cloud security conference, Amazon re:Inforce 2019
AWS re:Invent 2018: Amazon announces a variety of AWS IoT releases
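The hub-and-spoke model described above maps onto two AWS CLI calls: one creating the hub, one attaching a spoke VPC. A brief sketch, in which all resource IDs are placeholders.

```sh
# Create the transit gateway (the hub), then attach a VPC (a spoke).
aws ec2 create-transit-gateway --description "network hub"
aws ec2 create-transit-gateway-vpc-attachment \
    --transit-gateway-id tgw-0123456789abcdef0 \
    --vpc-id vpc-0123456789abcdef0 \
    --subnet-ids subnet-0123456789abcdef0
```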