
Tech News - Cloud & Networking

376 Articles

Amazon S3 is retiring support for path-style API requests; sparks censorship fears

Fatema Patrawala
06 May 2019
5 min read
Last week, on Tuesday, Amazon announced that Amazon S3 will no longer support path-style API requests. Currently, Amazon S3 supports two request URI styles in all regions: path-style (also known as V1), which includes the bucket name in the path of the URI (example: //s3.amazonaws.com/&lt;bucketname&gt;/key), and virtual-hosted style (also known as V2), which uses the bucket name as part of the domain name (example: //&lt;bucketname&gt;.s3.amazonaws.com/key).

The Amazon team mentions in the announcement, “In our effort to continuously improve customer experience, the path-style naming convention is being retired in favor of virtual-hosted style request format.” They have also asked customers to update their applications to use the virtual-hosted style request format when making S3 API requests, and to do so before September 30th, 2020 to avoid any service disruptions. Customers using the AWS SDK can upgrade to the most recent version of the SDK to ensure their applications are using the virtual-hosted style request format.

They further mention, “Virtual-hosted style requests are supported for all S3 endpoints in all AWS regions. S3 will stop accepting requests made using the path-style request format in all regions starting September 30th, 2020. Any requests using the path-style request format made after this time will fail.”

Users on Hacker News see this as a poor development by Amazon and have noted its implication that collateral freedom techniques using Amazon S3 will no longer work. One of them commented strongly on this, “One important implication is that collateral freedom techniques [1] using Amazon S3 will no longer work. To put it simply, right now I could put some stuff not liked by Russian or Chinese government (maybe entire website) and give a direct s3 link to https://s3.amazonaws.com/mywebsite/index.html. Because it's https — there is no way man in the middle knows what people read on s3.amazonaws.com. With this change — dictators see my domain name and block requests to it right away. I don't know if they did it on purpose or just forgot about those who are less fortunate in regards to access to information, but this is a sad development. This censorship circumvention technique is actively used in the wild and loosing Amazon is no good.”

The Amazon team suggests that if your application is not able to utilize the virtual-hosted style request format, or if you have any questions or concerns, you may reach out to AWS Support. To know more about this news, check out the official announcement page from Amazon.

Update from the Amazon team on 8th May

Amazon's Chief Evangelist for AWS, Jeff Barr, sat with the S3 team to understand this change in detail. After getting a better understanding, he posted an update on why the team plans to deprecate the path-based model. Here is his comparison of the old versus the new: S3 currently supports two different addressing models, path-style and virtual-hosted style. Take a quick look at each one.
The path-style model looks either like this (the global S3 endpoint):

https://s3.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Or this (one of the regional S3 endpoints):

https://s3-us-east-2.amazonaws.com/jbarr-public/images/ritchie_and_thompson_pdp11.jpeg
https://s3-us-east-2.amazonaws.com/jeffbarr-public/classic_amazon_door_desk.png

Here, jbarr-public and jeffbarr-public are bucket names; /images/ritchie_and_thompson_pdp11.jpeg and /classic_amazon_door_desk.png are object keys. Even though the objects are owned by distinct AWS accounts, are in different S3 buckets, and are possibly in distinct AWS regions, both of them are in the DNS subdomain s3.amazonaws.com. Hold that thought while we look at the equivalent virtual-hosted style references:

https://jbarr-public.s3.amazonaws.com/images/ritchie_and_thompson_pdp11.jpeg
https://jeffbarr-public.s3.amazonaws.com/classic_amazon_door_desk.png

These URLs reference the same objects, but the objects are now in distinct DNS subdomains (jbarr-public.s3.amazonaws.com and jeffbarr-public.s3.amazonaws.com, respectively). The difference is subtle, but very important. When you use a URL to reference an object, DNS resolution is used to map the subdomain name to an IP address. With the path-style model, the subdomain is always s3.amazonaws.com or one of the regional endpoints; with the virtual-hosted style, the subdomain is specific to the bucket. This additional degree of endpoint specificity is the key that opens the door to many important improvements to S3.

A select few in the community are in favor of this, as one user comment on Hacker News says, “Thank you for listening! The original plan was insane. The new one is sane. As I pointed out here https://twitter.com/dvassallo/status/1125549694778691584 thousands of printed books had references to V1 S3 URLs. Breaking them would have been a huge loss. Thank you!”

But for others, the Amazon team has failed to address the domain censorship issue, as another user says, “Still doesn't help with domain censorship. This was discussed in-depth in the other thread from yesterday, but TLDR, it's a lot harder to block https://s3.amazonaws.com/tiananmen-square-facts than https://tiananmen-square-facts.s3.amazonaws.com because DNS lookups are made before HTTPS kicks in.”

Read about this update in detail here.

Read next:
Amazon S3 Security access and policies
3 announcements about Amazon S3 from re:Invent 2018: Intelligent-Tiering, Object Lock, and Batch Operations
Amazon introduces S3 batch operations to process millions of S3 objects
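A practical note for developers preparing for the change: most AWS SDKs can be told explicitly which addressing style to use. Below is a minimal sketch using Python's boto3; the bucket name, key, and region are placeholders, not values from the announcement.

```python
import boto3
from botocore.client import Config

# Ask the SDK to build virtual-hosted style URLs (bucket name in the host)
# instead of path-style URLs (bucket name in the path).
s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    config=Config(s3={"addressing_style": "virtual"}),
)

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "images/photo.jpeg"},
    ExpiresIn=3600,
)
# The host is now bucket-specific, e.g.
# https://example-bucket.s3.us-east-2.amazonaws.com/images/photo.jpeg?...
print(url)
```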


Puppet announces updates in a bid to help organizations manage their "automation footprint"

Richard Gall
03 May 2019
3 min read
There are murmurs on the internet that tools like Puppet are being killed off by Kubernetes. The reality is a little more complex. True, Kubernetes poses some challenges to various players in the infrastructure automation market, but these tools nevertheless remain important for engineers charged with managing infrastructure. Kubernetes is forcing this market to adapt, and with Puppet announcing new tools and features for its portfolio in Puppet Enterprise 2019.1 yesterday, it's clear that the team is making the necessary strides to remain a key part of the infrastructure automation landscape.

Update: This article was amended to highlight that Puppet Enterprise is a distinct product separate from Continuous Delivery for Puppet Enterprise.

What's new for Puppet Enterprise 2019.1?

There are two key elements to the Puppet announcement: enhanced integration with Puppet Bolt, an open source, agentless task runner, and improved capabilities with Continuous Delivery for Puppet Enterprise.

Puppet Bolt

Puppet Bolt, the Puppet team argues, offers a really simple way to get started with infrastructure automation "without requiring an agent installed on a remote target." The Puppet team explains that Puppet Bolt essentially allows users to expand the scope of what they can automate without losing the consistency and control you'd expect when using a tool like Puppet. This has some significant benefits in the context of Kubernetes. Bryan Belanger, Principal Consultant at Autostructure, said, "We love using Puppet Bolt because it leverages our existing Puppet roles and classifications allowing us to easily make changes to large groups of servers and upgrade Kubernetes clusters quicker, which is often a pain if done manually." Belanger continues, "With the help of Puppet Bolt, we were also able to fix more than 1,000 servers within five minutes and upgrade our Kubernetes clusters within four hours, which included coding and tasks."

Continuous Delivery for Puppet Enterprise

Updates to the Continuous Delivery product aim to make DevOps practices easier. The Puppet team is clearly trying to make it easier for organizations to empower their colleagues and continue to build a culture where engineers are not simply encouraged to be responsible for code deployment, but also able to do it with minimal fuss. Module Delivery Pipelines now mean modules can be independently deployed without blocking others, while Simplified Puppet Deployments aims to make it easier for engineers who aren't familiar with Puppet to "push simple infrastructure changes immediately and easily perform complex rolling deployments to a group of nodes in batches in one step." There is also another dimension that aims to help engineers take proactive steps to tackle resiliency and security issues: with Impact Analysis, teams will be able to look at the potential impact of a deployment before it's done.

Read next: "This is John. He literally wrote the book on Puppet" – An Interview with John Arundel

What's the big idea behind this announcement?

The overarching narrative coming from the top is about supporting teams to scale their DevOps processes. It's about making organizations' 'automation footprint' more manageable. "IT teams need a simple way to get started with automation and a solution that grows with them as their automation footprint grows," Matt Waxman, Head of Product at Puppet, explains. "You shouldn't have to throw away your existing scripts or tools to scale automation across your organization. Organizations need a solution that is extensible — one that complements their current automation efforts and helps them scale beyond individuals to multiple teams."

Puppet Enterprise 2019.1 will be generally available on May 7, 2019. Learn more here.


The major DNS blunder at Microsoft Azure affects Office 365, One Drive, Microsoft Teams, Xbox Live, and many more services

Amrata Joshi
03 May 2019
3 min read
It seems all is not well at Microsoft after yesterday's outage, in which Microsoft's Azure cloud went up and down globally because of a DNS configuration issue. The outage, which started at 1:20 pm yesterday and lasted for more than an hour, ended up affecting Microsoft's cloud services, including Office 365, OneDrive, Microsoft Teams, Xbox Live, and many others used by Microsoft's commercial customers. Due to the networking connectivity errors in Microsoft Azure, even third-party apps and sites running on Microsoft's cloud were affected.

Around 2:30 pm, Microsoft started gradually recovering Azure regions one by one. Microsoft is yet to completely troubleshoot this major issue and has already warned that it might take some time to get everyone back up and running. This isn't the first time a DNS outage has affected Azure: this year in January, a few customers' databases went missing, which affected a number of Azure SQL databases that utilize custom KeyVault keys for Transparent Data Encryption (TDE).

https://twitter.com/AzureSupport/status/1124046510411460610

The Azure status page reads, "Customers may experience intermittent connectivity issues with Azure and other Microsoft services (including M365, Dynamics, DevOps, etc)." Microsoft engineers found that an incorrect name server delegation issue affected DNS resolution and network connectivity, which in turn affected compute, storage, app service, AAD, and SQL database resources. Even on the Microsoft 365 status page, Redmond's techies have blamed an internal DNS configuration error for the downtime. During the migration of the DNS system to Azure DNS, some domains for Microsoft services were incorrectly updated. The good thing is that no customer DNS records were impacted during this incident, and the availability of Azure DNS remained at 100% throughout; only records for Microsoft services were affected.

According to Microsoft, the broken systems have been fixed, the three-hour outage has come to an end, and Azure's network infrastructure will soon get back to normal.

https://twitter.com/MSFT365Status/status/1124063490740826133

Users have reported issues with accessing the cloud service and are complaining. A user commented on Hacker News, "The sev1 messages in my inbox currently begs to differ. there's no issue maybe with the dns at this very moment but the platform is thoroughly fucked up." Users are also questioning the reliability of Azure. Another comment reads, "Man... Azure seems to be an order of magnitude worse than AWS and GCP when it comes to reliability."

To know more about the status of the situation, check out Microsoft's post.

Read next:
Microsoft brings PostgreSQL extension and SQL Notebooks functionality to Azure Data Studio
Microsoft @MWC (Mobile World Congress) Day 1: HoloLens 2, Azure-powered Kinect camera and more!
Microsoft Cloud services' DNS outage results in deleting several Microsoft Azure database records


Docker announces collaboration with Microsoft’s .NET at DockerCon 2019

Savia Lobo
02 May 2019
4 min read
Using Docker and .NET together was first brought up in 2017, when Microsoft explained the cons of using them together. Last year's DockerCon update showed multiple .NET demos of how one can use Docker both for modern applications and for older applications that use traditional architectures. This made it easier for users to containerize .NET applications using tools from both Microsoft and Docker.

The team said that "most of their effort to improve the .NET Core Docker experience in the last year has been focused on .NET Core 3.0." "This is the first release in which we've made substantive runtime changes to make CoreCLR much more efficient, honor Docker resource limits better by default, and offer more configuration for you to tweak," Microsoft writes in one of their blog posts. The team also mentions that they are invested in making .NET Core a true container runtime. They look forward to hardening .NET's runtime to make it container-aware and function efficiently in low-memory environments. Let's have a look at the different advantages of bringing Docker and .NET together.

Docker + .NET advantages

Less memory allocation and fewer GC heaps by default

With .NET Core 3.0, the team reduced the minimal generation 0 GC allocation budget to better align with modern processor cache sizes and cache hierarchy. With this, the initial allocation size, which was unnecessarily large, was significantly reduced without any perceivable loss of performance, bringing in improvements of tens of percentage points. The team also mentions a new policy for determining how many GC heaps to create. This matters most on machines where a low memory limit is set but no CPU limit is set on a machine with many CPU cores. The GC now reserves a memory segment with a minimum size of 16 MB per heap, which limits the number of heaps the GC will create. Both these changes result in lower memory usage by default and make the default .NET Core configuration better in more cases.

PowerShell added to .NET Core SDK container images

PowerShell Core has been added to the .NET Core SDK Docker container images, per requests from the community. There are two main scenarios that having PowerShell inside the .NET Core SDK container image enables, which were not otherwise possible:
Write .NET Core application Dockerfiles with PowerShell syntax, for any OS.
Write .NET Core application/library build logic that can be easily containerized.
Note: PowerShell Core is now available as part of .NET Core 3.0 SDK container images. It is not part of the .NET Core 3.0 SDK.

.NET Core images now available via Microsoft Container Registry

Microsoft teams are now publishing container images to the Microsoft Container Registry (MCR). There are two primary reasons for this change:
Syndicate Microsoft-provided container images to multiple registries, like Docker Hub and Red Hat.
Use Microsoft Azure as a global CDN for delivering Microsoft-provided container images.

Platform matrix and support

With .NET Core, the team tries to support a broad set of distros and versions. The policy for each distro is as follows:
Alpine: support tip and retain support for one quarter (3 months) after a new version is released. Currently, 3.9 is tip and the team will stop producing 3.8 images in a month or two.
Debian: support one Debian version per latest .NET Core version. This is also the default Linux image used for a given multi-arch tag. For .NET Core 3.0, the team may publish Debian 10 based images. However, they may produce Debian 9 based images for .NET Core 2.1 and 2.2, and Debian 8 images for earlier .NET Core versions.
Ubuntu: support one Ubuntu version per latest .NET Core version (18.04). As new Ubuntu LTS versions appear, the team will start supporting non-LTS Ubuntu versions as a means of validating the new LTS versions.
For Windows, they support the cross-product of Nano Server and .NET Core versions.

ARM architecture

The team plans to add support for ARM64 on Linux with .NET Core 3.0, complementing the ARM32 and X64 support already in place. This will enable .NET Core to be used in even more environments.

Apart from these advantages, the team has also added support for Docker memory and CPU limits. To know more about this partnership in detail, read Microsoft's official blog post.

Read next:
DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Are Debian and Docker slowly losing popularity?
Creating a Continuous Integration commit pipeline using Docker [Tutorial]
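As a footnote on what "honoring Docker resource limits" means in practice: a container-aware runtime has to discover its memory ceiling from the cgroup filesystem rather than from the host's total RAM. The sketch below is only an illustration of that mechanism, not part of Microsoft's announcement, and it assumes the cgroup v1 paths Docker commonly mounted at the time of writing.

```python
def container_memory_limit_bytes(path="/sys/fs/cgroup/memory/memory.limit_in_bytes"):
    """Return the cgroup v1 memory limit in bytes, or None if unavailable."""
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return None  # not in a container, or the host uses cgroup v2


limit = container_memory_limit_bytes()
if limit is None:
    print("No cgroup v1 memory limit found.")
else:
    print(f"Container memory limit: {limit / (1024 ** 2):.0f} MiB")
```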


GNU Shepherd 0.6.0 releases with updated translations, faster services, and much more

Amrata Joshi
24 Apr 2019
2 min read
Yesterday, the team behind GNU Shepherd announced the release of GNU Shepherd version 0.6.0, a service manager written in Guile that looks after the herd of system services. It provides a replacement for the service-managing capabilities of SysV-init (or any other init). This release has been bootstrapped with a few tools, including Autoconf 2.69, Automake 1.16.1, Makeinfo 6.5, and Help2man 1.47.8.

What's new in GNU Shepherd version 0.6.0?

Services can now be "one-shot".
The 'shepherd' deletes its socket file upon termination.
The 'herd stop S' command is no longer an error when S is already stopped.
The 'herd' command exits with a non-zero value when executing an action that fails.
The 'shepherd' ignores reboot errors while running in a container.
The translation of error messages has been fixed.
This release comes with a new translation, ta (Tamil). The list of updated translations includes uk, zh_CN, fr, pt_BR, sv, da, es, and ta.

Most users are happy and excited about this release. A user commented on the Hacker News thread, "I've written previously about how much I appreciate the Guile info manual. For a document in a relatively obscure help system (other than Emacs users, who even knows about Texinfo?), it's carefully written with an eye to empowering its users. It's perhaps a bit quixotic, but you get the feeling that the GNU project really wants to deliver an OS written in Scheme all the way down, totally under the control of an enlightened end user. The Shepherd project certainly fits with that vision." Another user commented, "I hesitate to speak for emacs users, because I'm not one really, but I suspect the info format feels really comfortable when viewed with emacs."

A few users think that Shepherd might be a replacement for systemd, a software suite that provides building blocks for the Linux operating system. One comment reads, "Is Shepherd meant to be a replacement for systemd (et al), then?"

To know more about this news, check out GNU's official announcement.

Read next:
GNU Shepherd 0.5.0 releases
GNU Nano 4.0 text editor releases!
GNU Octave 5.1.0 releases with new changes and improvements


Fastly, edge cloud platform, files for IPO

Bhagyashree R
22 Apr 2019
3 min read
Last week, Fastly Inc., a provider of an edge cloud platform, announced that it has filed its proposed initial public offering (IPO) with the US Securities and Exchange Commission. Last year in July, in its last round of financing before a public offering, the company raised a $40 million investment. The book-running managers for the proposed offering are BofA Merrill Lynch, Citigroup, and Credit Suisse. William Blair, Raymond James, Baird, Oppenheimer & Co., Stifel, Craig-Hallum Capital Group, and D.A. Davidson & Co. are co-managers for the proposed offering.

Founded by Artur Bergman in 2011, Fastly is an American cloud computing services provider. Its edge cloud platform provides a content delivery network, Internet security services, load balancing, and video and streaming services. The edge cloud platform is designed from the ground up to be programmable and to support agile software development. This programmable edge cloud platform gives developers real-time visibility and control through streamed log data, so developers are able to instantly see the impact of new code in production, troubleshoot issues as they occur, and rapidly identify suspicious traffic. Fastly boasts of catering to customers like The New York Times, Reddit, GitHub, Stripe, Ticketmaster, and Pinterest.

In the unfinished prospectus, the company shared how it has grown over the years, the risks of investing in the company, its plans for the future, and more. The company shows steady growth in its revenue: while in December 2017 it was $104.9 million, it increased to $144.6 million by the end of 2018. Its loss has also shown some decline, from $32.5 million in December 2017 to $30.9 million in December 2018. Predicting its future market value, the prospectus says, "When incorporating these additional offerings, we estimate a total market opportunity of approximately $18.0 billion in 2019, based on expected growth from 2017, to $35.8 billion in 2022, growing with an expected CAGR of 25.6%."

Fastly has not yet determined the number of shares to be offered or the price range for the proposed offering. Currently, the company's public filing has a placeholder amount of $100 million. However, looking at the amount of funding the company has received, TechCrunch predicts that it is more likely to get closer to $1 billion when it finally prices its shares.

Fastly has two classes of authorized common stock: Class A and Class B. The rights of both classes of common stockholders are identical, except with respect to voting and conversion. Each Class A share is entitled to one vote per share and each Class B share is entitled to 10 votes per share. Each Class B share is convertible into one share of Class A common stock. The Class A common stock will be listed on The New York Stock Exchange under the symbol "FSLY."

To read more in detail, check out the IPO filing by Fastly.

Read next:
Fastly open sources Lucet, a native WebAssembly compiler and runtime
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
Dark Web Phishing Kits: Cheap, plentiful and ready to trick you
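As a quick arithmetic check, the growth figures quoted from the prospectus are internally consistent; the numbers below simply restate the quoted projection, they are not additional data.

```python
# $18.0B (2019) compounding at a 25.6% CAGR for three years should land near
# the quoted $35.8B (2022).
market_2019 = 18.0            # $ billions, from the prospectus quote
cagr = 0.256
market_2022 = market_2019 * (1 + cagr) ** 3
print(round(market_2022, 1))  # ~35.7, in line with the quoted figure
```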

OpenSSH 8.0 released; addresses SCP vulnerability and new SSH additions

Fatema Patrawala
19 Apr 2019
2 min read
Theo de Raadt and the OpenBSD developers who maintain OpenSSH today released the latest version, OpenSSH 8.0. OpenSSH 8.0 has an important security fix for a weakness in the scp(1) tool used for copying files to and from remote systems. Until now, when copying files from a remote system to a local directory, scp was not verifying the filenames of what was being sent from the server to the client. This allowed a hostile server to create or clobber unexpected local files with attacker-controlled data, regardless of which file(s) were actually requested for copying from the remote server. OpenSSH 8.0 adds client-side checking that the filenames sent from the server match the command-line request.

Even though this client-side checking has been added to scp, the OpenSSH developers recommend against using scp and suggest sftp, rsync, or other alternatives instead. "The scp protocol is outdated, inflexible and not readily fixed. We recommend the use of more modern protocols like sftp and rsync for file transfer instead," mention the OpenSSH developers.

Also new in OpenSSH 8.0 is support for ECDSA keys in PKCS#11 tokens and an experimental quantum-computing-resistant key exchange method. The default RSA key size from ssh-keygen has been increased to 3072 bits, and more SSH utilities now support a "-v" flag for greater verbosity. The release also comes with a wide range of fixes throughout, including a number of portability fixes. More details on OpenSSH 8.0 are available on OpenSSH.com.

Read next:
OpenSSH, now a part of the Windows Server 2019
OpenSSH 7.8 released!
OpenSSH 7.9 released
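As a practical footnote to the scp advisory above: for teams following the developers' advice, SFTP is usually the smallest change. Below is a minimal sketch using the third-party paramiko library; the host, username, and file paths are placeholders, and rsync over SSH is an equally valid alternative.

```python
import paramiko

# Copy a single remote file over SFTP instead of scp.
client = paramiko.SSHClient()
client.load_system_host_keys()                                # trust known_hosts entries
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # fail on unknown hosts
client.connect("server.example.com", username="deploy")

sftp = client.open_sftp()
try:
    sftp.get("/var/log/app/report.txt", "report.txt")  # remote -> local
finally:
    sftp.close()
    client.close()
```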


Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes

Savia Lobo
19 Apr 2019
2 min read
This week, the team at Linkerd announced an updated version of the service mesh, Linkerd 2.3. In this release, mTLS graduates from an experimental feature to a fully supported one. Along with several important security primitives, the key update in Linkerd 2.3 is that it turns on authenticated, confidential communication between meshed services by default.

Linkerd, a Cloud Native Computing Foundation (CNCF) project, is a service mesh designed to give platform-wide observability, reliability, and security without requiring configuration or code changes. The team at Linkerd says, "Securing the communication between Kubernetes services is an important step towards adopting zero-trust networking. In the zero-trust approach, we discard assumptions about a datacenter security perimeter and instead push requirements around authentication, authorization, and confidentiality “down” to individual units. In Kubernetes terms, this means that services running on the cluster validate, authorize, and encrypt their own communication."

Linkerd 2.3 addresses the challenges of adopting zero-trust networking as follows:
The control plane ships with a certificate authority (called simply "identity").
The data plane proxies receive TLS certificates from this identity service, tied to the Kubernetes Service Account that the proxy belongs to, rotated every 24 hours.
The data plane proxies automatically upgrade all communication between meshed services to authenticated, encrypted TLS connections using these certificates.
Since the control plane also runs on the data plane, communication between control plane components is secured in the same way.
All of these changes are enabled by default and require no configuration.

"This release represents a major step forward in Linkerd's security roadmap. In an upcoming blog post, Linkerd creator Oliver Gould will be detailing the design tradeoffs in this approach, as well as covering Linkerd's upcoming roadmap around certificate chaining, TLS enforcement, identity beyond service accounts, and authorization," the official Linkerd blog mentions. These topics and all the other fun features in 2.3 will be further discussed in the upcoming Linkerd Online Community Meeting on Wednesday, April 24, 2019 at 10am PT.

To know more about Linkerd 2.3 in detail, visit its official website.

Read next:
Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more


Platform9 open sources Klusterkit to simplify the deployment and operations of Kubernetes clusters

Bhagyashree R
16 Apr 2019
3 min read
Today, Platform9 open sourced Klusterkit under the Apache 2.0 license. It is a set of three open source tools that can be used separately or in tandem to simplify the creation and management of highly-available, multi-master, production-grade Kubernetes clusters in on-premises, air-gapped environments.

Tools included in Klusterkit

'etcdadm': Inspired by the 'kubeadm' command, 'etcdadm' is a command-line interface (CLI) for operating an etcd cluster. It makes it easier to create a new cluster, add a new member, or remove a member from an existing cluster. It has been adopted by the Kubernetes Cluster Lifecycle SIG, a group that focuses on deployment and upgrades of clusters.

'nodeadm': This is a CLI node administration tool that complements kubeadm by deploying all the dependencies kubeadm requires. You can easily deploy a Kubernetes control plane or nodes on any machine running Linux with the help of this tool.

'cctl': This is a cluster lifecycle management tool based on the Kubernetes community's Cluster API spec. It uses the other two tools in Klusterkit to easily deploy and maintain highly-available Kubernetes clusters on-premises, even in air-gapped environments.

Features of Klusterkit

Multi-master (K8s HA) support
Deployment and management of secure etcd clusters
Rolling upgrade and rollback capability
Works in air-gapped environments
Backup and recovery of etcd clusters from quorum loss
Control plane protection from low-memory/low-CPU conditions

[Diagram: Klusterkit solution architecture. Source: Platform9]

Klusterkit stores the metadata of the Kubernetes cluster you build in a single file named 'cctl-state.yaml'. You can invoke the cctl CLI to orchestrate the lifecycle of a Kubernetes cluster from any machine that contains this state file. For performing CRUD operations on clusters, cctl implements and calls into the cluster-api interface as a library. It uses ssh-provider, the machine controller for the cluster-api reference implementation. The ssh-provider, in turn, calls etcdadm and nodeadm to perform cluster operations.

In an email sent to us, Arun Sriraman, Kubernetes Technical Lead Manager at Platform9, explaining the importance of Klusterkit, said, "Klusterkit presents a powerful, yet easy-to-use Kubernetes toolset that complements community efforts like Cluster API and kubeadm to allow enterprises a path to modernize applications to use Kubernetes, and run them anywhere -- even in on-premise, air-gapped environments."

To know more in detail, check out the documentation on GitHub.

Read next:
Pivotal and Heroku team up to create Cloud Native Buildpacks for Kubernetes
Kubernetes 1.14 releases with support for Windows nodes, Kustomize integration, and much more
Introducing 'Quarkus', a Kubernetes native Java framework for GraalVM & OpenJDK HotSpot


Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh

Amrata Joshi
12 Apr 2019
2 min read
This week, the team at Google Cloud announced the beta version of Traffic Director, a networking management tool for service mesh, at Google Cloud Next. Traffic Director Beta will help network managers understand what's happening in their service mesh. A service mesh is the network of microservices that make up applications, together with the interactions between them.

Features of Traffic Director Beta

Fully managed with SLA: Traffic Director's production-grade features come with a 99.99% SLA. Users don't have to worry about deploying and managing the control plane.

Traffic management: With the help of Traffic Director, users can easily deploy everything from simple load balancing to advanced features like request routing and percentage-based traffic splitting.

Build resilient services: Users can keep their service up and running by deploying it across multiple regions as VMs or containers. Traffic Director can be used to deliver global load balancing with automatic cross-region overflow and failover. With Traffic Director, users can deploy their service instances in multiple regions while requiring only a single service IP.

Scaling: Traffic Director handles growth in deployments and scales for larger services and installations.

Traffic management for open service proxies: This management tool provides a GCP (Google Cloud Platform)-managed traffic management control plane for xDSv2-compliant open service proxies like Envoy.

Compatible with VMs and containers: Users can deploy their Traffic Director-managed VM service and container instances with the help of managed instance groups and network endpoint groups.

Supports request routing policies: This tool supports routing features like traffic splitting and enables use cases such as canarying, URL rewrites/redirects, fault injection, traffic mirroring, and advanced routing capabilities based on header values such as cookies.

To know more about this news, check out Google Cloud's official page.

Read next:
Google's Cloud Healthcare API is now available in beta
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google Cloud Next'19 day 1: open-source partnerships, hybrid-cloud platform, Cloud Run, and more

Google’s Cloud Healthcare API is now available in beta

Amrata Joshi
09 Apr 2019
3 min read
Last week, Google announced that its Cloud Healthcare API is now available in beta. The API acts as a bridge between on-site healthcare systems and applications hosted on Google Cloud. It is HIPAA compliant, ecosystem-ready, and developer-friendly. The aim of the team at Google is to give hospitals and other healthcare facilities more analytical power with the help of the Cloud Healthcare API.

The official post reads, "From the beginning, our primary goal with Cloud Healthcare API has been to advance data interoperability by breaking down the data silos that exist within care systems. The API enables healthcare organizations to ingest and manage key data and better understand that data through the application of analytics and machine learning in real time, at scale."

The API offers a managed solution for storing and accessing healthcare data in Google Cloud Platform (GCP). With its help, users can now explore new capabilities for data analysis, machine learning, and application development for healthcare solutions. The Cloud Healthcare API also simplifies app development and device integration to speed up the process, and it supports the standards-based data formats and protocols of existing healthcare tech. For instance, it will allow healthcare organizations to stream data processing with Cloud Dataflow, analyze data at scale with BigQuery, and tap into machine learning with the Cloud Machine Learning Engine.

Features of Cloud Healthcare API

Compliant and certified: The API is HIPAA compliant and HITRUST CSF certified. Google is also planning ISO 27001, ISO 27017, and ISO 27018 certifications for the Cloud Healthcare API.

Explore your data: The API allows users to explore their healthcare data by incorporating advanced analytics and machine learning solutions such as BigQuery, Cloud AutoML, and Cloud ML Engine.

Managed scalability: Google's Cloud Healthcare API provides web-native, serverless scaling, optimized by Google's infrastructure. Users can simply activate the API to send requests, as no initial capacity configuration is required.

Apigee integration: The API integrates with Apigee, which is recognized by Gartner as a leader in full lifecycle API management, for delivering app and service ecosystems around user data.

Developer-friendly: The API organizes users' healthcare information into datasets with one or more modality-specific stores per set, where each store exposes both a REST and an RPC interface.

Enhanced data liquidity: The API also supports bulk import and export of FHIR data and DICOM data, which accelerates delivery for applications with dependencies on existing datasets. It further provides a convenient API for moving data between projects.

The official post reads, "While our product and engineering teams are focused on building products to solve challenges across the healthcare and life sciences industries, our core mission embraces close collaboration with our partners and customers." Google will highlight what its partners, including the American Cancer Society, CareCloud, Kaiser Permanente, and iDigital, are doing with the API at the ongoing Google Cloud Next. To know more about this news, check out Google's official announcement.
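As a rough illustration of the REST interface that the modality-specific stores expose, the sketch below lists Patient resources in a FHIR store over plain HTTP. The project, location, dataset, and store names are hypothetical, the v1beta1 path reflects the beta at the time of writing, and a real call needs an OAuth 2.0 access token with the appropriate scope (for example, obtained via google-auth or gcloud).

```python
import requests

# Hypothetical resource names; replace with your own project/dataset/store.
BASE = ("https://healthcare.googleapis.com/v1beta1/projects/my-project"
        "/locations/us-central1/datasets/my-dataset/fhirStores/my-store/fhir")

resp = requests.get(
    f"{BASE}/Patient",
    headers={
        "Authorization": "Bearer ACCESS_TOKEN",  # placeholder token
        "Content-Type": "application/fhir+json;charset=utf-8",
    },
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()                      # a FHIR searchset Bundle
print(bundle.get("total"), "patients found")
```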
Read next:
Ian Goodfellow quits Google and joins Apple as a director of machine learning
Google dissolves its Advanced Technology External Advisory Council in a week after repeat criticism on selection of members
Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council


Zabbix 4.2 release packed with modern monitoring system for data collection, processing and visualization

Fatema Patrawala
03 Apr 2019
7 min read
The Zabbix team announced the release of Zabbix 4.2. The latest release of Zabbix is packed with modern monitoring capabilities: data collection and processing, distributed monitoring, real-time problem and anomaly detection, alerting and escalations, visualization, and more. Let us check out what Zabbix 4.2 has actually brought to the table. Here is a list of the most important functionality included in the new release.

Official support of new platforms

In addition to existing official packages and appliances, Zabbix 4.2 now caters to the following platforms:
Zabbix package for Raspberry Pi
Zabbix package for SUSE Linux Enterprise Server
Zabbix agent for Mac OS X
Zabbix agent MSI installer for Windows
Zabbix Docker images

Built-in support of Prometheus data collection

Zabbix is able to collect data in many different ways (push/pull) from various data sources, including JMX, SNMP, WMI, HTTP/HTTPS, REST API, XML SOAP, SSH, Telnet, agents, and scripts, with Prometheus being the latest addition to the bunch. The 4.2 release offers an integration with Prometheus exporters using native support of the PromQL language. Moreover, the use of dependent metrics gives Zabbix the ability to collect massive amounts of Prometheus metrics in a highly efficient way: it gets all the data using a single HTTP call and then reuses it for the corresponding dependent metrics. Zabbix can also transform Prometheus data into JSON format, which can be used directly for low-level discovery.

Efficient high-frequency monitoring

We all want to discover problems as fast as possible. With 4.2, Zabbix can collect data at high frequency and instantly discover problems without keeping an excessive amount of history data in the Zabbix database.

Validation of collected data and error handling

No one wants to collect incorrect data. Zabbix 4.2 addresses that via built-in preprocessing rules that validate data by matching (or not matching) a regular expression, or by using JSONPath or XMLPath. It is now also possible to extract error messages from collected data, which can be especially handy when an external API returns an error.

Preprocessing data with JavaScript

In Zabbix 4.2 you can fully harness the power of user-defined scripts written in JavaScript. Support of JavaScript gives absolute freedom of data preprocessing; in fact, you can now replace all external scripts with JavaScript. This enables all sorts of data transformation, aggregation, filtering, arithmetical and logical operations, and much more.

Test preprocessing rules from the UI

As preprocessing becomes much more powerful, it is important to have a tool to verify complex scenarios. Zabbix 4.2 allows testing preprocessing rules straight from the web UI.

Processing millions of metrics per second

Prior to 4.2, all preprocessing was handled solely by the Zabbix server. A combination of proxy-based preprocessing with throttling gives Zabbix the ability to perform high-frequency monitoring, collecting millions of values per second without overloading the Zabbix server. Proxies perform massive preprocessing of collected data while the server receives only a small fraction of it.

Easy low-level discovery

Low-level discovery (LLD) is a very effective tool for automatic discovery of all sorts of resources (filesystems, processes, applications, services, etc.) and automatic creation of the metrics, triggers, and graphs related to them. It tremendously helps to save time and effort, allowing a single template to be used for monitoring devices with different resources. Zabbix 4.2 supports processing based on arbitrary JSON input, which in turn allows it to communicate directly with external APIs and use the received data for automatic creation of hosts, metrics, and triggers. Combined with JavaScript preprocessing, this opens up fantastic opportunities for templates that work with various external data sources such as cloud APIs, application APIs, and data in XML, JSON or any other format.

Support of TimescaleDB

TimescaleDB promises better performance due to more efficient algorithms and performance-oriented data structures. Another significant advantage of TimescaleDB is automatic table partitioning, which improves performance and (combined with Zabbix) delivers fully automatic management of historical data. However, the Zabbix team hasn't performed any serious benchmarking yet, so it is hard to comment on real-life experience of running TimescaleDB in production. At this moment TimescaleDB is an actively developed and rather young project.

Simplified tag management

Prior to Zabbix 4.2, tags could only be set for individual triggers. Now tag management is much more efficient thanks to template and host tag support. All detected problems get tag information not only from the trigger, but also from the host and corresponding templates.

More flexible auto-registration

Zabbix 4.2 auto-registration options give the ability to filter host names based on a regular expression. This is really useful for creating different auto-registration scenarios for various sets of hosts, and matching by regular expression is especially beneficial for complex device naming conventions.

Control host names for auto-discovery

Another improvement is related to naming hosts during auto-discovery. Zabbix 4.2 allows assigning received metric data to a host name and visible name. This enables a great level of automation for network discovery, especially when using Zabbix or SNMP agents.

Test media type from the web UI

Zabbix 4.2 allows sending a test message or checking that a chosen alerting method works as expected straight from the Zabbix frontend. This is quite useful for checking the scripts used for integration with external alerting and helpdesk systems.

Remote monitoring of Zabbix components

Zabbix 4.2 introduces remote monitoring of internal performance and availability metrics of the Zabbix server and proxy. It also allows discovering Zabbix-related issues and sending alerts even if the components are overloaded or, for example, have a large amount of data stored in the local buffer (in the case of proxies).

Nicely formatted email messages

Zabbix 4.2 comes with support for HTML format in email messages. Messages are not limited to plain text anymore and can use the full power of HTML and CSS for much nicer and easier-to-read alerts.

Accessing remote services from network maps

A new set of macros is now supported in network maps for creating user-defined URLs pointing to external systems. This allows opening external tickets in helpdesk or configuration management systems, or performing any other actions, with just one or two mouse clicks.

LLD rule as a dependent metric

This functionality allows using received values of a master metric for data collection and LLD rules simultaneously. In the case of data collection from Prometheus exporters, Zabbix will execute the HTTP query only once, and the result of the query will be used immediately for all dependent metrics (LLD rules and metric values).

Animations for maps

Zabbix 4.2 comes with support for animated GIFs, making problems on maps more noticeable.

Extracting data from HTTP headers

Web monitoring gains the ability to extract data from HTTP headers. With this, multi-step scenarios can be created for web monitoring and for external APIs using an authentication token received in one of the steps.

Zabbix Sender pushes data to all IP addresses

Zabbix Sender will now send metric data to all IP addresses defined in the "ServerActive" parameter of the Zabbix agent configuration file.

Filter for configuration of triggers

The trigger configuration page gets an extended filter for quick and easy selection of triggers by specified criteria.

Showing exact time in graph tooltip

A minor yet very useful improvement: Zabbix now shows the timestamp in the graph tooltip.

Other improvements

Non-destructive resizing and reordering of dashboard widgets
Mass update for item prototypes
Support of IPv6 for DNS-related checks ("net.dns" and "net.dns.record")
"skip" parameter for the VMware event log check "vmware.eventlog"
Extended preprocessing error messages to include intermediate step results

Expanded information and the complete list of Zabbix 4.2 developments, improvements, and new functionality is available in the Zabbix Manual.

Read next:
Encrypting Zabbix Traffic
Deploying a Zabbix proxy
Zabbix and I – Almost Heroes
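To make the Prometheus integration concrete, here is a small sketch of the underlying idea: scrape an exporter's text exposition once, then pull several values out of that single response, which is roughly what Zabbix does with a master HTTP item and dependent items. The exporter URL and metric names assume a default node_exporter and are not taken from the Zabbix announcement.

```python
import requests

EXPORTER_URL = "http://localhost:9100/metrics"  # assumed node_exporter endpoint


def metric_value(name, exposition):
    """Return the first sample value for a metric name from Prometheus text format."""
    for line in exposition.splitlines():
        if line.startswith(name + " ") or line.startswith(name + "{"):
            return float(line.rsplit(" ", 1)[-1])
    return None


# One HTTP call, reused for several "dependent" values.
text = requests.get(EXPORTER_URL, timeout=5).text
print("load1:", metric_value("node_load1", text))
print("free memory bytes:", metric_value("node_memory_MemFree_bytes", text))
```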


You can now integrate chaos engineering into your CI and CD pipelines thanks to Gremlin and Spinnaker

Richard Gall
02 Apr 2019
3 min read
Chaos engineering is a trend that has been evolving quickly over the last 12 months. While for the last decade it has largely been the preserve of Silicon Valley's biggest companies, that has been changing thanks to platforms and tools like Gremlin and an increased focus on software resiliency. Today, however, marks a particularly important step for chaos engineering, as Gremlin has partnered with the Netflix-built continuous deployment platform Spinnaker to allow engineering teams to automate chaos engineering 'experiments' throughout their CI and CD pipelines.

Ultimately it means DevOps teams can think differently about chaos engineering. Gradually, this could help shift the way we think about the practice, as it moves from localized experiments that require an in-depth understanding of one's infrastructure to something that is built into the development and deployment process. More importantly, it makes it easier for engineering teams to take complete ownership of the reliability of their software. At a time when distributed systems bring more unpredictability into infrastructure, and when downtime has never been more costly (a Gartner report suggested downtime costs the average U.S. company $5,600 a minute all the way back in 2014), this is a step that could have a significant impact on how engineers work in the future.

Read next: How Gremlin is making chaos engineering accessible [Interview]

Spinnaker and chaos engineering

Spinnaker is an open source continuous delivery platform built by Netflix and supported by Google, Microsoft, and Oracle. It's a platform that has been specifically developed for highly distributed and hybrid systems. This makes it a great fit for Gremlin, and also highlights that the growth of chaos engineering is being driven by the move to cloud. Adam Jordens, a Core Contributor to Spinnaker and a member of the Spinnaker Technical Oversight Committee, said that "with the rise of microservices and distributed architectures, it's more important than ever to understand how your cloud infrastructure behaves under stress." Jordens continued, "By integrating with Gremlin, companies will be able to automate chaos engineering into their continuous delivery platform for the continual hardening and resilience of their internet systems."

Kolton Andrus, Gremlin CEO and Co-Founder, explained the importance of Spinnaker in relation to chaos engineering, saying that "by integrating with Gremlin, users can now automate chaos experiments across multiple cloud providers including AWS EC2, Kubernetes, Google Compute Engine, Google Kubernetes Engine, Google App Engine, Microsoft Azure, Openstack, and more, enabling enterprises to build more resilient software."

In recent months Gremlin has been working hard on products and features that make chaos engineering more accessible to companies and their engineering teams. In February, it released Gremlin Free, a free version of Gremlin designed to offer users a starting point for performing chaos experiments.
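For readers unfamiliar with how a Gremlin experiment is triggered programmatically (which is ultimately what a Spinnaker pipeline stage automates), the sketch below posts a short CPU attack through Gremlin's REST API. The endpoint path and payload schema here are assumptions based on Gremlin's public API documentation of the time and may not match current versions; the API key is read from an environment variable.

```python
import os
import requests

# Assumed endpoint and schema; check Gremlin's API docs before relying on this.
resp = requests.post(
    "https://api.gremlin.com/v1/attacks/new",
    headers={"Authorization": f"Key {os.environ['GREMLIN_API_KEY']}"},
    json={
        "command": {"type": "cpu", "args": ["-l", "60", "-c", "1"]},  # 60s on 1 core
        "target": {"type": "Random"},                                 # one random host
    },
    timeout=10,
)
print(resp.status_code, resp.text)
```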

Cloudflare adds Warp, a free VPN to 1.1.1.1 DNS app to improve internet performance and security

Natasha Mathur
02 Apr 2019
3 min read
Cloudflare announced yesterday that it is adding Warp, a free VPN, to the 1.1.1.1 DNS resolver app. The Cloudflare team states that it began planning to integrate the 1.1.1.1 app with Warp performance and security technology about two years ago. The 1.1.1.1 app was released in November last year for iOS and Android. The mobile app included features such as VPN support that helped move mobile traffic towards the 1.1.1.1 DNS servers, thereby helping improve speeds.

Now, with the Warp integration, the 1.1.1.1 app will speed up mobile data, using the Cloudflare network to resolve DNS queries at a faster pace. With Warp, all unencrypted connections are encrypted automatically by default. Warp also comes with end-to-end encryption and doesn't require users to install a root certificate to observe the encrypted Internet traffic. For cases when you browse the unencrypted Internet through Warp, Cloudflare's network can cache and compress content to improve performance and decrease your data usage and mobile carrier bill. "In the 1.1.1.1 App, if users decide to enable Warp, instead of just DNS queries being secured and optimized, all Internet traffic is secured and optimized. In other words, Warp is the VPN for people who don't know what V.P.N. stands for," states the Cloudflare team.

Apart from that, Warp also offers excellent performance and reliability. Warp is built around a UDP-based protocol that has been optimized for the mobile Internet. It also makes use of Cloudflare's massive global network, allowing Warp to connect to servers within milliseconds. Warp has been tested to show that it increases internet performance, and reliability has also significantly improved. Warp cannot eliminate mobile dead spots, but it is very efficient at recovering from loss. Warp doesn't increase your battery usage, as it is built around WireGuard, a new and efficient VPN protocol.

The basic version of Warp has been added to the 1.1.1.1 app as a free option. However, the Cloudflare team will be charging for Warp+, a premium version of Warp that will be even faster thanks to Argo technology. A low monthly fee will be charged for Warp+, varying by region. The 1.1.1.1 app with Warp will also have all the privacy protections launched formerly with the 1.1.1.1 app.

The Cloudflare team states that the 1.1.1.1 app with Warp is still in the works, and although sign-ups for Warp aren't open yet, Cloudflare has started a waiting list where you can "claim your place" by downloading the 1.1.1.1 app or by updating the existing app. Once the service is available, you'll be notified. "Our whole team is proud that today, for the first time, we've extended the scope of that mission meaningfully to the billions of other people who use the Internet every day," states the Cloudflare team.

For more information, check out the official Warp blog post.

Read next:
Cloudflare takes a step towards transparency by expanding its government warrant canaries
Cloudflare raises $150M with Franklin Templeton leading the latest round of funding
workers.dev will soon allow users to deploy their Cloudflare Workers to a subdomain of their choice
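As an aside, the DNS side of the 1.1.1.1 service that the app is built around can already be exercised directly via Cloudflare's DNS-over-HTTPS JSON endpoint. A minimal sketch follows; the queried domain is a placeholder.

```python
import requests

# Query Cloudflare's 1.1.1.1 resolver over DNS-over-HTTPS (JSON API).
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```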


Ubuntu 19.04 Disco Dingo Beta releases with support for Linux 5.0 and GNOME 3.32

Bhagyashree R
01 Apr 2019
2 min read
Last week, the team behind Ubuntu announced the release of the Ubuntu 19.04 Disco Dingo beta, which comes with Linux 5.0 support, GNOME 3.32, and more. The stable version is expected to release on April 18th, 2019. Following are some of the updates in Ubuntu 19.04 Disco Dingo:

Updates in the Linux kernel

Ubuntu 19.04 is based on Linux 5.0, which was released last month. It comes with support for the AMD Radeon RX Vega M graphics processor, complete support for the Raspberry Pi 3B and 3B+, Qualcomm Snapdragon 845, and much more.

Toolchain upgrades

The tools are upgraded to their latest releases. The upgraded toolchain includes glibc 2.29, OpenJDK 11, Boost 1.67, Rustc 1.31, and updated GCC 8.3, Python 3.7.2 as default, Ruby 2.5.3, PHP 7.2.15, and more.

Updates in Ubuntu Desktop

This release ships with the latest GNOME 3.32, giving it a refreshed visual design. It also brings a few performance improvements and new features:
GNOME Disks now supports VeraCrypt, a utility used for on-the-fly encryption.
A panel has been added to the Settings menu to help users manage Thunderbolt devices.
With this release, more shell components are cached in GPU RAM, which reduces load and increases the FPS count.
Desktop zoom works much more smoothly.
An option has been added to the error reporting dialog window to automatically submit error reports.

Other updates include new Yaru icon sets, Mesa 19.0, QEMU 13.1, and libvirt 14.0. This release will be supported for 9 months, until January 2020. Users who require long-term support are recommended to use Ubuntu 18.04 LTS instead. To read the full list of updates, visit Ubuntu's official website.

Read next:
Chromium blacklists nouveau graphics device driver for Linux and Ubuntu users
Ubuntu releases Mir 1.0.0
Ubuntu free Linux Mint Project, LMDE 3 'Cindy' Cinnamon, released