
Tech News - Cloud Computing

Kata Containers 1.5 released with Firecracker support, integration improvements and IBM Z series support

Melisha Dsouza
24 Jan 2019
3 min read
Yesterday, Kata Containers 1.5 was released with a host of updates, including preliminary support for the Firecracker hypervisor, s390x architecture support, and significant integration improvements.

Kata Containers is an open source project and community building a standard implementation of lightweight virtual machines (VMs) that perform like containers while providing the workload isolation and security advantages of VMs. The project is managed by the OpenStack Foundation and combines technology from Intel Clear Containers and Hyper runV.

Features of Kata Containers 1.5

#1 Firecracker support
Eric Ernst, an architecture committee member for the Kata Containers project, states that Kata Containers was designed "to support multiple hypervisor solutions," and the new Firecracker support aims to do just that. At the AWS re:Invent conference in 2018, the AWS team released Firecracker, describing it as a new virtualization technology and open source project for running multi-tenant container workloads. Firecracker enables service owners to operate secure multi-tenant container-based services, combining the speed, resource efficiency, and performance of containers with the security and isolation of traditional VMs.

In Kata Containers 1.5, Firecracker can be used for feature-constrained workloads, while QEMU remains available for more advanced workloads. The blog also mentions a small limitation of the Kubernetes functionality when using Kata with Firecracker: memory and CPU definitions for a pod cannot be adjusted dynamically, and because Firecracker supports only block-based storage drivers and volumes, the devicemapper storage driver is required. This is available in Kubernetes with CRI-O and in Docker 18.06; more storage driver options are expected soon. Check out the project's screencast for an example of Kata configured with CRI-O and Kubernetes, utilizing both QEMU and Firecracker, and head over to GitHub to see how to get started quickly with Kata and runtimeClass in Kubernetes.

#2 s390x architecture support
Kata Containers 1.5 adds IBM Z series support. According to CIO, the IBM Z platform includes notable security features: a proprietary on-chip ASIC dedicated to cryptographic processing enables all-encompassing encryption, keeping data encrypted at all times except while it is being processed. Data is decrypted only during computation and is encrypted again immediately afterward.

#3 containerd integration
The 1.5 release simplifies how Kata Containers integrates with containerd. Following last year's discussion about adding a shim API to containerd, the 1.5 release includes an initial implementation of that API. Eric Ernst says the API will provide a better interface to Kata Containers and the ability to read container-level statistics directly from the Kata runtime. The Kata team plans several presentations on this topic at the Open Infrastructure Summit in Denver, April 29 to May 1, 2019.

You can head over to Eric's blog for more insights on this announcement, or to the AWS blog to learn more about Firecracker support in Kata 1.5.

CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure
Tumblr open sources its Kubernetes tools for better workflow integration
Implementing Azure-Managed Kubernetes and Azure Container Service [Tutorial]
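The runtimeClass integration mentioned above is worth a closer look. As a rough sketch of the idea, not an official Kata example, the following Go program uses a recent client-go release to launch a pod whose runtimeClassName routes it to a Kata runtime; the class name kata-fc is hypothetical and depends on how the cluster operator registered the runtime.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	runtimeClass := "kata-fc" // hypothetical RuntimeClass backed by Kata+Firecracker

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "kata-demo"},
		Spec: corev1.PodSpec{
			RuntimeClassName: &runtimeClass, // route this pod to the Kata runtime
			Containers: []corev1.Container{
				{Name: "nginx", Image: "nginx:1.15"},
			},
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(
		context.Background(), pod, metav1.CreateOptions{},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("created pod", created.Name)
}
```

The appeal of the approach is visible in the single RuntimeClassName field: the rest of the pod spec is untouched, so the same workload definition can run under runc, QEMU-backed Kata, or Firecracker-backed Kata depending on one setting.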

How Dropbox uses automated data center operations to reduce server outage and downtime

Melisha Dsouza
17 Jan 2019
3 min read
Today, in a blog post, Dropbox explained how its Pirlo system has automated processes that were previously handled manually by Dropbox personnel. Pirlo is used in two main areas: validating and configuring network switches, and ensuring the reliability of servers before they enter production. This has, in turn, helped Dropbox safely manage its physical infrastructure operations with ease.

Pirlo consists of a distributed MySQL-backed job queue built by Dropbox itself, using primitives like gRPC, service discovery, and its managed MySQL clusters.

Switch provisioning at Dropbox is handled by the TOR Starter, a Pirlo component. The TOR Starter validates and configures switches in Dropbox data center server racks, PoP server racks, and at the different layers of the data center fabric, which is responsible for connecting racks in the same facility together. Server provisioning and repair validation is handled by Pirlo Server Validation: all new servers arriving at the company are validated with this component, and repaired servers are validated again before they are transitioned back into production.

Pirlo has automated these manual processes at Dropbox, reducing the downtime, outages, and inefficiencies associated with incomplete or erroneous fixes. By reducing manual work, employees can now focus their attention on more valuable jobs.

Before Pirlo, these tasks were performed by operations engineers and subject matter experts, who used various server error logs to take appropriate actions to fix failed hardware. After applying the remediation actions, the engineer would send the machine back into production via Dropbox's re-imaging system. If the remediation actions didn't fix the system or properly prepare it for re-imaging, the server would be sent back to the operations engineer for additional fixing. This consumed a lot of the engineers' time as well as company resources. Operations engineers who used the Pirlo system steadily increased their output by more than 40%, as the automation of manual tasks allowed them to address more issues in the same amount of time.

You can head over to Dropbox's official blog to explore the workings of Pirlo and how it benefited the organization.

How to navigate files in a Vue app using the Dropbox API
Tech jobs dominate LinkedIn's most promising jobs in 2019
NGINX Hybrid Application Delivery Controller Platform improves API management, manages microservices and much more!
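Dropbox's post does not include Pirlo's source, but the core primitive it describes, a MySQL-backed job queue, is easy to sketch. The Go program below is purely illustrative (the table schema, column names, and connection string are all invented, not Dropbox's): a worker claims one pending validation job inside a transaction using SELECT ... FOR UPDATE, so two concurrent workers can never claim the same switch or server task.

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

// claimJob atomically claims one pending job so that concurrent workers
// validating servers or switches never pick up the same task twice.
func claimJob(db *sql.DB, workerID string) (int64, error) {
	tx, err := db.Begin()
	if err != nil {
		return 0, err
	}
	defer tx.Rollback() // no-op if the transaction commits

	var id int64
	// Lock one pending row; other workers block on or skip it.
	err = tx.QueryRow(
		`SELECT id FROM jobs WHERE state = 'pending' ORDER BY id LIMIT 1 FOR UPDATE`,
	).Scan(&id)
	if err != nil {
		return 0, err // sql.ErrNoRows means the queue is empty
	}

	if _, err := tx.Exec(
		`UPDATE jobs SET state = 'running', worker = ? WHERE id = ?`, workerID, id,
	); err != nil {
		return 0, err
	}
	return id, tx.Commit()
}

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(127.0.0.1:3306)/pirlo")
	if err != nil {
		log.Fatal(err)
	}
	id, err := claimJob(db, "worker-1")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("claimed job %d", id)
}
```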

Go 1.11 support announced for Google Cloud Functions!

Melisha Dsouza
17 Jan 2019
2 min read
Yesterday, Google Cloud announced beta support for Go 1.11 on Cloud Functions. Developers can now write Go functions that scale dynamically and integrate seamlessly with Google Cloud events. Go follows suit after Node.js and Python as a supported language for Google Cloud Functions.

Google Cloud Functions frees developers from server management and scaling: functions scale automatically, and developers pay only for the time a function runs. Using familiar Go building blocks, developers can create a variety of applications, such as:

Serverless application backends
Real-time data processing pipelines
Chatbots
Video or image analysis tools
And much more!

The two types of Go functions that developers can use with Cloud Functions are HTTP functions and background functions. HTTP functions are invoked by HTTP requests, while background functions are triggered by events. The Google Cloud runtime provides support for multiple Go packages via Go modules; Go 1.11 modules allow the integration of third-party dependencies into an application's code.

Go developers and Google Cloud users have taken the news well. Reddit and YouTube saw a host of positive comments from users, who noted that Go is a good fit for cloud functions and that the announcement makes adopting Cloud Functions much easier.
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee7sd35
https://www.reddit.com/r/golang/comments/agne4o/get_going_with_cloud_functions_go_111_is_now_a/ee84cej

It is easy and efficient to deploy a Go function on Google Cloud. Check out the examples on Google Cloud's official blog, or watch the accompanying video to learn more about this announcement.

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
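For a taste of the programming model, an HTTP function in Go is just a standard net/http handler exported from a non-main package, as in this minimal example (the package and function names are our own):

```go
// Package hello contains an HTTP Cloud Function.
package hello

import (
	"fmt"
	"net/http"
)

// HelloWorld is an HTTP-triggered function: Cloud Functions routes each
// incoming request to this handler, scales instances automatically, and
// bills only for execution time.
func HelloWorld(w http.ResponseWriter, r *http.Request) {
	name := r.URL.Query().Get("name")
	if name == "" {
		name = "World"
	}
	fmt.Fprintf(w, "Hello, %s!\n", name)
}
```

At the time of the announcement, a function like this could be deployed with gcloud functions deploy HelloWorld --runtime go111 --trigger-http. Background functions differ only in shape: they take a context.Context plus a decoded event payload instead of the writer/request pair.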

Baidu open sources ‘OpenEdge’ to create a ‘lightweight, secure, reliable and scalable edge computing community’

Melisha Dsouza
16 Jan 2019
2 min read
On 9 January, at CES 2019, Chinese technology giant Baidu Inc. announced the open sourcing of its edge computing platform, 'OpenEdge', which developers can use to extend cloud computing to their edge devices.

"Edge computing is a critical component of Baidu's ABC (AI, Big Data and Cloud Computing) strategy. By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications," said Baidu VP and GM of Baidu Cloud Watson Yin.

Baidu said that systems built using OpenEdge will automatically be enabled with features like artificial intelligence, cloud synchronization, data collection, function compute, and message distribution. OpenEdge is a component of the Baidu Intelligent Edge (BIE) platform. BIE offers tools to manage edge nodes and resources such as certificates, passwords, and program code, among other functions. BIE is designed to run on the Baidu cloud and supports common AI frameworks such as the Baidu-developed PaddlePaddle and TensorFlow. Developers can therefore use Baidu's cloud to train AI models and then deploy them to systems built with OpenEdge.

According to TechRepublic, OpenEdge also gives developers the ability to exchange data with the Baidu ABC Intelligent Cloud, perform filtering calculations on sensitive data, and provide real-time feedback control when a network connection is unstable. A company spokesperson told TechCrunch that the open source platform will include features like data collection, message distribution, and AI inference, as well as tools for syncing with the cloud.

You can head over to GitHub to know more about this release.

Unity and Baidu collaborate for simulating the development of autonomous vehicles
Baidu releases EZDL – a platform that lets you build AI and machine learning models without any coding knowledge
Baidu Apollo autonomous driving vehicles gets machine learning based auto-calibration system
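OpenEdge's message distribution is built around a local hub that speaks MQTT, the usual protocol for edge messaging. As an illustrative sketch only, with the broker address, client ID, and topic all invented rather than taken from Baidu's documentation, a device-side publisher in Go using the Eclipse Paho client might look like this:

```go
package main

import (
	"fmt"

	mqtt "github.com/eclipse/paho.mqtt.golang"
)

func main() {
	// Connect to a local edge hub (address and client ID are illustrative).
	opts := mqtt.NewClientOptions().
		AddBroker("tcp://127.0.0.1:1883").
		SetClientID("sensor-1")
	client := mqtt.NewClient(opts)
	if tok := client.Connect(); tok.Wait() && tok.Error() != nil {
		panic(tok.Error())
	}

	// Publish a reading; the hub can filter it locally or sync it to the cloud.
	tok := client.Publish("sensors/temperature", 1, false, `{"celsius": 21.5}`)
	tok.Wait()
	if tok.Error() != nil {
		panic(tok.Error())
	}
	fmt.Println("published")
	client.Disconnect(250)
}
```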

CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure

Melisha Dsouza
15 Jan 2019
3 min read
According to CNCF's bi-annual survey conducted in August 2018, 83% of respondents prefer Kubernetes for its container management tools, 58% use Kubernetes in production, 42% are evaluating it for future use, and 40% of enterprise companies (5000+ employees) run Kubernetes in production. These statistics give a clear picture of Kubernetes' popularity among developers as a container orchestrator. However, the recent security flaw discovered in Kubernetes (now patched), which enabled attackers to compromise clusters and perform illicit activities, did raise concerns among developers. A container environment like Kubernetes, consisting of multiple layers, needs to be secured on all fronts. With this in mind, the CNCF has released '9 Kubernetes Security Best Practices Everyone Must Follow'.

#1 Upgrade to the latest version
Kubernetes has quarterly updates that include bug and security fixes. Customers are advised to always upgrade to the latest release with updated security patches.

#2 Role-Based Access Control (RBAC)
By enabling RBAC, users can control who can access the Kubernetes API and what permissions they have. The blog advises against giving anyone cluster-admin privileges and recommends granting access only as needed, on a case-by-case basis (see the sketch after this list).

#3 Namespaces for security boundaries
Namespaces create an important level of isolation between components. The CNCF also states that it is easier to apply security controls and policies when workloads are deployed in separate namespaces.

#4 Keeping sensitive workloads separate
Sensitive workloads should run on a dedicated set of machines, so that if a less secure application connected to a sensitive workload is compromised, the sensitive workload remains unaffected.

#5 Securing cloud metadata access
Sensitive metadata storing confidential information, such as credentials, can be stolen and misused. The blog advises users of Google Kubernetes Engine to enable its metadata concealment feature to avoid this.

#6 Cluster network policies
Network policies let developers control network access to and from their containerized applications.

#7 Implementing a cluster-wide Pod Security Policy
A pod security policy defines how workloads are allowed to run in a cluster.

#8 Improve node security
Users should ensure that hosts are configured correctly and securely by checking each node's configuration against the CIS benchmarks, block network access to ports that can be exploited by malicious actors, and minimize administrative access to Kubernetes nodes.

#9 Audit logging
Audit logs should be enabled and monitored for anomalous API calls and authorization failures, which can indicate that a malicious actor is trying to get into the system.

The blog advises users to further look for tools to assist in the continuous monitoring and protection of their containers. You can head over to the Cloud Native Computing Foundation's official blog to read more about these best practices.

CNCF accepts Cloud Native Buildpacks to the Cloud Native Sandbox
Google Cloud hands over Kubernetes project operations to CNCF, grants $9M in GCP credits
Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
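Practice #2 translates most directly into code. Here is a minimal, hypothetical sketch using client-go (role name, namespace, and permissions are made up for illustration) that creates a Role granting only read access to pods in a single namespace, rather than handing out cluster-admin:

```go
package main

import (
	"context"
	"log"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	// Grant only read access to pods in one namespace -- no cluster-admin.
	role := &rbacv1.Role{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-reader", Namespace: "team-a"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups: []string{""}, // "" selects the core API group
			Resources: []string{"pods"},
			Verbs:     []string{"get", "list", "watch"},
		}},
	}

	if _, err := clientset.RbacV1().Roles("team-a").Create(
		context.Background(), role, metav1.CreateOptions{},
	); err != nil {
		log.Fatal(err)
	}
	log.Println("role created")
}
```

A RoleBinding (not shown) would then tie this Role to a specific user or service account, which is the case-by-case grant the CNCF recommends.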

Tumblr open sources its Kubernetes tools for better workflow integration

Melisha Dsouza
15 Jan 2019
3 min read
Yesterday, Tumblr announced the open sourcing of three tools, developed at Tumblr itself, that help developers integrate Kubernetes into their workflows. The tools grew out of Tumblr's own effort to migrate its workflow to Kubernetes. These are the three tools and their features, as listed on the Tumblr blog:

#1 k8s-sidecar-injector
Containerizing complex applications can be time-consuming. Sidecars offer a way out, allowing developers to emulate older deployments with co-located services on virtual machines or physical hosts. The k8s-sidecar-injector dynamically injects sidecars, volumes, and environment data into pods as they are launched, which cuts the overhead and copy-pasting involved in adding sidecars to deployments and cronjobs. The tool listens to the Kubernetes API for pod launches and determines which sidecar to inject from the pod's configuration. It is particularly useful when containerizing legacy applications that require a complex sidecar configuration.

#2 k8s-config-projector
The k8s-config-projector is a command line tool born out of the need to access subsets of configuration data (feature flags, lists of hosts/IPs+ports, and application settings) and to be informed as soon as that data changes. Config data defines how deployed services operate at Tumblr. The Kubernetes ConfigMap resource lets users provide services with configuration data and update that data in running pods without redeploying the application. To configure Tumblr's services and jobs in a Kubernetes-native manner, the team had to bridge the gap between their canonical configuration store (a git repo of config files) and ConfigMaps. k8s-config-projector combines the git repo hosting configuration data with "projection manifest" files that describe how to group and extract settings from the config repo and transmute them into ConfigMaps (a simplified sketch of this projection idea follows after this article). Developers encode the set of configuration data an application needs to run into a projection manifest. The blog states that "as the configuration data changes in the git repository, CI will run the projector, projecting and deploying new ConfigMaps containing this updated data, without needing the application to be redeployed".

#3 k8s-secret-projector
Tumblr stores secure credentials (passwords, certificates, etc.) in access-controlled vaults. With the k8s-secret-projector, developers can request access to subsets of credentials for a given application without being granted access to the secrets as a whole. The tool ensures applications always have the appropriate secrets at runtime, while enabling automated systems (certificate refreshers, DB password rotations, and so on) to manage and update these credentials without redeploying or restarting the application. It works by combining two repositories: projection manifests and credential repositories. A continuous integration tool like Jenkins runs the tool against any change in the projection manifests repository, generating new Kubernetes Secret YAML files; continuous deployment then ships the generated and validated Secret files to any number of Kubernetes clusters. Generated Secrets can be encrypted before they touch disk, allowing them to be deployed safely in Kubernetes environments.

You can head over to Tumblr's official blog for examples of each tool.

Introducing Grafana's 'Loki' (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps
Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes
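Tumblr's tools themselves are on GitHub, but the projection idea behind k8s-config-projector can be sketched independently. The following Go program is not Tumblr's implementation, just an illustration with invented paths and names: it reads every file in a config-repo directory and publishes the result as a ConfigMap via client-go.

```go
package main

import (
	"context"
	"log"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// projectDir reads every file in a config-repo directory into the Data
// map of a ConfigMap -- the essence of what a "projection" does.
func projectDir(dir string) (map[string]string, error) {
	data := map[string]string{}
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		b, err := os.ReadFile(filepath.Join(dir, e.Name()))
		if err != nil {
			return nil, err
		}
		data[e.Name()] = string(b)
	}
	return data, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset := kubernetes.NewForConfigOrDie(config)

	data, err := projectDir("./config-repo/feature-flags")
	if err != nil {
		log.Fatal(err)
	}
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "feature-flags", Namespace: "default"},
		Data:       data,
	}
	// In CI this would run on every config-repo change, refreshing the
	// ConfigMap without redeploying the application.
	if _, err := clientset.CoreV1().ConfigMaps("default").Create(
		context.Background(), cm, metav1.CreateOptions{},
	); err != nil {
		log.Fatal(err)
	}
	log.Println("configmap projected")
}
```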

Amazon is reportedly building a video game streaming service, says Information

Sugandha Lahoti
14 Jan 2019
2 min read
According to a report by The Information, Amazon is developing a video game streaming service. Microsoft and Google have previously announced similar game streaming offerings: in October, Google announced an experimental game streaming service called Project Stream, and in the same month Microsoft's gaming chief Phil Spencer confirmed Project xCloud, a game streaming service for any device that he had hinted at during the E3 conference.

Amazon's idea is to bring top gaming titles to virtually anyone with a smartphone or streaming device. The service would handle all the compute-intensive calculations needed to run graphics-intensive games in the cloud, then stream them directly to a smart device, so that gamers get the same experience as running the titles natively on a high-end gaming system.

The Information says that although the Amazon gaming service isn't likely to launch until next year, Amazon has already begun talking to games publishers about distributing their titles through the service. The initiative has a good chance of success given that Amazon is the biggest player in the cloud market: it currently holds 32 percent of the market, compared with Microsoft Azure's 17 percent and Google Cloud's 8 percent. That scale would make it easier for gamers to take advantage of Amazon's vast cloud offerings and play elaborate, robust games even on their mobile devices.

As The Information notes, a successful streaming platform could upend the long-standing business model of the gaming world, in which customers pay $50 to $60 for a triple-A title. Amazon has yet to share details of such a video gaming service officially. Check out the full report on The Information.

Microsoft announces Project xCloud, a new Xbox game streaming service
Now you can play Assassin's Creed in Chrome thanks to Google's new game streaming service.
Corona Labs open sources Corona, its free and cross-platform 2D game engine

TriggerMesh announces open source ‘Knative Lambda Runtime’; AWS Lambda functions can now be deployed on Knative!

Melisha Dsouza
10 Jan 2019
2 min read
"We believe that the key to enabling cloud native applications, is to provide true portability and communication across disparate cloud infrastructure." Mark Hinkle, co-founder of TriggerMesh Yesterday, TriggerMesh- the open source multi-cloud service management platform- announced their open source project ‘Knative Lambda Runtime’ (TriggerMesh KLR). KLR will bring AWS Lambda serverless computing to Kubernetes which will enable users to run Lambda functions on Knative-enabled clusters and serverless clouds. Amazon Web Services' (AWS) Lambda for serverless computing can only be used on AWS and not on another cloud platform. TriggerMesh KLR changes the game completely as now, users can avail complete portability of Amazon Lambda functions to Knative native enabled clusters, and Knative enabled serverless cloud infrastructure “without the need to rewrite these serverless functions”. [box type="shadow" align="" class="" width=""]Fun fact: KLR is pronounced as ‘clear’[/box] Features of TriggerMesh Knative Lambda Runtime Knative is a  Google Cloud-led Kubernetes-based platform which can be used to build, deploy, and manage modern serverless workloads. KLR are Knative build templates that can be used to runan AWS Lambda function in a Kubernetes cluster as is in a Knative powered Kubernetes cluster (installed with Knative). KLR enables serverless users to move functions back and forth between their Knative and AWS Lambda. AWS  Lambda Custom Runtime API in combination with the Knative Build system makes deploying KLR possible. Serverless users have shown a positive response to this announcement, with most of them excited for this news. Kelsey Hightower, developer advocate, Google Cloud Platform, calls this news ‘dope’ and we can understand why! His talk at KubeCon+CloudNativeCon 2018 had focussed on serveless and its security aspects. Now that AWS Lambda functions can be run on Google’s Knative, this marks a new milestone for TriggerMesh. https://twitter.com/kelseyhightower/status/1083079344937824256 https://twitter.com/sebgoa/status/1083014086609301504 It would be interesting to see how this moulds the path to a Kubernetes hybrid-cloud model. Head over to TriggerMesh’s official blog for more insights to this news. Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes  

Google Cloud releases a beta version of SparkR job types in Cloud Dataproc

Natasha Mathur
21 Dec 2018
2 min read
Earlier this week, Google released a beta version of SparkR jobs on Cloud Dataproc, a cloud service that lets you run Apache Spark and Apache Hadoop in a cost-effective manner. SparkR job types build out R support on GCP: SparkR is a package that provides a lightweight front-end for using Apache Spark from R, with support for distributed machine learning via MLlib. It can be used to process large Cloud Storage datasets and to perform computationally intensive work. The package also lets developers use dplyr-like operations (dplyr being a popular R package for transforming and summarizing tabular data with rows and columns) on datasets stored in Cloud Storage.

The R programming language is very efficient for building data analysis tools and statistical apps, and cloud computing has opened even newer opportunities for developers working with R. Using GCP's Cloud Dataproc Jobs API, it is easy to submit SparkR jobs to a cluster without opening firewalls to access web-based IDEs or SSH-ing onto the master node, and easy to automate the repeatable R statistics users want to run on their datasets.

Additionally, GCP for R helps avoid the infrastructure barriers that limit understanding of data, such as having to sample datasets due to compute or data size limits. GCP makes it possible to build large-scale models that analyze datasets of sizes that would previously have required big investments in high-performance computing infrastructure.

For more information, check out the official Google Cloud blog post.

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Google Cloud's Titan and Android Pie come together to secure users' data on mobile devices
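Submitting a SparkR job through the Jobs API is a single authenticated HTTP call. The Go sketch below is a hedged illustration: the project, region, cluster, and script path are invented, and the JSON body follows the v1beta2 jobs.submit request shape as we understand it, so check the Dataproc API reference before relying on it.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Illustrative values; replace with a real project, region, and cluster.
	url := "https://dataproc.googleapis.com/v1beta2/projects/my-project/regions/us-central1/jobs:submit"

	// A SparkR job points at a main R file stored in Cloud Storage.
	body := []byte(`{
	  "job": {
	    "placement": {"clusterName": "my-cluster"},
	    "sparkRJob": {"mainRFileUri": "gs://my-bucket/analysis.R"}
	  }
	}`)

	req, err := http.NewRequest("POST", url, bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")
	// Token obtainable with, e.g., `gcloud auth print-access-token`.
	req.Header.Set("Authorization", "Bearer "+"YOUR_OAUTH_TOKEN")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```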

Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Prasad Ramesh
18 Dec 2018
3 min read
Yesterday, Microsoft open sourced Trill, previously an internal project used for processing "a trillion events per day". It was the first streaming engine to incorporate algorithms that process events in small batches of data based on the latency tolerated on the user side, and it powers services like Financial Fabric, Bing Ads, Azure Stream Analytics, and Halo. With the ever-increasing flow of data, the ability to process huge amounts of data each millisecond is a necessity, and Microsoft has open sourced Trill to address this growing trend.

Microsoft Trill features
Trill is a single-node engine library, and any .NET application, service, or platform can readily use it to start processing queries. It has a temporal query language that lets users express complex queries over both real-time and offline data sets, and its high performance delivers results with great speed and low latency.

How did Trill start?
Trill began as a research project at Microsoft Research in 2012 and has been described in research venues such as VLDB and the IEEE Data Engineering Bulletin. Trill builds on a former Microsoft service called StreamInsight, a platform that allowed developers to develop and deploy event processing applications. Both systems are based on an extended query and data model that augments the relational model with a component for time. Earlier systems could achieve only some of these benefits; with Trill, the advantages come in one package. Trill was the first streaming engine to incorporate algorithms that process events in data batches based on the latency tolerated by users, and the first to organize data batches in a columnar format, enabling queries to execute with much higher efficiency. Using Trill is similar to working with any .NET library, and it delivers the same performance for real-time and offline datasets. Trill lets users perform advanced time-oriented analytics and look for complex patterns over streaming datasets.

Open-sourcing Trill
Microsoft believes Trill is the best available tool in this domain in the developer community, and by open sourcing it, the company wants to offer its IStreamable abstraction to all customers. There are opportunities for community involvement in Trill's future development; for example, users can write custom aggregates. There are also research projects built on Trill where the code is available but not yet ready for use.

For more details on Trill, visit the Microsoft website.

Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft Connect(); 2018: .NET foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows forms open sourced

Docker Store and Docker Cloud are now part of Docker Hub

Amrata Joshi
14 Dec 2018
3 min read
Yesterday, the team at Docker announced that Docker Store and Docker Cloud are now part of Docker Hub, making the process of finding, storing, and sharing container images easier. The new Docker Hub has an updated user experience where Docker Certified and Verified Publisher images are available for discovery and download. Docker Cloud, a service provided by Docker, helps users connect Docker Cloud to their existing cloud providers like Azure or AWS; Docker Store served as a self-service portal for Docker's ecosystem partners to publish and distribute their software as Docker images.
https://twitter.com/Docker/status/1073369942660067328

What's new in this Docker Hub update?

Repositories
Users can now view recently pushed tags and automated builds on their repository page. Pagination has been added to repository tags, and repository filtering on the Docker Hub homepage has been improved.

Organizations and Teams
Organization owners can now view team permissions across all of their repositories at a glance, and existing Docker Hub users can be added to a team via their email IDs instead of their Docker IDs.

Automated Builds
Build caching is now used to speed up builds. It is now possible to add environment variables and run tests in builds, and automated builds can be added to existing repositories. Account credentials for services like GitHub and Bitbucket need to be re-linked to the organization to take advantage of the new automated builds.

Improved container image search
Filtering by Official, Verified Publisher, and Certified images guarantees a level of quality in the Docker images, and Docker Hub provides filtering by category for quick image search. There is no need to update any bookmarks on Docker Hub.

Verified Publisher and Certified images
Docker Certified and Verified Publisher images are now available for discovery and download on Docker Hub. Just like Official Images, publisher images have been vetted by Docker. Certified and Verified Publisher images are provided by third-party software vendors; Certified images are tested and supported on the Docker Enterprise platform by verified publishers, adhere to Docker's container best practices, pass a functional API test suite, and display a unique quality mark, "Docker Certified".

Read more about this release in Docker's blog post.

Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
Docker announces Docker Desktop Enterprise
Creating a Continuous Integration commit pipeline using Docker [Tutorial]

Cockroach Labs 2018 Cloud Report: AWS outperforms GCP hands down

Melisha Dsouza
14 Dec 2018
5 min read
While testing features for CockroachDB 2.1, the team discovered that AWS offered 40% greater throughput than GCP. To understand this result, the team compared GCP and AWS on TPC-C performance (throughput and latency), CPU, network, I/O, and cost. The outcome is Cockroach Labs' 2018 Cloud Report, meant to help customers answer the most commonly faced questions: should they use Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure? How should they tune their workload for different offerings? Which of the platforms is more reliable?

Note: the team did not test Microsoft Azure due to bandwidth constraints, but plans to do so in the near future.

The tests conducted
For GCP, the team chose the n1-standard-16 machine with the Intel Xeon Scalable Processor (Skylake) in the us-east region; for AWS, they chose the latest compute-optimized instance type, c5d.4xlarge, to match n1-standard-16, since both have 16 CPUs and SSDs.

#1 TPC-C benchmarking test
The team tested workload performance using TPC-C. The results were surprising: CockroachDB 2.1 achieves 40% more throughput (tpmC) on TPC-C when tested on AWS c5d.4xlarge than on GCP n1-standard-16. They then ran TPC-C against some of the most popular AWS instance types, focusing on the higher-performing c5 series with SSDs, EBS-gp2, and EBS-io1 volume types. The AWS Nitro System present in the c5 and m5 series offers approximately similar or superior performance compared to a similar GCP instance. The results were clear: AWS wins on the TPC-C benchmark.

#2 CPU experiment
The team chose stress-ng because, in their view, it offered more benchmarks and more flexible configurations than the sysbench benchmarking test. Running the command stress-ng --metrics-brief --cpu 16 -t 1m five times on both AWS and GCP, they found that AWS offered 28% more throughput (~2,900) on stress-ng than GCP.

#3 Network throughput and latency test
The team measured network throughput using iPerf and latency using PING; the detailed iPerf setup is described in a blog post. Each test was run four times on both AWS and GCP, and once again AWS came out ahead. GCP showed a fairly normal distribution of network throughput centered at ~5.6 GB/sec, ranging from 4.01 GB/sec to 6.67 GB/sec, which the team calls "a somewhat unpredictable spread of network performance", reinforced by GCP's observed average variance of 0.487 GB/sec. AWS offers significantly higher throughput, centered on 9.6 GB/sec, with a much tighter spread between 9.60 GB/sec and 9.63 GB/sec. AWS's throughput variance is only 0.006 GB/sec, meaning GCP's network throughput is 81x more variable than AWS's (0.487 / 0.006 ≈ 81). The latency test showed that AWS also has tighter network latency, with values centered on an average of 0.057 ms. In short, AWS offers significantly better network throughput and latency with none of the variability present in GCP.

#4 I/O experiment
The team tested I/O using a Sysbench configuration that simulates small writes with frequent syncs, for both write and read performance. The test measures throughput for a fixed set of threads, i.e. the number of items concurrently writing to disk. For writes, AWS consistently offers more throughput across all thread counts from 1 up to 64, with differences as high as 67x, along with better average and 95th-percentile write latency across all thread tests. For reads, AWS tops the charts up to 32 threads; at 32 and 64 threads GCP and AWS split the results, with GCP offering marginally better performance at similar latency. The team also tested the "no barrier" method of writing directly to disk without waiting for the write cache to be flushed, and here the results reversed: on GCP, no barrier speeds things up by 6x, while on AWS it is only a 25% speed-up.

#5 Cost
Given that AWS outperformed GCP on the TPC-C benchmarks, the team also compared the cost involved on both platforms, assuming the following discounts: on GCP, a three-year committed use price discount with local SSD in the central region; on AWS, a three-year standard contract paid up front. They found that GCP is more expensive given the performance it showed in the tests: GCP costs 2.5 times more than AWS per tpmC.

In response to the report, Google Cloud developer advocate Seth Vargo posted a comment on Hacker News assuring users that Google's team would look into the tests and conduct its own benchmarking to give customers the answers the report raises. It will be interesting to see the results GCP comes up with in response. Head over to cockroachlabs.com for more insights into the tests conducted.

CockroachDB 2.0 is out!
Cockroach Labs announced managed CockroachDB-as-a-Service
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently

Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11 December, at the KubeCon+CloudNativeCon conference in Seattle, Grafana Labs announced the release of 'Loki', a horizontally scalable, highly available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. Unlike other log aggregation systems, Loki does not index the contents of the logs but rather a set of labels for each log stream. Storing compressed, unstructured logs and indexing only metadata makes it cost-effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels they are already using with Prometheus. Loki can store Kubernetes pod logs; metadata such as pod labels is automatically scraped and indexed.

Features of Loki
Loki is optimized to search, visualize, and explore a user's logs natively in Grafana, and is optimized for Grafana, Prometheus, and Kubernetes.
Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana.
Users can streamline incident response, switching between metrics and logs using the same Kubernetes labels they already use with Prometheus.
Loki is open source alpha software, shipped as a static binary with no dependencies.
Loki can be used outside of Kubernetes, but the team says their initial use case is "very much optimized for Kubernetes".
With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is also possible to manually label log streams, and the team will explore integrations to make Loki "play well with the wider ecosystem".

Twitter is buzzing with positive comments for Grafana; users are excited about the release, complimenting Loki's cost-effectiveness and ease of use.
https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using Grafana's free hosted demo, running it locally with Docker, or building from source.

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Uber open sources its large scale metrics platform, M3 for Prometheus
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP

Melisha Dsouza
13 Dec 2018
3 min read
Today, Google Cloud announced the alpha availability of 'Cloud TPU Pods': tightly coupled supercomputers built with hundreds of Google's custom Tensor Processing Unit (TPU) chips and dozens of host machines, linked via an ultrafast custom interconnect. Google states that these pods make it easier, faster, and more cost-effective to develop and deploy cutting-edge machine learning workloads on Google Cloud: developers can iterate over training data in minutes and train huge production models in hours or days instead of weeks. The TPU is an ASIC that powers several of Google's major products, including Translate, Photos, Search, Assistant, and Gmail, and provides up to 11.5 petaflops of performance in a single pod.

Features of Cloud TPU Pods

#1 Proven reference models
Customers can take advantage of Google-qualified reference models, optimized for performance, accuracy, and quality, for many real-world use cases, including object detection, language modeling, sentiment analysis, translation, image classification, and more.

#2 Connect Cloud TPUs to custom machine types
Users can connect to Cloud TPUs from custom VM types, letting them optimally balance processor speeds, memory, and high-performance storage resources for their individual workloads.

#3 Preemptible Cloud TPUs
Preemptible Cloud TPUs are 70% cheaper than on-demand instances. Long training runs with checkpointing, or batch prediction on large datasets, can now be done at an optimal rate.

#4 Integrated with GCP
Cloud TPUs and Google Cloud's data and analytics services are fully integrated with other GCP offerings, giving developers unified access across the entire service line. Developers can run machine learning workloads on Cloud TPUs while benefiting from Google Cloud Platform's storage, networking, and data analytics technologies.

#5 Additional features
Cloud TPUs perform particularly well at synchronous training. The Cloud TPU software stack transparently distributes ML models across multiple TPU devices in a Cloud TPU Pod to help customers achieve scalability. All Cloud TPUs are integrated with Google Cloud's high-speed storage systems, ensuring that data input pipelines can keep up with the TPUs. Users do not have to manage parameter servers, deal with complicated custom networking configurations, or set up exotic storage systems to achieve this training performance in the cloud.

Performance and cost benchmarking of Cloud TPU Pods
Google compared Cloud TPU Pods against Google Cloud VMs with NVIDIA Tesla V100 GPUs attached, using one of the MLPerf models: TensorFlow 1.12 implementations of ResNet-50 v1.5 (GPU version, TPU version), trained on the ImageNet image classification dataset. The results show that Cloud TPU Pods deliver near-linear speedups for this large-scale training task; the largest Cloud TPU Pod configuration tested (256 chips) delivers a 200x speedup over an individual V100 GPU. Check out Google's methodology page for further details on this test. Training ResNet-50 on a full Cloud TPU v2 Pod costs almost 40% less than training the same model to the same accuracy on an n1-standard-64 Google Cloud VM with eight V100 GPUs attached, and the full Cloud TPU Pod completes the training task 27 times faster.

Head over to Google Cloud's official page to know more about Cloud TPU Pods. Alternatively, check out the Cloud TPU documentation for more insights.

Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Oracle's Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google's big enterprise Cloud market move?

RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, Red Hat announced its continued contribution to etcd, an open source project, on the occasion of etcd's acceptance into the Cloud Native Computing Foundation (CNCF). Red Hat participates in developing etcd as part of its enterprise Kubernetes product, Red Hat OpenShift.
https://twitter.com/coreos/status/1072562301864161281

etcd is an open source, distributed, consistent key-value store for service discovery, shared configuration, and scheduler coordination. It is a core component of software that comes with safer automatic updates, and it also sets up overlay networking for containers. The CoreOS team created etcd in 2013, and Red Hat engineers have maintained it since, working alongside a team of professionals from across the industry.

The etcd project focuses on safely storing critical data of a distributed system and demonstrating its quality; it is also the primary data store for Kubernetes. It uses the Raft consensus algorithm for replicated logs. With etcd, applications can maintain more consistent uptime and keep working smoothly even when individual servers fail. etcd continues to progress, with 157 releases so far; the latest, etcd v3.3.10, came out just two months ago. etcd is designed as a consistent store across environments, including public cloud, hybrid cloud, and bare metal.

Where is etcd used?
Kubernetes clusters use etcd as their primary data store, so Red Hat OpenShift customers and Kubernetes users alike benefit from community work on the etcd project. It is also used by communities and companies like Uber, Alibaba Cloud, Google Cloud, Amazon Web Services, and Red Hat.

etcd will now live under the Linux Foundation, with its domains and accounts managed by the CNCF. The community of etcd maintainers, including Red Hat, Alibaba Cloud, Google Cloud, Amazon, and others, will not change, and the project will continue to focus on the communities that depend on it. Red Hat will continue extending etcd with the etcd Operator to bring more security and operational ease, enabling users to easily configure and manage etcd using a declarative configuration that creates, configures, and manages etcd clusters.

Read more about this news on Red Hat's official blog.

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
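etcd's Go client makes the "distributed, consistent key-value store" description concrete: writes go through Raft, and reads are linearizable by default. A minimal sketch with the official clientv3 package (the endpoint and key are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to a local, illustrative etcd endpoint.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Writes go through Raft, so every cluster member agrees on the value.
	if _, err := cli.Put(ctx, "/config/feature-x", "enabled"); err != nil {
		log.Fatal(err)
	}

	// Reads are linearizable by default: you see the latest committed write.
	resp, err := cli.Get(ctx, "/config/feature-x")
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s\n", kv.Key, kv.Value)
	}
}
```

This default is also why etcd survives individual server failures gracefully: as long as a quorum of members remains, the Put above commits and the Get still returns the agreed-upon value.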