
Tech News - Cloud & Networking

376 Articles

Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Prasad Ramesh
18 Dec 2018
3 min read
Yesterday, Microsoft open sourced Trill, previously an internal project used for processing “a trillion events per day”. It was the first streaming engine to incorporate algorithms that process events in small batches of data based on latency on the user side. It powers services like Financial Fabric, Bing Ads, Azure Stream Analytics, Halo, and more. With the ever-increasing flow of data, the ability to process huge amounts of data each millisecond is a necessity; Microsoft says it has open sourced Trill to “address this growing trend”.

Microsoft Trill features

Trill is a single-node engine library, and any .NET application, service, or platform can readily use it to start processing queries.
It has a temporal query language which allows users to run complex queries over real-time and offline data sets.
Trill's high performance lets users get results with great speed and low latency.

How did Trill start?

Trill began as a research project at Microsoft Research in 2012 and has been described in various research venues, such as VLDB and the IEEE Data Engineering Bulletin. Trill is based on a former Microsoft service called StreamInsight, a platform that allowed developers to build and deploy event processing applications. Both systems are built on an extended query and data model which augments the relational model with a component for time.

Systems before Trill could only achieve a part of these benefits; Trill delivers all of them in one package. Trill was the very first streaming engine to incorporate algorithms that process events in data batches based on the latency tolerated by users. It was also the first engine to organize data batches in a columnar format, which enables queries to execute with much higher efficiency.

Using Trill is similar to working with any .NET library, and Trill has the same performance for real-time and offline datasets. It allows users to perform advanced time-oriented analytics and to look for complex patterns over streaming datasets.

Open-sourcing Trill

Microsoft believes Trill is the best available tool in this domain in the developer community. By open sourcing it, they want to offer the features of the IStreamable abstraction to all customers. There are opportunities for community involvement in the future development of Trill; for example, it allows users to write custom aggregates. There are also research projects built on Trill where the code is present but not yet ready to use.

For more details on Trill, visit the Microsoft website.

Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft Connect(); 2018: .NET Foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows Forms open sourced
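Trill itself is a .NET library, so its actual API is out of scope here, but the core idea the article credits to it (grouping events into small batches sized by the user's latency tolerance) can be sketched in a few lines of Python. This is an illustrative toy under that one assumption, not Trill's design; the class and parameter names are invented.

```python
import time

class MicroBatcher:
    """Groups incoming events into small batches, flushing whenever the
    caller's latency budget expires or the batch fills up. This mirrors
    the latency-driven micro-batching idea attributed to Trill above."""

    def __init__(self, max_latency_ms, max_batch_size, sink):
        self.max_latency = max_latency_ms / 1000.0
        self.max_batch_size = max_batch_size
        self.sink = sink          # callable that processes one whole batch
        self.batch = []
        self.oldest = None        # arrival time of the first buffered event

    def add(self, event):
        if not self.batch:
            self.oldest = time.monotonic()
        self.batch.append(event)
        if (len(self.batch) >= self.max_batch_size or
                time.monotonic() - self.oldest >= self.max_latency):
            self.flush()

    def flush(self):
        if self.batch:
            self.sink(self.batch)   # process the batch in one shot
            self.batch = []

# Usage: tolerate at most 10 ms of buffering, batches of up to 1024 events.
batcher = MicroBatcher(10, 1024, sink=lambda b: print(f"processed {len(b)} events"))
for i in range(5000):
    batcher.add({"id": i})
batcher.flush()
```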


Docker Store and Docker Cloud are now part of Docker Hub

Amrata Joshi
14 Dec 2018
3 min read
Yesterday, the team at Docker announced that Docker Store and Docker Cloud are now part of Docker Hub, making the process of finding, storing, and sharing container images easier. The new Docker Hub has an updated user experience where Docker Certified and Verified Publisher images are available for discovery and download. Docker Cloud, a service provided by Docker, lets users connect Docker Cloud to their existing cloud providers like Azure or AWS. Docker Store is a self-service portal where Docker's ecosystem partners publish and distribute their software through Docker images.

https://twitter.com/Docker/status/1073369942660067328

What’s new in this Docker Hub update?

Repositories
Users can now view recently pushed tags and automated builds on their repository page. Pagination has been added to repository tags, and repository filtering on the Docker Hub homepage has been improved.

Organizations and Teams
Organization owners can now view team permissions across all of their repositories at a glance. Existing Docker Hub users can now be added to a team via their email IDs instead of their Docker IDs.

Automated Builds
Build caching is now used to speed up builds. It is now possible to add environment variables and run tests in the builds, and automated builds can be added to existing repositories. Account credentials for organizations like GitHub and Bitbucket need to be re-linked to the organization to leverage the new automated builds.

Improved container image search
Filtering by Official, Verified Publisher, and Certified images guarantees a level of quality in the Docker images, and Docker Hub provides filtering by category for quick image search. There is no need to update any bookmarks on Docker Hub.

Verified Publisher and Certified images
Docker Certified and Verified Publisher images are now available for discovery and download on Docker Hub. Just like Official Images, publisher images have been vetted by Docker. Certified and Verified Publisher images are provided by third-party software vendors. Certified images are tested and supported by verified publishers on the Docker Enterprise platform, adhere to Docker’s container best practices, pass a functional API test suite, and display a unique quality mark, “Docker Certified”.

Read more about this release on Docker’s blog post.

Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format
Docker announces Docker Desktop Enterprise
Creating a Continuous Integration commit pipeline using Docker [Tutorial]


Cockroach Labs 2018 Cloud Report: AWS outperforms GCP hands down

Melisha Dsouza
14 Dec 2018
5 min read
While testing the features for CockroachDB 2.1, the team discovered that AWS offered 40% greater throughput than GCP. To understand the reason for this result, the team compared GCP and AWS on TPC-C performance (throughput and latency), CPU, network, I/O, and cost. This led Cockroach Labs to release a 2018 Cloud Report to help customers choose a cloud solution based on the most commonly faced questions, such as: should they use Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure? How should they tune their workload for different offerings? Which of the platforms is more reliable?

Note: They did not test Microsoft Azure due to bandwidth constraints but will do so in the near future.

The tests conducted

For GCP, the team chose the n1-standard-16 machine with Intel Xeon Scalable Processor (Skylake) in the us-east region, and for AWS they chose the latest compute-optimized instance type, c5d.4xlarge, to match n1-standard-16, because both have 16 CPUs and SSDs.

#1 TPC-C benchmarking test
The team tested workload performance using TPC-C. The results were surprising: CockroachDB 2.1 achieves 40% more throughput (tpmC) on TPC-C when tested on AWS using c5d.4xlarge than on GCP via n1-standard-16. They then ran TPC-C against some of the most popular AWS instance types, focusing on the higher-performing c5 series with SSDs, EBS-gp2, and EBS-io1 volume types. The AWS Nitro System present in the c5 and m5 series offers approximately similar or superior performance compared to a similar GCP instance. The results were clear: AWS wins on the TPC-C benchmark.

#2 CPU experiment
The team chose stress-ng because, according to them, it offered more benchmarks and more flexible configurations than the sysbench benchmarking test. On running the command stress-ng --metrics-brief --cpu 16 -t 1m five times on both AWS and GCP, they found that AWS offered 28% more throughput (~2,900) on stress-ng than GCP.

#3 Network throughput and latency test
The team measured network throughput using a tool called iPerf, and latency via PING; they have given a detailed setup of the iPerf tool used for this experiment in a blog post. The tests were run four times each for AWS and GCP, and the results once again favored AWS. GCP showed a fairly normal distribution of network throughput centered at ~5.6 GB/sec, ranging from 4.01 GB/sec to 6.67 GB/sec, which according to the team is “a somewhat unpredictable spread of network performance”, reinforced by GCP's observed average variance of 0.487 GB/sec. AWS offers significantly higher throughput, centered on 9.6 GB/sec, with a much tighter spread between 9.60 GB/sec and 9.63 GB/sec. AWS's network throughput variance is only 0.006 GB/sec, indicating that GCP's network throughput is 81x more variable than AWS's. The network latency test showed that AWS has tighter network latency as well, with AWS's values centered on an average latency of 0.057 ms. Overall, AWS offers significantly better network throughput and latency, with none of the variability present in GCP.

#4 I/O experiment
The team tested I/O using a configuration of sysbench that simulates small writes with frequent syncs, for both write and read performance. This test measures throughput based on a fixed set of threads, that is, the number of items concurrently writing to disk. The write results showed that AWS consistently offers more write throughput across thread counts from 1 up to 64, at times as much as a 67x difference, along with better average and 95th percentile write latency across all thread tests, although at 32 and 64 threads GCP provides marginally more throughput. For reads, AWS tops the charts up to 32 threads; at 32 and 64 threads GCP and AWS split the results, with GCP offering marginally better read performance at similar latency from 32 threads and up. The team also tested the no barrier method of writing directly to disk without waiting for the write cache to be flushed. Here the results were the reverse of the experiments above: on GCP, no barrier speeds things up by 6x, while on AWS, no barrier (versus not setting it) is only a 25% speedup.

#5 Cost
Given that AWS outperformed GCP on the TPC-C benchmarks, the team compared the cost involved on both platforms. For both clouds they assumed the following discounts: on GCP, a three-year committed-use price discount with local SSD in the central region; on AWS, a three-year standard contract paid up front. They found that GCP is more expensive than AWS given the performance it showed in these tests: GCP costs 2.5 times more than AWS per tpmC.

In response to this report, Google Cloud developer advocate Seth Vargo posted a comment on Hacker News assuring users that Google's team would look into the tests and conduct their own benchmarking to provide customers with the much-needed answers to the questions the report raises. It will be interesting to see the results GCP comes up with in response.

Head over to cockroachlabs.com for more insights on the tests conducted.

CockroachDB 2.0 is out!
Cockroach Labs announced managed CockroachDB-as-a-Service
Facebook GEneral Matrix Multiplication (FBGEMM), high performance kernel library, open sourced, to run deep learning models efficiently
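The “81x more variable” figure quoted above follows directly from the two variance numbers in the report. A quick Python check (the variances are the article's; the arithmetic is the only thing added here):

```python
# Variance figures quoted in the network test above, in GB/sec.
gcp_variance_gbps = 0.487
aws_variance_gbps = 0.006

ratio = gcp_variance_gbps / aws_variance_gbps
print(f"GCP / AWS throughput variance ratio: {ratio:.0f}x")
# -> roughly 81x, matching the article's "81x more variable" claim
```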


LXD 3.8 released with automated container snapshots, ZFS compression support and more!

Melisha Dsouza
14 Dec 2018
5 min read
Yesterday, the LXD team announced the release of LXD 3.8. This is the last update for 2018, improving features from the previous version as well as adding new upgrades. LXD, also known as the ‘Linux Daemon’, is a system container manager. It offers a user experience similar to virtual machines but uses Linux containers instead. LXD is written in Go, is free software, and is developed under the Apache 2 license. It is secure by design, with unprivileged containers, resource restrictions, and much more. Customers can use LXD everywhere from containers on their laptop to thousands of compute nodes. With advanced resource control and support for multiple storage backends, storage pools, and storage volumes, LXD has been well received by the community.

Features of LXD 3.8

#1 Automated container snapshots
The new release includes three configuration keys to control automated snapshots and configure their naming convention (a configuration sketch follows this summary):
snapshots.schedule uses a CRON pattern to determine when to perform the snapshot.
snapshots.schedule.stopped is a boolean used to control whether to snapshot stopped containers.
snapshots.pattern is a format string with pongo2 templating support, used to set the name of a snapshot when no name is given. This applies to both automated and unnamed, manually created snapshots.

#2 Support for copy/move between projects
Users can now copy or move containers between projects using the newly available --target-project option added to both lxc copy and lxc move.

#3 cluster.https_address server option
LXD 3.8 includes a new cluster.https_address option to facilitate internal cluster communication, making it easy to prioritize and filter cluster traffic. Until now, clustered LXD servers had to be configured to listen on a single IPv4 or IPv6 address, and both internal cluster traffic and regular client traffic used the same address. The new option is a write-once key holding the address used for cluster communication; it cannot currently be changed without removing the node from the cluster. Users can now change the regular core.https_address on clustered nodes to any address they want, making it possible to use a completely different network for internal cluster communication.

#4 Cluster image replication
LXD 3.8 introduces automatic image replication. Prior to this update, images would only get copied to other cluster members as containers on those systems requested them. The downside of this method was that if an image was present on only a single system and that system went offline, the image could not be used until the system recovered. In LXD 3.8, all manually created or imported images are replicated on at least three systems. Images stored in the image store only as a cache entry do not get replicated.

#5 security.protection.shift container option
In previous versions, LXD had to rely on slow rewriting of all uid/gid entries on the filesystem whenever a container's idmap changed. This can be dangerous on systems prone to sudden shutdowns, as the operation cannot be safely resumed if interrupted partway. The newly introduced security.protection.shift configuration option prevents any such remapping, instead making any action that would result in one fail until the key is unset.

#6 Support for passing all USB devices
All USB devices can now be passed to a container by not specifying any vendorid or productid filter. Every USB device will be made visible to the container, including any device hotplugged after the fact.

#7 CLI override of the default project
After reports from users that interacting with multiple projects can be tedious due to having to constantly use lxc project switch to switch the client between projects, LXD 3.8 adds a --project option throughout the command line client, letting users override the project for a particular operation.

#8 Bi-directional rsync negotiation
Recent LXD releases use rsync feature negotiation, where the source tells the server what rsync features it is using so the server can match them on the receiving end. LXD 3.8 introduces the reverse: the LXD server indicates what it supports as part of the migration protocol, allowing the source to restrict the features it uses. This makes migration more robust, as a newer LXD can migrate containers out to an older LXD without running into rsync feature mismatches.

#9 ZFS compression support
The LXD migration protocol now detects and uses ZFS compression support when available. Combined with zpool compression, this can very significantly reduce the size of the migration stream.

Hacker News was buzzing with positive remarks for this release, with users requesting more documentation on how to use LXD containers. Some users have also compared LXD containers to Docker and Kubernetes, preferring the former over the latter. In addition to these new upgrades, the release also fixes multiple bugs from the previous version. You can head over to Linuxcontainers.org for more insights on this news.

Linux 4.20 kernel slower than its previous stable releases, Spectre flaw to be blamed, according to Phoronix
An update on Bcachefs, the “next generation Linux filesystem”
The Ceph Foundation has been launched by the Linux Foundation to support the open source storage project
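A sketch of setting the three snapshot keys from #1 above on a hypothetical container named web1, shelling out to the lxc CLI from Python. The cron expression and the pongo2 template variable are illustrative assumptions; check the LXD documentation for the supported template fields.

```python
import subprocess

def lxc(*args):
    """Thin wrapper around the lxc CLI; raises if the command fails."""
    subprocess.run(["lxc", *args], check=True)

# Snapshot the (hypothetical) container "web1" every day at 06:00,
# skip it while stopped, and name snapshots with a timestamp template.
lxc("config", "set", "web1", "snapshots.schedule", "0 6 * * *")
lxc("config", "set", "web1", "snapshots.schedule.stopped", "false")
lxc("config", "set", "web1", "snapshots.pattern", "auto-{{ creation_date }}")
```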


Introducing Grafana’s ‘Loki’ (alpha), a scalable HA multi-tenant log aggregator for cloud natives; optimized for Grafana, Prometheus and Kubernetes

Melisha Dsouza
13 Dec 2018
2 min read
On 11th December, at the KubeCon + CloudNativeCon conference held in Seattle, Grafana Labs announced the release of ‘Loki’, a horizontally scalable, highly available, multi-tenant log aggregation system for cloud natives, inspired by Prometheus. Unlike other log aggregation systems, Loki does not index the contents of the logs, but rather a set of labels for each log stream. Storing compressed, unstructured logs and indexing only metadata makes it cost effective as well as easy to operate. Users can seamlessly switch between metrics and logs using the same labels that they are already using with Prometheus. Loki can store Kubernetes Pod logs; metadata such as Pod labels is automatically scraped and indexed.

Features of Loki

Loki is optimized to search, visualize, and explore a user's logs natively in Grafana, and is tuned for Grafana, Prometheus, and Kubernetes.
Grafana 6.0 provides a native Loki data source and a new Explore feature that makes logging a first-class citizen in Grafana.
Users can streamline incident response, switching between metrics and logs using the same Kubernetes labels that they are already using with Prometheus.
Loki is open source alpha software with a static binary and no dependencies.
Loki can be used outside of Kubernetes, but the team says their initial use case is “very much optimized for Kubernetes”.
With promtail, all Kubernetes labels for a user's logs are automatically set up the same way as in Prometheus. It is possible to manually label log streams, and the team will be exploring integrations to make Loki “play well with the wider ecosystem”.

Twitter is buzzing with positive comments for Grafana. Users are excited about this release, complimenting Loki's cost-effectiveness and ease of use.

https://twitter.com/pracucci/status/1072750265982509057
https://twitter.com/AnkitTimbadia/status/1072701472737902592

Head over to Grafana Labs' official blog to know more about this release. Alternatively, you can check out GitHub for a demo of three ways to try out Loki: using Grafana's free hosted demo, running it locally with Docker, or building from source.

Cortex, an open source, horizontally scalable, multi-tenant Prometheus-as-a-service becomes a CNCF Sandbox project
Uber open sources its large scale metrics platform, M3 for Prometheus
DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps


Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP

Melisha Dsouza
13 Dec 2018
3 min read
Today, Google Cloud announced the alpha availability of ‘Cloud TPU Pods’: tightly coupled supercomputers built with hundreds of Google's custom Tensor Processing Unit (TPU) chips and dozens of host machines, linked via an ultrafast custom interconnect. Google states that these pods make it easier, faster, and more cost-effective to develop and deploy cutting-edge machine learning workloads on Google Cloud. Developers can iterate over training data in minutes and train huge production models in hours or days instead of weeks. The Tensor Processing Unit (TPU) is an ASIC that powers several of Google's major products, including Translate, Photos, Search, Assistant, and Gmail, and provides up to 11.5 petaflops of performance in a single pod.

Features of Cloud TPU Pods

#1 Proven reference models
Customers can take advantage of Google-qualified reference models that are optimized for performance, accuracy, and quality for many real-world use cases. These include object detection, language modeling, sentiment analysis, translation, image classification, and more.

#2 Connect Cloud TPUs to custom machine types
Users can connect to Cloud TPUs from custom VM types. This will help them optimally balance processor speeds, memory, and high-performance storage resources for their individual workloads.

#3 Preemptible Cloud TPUs
Preemptible Cloud TPUs are 70% cheaper than on-demand instances. Long training runs with checkpointing, or batch prediction on large datasets, can now be done at an optimal rate using Cloud TPUs.

#4 Integrated with GCP
Cloud TPUs and Google Cloud's data and analytics services are fully integrated with other GCP offerings, providing developers unified access across the entire service line. Developers can run machine learning workloads on Cloud TPUs and benefit from Google Cloud Platform's storage, networking, and data analytics technologies.

#5 Additional features
Cloud TPUs perform very well at synchronous training. The Cloud TPU software stack transparently distributes ML models across multiple TPU devices in a Cloud TPU Pod to help customers achieve scalability. All Cloud TPUs are integrated with Google Cloud's high-speed storage systems, ensuring that data input pipelines can keep up with the TPUs. Users do not have to manage parameter servers, deal with complicated custom networking configurations, or set up exotic storage systems to achieve unparalleled training performance in the cloud.

Performance and cost benchmarking of Cloud TPU Pods

Google compared Cloud TPU Pods against Google Cloud VMs with NVIDIA Tesla V100 GPUs attached, using one of the MLPerf models: the TensorFlow 1.12 implementations of ResNet-50 v1.5 (GPU version, TPU version). They trained ResNet-50 on the ImageNet image classification dataset. The results show that Cloud TPU Pods deliver near-linear speedups for this large-scale training task; the largest Cloud TPU Pod configuration tested (256 chips) delivers a 200x speedup over an individual V100 GPU. Check out their methodology page for further details on this test.

Training ResNet-50 on a full Cloud TPU v2 Pod costs almost 40% less than training the same model to the same accuracy on an n1-standard-64 Google Cloud VM with eight V100 GPUs attached, and the full Cloud TPU Pod completes the training task 27 times faster.

Head over to Google Cloud's official page to know more about Cloud TPU Pods. Alternatively, check out Cloud TPU's documentation for more insights on the same.
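Taken together, those two headline numbers (40% cheaper, 27x faster) imply the pod's effective hourly rate is much higher even though the total bill is lower. A back-of-the-envelope Python sketch, using a hypothetical placeholder cost for the GPU baseline since the article gives no absolute prices:

```python
# Hypothetical baseline: the 8xV100 run costs $1000 and takes 27 time units.
gpu_cost = 1000.0
gpu_time = 27.0

tpu_cost = gpu_cost * (1 - 0.40)  # "costs almost 40% less"
tpu_time = gpu_time / 27.0        # "completes the training task 27x faster"

print(f"TPU pod run: ${tpu_cost:.0f} in {tpu_time:.1f} time units")
print(f"Rate per time unit: GPU ${gpu_cost / gpu_time:.1f} vs TPU ${tpu_cost / tpu_time:.1f}")
# The pod's per-hour rate comes out far higher, but it finishes so much
# sooner that the total cost is still ~40% lower.
```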
Intel Optane DC Persistent Memory available first on Google Cloud
Google Cloud Storage Security gets an upgrade with Bucket Lock, Cloud KMS keys and more
Oracle’s Thomas Kurian to replace Diane Greene as Google Cloud CEO; is this Google’s big enterprise Cloud market move?

RedHat contributes etcd, a distributed key-value store project, to the Cloud Native Computing Foundation at KubeCon + CloudNativeCon

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, Red Hat announced its contribution of etcd, an open source project, and the project's acceptance into the Cloud Native Computing Foundation (CNCF). Red Hat participates in developing etcd as part of its enterprise Kubernetes product, Red Hat OpenShift.

https://twitter.com/coreos/status/1072562301864161281

etcd is an open source, distributed, consistent key-value store for service discovery, shared configuration, and scheduler coordination. It is a core component of software that comes with safer automatic updates, and it also sets up overlay networking for containers. The CoreOS team created etcd in 2013, and Red Hat engineers have maintained it since, working alongside a team of professionals from across the industry. The etcd project focuses on safely storing critical data of a distributed system and demonstrating its quality. It is also the primary data store for Kubernetes, and it uses the Raft consensus algorithm for replicated logs. With etcd, applications can maintain more consistent uptime and keep working smoothly even when individual servers fail. etcd is progressing steadily: it already has 157 releases, with etcd v3.3.10, the latest, released just two months ago. etcd is designed as a consistent store across environments, including public cloud, hybrid cloud, and bare metal.

Where is etcd used?

Kubernetes clusters use etcd as their primary data store, so Red Hat OpenShift customers and Kubernetes users benefit from the community work on the etcd project. It is also used by communities and companies like Uber, Alibaba Cloud, Google Cloud, Amazon Web Services, and Red Hat. etcd will be under the Linux Foundation, and its domains and accounts will be managed by CNCF. The community of etcd maintainers, including Red Hat, Alibaba Cloud, Google Cloud, Amazon, and others, won't change, and the project will continue to focus on the communities that depend on it. Red Hat will continue extending etcd with the etcd Operator to bring more security and operational ease, enabling users to easily configure and manage etcd using a declarative configuration that creates, configures, and manages etcd clusters.

Read more about this news on RedHat’s official blog.

RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
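As a key-value store, etcd is easy to exercise directly. A minimal sketch using the third-party python-etcd3 client (pip install etcd3) against a local etcd listening on its default port; the key and value are invented for illustration:

```python
import etcd3

client = etcd3.client(host="127.0.0.1", port=2379)

# Store a piece of shared configuration...
client.put("/config/payments/db_host", "10.0.0.12")

# ...and read it back; get() returns (value, metadata).
value, _meta = client.get("/config/payments/db_host")
print(value.decode())  # -> 10.0.0.12
```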


Oracle introduces Oracle Cloud Native Framework at KubeCon+CloudNativeCon 2018

Amrata Joshi
12 Dec 2018
3 min read
Yesterday, at the ongoing KubeCon + CloudNativeCon North America 2018, the Oracle team introduced the Oracle Cloud Native Framework. The framework provides developers with a cloud native solution for public cloud, on-premises, and hybrid cloud deployments. It supports both modern cloud native applications and traditional ones like WebLogic, Java, and database workloads, and it comprises the recently announced Oracle Linux Cloud Native Environment and Oracle Cloud Infrastructure native services. Because the framework supports both dev and ops, it can be used by startups and enterprises alike.

What’s new in the Oracle Cloud Native Framework?

Application definition and development

Oracle Functions: A serverless cloud service based on the open source Fn Project that can run on-premises, in a data center, or on any cloud. With Oracle Functions, developers can seamlessly deploy and execute function-based applications without the hassle of managing compute infrastructure. It is Docker container-based and follows a pay-per-use model.

Streaming: A highly scalable, multi-tenant streaming platform that makes collecting and managing streaming data easy. It enables applications such as security, supply chain, and IoT, where large amounts of data are collected from many sources and processed in real time.

Provisioning

Resource Manager: A managed service that provisions Oracle Cloud Infrastructure resources and services. It reduces configuration errors and increases productivity by managing infrastructure as code.

Observability and analysis

Monitoring: An integrated service that reports metrics from all resources and services in Oracle Cloud Infrastructure. It uses predefined metrics and dashboards, or a service API, to provide a holistic view of the performance, health, and capacity of the system. The monitoring service uses alarms to track metrics and take action when they vary from or exceed defined thresholds.

Notification Service: A scalable service that broadcasts messages to distributed components such as PagerDuty and email. It helps users deliver messages about Oracle Cloud Infrastructure to large numbers of subscribers.

Events: A service that can store information to object storage and trigger functions to take actions, enabling users to react to changes in the state of Oracle Cloud Infrastructure resources.

The Oracle Cloud Native Framework provides cloud native capabilities and offerings to customers by using the open standards established by the CNCF. Don Johnson, executive vice president, product development, Oracle Cloud Infrastructure, said, “With the growing popularity of the CNCF as a unifying and organizing force in the cloud native ecosystem and organizations increasingly embracing multi cloud and hybrid cloud models, developers should have the flexibility to build and deploy their applications anywhere they choose without the threat of cloud vendor lock-in. Oracle is making this a reality.”

To know more about this news, check out the press release.

Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Red Hat acquires Israeli multi-cloud storage software company, NooBaa
Cloud Native Application Bundle (CNAB): Docker, Microsoft partner on an open source cloud-agnostic all-in-one packaging format


Introducing GitLab Serverless to deploy cloud-agnostic serverless functions and applications

Amrata Joshi
12 Dec 2018
2 min read
Yesterday, GitLab and TriggerMesh introduced GitLab Serverless, which helps enterprises run serverless workloads on any cloud using Knative, Google's Kubernetes-based platform for building, deploying, and managing serverless workloads. GitLab Serverless enables businesses to deploy serverless functions and applications on any cloud or infrastructure from the GitLab UI. It is scheduled for public release on 22 December 2018 in GitLab 11.6, and it involves technology developed by TriggerMesh, a multi-cloud serverless platform, to enable businesses to run serverless workloads on Kubernetes.

Sid Sijbrandij, co-founder and CEO of GitLab, said, “We’re pleased to offer cloud-agnostic serverless as a built-in part of GitLab’s end-to-end DevOps experience, allowing organizations to go from planning to monitoring in a single application.”

Functions as a Service (FaaS)
With GitLab Serverless, users can run their own Function-as-a-Service (FaaS) on any infrastructure without worrying about vendor lock-in. FaaS allows users to write small, discrete units of code with event-based execution (a minimal handler sketch follows this summary). When deploying code, developers need not worry about the infrastructure it will run on, and resources are saved because code executes only when needed, so nothing is consumed while the app is idle.

Kubernetes and Knative
Flexibility and portability are achieved by running serverless workloads on Kubernetes. GitLab Serverless uses Knative to create a seamless experience across the entire DevOps lifecycle.

Deploy on any infrastructure
With GitLab Serverless, users can deploy to any cloud or on-premises infrastructure. GitLab can connect to any Kubernetes cluster, so users can choose to run their serverless workloads anywhere Kubernetes runs.

Auto-scaling with ‘scale to zero’
The Kubernetes cluster automatically scales up and down based on load. “Scale to zero” stops resource consumption when there are no requests.

To know more about this news, check out the official announcement.

Haskell is moving to GitLab due to issues with Phabricator
GitLab 11.5 released with group security and operations-focused dashboard, control access to GitLab pages
GitLab 11.4 is here with merge request reviews and many more features
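Knative-style platforms invoke functions as plain HTTP services, so the FaaS model described above can be illustrated with a minimal handler. This is a generic sketch of the shape, not GitLab Serverless's actual packaging or runtime contract:

```python
# A tiny event handler: the platform routes requests to it and scales the
# number of instances (down to zero) based on incoming traffic, so the
# function only consumes resources while handling an event.
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoFunction(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the event payload and respond with a trivial transformation.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"handled: " + body)

if __name__ == "__main__":
    # Serverless platforms conventionally expect the container to listen
    # on a well-known port; 8080 is assumed here.
    HTTPServer(("", 8080), EchoFunction).serve_forever()
```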


DigitalOcean launches its Kubernetes-as-a-service at KubeCon+CloudNativeCon to ease running containerized apps

Melisha Dsouza
12 Dec 2018
2 min read
At KubeCon+CloudNativeCon this week, DigitalOcean announced the launch of its Kubernetes-as-a-Service offering to all developers. This is a limited release, with full general availability planned for early 2019. DigitalOcean first announced its container offering through an early access program in May this year, followed by limited availability in October. Building on the simplicity that customers appreciated most, DigitalOcean Kubernetes (DOK8s) claims to be a powerfully simple managed Kubernetes service: once customers define the size and location of their worker nodes, DigitalOcean provisions, manages, and optimizes the services needed to run a Kubernetes cluster. DOK8s is easy to set up as well.

During the announcement, DigitalOcean VP of Product Shiven Ramji said, “Kubernetes promises to be one of the leading technologies in a developer’s arsenal to gain the scalability, portability and availability needed to build modern apps. Unfortunately, for many, it’s extremely complex to manage and deploy. With DigitalOcean Kubernetes, we make running containerized apps consumable for any developer, regardless of their skills or resources.”

The new release builds on the early access release of the service, including capabilities like node provisioning, durable storage, firewalls, load balancing, and similar tools. The newly added features include:

Guided configuration experiences to assist users in provisioning, configuring, and deploying clusters
Open APIs to enable easy integrations with developer tools
The ability to programmatically create and update cluster and node settings
Expanded version support, including Kubernetes version 1.12.1, with support for 1.13.1 coming shortly
Support for DOK8s in the DigitalOcean API, making it easy for users to create and manage their clusters through the API (see the sketch after this summary)
Effective pricing: customers pay only for the underlying resources they use (Droplets, Block Storage, and Load Balancers)

Head over to DigitalOcean’s blog to know more about this announcement.

‘AWS Service Operator’ for Kubernetes now available allowing the creation of AWS resources using kubectl
Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads
Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes
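A hedged sketch of the API-driven cluster creation mentioned above, using Python's third-party requests library. The endpoint path and payload fields follow DigitalOcean's public API conventions but should be treated as assumptions and checked against the current API reference; the token env var, cluster name, and sizes are placeholders:

```python
import os
import requests

token = os.environ["DIGITALOCEAN_TOKEN"]  # hypothetical env var holding an API token

resp = requests.post(
    "https://api.digitalocean.com/v2/kubernetes/clusters",
    headers={"Authorization": f"Bearer {token}"},
    json={
        "name": "demo-cluster",      # illustrative values throughout
        "region": "nyc1",
        "version": "1.12.1-do.2",    # version slug format is an assumption
        "node_pools": [{"size": "s-2vcpu-4gb", "count": 3, "name": "default-pool"}],
    },
)
resp.raise_for_status()
print(resp.json()["kubernetes_cluster"]["id"])
```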

OpenSSH, now a part of Windows Server 2019

Savia Lobo
12 Dec 2018
2 min read
Yesterday, Microsoft announced that the OpenSSH client and server are available as a supported Feature-on-Demand in Windows Server 2019 and Windows 10 1809. OpenSSH is a collection of client/server utilities enabling secure login, remote file transfer, and public/private key pair management. It originated as part of the OpenBSD project and has been used across the BSD, Linux, macOS, and Unix ecosystems for years. In 2015, Microsoft said it would build OpenSSH into Windows while also contributing to its development. The Win32 port of OpenSSH was first included in the Windows 10 Fall Creators Update and Windows Server 1709 as a pre-release feature. With OpenSSH in Windows Server 2019, organizations can work across a broad range of operating systems and utilize a consistent set of tools for remote server administration.

The community welcomes OpenSSH on Windows Server 2019

According to some on Hacker News, “Having used DSC and PowerShell remoting extensively, these create as many problems as they solve. Nothing works smoothly. Not a thing. The saving grace here will be SSH because then at least we can drive all our kit across both platforms from Ansible and be done with the entire MSFT management stack.”

Another comment says, “Mounting requires other ports to be opened, which no sysadmin will do on the internet. Ssh, on the other hand, can be started on a non-standard port.”

A third adds, “SSH is an awesome tool & capability as a relatively high-level network channel. The defacto “shell” approach leads to a lot of problems when used as a management device. It encourages ad-hoc, unstructured, and opaque changes. Managing your hosts via Secure Shell simply leads to bespoke, unrepeatable, outcomes and crushing debt.”

To know more about this news in detail, visit the Windows official blog.

Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019
Microsoft releases first test build of Windows Server 1803
How to use PowerShell Web Access to manage Windows Server
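With sshd available on Windows Server 2019, the host can be driven from any SSH client, including programmatically. A minimal sketch using the third-party paramiko library (pip install paramiko); the host name, user, and key path are placeholders, and the OpenSSH server feature described above is assumed to be installed and running on the target:

```python
import paramiko

client = paramiko.SSHClient()
# Demo only: auto-accept unknown host keys. Pin known hosts in production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("winserver.example.com", username="admin",
               key_filename="/home/me/.ssh/id_ed25519")

# Run a command in the default Windows shell and print its output.
_stdin, stdout, _stderr = client.exec_command("systeminfo")
print(stdout.read().decode(errors="replace"))
client.close()
```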


Elastic launches Helm Charts (alpha) for faster deployment of Elasticsearch and Kibana to Kubernetes

Melisha Dsouza
12 Dec 2018
3 min read
At the KubeCon+CloudNativeCon happening in Seattle this week, Elastic N.V., the company behind Elasticsearch and the Elastic Stack, announced the alpha availability of Helm Charts for Elasticsearch on Kubernetes. Helm Charts make it possible to deploy Elasticsearch and Kibana to Kubernetes almost instantly. Developers use Helm charts for their flexibility in creating, publishing, and sharing Kubernetes applications. The ease of using Kubernetes to manage containerized workloads has also led Elastic users to deploy their Elasticsearch workloads to Kubernetes. Now, with Helm chart support for Elasticsearch on Kubernetes, developers can harness the benefits of both Helm charts and Kubernetes to install, configure, upgrade, and run their applications on Kubernetes.

With this new functionality in place, users can take advantage of best practices and templates to deploy Elasticsearch and Kibana, and they get access to some basic free features like monitoring, Kibana Canvas, and Spaces. According to the blog post, Helm charts will serve as “a way to help enable Elastic users to run the Elastic Stack using modern, cloud-native deployment models and technologies.”

Why should developers consider Helm charts?

Helm charts give users the ability to leverage Kubernetes packages with the click of a button or a single CLI command. Kubernetes can be complex to use, impairing developer productivity; Helm charts improve productivity as follows (an installation sketch follows this summary):

With Helm charts, developers can focus on developing applications rather than deploying dev-test environments. They can author their own chart, which automates deployment of their dev-test environment.
Helm comes with “push button” deployment and deletion of apps, making adoption and development of Kubernetes apps easier for those with little container or microservices experience.
Combating the complexity of deploying a Kubernetes-orchestrated container application, Helm Charts allow software vendors and developers to preconfigure their applications with sensible defaults, enabling users to change parameters of the application or chart through a consistent interface.
Developers can incorporate production-ready packages while building applications in a Kubernetes environment, eliminating deployment errors caused by incorrect configuration file entries or mangled deployment recipes.
Deploying and maintaining Kubernetes applications can be tedious and error prone. Helm Charts reduce the complexity of maintaining an app catalog in a Kubernetes environment. A central app catalog reduces duplication of charts (when shared within or between organizations) and spreads best practices by encoding them into charts.

To know more about Helm charts, check out the README files for the Elasticsearch and Kibana charts available on GitHub. In addition to this announcement, Elastic also announced its collaboration with the Cloud Native Computing Foundation (CNCF) to promote and support open cloud native technologies and companies. This is another step in Elastic's mission to build products in an open and transparent way. You can head over to Elastic's official blog for in-depth coverage of this news. Alternatively, check out MarketWatch for more insights on this article.

Dejavu 2.0, the open source browser by ElasticSearch, now lets you build search UIs visually
Elasticsearch 6.5 is here with cross-cluster replication and JDK 11 support
How to perform Numeric Metric Aggregations with Elasticsearch
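A sketch of what installing the alpha chart looks like, driven from Python for consistency with the other examples here. The repository URL and chart name follow Elastic's Helm repository conventions but are assumptions; verify against the chart README (Helm 2-era syntax shown):

```python
import subprocess

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["helm", "repo", "add", "elastic", "https://helm.elastic.co"])
run(["helm", "repo", "update"])
# Helm 2 used --name; in Helm 3 this would be:
#   helm install elasticsearch elastic/elasticsearch
run(["helm", "install", "--name", "elasticsearch", "elastic/elasticsearch"])
```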


‘Istio’ available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management

Melisha Dsouza
12 Dec 2018
3 min read
The KubeCon+CloudNativeCon happening in Seattle this week has excited developers with a plethora of new announcements and releases. The conference, dedicated to Kubernetes and other cloud native technologies, brings together adopters and technologists from leading open source and cloud native communities to discuss new advancements on the cloud front. At this year's conference, Google Cloud announced the beta availability of ‘Istio’ for its Google Kubernetes Engine.

Istio was launched in the middle of 2017 as a collaboration between Google, IBM, and Lyft. According to Google, this open source “service mesh”, used to connect, manage, and secure microservices on a variety of platforms like Kubernetes, will play a vital role in helping developers make the most of their microservices. Yesterday, Google Cloud Director of Engineering Chen Goldberg and Director of Product Management Jennifer Lin said in a blog post that the availability of Istio on Google Kubernetes Engine will provide “more granular visibility, security and resilience for Kubernetes-based apps”. The service will be made available through Google's Cloud Services Platform, which bundles together the tools and services developers need to get their container apps up and running on the company's cloud or in on-premises data centres.

In an interview with SiliconANGLE, Holger Mueller, principal analyst and vice president at Constellation Research Inc., compared software containers to “cars”: “Kubernetes has built the road but the cars have no idea where they are, how fast they are driving, how they interact with each other or what their final destination is. Enter Istio and enterprises get all of the above. Istio is a logical step for Google and a sign that the next level of deployments is about manageability, visibility and awareness of what enterprises are running.”

Additional features of Istio

Istio allows developers and operators to manage applications as services rather than as lots of different infrastructure components.
Istio allows users to encrypt all their network traffic, layering transparently onto any existing distributed application; users need not embed any client libraries in their code to use this functionality.
Istio on GKE comes with an integration into Stackdriver, Google Cloud's monitoring and logging service.
Istio securely authenticates and connects a developer's services to one another. It transparently adds mTLS to service communication, encrypting all information in transit, and it provides a service identity for each service, allowing developers to create service-level policies enforced for each individual application transaction while providing non-replayable identity protection.

Istio is yet another step for GKE toward making it easier to secure and efficiently manage containerized applications. Head over to TechCrunch for more insights on this news.

Google’s Cloud Robotics platform, to be launched in 2019, will combine the power of AI, robotics and the cloud
What’s new in Google Cloud Functions serverless platform
Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads

FreeBSD 12.0 is now out!

Bhagyashree R
12 Dec 2018
3 min read
Yesterday, the FreeBSD release engineering team announced the availability of FreeBSD 12.0, which marks the first release of the stable/12 branch. This version is available for the amd64, i386, powerpc, powerpc64, powerpcspe, sparc64, armv6, armv7, and aarch64 architectures.

FreeBSD is an open source, Unix-like operating system for x86, ARM, AArch64, RISC-V, MIPS, POWER, PowerPC, and Sun UltraSPARC computers. It is based on the 4.4BSD-Lite release from the Computer Systems Research Group (CSRG) at the University of California at Berkeley, and it comes with features like preemptive multitasking, memory protection, virtual memory, multi-user facilities, and SMP support.

Following are some of the updates introduced in FreeBSD 12.0:

The bsdinstall installer and zfsboot are updated to allow a UEFI+GELI installation option.
GOST is removed, and LDNS now enables DANE-TA.
sshd now comes with additional support for capsicum, and capsicum is enabled on armv6 and armv7 by default.
The VIMAGE kernel configuration option is enabled by default.
The NUMA option is enabled by default in the amd64 GENERIC and MINIMAL kernel configurations.
The netdump driver is added for transmitting kernel crash dumps to a remote host after a system panic.
The vt driver now comes with better performance, drawing text at rates 2 to 6 times faster.
The UFS/FFS filesystem is updated to consolidate TRIM/BIO_DELETE commands, resulting in fewer read/write requests. This is enabled by default and can be disabled by setting the vfs.ffs.dotrimcons sysctl to 0, or adding vfs.ffs.dotrimcons=0 to sysctl.conf.
The pf packet filter can now be used within a jail using vnet.
The bhyve utility is updated to add NVMe device emulation, and it can now also run within a jail.
Various Lua loader improvements, such as detecting a list of installed kernels to boot, and support for module blacklists.

Upgraded components

Clang, LLVM, LLD, LLDB, compiler-rt, and libc++ are updated to 6.0.1.
OpenSSL is updated to 1.1.1a (LTS).
Unbound is updated to 1.8.1.
OpenSSH is updated to 7.8p1.
The vt(4) Terminus BSD Console font is updated to 4.46.
KDE is updated to version 5.12.5.
The NFS version 4.1 server is updated to include pNFS server support.

You can install FreeBSD 12.0 from a bootable ISO image or over the network. Some architectures also support installing from a USB memory stick. To read the entire list of updates in FreeBSD 12.0, check out its release notes.

LibrePCB 0.1.0 released with major changes in library editor and file format
Systems programming with Go in UNIX and Linux
AMD ROCm GPUs now support TensorFlow v1.8, a major milestone for AMD’s deep learning plans


NeuVector upgrades Kubernetes container security with the release of Containerd and CRI-O run-time support

Sugandha Lahoti
11 Dec 2018
2 min read
At the ongoing KubeCon + CloudNativeCon North America 2018, NeuVector has upgraded its line of container network security with the release of containerd and CRI-O run-time support. Attendees of the conference are invited to learn how customers use NeuVector and get 1:1 demos of the platform's new capabilities.

containerd is a Cloud Native Computing Foundation incubating project. It is a container run-time built to emphasize simplicity, robustness, and portability while managing the complete container lifecycle of its host system, from image transfer and storage, to container execution and supervision, to low-level storage, network attachments, and more. NeuVector is testing the containerd version on the latest IBM Cloud Kubernetes Service version, which uses the containerd run-time.

CRI-O is an implementation of the Kubernetes container run-time interface enabling OCI-compatible run-times, and a lightweight alternative to using Docker as the run-time for Kubernetes. CRI-O is made up of several components, including:

an OCI-compatible runtime
containers/storage
containers/image
networking (CNI)
container monitoring (conmon)
security provided by several core Linux capabilities

With this newly added support, organizations using containerd or CRI-O can deploy NeuVector to secure their container environments.

Stripe open sources ‘Skycfg’, a configuration builder for Kubernetes
Kubernetes 1.13 released with new features and fixes to a major security flaw
Introducing ‘Pivotal Function Service’ (alpha): an open, Kubernetes based, multi-cloud serverless framework for developer workloads