
Tech News - Networking


Google AI introduces Snap, a microkernel approach to ‘Host Networking’

Savia Lobo
29 Oct 2019
4 min read
A few days ago, the Google AI team introduced Snap, a microkernel-inspired approach to host networking, at the 27th ACM Symposium on Operating Systems Principles. Snap is a userspace networking system with flexible modules that implement a range of network functions, including edge packet switching, virtualization for Google's cloud platform, traffic shaping policy enforcement, and a high-performance reliable messaging and RDMA-like service. The Google AI team says, "Snap has been running in production for over three years, supporting the extensible communication needs of several large and critical systems."

Why Snap?

Prior to Snap, the Google AI team says, it was limited in its ability to develop and deploy new network functionality and performance optimizations in several ways. First, developing kernel code was slow and drew on a smaller pool of software engineers. Second, feature release through kernel module reloads covered only a subset of functionality and often required disconnecting applications, while the more common case of requiring a machine reboot necessitated draining the machine of running applications.

Unlike prior microkernel systems, Snap benefits from multi-core hardware for fast IPC and does not require the entire system to adopt the approach wholesale, as it runs as a userspace process alongside Google's standard Linux distribution and kernel.

Source: Snap research paper

Using Snap, the Google researchers also created a new communication stack called Pony Express that implements a custom reliable transport and communications API. Pony Express provides significant communication efficiency and latency advantages to Google applications, supporting use cases ranging from web search to storage.
Features of the Snap userspace networking system

Snap's architecture comprises recent ideas in userspace networking, in-service upgrades, centralized resource accounting, programmable packet processing, kernel-bypass RDMA functionality, and optimized co-design of transport, congestion control, and routing. With these, Snap:

Enables a high rate of feature development with a microkernel-inspired approach of developing in userspace with transparent software upgrades. It also retains the centralized resource allocation and management capabilities of monolithic kernels and improves upon accounting gaps in existing Linux-based systems.

Implements a custom kernel packet injection driver and a custom CPU scheduler that enable interoperability without requiring the adoption of new application runtimes, while maintaining high performance across use cases that simultaneously require packet processing through both Snap and the Linux kernel networking stack.

Encapsulates packet processing functions into composable units called "engines", which enables both modular CPU scheduling and incremental, minimally disruptive state transfer during upgrades.

Through Pony Express, provides support for OSI layer 4 and 5 functionality through an interface similar to an RDMA-capable "smart" NIC. This enables transparently leveraging offload capabilities in emerging hardware NICs to further improve server efficiency and throughput.

Delivers 3x better transport processing efficiency than the baseline Linux kernel and supports RDMA-like functionality at speeds of 5M ops/sec/core.

MicroQuanta: Snap's new lightweight kernel scheduling class

To dynamically scale CPU resources, Snap works in conjunction with a new lightweight kernel scheduling class called MicroQuanta.
MicroQuanta provides a flexible way to share cores between latency-sensitive Snap engine tasks and other tasks, limiting the CPU share of latency-sensitive tasks while maintaining low scheduling latency. A MicroQuanta thread runs for a configurable runtime out of every period time units, with the remaining CPU time available to other CFS-scheduled tasks, using a variation of a fair queuing algorithm for high- and low-priority tasks (rather than more traditional fixed time slots). MicroQuanta is a robust way for Snap to get priority on cores runnable by CFS tasks while avoiding starvation of critical per-core kernel threads. While other Linux real-time scheduling classes use both per-CPU tick-based and global high-resolution timers for bandwidth control, MicroQuanta uses only per-CPU high-resolution timers. This allows scalable time-slicing at microsecond granularity.

Snap is being received positively by many in the community.

https://twitter.com/copyconstruct/status/1188514635940421632

To know more about Snap in detail, you can read its complete research paper.

Amazon announces improved VPC networking for AWS Lambda functions
Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
ReactOS 0.4.12 releases with kernel improvements, Intel e1000 NIC driver support, and more
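The runtime/period budgeting that MicroQuanta applies to latency-sensitive tasks can be illustrated with a small sketch. This is purely illustrative Python (not Google's kernel code, and the class and method names are invented for this example): a task may consume up to `runtime` microseconds of CPU out of every `period`, after which it must yield until the next period begins.

```python
# Illustrative sketch of a MicroQuanta-style CPU budget (hypothetical code,
# not Google's implementation): a latency-sensitive task may run for
# `runtime_us` out of every `period_us`; the leftover time goes to
# best-effort (CFS) tasks.

class MicroQuantaBudget:
    def __init__(self, runtime_us, period_us):
        assert runtime_us <= period_us
        self.runtime_us = runtime_us
        self.period_us = period_us
        self.used_us = 0
        self.period_start = 0

    def account(self, now_us, ran_us):
        """Charge `ran_us` of CPU time; return True if the task may keep running."""
        if now_us - self.period_start >= self.period_us:
            # A new period has begun: refresh the budget.
            self.period_start = now_us - (now_us - self.period_start) % self.period_us
            self.used_us = 0
        self.used_us += ran_us
        return self.used_us < self.runtime_us

budget = MicroQuantaBudget(runtime_us=800, period_us=1000)  # 80% CPU share
assert budget.account(now_us=0, ran_us=500)        # 500/800 used: may continue
assert not budget.account(now_us=500, ran_us=300)  # budget exhausted this period
assert budget.account(now_us=1000, ran_us=100)     # new period, budget refreshed
```

The real scheduler enforces this with per-CPU high-resolution timers, which is what enables the microsecond-granularity time-slicing mentioned above.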


Cloudflare finally launches Warp and Warp Plus after a delay of more than five months

Vincy Davis
27 Sep 2019
5 min read
More than five months after announcing Warp, Cloudflare finally made it available to the general public yesterday. With two million people on the waitlist to try Warp, the Cloudflare team says it was harder than they thought to build a next-generation service to secure consumer mobile connections without compromising on speed and power usage. Along with Warp, Cloudflare is also launching Warp Plus.

Warp is a free VPN addition to the 1.1.1.1 DNS resolver app that speeds up mobile data by using the Cloudflare network to resolve DNS queries faster. It also comes with end-to-end encryption and does not require users to install a root certificate that would let Cloudflare observe encrypted internet traffic. It is built around a UDP-based protocol that is optimized for the mobile internet and offers excellent performance and reliability.

Why did Cloudflare delay the Warp release?

A few days before Cloudflare announced Warp on April 1st, Apple released iOS 12.2 with significant changes to its underlying network stack implementation. This made the Warp network unstable, forcing the Cloudflare team to build workarounds into their networking code, which took more time. Cloudflare adds, "We had a version of the WARP app that (kind of) worked on April 1. But, when we started to invite people from outside of Cloudflare to use it, we quickly realized that the mobile Internet around the world was far more wild and varied than we'd anticipated."

As the internet is made up of diverse network components, the Cloudflare team found it difficult to cover all the diversity of mobile carriers, mobile operating systems, and mobile device models in their network. The team also found it challenging to account for users' diverse network settings. Warp uses a technology called Anycast to route user traffic to the Cloudflare network; however, Anycast moves users' data between entire data centers, which made Warp's operation complex.
To overcome these barriers, the Cloudflare team changed its approach by focusing more on iOS. The team also solidified the shared underpinnings of the app to ensure it would keep working through future network stack upgrades, and tested Warp across real-world networks to discover as many corner cases as possible. In the process, the Cloudflare team developed new technologies to keep session state stable even across multiple mobile networks.

Cloudflare introduces Warp Plus - an unlimited version of Warp

Along with Warp, the Cloudflare team has also launched Warp Plus, an unlimited version of Warp available for a monthly subscription fee. Warp Plus is faster than Warp, using Cloudflare's Argo Smart Routing to achieve higher speeds. The official blog post states, "Routing your traffic over our network often costs us more than if we release it directly to the internet." To cover these costs, Warp Plus will charge a monthly fee of $4.99 or less, depending on the user's location. The Cloudflare team also added that in a few weeks they will launch a test tool within the 1.1.1.1 app to let users "see how your device loads a set of popular sites without WARP, with WARP, and with WARP Plus."

Read Also: Cloudflare plans to go public; files S-1 with the SEC

To know more details about Warp Plus, read the technical post by the Cloudflare team.

Privacy features offered by Warp and Warp Plus

The 1.1.1.1 DNS resolver app provides strong privacy protections: debug logs are kept only long enough to ensure the security of the service, and Cloudflare retains only limited transaction data, for legitimate operational and research purposes.
Warp will not only maintain the 1.1.1.1 DNS protection layers but will also ensure that:

User-identifiable log data will not be written to disk
Users' browsing data will not be sold for advertising purposes
Warp will not demand any personal information (name, phone number, or email address) to use Warp or Warp Plus
Outside auditors will regularly audit Warp's operation

The Cloudflare team has also notified users that the newly available Warp will still have bugs. The blog post specifies that the most common bug currently in Warp involves traffic misrouting, which can make Warp slower than non-Warp mobile internet.

Image Source: Cloudflare blog

The team has made it easy for users to report bugs: just click the little bug icon near the top of the screen in the 1.1.1.1 app, or shake the phone with the app open, to send a bug report to Cloudflare. Visit the Cloudflare blog for more information on Warp and Warp Plus.

Facebook will no longer involve third-party fact-checkers to review the political content on their platform
GNOME Foundation's Shotwell photo manager faces a patent infringement lawsuit from Rothschild Patent Imaging
A zero-day pre-auth vulnerability is currently being exploited in vBulletin, reports an anonymous researcher


Istio 1.3 releases with traffic management, improved security, and more!

Amrata Joshi
16 Sep 2019
3 min read
Last week, the team behind Istio, an open-source service mesh platform, announced Istio 1.3. This release makes the service mesh platform easier to use.

What's new in Istio 1.3?

Traffic management

In this release, automatic protocol determination of HTTP or TCP has been added for outbound traffic when ports are not named according to Istio's conventions. The team has added a mode to the Gateway API for mutual TLS operation. The Envoy proxy readiness check has been improved: the readiness probe now checks Envoy's readiness status. The team has improved load balancing to direct traffic to the same region and zone by default. And the Redis load balancer now defaults to Maglev when using the Redis proxy.

Improved security

This release comes with trust domain validation for services that use mutual TLS. By default, the server only authenticates requests from the same trust domain. The team has added SDS (Secret Discovery Service) support for delivering the private keys and certificates to each of the Istio control plane services. The team has also implemented major security policies, including RBAC, directly in Envoy.

Experimental telemetry

In this release, the team has improved the Istio proxy to emit HTTP metrics directly to Prometheus, without needing the istio-telemetry service.

Handles inbound traffic securely

Istio 1.3 secures and handles all inbound traffic on any port without requiring containerPort declarations. The team has also eliminated the infinite loops caused in the iptables rules when workload instances send traffic to themselves.

Enhanced EnvoyFilter API

The team has enhanced the EnvoyFilter API so that users can fully customize HTTP/TCP listeners and their filter chains returned by LDS (Listener Discovery Service), the Envoy HTTP route configuration returned by RDS (Route Discovery Service), and much more.
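The port-naming convention mentioned under traffic management is worth spelling out: historically, Istio inferred the protocol of a Kubernetes Service port from a name prefix such as "http-web" or "grpc-api", and unnamed or unconventionally named ports were treated as plain TCP. The sketch below (a hypothetical helper, not Istio source code) shows the idea; with 1.3, ports whose names don't match a known prefix fall back to automatic protocol detection instead.

```python
# Sketch of Istio's Service port-naming convention (illustrative helper,
# not Istio source): the protocol is the prefix of the port name before
# the first dash, e.g. "http-web" -> "http". Names with no recognized
# prefix return None, which in Istio 1.3 means "auto-detect the protocol".

KNOWN_PROTOCOLS = {"http", "http2", "https", "grpc", "tcp", "tls",
                   "udp", "mongo", "redis", "mysql"}

def protocol_from_port_name(name):
    """Return the protocol implied by a port name, or None for auto-detection."""
    if not name:
        return None
    prefix = name.split("-", 1)[0].lower()
    return prefix if prefix in KNOWN_PROTOCOLS else None

assert protocol_from_port_name("http-web") == "http"
assert protocol_from_port_name("grpc") == "grpc"
assert protocol_from_port_name("frontend") is None  # falls back to detection in 1.3
```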
Improved control plane monitoring

The team has enhanced control plane monitoring by adding new metrics to monitor configuration state, metrics for the sidecar injector, and a new Grafana dashboard for Citadel. Users all over seem to be excited about this release.

https://twitter.com/HamzaZ21823474/status/1172235176438575105
https://twitter.com/vijaykodam/status/1172237003506798594

To know more about this news, check out the release notes.

Other interesting news in Cloud & networking

StackRox App integrates into the Sumo Logic Dashboard for improved Kubernetes security
The Continuous Intelligence report by Sumo Logic highlights the rise of Multi-Cloud adoption and open source technologies like Kubernetes
Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies


Kong announces Kuma, an open-source project to overcome the limitations of first-generation service mesh technologies

Amrata Joshi
10 Sep 2019
3 min read
Today, the team at Kong, creators of the API and service lifecycle management platform for modern architectures, announced the release of Kuma, a new open-source project. Kuma is based on the open-source Envoy proxy and addresses the limitations of first-generation service mesh technologies by seamlessly managing services on the network. The first-generation meshes lacked a mature control plane, and when they later gained one, it was hard to deploy and difficult to use. Kuma is easy to use and enables rapid adoption of mesh.

Also Read: Kong CTO Marco Palladino on how the platform is paving the way for microservices adoption [Interview]

Features of Kuma

Runs on all platforms

Kuma can run on any platform, including Kubernetes, containers, virtual machines, and legacy environments. It also includes a fast data plane as well as an advanced control plane that makes it easier to use.

It is reliable

The initial service mesh solutions were inflexible and difficult to use. Kuma ensures reliability by automating the process of securing the underlying network.

Support for all environments

Kuma supports all the environments in an organization, so existing applications can still be used in their traditional environments. This provides comprehensive coverage across an organization.

Couples a fast data plane with a control plane

Kuma couples a fast data plane with a control plane that helps users set permissions, define routing rules, and expose metrics with just a few commands.

Tracing and logging

Kuma helps users implement tracing and logging and analyze metrics for rapid debugging.

Routing and control

Kuma provides traffic control capabilities, including circuit breakers and health checks, to enhance L4 (Layer 4) routing.
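The circuit breakers mentioned under routing and control follow a well-known pattern, which the sketch below illustrates in miniature (this is a generic concept sketch, not Kuma's implementation, and the class and parameter names are invented for this example): after a number of consecutive failures the circuit "opens" and requests are rejected; after a cool-down window a trial request is allowed through.

```python
# Concept sketch of a circuit breaker, the kind of L4 traffic-control
# policy a mesh like Kuma can apply (illustrative only, not Kuma's code):
# after `max_failures` consecutive failures the circuit opens and calls
# are rejected until `reset_after` seconds pass, then a trial call is allowed.

import time

class CircuitBreaker:
    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock          # injectable clock, handy for testing
        self.failures = 0
        self.opened_at = None

    def allow(self):
        if self.opened_at is None:
            return True
        if self.clock() - self.opened_at >= self.reset_after:
            return True             # half-open: permit one trial request
        return False

    def record(self, success):
        if success:
            self.failures = 0
            self.opened_at = None   # close the circuit again
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()

t = [0.0]
cb = CircuitBreaker(max_failures=2, reset_after=10.0, clock=lambda: t[0])
cb.record(False); cb.record(False)  # two failures -> circuit opens
assert not cb.allow()
t[0] = 11.0                         # after the reset window
assert cb.allow()                   # half-open: a trial call may pass
```

In a mesh, this logic runs in the data plane proxy (Envoy, in Kuma's case), so applications get the protection without code changes.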
Marco Palladino, CTO and co-founder of Kong, said, “We now have more microservices talking to each other and connectivity between them is the most unreliable piece: prone to failures, insecure and hard to observe.”  Palladino further added, “It was important for us to make Kuma very easy to get started with on both Kubernetes and VM environments, so developers can start using service mesh immediately even if their organization hasn’t fully moved to Kubernetes yet, providing a smooth path to containerized applications and to Kubernetes itself. We are thrilled to be open-sourcing Kuma and extending the adoption of Envoy, and we will continue to contribute back to the Envoy project like we have done in the past. Just as Kong transformed and modernized API Gateways with open-source Kong, we are now doing that for service mesh with Kuma.” The Kuma platform will be on display during the second annual Kong Summit, which is to be held on October 2-3, 2019. Other interesting news in Cloud and Networking  Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more! The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models


Containous introduces Maesh, a lightweight and simple Service Mesh to ease microservices adoption

Savia Lobo
05 Sep 2019
2 min read
Yesterday, Containous, a cloud-native networking company, announced Maesh, a lightweight and simple service mesh. Maesh is aimed at making service-to-service communication simpler for developers building modern, cloud-native applications. It is easy to use and fully featured, helping developers connect, secure, and monitor traffic to and from their microservices-based applications. Maesh also supports the latest Service Mesh Interface specification (SMI), a standard specification for service mesh interoperability in Kubernetes.

Maesh helps developers adopt microservices, improving the service mesh experience by offering an easy way to connect, secure, and monitor network traffic in any Kubernetes environment. It helps developers optimize internal traffic, visualize traffic patterns, and secure communication channels, all while improving application performance.

Also Read: Red Hat announces the general availability of Red Hat OpenShift Service Mesh

Maesh is designed to be completely non-invasive, allowing development teams across the organization to "opt in" applications incrementally over time. It is backed by Traefik's rich feature set, providing OpenTracing; load balancing for HTTP, gRPC, WebSocket, and TCP; rich routing rules; retries and fail-overs; not to mention access controls, rate limits, and circuit breakers. Maesh can run in both TCP and HTTP mode. "In HTTP mode, Maesh leverages Traefik's feature set to enable rich routing on virtual-host, path, headers, cookies. Using TCP mode allows seamless and easy integration with SNI routing support," the Containous team reports. It also enables critical features across any Kubernetes environment, including observability, multi-protocol support, traffic management, security, and safety.
Also Read: Mapbox introduces MARTINI, a client-side terrain mesh generation code

In an email statement to us, Emile Vauge, CEO of Containous, said, "With Maesh, Containous continues to innovate with the mission to drastically simplify cloud-native adoption for all enterprises. We've been proud of how popular Traefik has been for developers as a critical open source solution, and we're excited to now bring them Maesh."

https://twitter.com/resouer/status/1169310994490748928

To know more about Maesh in detail, read Containous' Medium blog post.

Other interesting news in Networking

Amazon announces improved VPC networking for AWS Lambda functions
Pivotal open sources kpack, a Kubernetes-native image build service
Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more


Amazon announces improved VPC networking for AWS Lambda functions

Amrata Joshi
04 Sep 2019
3 min read
Yesterday, the team at Amazon announced improved VPC (Virtual Private Cloud) networking for AWS Lambda functions, a major improvement to how AWS Lambda functions work with Amazon VPC networks.

If a Lambda function is not configured to connect to your VPCs, the function can access anything available on the public internet, including other AWS services, HTTPS endpoints for APIs, or endpoints and services outside AWS; it has no way to connect to private resources inside your VPC. When a Lambda function is configured to connect to your own VPC, it creates an elastic network interface within your VPC and then does a cross-account attachment.

Image Source: Amazon

These Lambda functions run inside the Lambda service's VPC but can access resources over the network only through your VPC. Even so, the user still won't have direct network access to the execution environment where the functions run.

What has changed in the new model?

AWS Hyperplane for providing NAT (Network Address Translation) capabilities

The team is using AWS Hyperplane, the network function virtualization platform used for Network Load Balancer and NAT Gateway, which has also supported inter-VPC connectivity for AWS PrivateLink. With the help of Hyperplane, the team provides NAT capabilities from the Lambda VPC to customer VPCs.

Network interfaces within your VPC are mapped to the Hyperplane ENI

The Hyperplane ENI (Elastic Network Interface), a network resource controlled by the Lambda service, allows multiple execution environments to securely access resources within the VPCs in your account. In the previous model, the network interfaces in your VPC were directly mapped to Lambda execution environments; now, the network interfaces within your VPC are mapped to the Hyperplane ENI.

Image Source: Amazon

How is Hyperplane useful?
To reduce latency

When a function is invoked, the execution environment now uses the pre-created network interface and establishes a network tunnel to it, which reduces latency.

To reuse a network interface across functions

Each unique security group:subnet combination across functions in your account needs a distinct network interface. If such a combination is shared across multiple functions in your account, it is now possible to reuse the same network interface across those functions.

What remains unchanged?

AWS Lambda functions still need IAM permissions to create and delete network interfaces in your VPC. Users can still control the subnet and security group configurations of the network interfaces. Users still need a NAT device (for example, a VPC NAT Gateway) to give a function internet access, or VPC endpoints to connect to services outside their VPC. The types of resources that your functions can access within the VPCs also remain the same.

The official post reads, "These changes in how we connect with your VPCs improve the performance and scale for your Lambda functions. They enable you to harness the full power of serverless architectures."

To know more about this news, check out the official post.

What's new in cloud & networking this week?

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
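The ENI-reuse rule described for the new Lambda VPC model can be captured in a small model. This is an illustrative sketch (hypothetical helper, not AWS code): each unique combination of security groups and subnet gets one shared Hyperplane ENI, and every function with the same combination reuses that interface instead of getting its own.

```python
# Illustrative model (not AWS code) of Hyperplane ENI reuse: one ENI per
# unique (security groups, subnet) combination; functions sharing the
# combination share the interface.

def assign_enis(functions):
    """functions: list of (name, security_groups, subnet). Returns name -> ENI id."""
    eni_by_combo = {}
    assignment = {}
    for name, security_groups, subnet in functions:
        combo = (frozenset(security_groups), subnet)  # order of SGs is irrelevant
        if combo not in eni_by_combo:
            eni_by_combo[combo] = f"eni-{len(eni_by_combo)}"
        assignment[name] = eni_by_combo[combo]
    return assignment

enis = assign_enis([
    ("checkout", {"sg-a"}, "subnet-1"),
    ("billing",  {"sg-a"}, "subnet-1"),   # same combo -> same shared ENI
    ("reports",  {"sg-b"}, "subnet-1"),   # different SG -> its own ENI
])
assert enis["checkout"] == enis["billing"]
assert enis["checkout"] != enis["reports"]
```

This is why the change matters at scale: the ENI count grows with distinct network configurations, not with the number of functions or execution environments.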

Kubernetes releases etcd v3.4 with better backend storage, improved raft voting process, new raft non-voting member and more

Fatema Patrawala
02 Sep 2019
5 min read
Last Friday, the Kubernetes team announced the release of etcd v3.4. etcd v3.4 focuses on stability, performance, and ease of operation. It includes features like pre-vote and non-voting members, and improvements to the storage backend and client balancer.

Key features and improvements in etcd v3.4

Better backend storage

etcd v3.4 includes a number of performance improvements for large-scale Kubernetes workloads. In particular, etcd experienced performance issues with a large number of concurrent read transactions even when there was no write (e.g. "read-only range request ... took too long to execute"). Previously, the storage backend's commit operation on pending writes blocked incoming read transactions, even when there was no pending write. Now, the commit does not block reads, which improves long-running read transaction performance.

The team has further made backend read transactions fully concurrent. Previously, ongoing long-running read transactions blocked writes and upcoming reads. With this change, write throughput is increased by 70% and P99 write latency is reduced by 90% in the presence of long-running reads. The team also ran the Kubernetes 5000-node scalability test on GCE with this change and observed similar improvements.

Improved raft voting process

The etcd server implements the Raft consensus algorithm for data replication. Raft is a leader-based protocol: data is replicated from the leader to followers; a follower forwards proposals to the leader, and the leader decides what to commit. The leader persists and replicates an entry once it has been agreed upon by a quorum of the cluster. The cluster members elect a single leader, and all other members become followers. The elected leader periodically sends heartbeats to its followers to maintain its leadership, and expects responses from each follower to keep track of its progress.
In its simplest form, a Raft leader steps down to a follower when it receives a message with a higher term, without any further cluster-wide health checks. This behavior can affect the overall cluster availability. For instance, a flaky (or rejoining) member drops in and out and starts a campaign. This member ends up with higher terms, ignores all incoming messages with lower terms, and sends out messages with higher terms. When the leader receives such a message with a higher term, it reverts to follower. This becomes more disruptive when there is a network partition: whenever the partitioned node regains its connectivity, it can trigger a leader re-election.

To address this issue, etcd Raft introduces a new node state, pre-candidate, with the pre-vote feature. The pre-candidate first asks other servers whether it is up-to-date enough to get votes. Only if it can get votes from a majority does it increment its term and start an election. This extra phase improves the robustness of leader election in general, and helps the leader remain stable as long as it maintains connectivity with a quorum of its peers.

Introducing a new raft non-voting member, "Learner"

The challenge with membership reconfiguration is that it often leads to quorum size changes, which are prone to cluster unavailability. Even if it does not alter the quorum, clusters with membership changes are more likely to experience other underlying problems. To address these failure modes, etcd introduced a new node state, "Learner", which joins the cluster as a non-voting member until it catches up with the leader's logs. This means the learner still receives all updates from the leader, while it does not count towards the quorum, which the leader uses to evaluate peer activeness. The learner serves only as a standby node until promoted. This relaxed quorum requirement provides better availability during membership reconfiguration and improved operational safety.
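The pre-vote phase described above can be sketched in a few lines. This is a simplified illustration (not etcd's Go implementation; the function and field names are invented for this example): a would-be candidate first polls its peers, and only increments its term and campaigns if a majority considers its log current, so a stale rejoining node no longer inflates terms and forces the leader out.

```python
# Simplified sketch of Raft's pre-vote phase (illustrative, not etcd's code):
# a node asks peers whether its log is up-to-date before incrementing its
# term and starting a real election.

def start_campaign(node, peers):
    """node/peers: dicts with 'term' and 'log_index'. Returns the action taken."""
    pre_votes = 1  # the node pre-votes for itself
    for peer in peers:
        # A peer grants a pre-vote only if the candidate's log is at least
        # as current as its own.
        if node["log_index"] >= peer["log_index"]:
            pre_votes += 1
    if pre_votes * 2 > len(peers) + 1:   # majority of the full cluster
        node["term"] += 1                # safe to start a real election
        return "election"
    return "stay_follower"               # term unchanged, leader undisturbed

stale = {"term": 7, "log_index": 3}      # rejoining node that is behind on log
peers = [{"term": 5, "log_index": 10}, {"term": 5, "log_index": 10}]
assert start_campaign(stale, peers) == "stay_follower"
assert stale["term"] == 7                # its term did not inflate
```

A real implementation also compares log terms and handles unreachable peers, but the stabilizing effect is the same: disruptive elections are filtered out before any term is bumped.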
Improvements to client balancer failover logic

etcd is designed to tolerate various system and network faults. By design, even if one node goes down, the cluster "appears" to be working normally by providing one logical cluster view of multiple servers. But this does not guarantee the liveness of the client. Thus, the etcd client has implemented a set of intricate protocols to guarantee its correctness and high availability under faulty conditions.

Historically, the etcd client balancer relied heavily on the old gRPC interface: every gRPC dependency upgrade broke client behavior. A majority of development and debugging effort was devoted to fixing those client behavior changes. As a result, the implementation became overly complicated, with bad assumptions about server connectivity. The primary goal in this release was to simplify the balancer failover logic in the etcd v3.4 client: instead of maintaining a list of unhealthy endpoints, the client simply moves on to the next endpoint whenever it gets disconnected from the current one.

To know more about this release, check out the Changelog page on GitHub.

What's new in cloud and networking this week?

VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!
The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
Pivotal open sources kpack, a Kubernetes-native image build service
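The simplified failover logic described for the v3.4 client balancer amounts to plain round-robin on disconnect. The sketch below is illustrative Python (the real client is written in Go, and this class is invented for the example): rather than bookkeeping which endpoints are unhealthy, the balancer just advances to the next endpoint whenever the current connection drops.

```python
# Sketch of the simplified v3.4-style failover logic (illustrative, not the
# actual etcd Go client): no unhealthy-endpoint list, just round-robin to
# the next endpoint on every disconnect.

class RoundRobinBalancer:
    def __init__(self, endpoints):
        self.endpoints = list(endpoints)
        self.index = 0

    def current(self):
        return self.endpoints[self.index]

    def on_disconnect(self):
        """Advance to the next endpoint; no health bookkeeping at all."""
        self.index = (self.index + 1) % len(self.endpoints)
        return self.current()

b = RoundRobinBalancer(["etcd-1:2379", "etcd-2:2379", "etcd-3:2379"])
assert b.current() == "etcd-1:2379"
assert b.on_disconnect() == "etcd-2:2379"
assert b.on_disconnect() == "etcd-3:2379"
assert b.on_disconnect() == "etcd-1:2379"   # wraps around
```

The appeal of this design is that it has far fewer states to get wrong: a persistently failing endpoint simply costs one failed attempt per cycle, instead of requiring a correctness-critical health list that must be kept in sync with gRPC's connectivity model.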


VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi cloud platforms and more!

Fatema Patrawala
30 Aug 2019
7 min read
VMware kicked off VMworld 2019 US in San Francisco last week, on 25th August, and the event ended yesterday with a series of updates spanning Kubernetes, Azure, security, and more. This year's theme, "Make Your Mark", was aimed at empowering VMworld 2019 attendees to learn, connect, and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for VMworld 2019.

VMware CEO Pat Gelsinger took the stage and articulated VMware's commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's "any cloud, any application, any device, with intrinsic security" strategy. "VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device," said Pat Gelsinger, chief executive officer, VMware. "We are passionate about our ability to drive positive global impact across our people, products and the planet."

Let us take a look at the key highlights of the show.

VMworld 2019: CEO's take on shaping tech as a force for good

The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements, and the need for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding, "VMware puts tremendous energy into shaping tech as a force for good."

Gelsinger cited three benefits of technology that ended up opening a Pandora's box: free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis of misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.
This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.

Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Gelsinger's 2018 keynote was built around the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge is developing. Whether it is a thin edge, containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."

VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes

VMware is moving away from virtual machines towards containerized applications. On the product side, VMware Tanzu was introduced: a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, 'tanzu' means the growing branch of a tree, and in Japanese, 'tansu' refers to a modular form of cabinetry. For VMware, Tanzu is a growing portfolio of solutions that help build, run and manage modern apps. Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes-native platform. "With project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to the Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.
Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on the cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.

Collaborating, not competing, with major cloud providers - Google Cloud, AWS & Microsoft Azure

VMware's extended partnership with Google Cloud, announced earlier this month, led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction (Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon) and the company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware on Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres; by the first quarter of 2020, the plan is to make it available across nine global regions.

Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

The company's decision not to compete, but to collaborate, with the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly mean, but his explanation was interesting. With organisations increasingly opting for different clouds for different workloads, and with changing environments, Gelsinger described a frequent pain point for customers nearer the start of their journeys: do they migrate their applications, or do they modernise? Increasingly, customers want both, the hybrid option. "We believe we have a unique opportunity for both of these," he said.
"Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade."

Collaboration with NVIDIA to accelerate GPU services on AWS

NVIDIA and VMware announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by NVIDIA T4 GPUs, and the new NVIDIA Virtual Compute Server (vComputeServer) software.

"From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line," said Jensen Huang, founder and CEO, NVIDIA. "Together with VMware, we're designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation."

Read also: NVIDIA's latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Apart from this, Gelsinger made special note of VMware's most recent acquisitions, Pivotal and Carbon Black, and discussed where they fit in the VMware stack.
VMware's hybrid cloud platform for Next-gen Hybrid IT

VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. VMware empowers IT operators, developers, desktop administrators, and security professionals with the company's hybrid cloud platform to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice.

VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds (AWS, Azure, Google Cloud, IBM Cloud) and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware. Of these, 10 million are in the cloud, running in more than 10,000 data centers run by VMware Cloud providers.

Take a look at the full list of VMworld 2019 announcements here.

What's new in cloud and virtualization this week?

VMware signs definitive agreement to acquire Pivotal Software and Carbon Black

Pivotal open sources kpack, a Kubernetes-native image build service

Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Nmap 7.80 releases with a new Npcap Windows packet capture driver and other 80+ improvements!

Vincy Davis
14 Aug 2019
3 min read
On August 10, Gordon Lyon, the creator of Nmap, announced the release of Nmap 7.80 during the recently concluded DEF CON 2019 in Las Vegas. This is a major release of Nmap, containing 80+ enhancements, and is the first stable release in over a year. The major highlight of this release is the newly built Npcap Windows packet capture library, which uses modern APIs and offers better performance, more features, and improved security.

What's new in Nmap 7.80?

Npcap Windows packet capture driver: Npcap is based on the discontinued WinPcap library, but with improved speed, portability, and efficiency. It implements the libpcap API, which gives Windows applications the same portable packet capturing interface supported on Linux and macOS. Npcap can optionally be restricted to only allow administrators to sniff packets, providing increased security.

11 new NSE scripts added: Scripts from 8 authors have been added, taking the total number of NSE scripts to 598. The new scripts are focused on HID devices, Jenkins servers, HTTP servers, Logical Units (LUs) of TN3270E servers, and more.

pcap_live_open replaced with pcap_create: pcap_create solves the packet loss problems on Linux and brings performance improvements on other platforms.

rand.lua library: The new rand.lua library uses the best sources of randomness available on the system to generate random strings.

oops.lua library: This new library helps in easily reporting errors, including plenty of debugging details.

TLS support added: TLS support has been added to rdp-enum-encryption, which enables negotiation of protocol versions against servers that require TLS.

New service probe and match lines: New service probe and match lines have been added for adb (the Android Debug Bridge), which can enable remote code execution.

Two new common error strings: Two new common error strings have been added to improve MySQL detection by the script http-sql-injection.

New script-arg http.host: It allows users to force a particular value for the Host header in all HTTP requests.

Users love the new improvements in Nmap 7.80.

https://twitter.com/ExtremePaperC/status/1160388567098515456 https://twitter.com/Jiab77/status/1160555015041363968 https://twitter.com/h4knet/status/1161367177708093442

For the full list of changes in Nmap 7.80, head over to the Nmap announcement.

Amazon adds UDP load balancing support for Network Load Balancer

Brute forcing HTTP applications and web applications using Nmap [Tutorial]

Discovering network hosts with 'TCP SYN' and 'TCP ACK' ping scans in Nmap [Tutorial]

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won't Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump's most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

In June last year, an alliance of more than 500 Amazon employees had signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking the company to abandon its contracts with government agencies. It seems that those protests are ramping up again.

The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon's cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees have also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments, along with eleven other proposals made by employees, including a climate resolution and salary transparency. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better," the email read.

The protests broke out at Amazon's AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon's ties with ICE.
https://twitter.com/altochulo/status/1149305189800775680 https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. "I'm more than willing to have a conversation, but maybe they should let me finish first," Vogels said amidst the protesters, whose audio was cut off on Amazon's official livestream of the event, per ZDNet. "We'll all get our voices heard," he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants' employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others, and that working with such a company is harmful to Amazon's reputation. The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested to push their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company has argued that the government should determine what constitutes "acceptable use" of the technology it sells. "As we've said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully," Amazon said to BuzzFeed News.
"There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we've provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions."

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with the Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210 https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds

Amazon workers protest on its Prime day, demand a safe work environment and fair wages

Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Slack was down for an hour yesterday, causing disruption during work hours

Fatema Patrawala
30 Jul 2019
2 min read
Yesterday, Slack reported an outage that started at 7:23 a.m. PDT and was fully resolved at 8:48 a.m. PDT. The Slack status page said that some people had issues sending messages while others couldn't access their channels at all. Slack said it was fully up and running again about an hour after the issues emerged.

https://twitter.com/SlackStatus/status/1155869112406437889

According to Business Insider, more than 2,000 users reported issues with Slack via Downdetector. Employees around the globe rely on Slack to communicate, organize tasks and share information. Downdetector's live outage map showed a concentration of reports in the United States and a few in Europe and Japan. Slack has not yet shared the cause of the disruption on its status page. Slack suffered a similar outage last month, which was caused by server unavailability.

Users took to Twitter, sending funny memes and gifs about how much they depend on Slack to communicate.

https://twitter.com/slabodnick/status/1155858811518930946 https://twitter.com/gbhorwood/status/1155864432527867905 https://twitter.com/envyvenus/status/1155857852625555456 https://twitter.com/nhetmalaluan/status/1155863456991436800

On Hacker News, users were annoyed and said that such issues have become quite common. One user commented, "This is becoming so often it's embarrassing really. The way it's handled in the app is also not ideal to say the least - only indication that something is wrong is that the text you are trying to send is greyed out."

Why did Slack suffer an outage on Friday?

How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Twitter experienced major outage yesterday due to an internal configuration issue

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday, Twitter went down across major parts of the world, including the US and the UK. Twitter users reported being unable to access the platform on web and mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone or iPad app, and on Android devices. The majority of the problems reported were website issues (51%), while nearly 30% came from iPhone and iPad app usage and another 18% from Android users, as per the outage report.

Twitter acknowledged that the platform was experiencing issues on its status page shortly after the first outages were reported online. The company listed the status as "investigating" and noted a service disruption was causing the seemingly global issue. "We are currently investigating issues people are having accessing Twitter," the statement read. "We will keep you updated on what's happening."

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. Reddit experienced outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. In March, Facebook and its family of apps experienced a 14-hour-long outage attributed to a server configuration change.

The Twitter site began operating normally nearly an hour later, at approximately 3:45pm EST.
Users on Twitter joked that they were "all censored for the last hour" when the site eventually came back up. On the status page of the outage report, Twitter said the outage was caused by "an internal configuration change, which we're now fixing." "Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible," the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of recent outages at major tech companies and why this might be happening. One of the comments reads, "Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber. Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?"

To this comment another user replied, "I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems.
Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue."

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files

Facebook family of apps hits 14 hours outage, longest in its history

How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others

ISPA nominated Mozilla in the “Internet Villain” category for DNS over HTTPs push, withdrew nominations and category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers' Association (ISPA), the UK's trade association for providers of internet services, announced that its nomination of Mozilla Firefox had been withdrawn from the "Internet Villain" category. The decision came after a global backlash to the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. The ISPA withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, "Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn't reflect ISPA's genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year."

Mozilla Firefox, the preferred browser for a lot of users, encourages privacy protection and offers features to keep one's internet activity as private as possible. One recently proposed feature, DoH (DNS-over-HTTPS), which is still in the testing phase, didn't receive a good response from the ISPA. Hence, the ISPA decided to nominate Mozilla as one of its "Internet Villains" for 2019, stating in its announcement that the nomination was for Mozilla's support of DoH.
https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded by saying that this is one way to know that they are fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

The announcement, on the other hand, garnered a lot of criticism from the community, which rebuked the ISPA for promoting online censorship and enabling rampant surveillance; some commenters suggested the ISPA was the real Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896 https://twitter.com/gon_dla/status/1147158886060908544 https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared on the nominations list. Here's how the ISPA explained it in their announcement:

"Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.

Article 13 Copyright Directive – for threatening freedom of expression online by requiring 'content recognition technologies' across platforms

President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security"

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted inside an HTTPS connection. Traditionally, DNS requests are sent unencrypted, so your DNS provider or ISP can monitor or control your browsing activity, and blocking or content filtering can easily be enforced through the DNS provider or the ISP. DoH takes that out of the equation, and hence you get a more private browsing experience.
Admittedly, big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests using the common HTTPS protocol for websites (DNS being the system that turns human-readable domain names into IP addresses), could disrupt their ability to censor, track and control related internet services. That is, however, a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result, DoH is often praised and widely supported by the wider internet community.

Mozilla is not alone in pushing DoH, but it found itself singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose a lot of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP, which puts a stop to this form of internet censorship. On the other side, if you are a parent, you can no longer set DNS-based content filters if your kid uses DoH in Mozilla Firefox, so DoH could potentially be a way to bypass parental controls, which could be a bad thing. This is precisely the reason the ISPA gave for nominating Mozilla for the Internet Villain category: it says that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means that you can no longer use the local hosts file, in case you are using it for ad blocking or any other reason.

The internet community criticized the way the ISPA handled the backlash and withdrew the category as a whole.
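The mechanics behind this debate can be sketched in a few lines of Python. RFC 8484, the DoH specification, wraps an ordinary binary DNS query inside an HTTPS request; in the GET form, the query packet is base64url-encoded (with padding stripped) into a `dns=` parameter. The sketch below builds such a URL with only the standard library; the resolver URL is a placeholder, not a real endpoint:

```python
import base64
import struct

def build_doh_url(domain, resolver="https://dns.example/dns-query"):
    # Minimal DNS query packet (RFC 1035): 12-byte header, then one question.
    # ID=0, flags=0x0100 (recursion desired), QDCOUNT=1.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label prefixed by its length, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in domain.split(".")
    ) + b"\x00"
    question = qname + struct.pack("!HH", 1, 1)  # QTYPE=A, QCLASS=IN
    packet = header + question
    # RFC 8484 GET form: base64url-encode the packet, strip the "=" padding.
    encoded = base64.urlsafe_b64encode(packet).rstrip(b"=").decode()
    return f"{resolver}?dns={encoded}"

print(build_doh_url("example.com"))
```

Fetching that URL over HTTPS (with an `Accept: application/dns-message` header) returns a binary DNS response that only the resolver can read; an ISP observing the connection sees ordinary TLS traffic to the resolver rather than a plaintext DNS lookup, which is exactly why DNS-based filtering stops working.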
One of the comments on Hacker News read, "You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash."

https://twitter.com/VModifiedMind/status/1148682124263866368

Highlights from Mary Meeker's 2019 Internet trends report

How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others

Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher

Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world as an issue was reported with Cloud Networking and Load Balancing within us-east1. The disruption was traced to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1.

At 10:25 am PT yesterday, the status was updated: "Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time." It was later posted on the status dashboard that mitigation work was underway to address the issue. The rate of errors was decreasing at the time, but some users still faced elevated latency.

Around 4:05 pm PT, the status was updated again: "The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow."

This outage seems to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite and Gmail. According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working towards resolving it.
They have mostly removed all Google.com traffic out of the region to prioritize GCP customers. A Google employee commented on the Hacker News thread, "I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic."

Google Cloud users are anxious about the outage and are waiting for services to be restored.

https://twitter.com/IanFortier/status/1146079092229529600 https://twitter.com/beckynagel/status/1146133614100221952 https://twitter.com/SeaWolff/status/1146116320926359552

Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as it hosts its services there.

https://twitter.com/ritikoL/status/1146121314387857408

As of now there is no further update from Google on whether the outage is resolved, but a full resolution is expected within the next 24 hours. Check this space for updates.

Google Calendar was down for nearly three hours after a major outage

Do Google Ads secretly track Stack Overflow users?

Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard

Amrata Joshi
28 Jun 2019
3 min read

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing, and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. The major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn, and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premises. They support machine learning frameworks such as PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google's ML containers don't support Apache MXNet, but they come with PyTorch, TensorFlow, scikit-learn, and R pre-installed.

Features various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that ship with the tools needed to run deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations, and text. Google Kubernetes Engine clusters, used for orchestrating multiple container deployments, are also among the tools provided. The containers also come with access to packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images now work on cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm.
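As a sketch of how these images can be used locally, one of the published containers can be pulled and run with Docker. The image path below follows the `gcr.io/deeplearning-platform-release` naming Google used at launch, but treat the exact image name, tag, and port as assumptions to verify against the documentation:

```shell
# Pull a CPU TensorFlow Deep Learning Container image
# (image name assumed from the gcr.io/deeplearning-platform-release repository).
docker pull gcr.io/deeplearning-platform-release/tf-cpu

# Run it locally, exposing the container's preconfigured JupyterLab server
# on port 8080 and mounting a local notebook directory into the container.
docker run -d -p 8080:8080 \
    -v /path/to/local/notebooks:/home \
    gcr.io/deeplearning-platform-release/tf-cpu

# JupyterLab should then be reachable at http://localhost:8080.
```

Because the same image can be pushed to a container registry and deployed to GKE or AI Platform, this is what gives the consistent environment across local prototyping and cloud runtimes described above.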
Mike Cheng, software engineer at Google Cloud, said in a blog post, "If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime." He further added, "Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE)."

For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"