
Tech News - Cloud & Networking

376 Articles

Introducing kdevops, a modern DevOps framework for Linux kernel development

Fatema Patrawala
20 Aug 2019
3 min read
Last Friday, Luis Chamberlain announced the release of kdevops, a DevOps framework for Linux kernel development. Chamberlain wrote in his email, “the goal behind this project is to provide a modern devops framework for Linux kernel development. It is not a test suite, it is designed to use any test suites, and more importantly, it allows us to let us easily set up test environments in a jiffie. It supports different virtualization environments, and different cloud environments, and supports different Operating Systems.” kdevops is a sample framework which lets you easily set up a testing environment for a number of different use cases.

How does kdevops work?

kdevops relies on Vagrant, Terraform and Ansible to get you going with your virtualization/bare metal/cloud provisioning environment. It relies heavily on public Ansible Galaxy roles and Terraform modules. This lets the kdevops team share code with the community and use the project as a demo framework for these Ansible roles and Terraform modules.

There are three parts to the long-term ideals for kdevops:

- Provisioning the required virtual hosts/cloud environment
- Provisioning your requirements
- Running whatever you want

Ansible is used to fetch all the required Ansible roles. Then Vagrant or Terraform can be used to provision hosts. Vagrant makes use of two Ansible roles: one updates ~/.ssh/config, and the other updates the systems with basic development preference files, things like .gitconfig or bashrc hacks (this last part is handled by the devconfig Ansible role). Since ~/.ssh/config is updated, you can then run further Ansible roles manually when using Vagrant. If using Terraform for cloud environments, it updates ~/.ssh/config directly without Ansible; however, since access to hosts on cloud environments can vary in time, running all Ansible roles is expected to be done manually.

What you can do with kdevops:

- Full Vagrant provisioning, including updating your ~/.ssh/config
- Terraform provisioning on different cloud providers
- Running Ansible to install dependencies on Debian
- Using Ansible to clone, compile and boot into any random kernel git tree with a supplied config
- Updating ~/.ssh/config for Terraform, first tested with the OpenStack provider, with both generic and special minicloud support. Other Terraform providers just require making use of the newly published Terraform module add-host-ssh-config
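To make the provisioning flow described above concrete, here is a minimal sketch of how such a run could be scripted. The command sequence mirrors the article (fetch the Galaxy roles, bring up hosts with Vagrant or Terraform, then run playbooks manually); the file names requirements.yml, hosts, and site.yml are conventional placeholders, not kdevops' actual interface.

```python
"""Hypothetical sketch of the provisioning flow described above.
Command names are the standard Ansible/Vagrant/Terraform CLIs;
the file names are placeholder conventions, not kdevops' own."""
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Pull the public Ansible Galaxy roles the framework relies on.
run(["ansible-galaxy", "install", "-r", "requirements.yml"])

# 2. Provision hosts: Vagrant for local virtualization ...
run(["vagrant", "up"])
# ... or Terraform for a cloud provider:
# run(["terraform", "init"]); run(["terraform", "apply", "-auto-approve"])

# 3. With ~/.ssh/config updated by the Vagrant roles, further
#    playbooks can be run manually against the new hosts.
run(["ansible-playbook", "-i", "hosts", "site.yml"])
```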
On Hacker News, this release has gained positive reviews, but the one concern among users is whether it has anything to do with DevOps, as it appears to be automated test environment provisioning. One of them comments, “This looks cool, but I'm not sure what it has to do with devops? It just seems to be automated test environment provisioning, am I missing something?” On Reddit as well, Linux users are happy with this setup and find it really promising; one of the comments reads, “I have so much hacky scriptwork around kvm, have always been looking for a cleaner setup; this looks super promising. thank you.”

To know more about this release, check out the official announcement page as well as the GitHub page.

Why do IT teams need to transition from DevOps to DevSecOps?
Is DevOps really that different from Agile? No, says Viktor Farcic [Podcast]
Azure DevOps report: How a bug caused ‘sqlite3 for Python’ to go missing from Linux images


Nmap 7.80 releases with a new Npcap Windows packet capture driver and other 80+ improvements!

Vincy Davis
14 Aug 2019
3 min read
On August 10, Gordon Lyon, the creator of Nmap, announced the release of Nmap 7.80 during the recently concluded DefCon 2019 in Las Vegas. This is a major release of Nmap: it contains 80+ enhancements and is the first stable release in over a year. The major highlight of this release is the newly built Npcap Windows packet capturing library. Npcap uses modern APIs, delivers better performance and features, and is more secure.

What’s new in Nmap 7.80?

- Npcap Windows packet capture driver: Npcap is based on the discontinued WinPcap library, but with improved speed, portability, and efficiency. It exposes the ‘libpcap’ API, which lets Windows applications use a portable packet capturing API that is also supported on Linux and macOS. Npcap can optionally be restricted to only allow administrators to sniff packets, thus providing increased security.
- 11 new NSE scripts: scripts have been added from 8 authors, taking the total number of NSE scripts to 598. The 11 new scripts are focused on HID devices, Jenkins servers, HTTP servers, Logical Units (LU) of TN3270E servers, and more.
- pcap_live_open replaced with pcap_create: pcap_create solves packet loss problems on Linux and also brings performance improvements on other platforms.
- rand.lua library: the new ‘rand.lua’ library uses the best sources of randomness available on the system to generate random strings.
- oops.lua library: this new library helps in easily reporting errors, including plenty of debugging details.
- TLS support added: TLS support has been added to rdp-enum-encryption, which enables negotiation of the protocol version against servers that require TLS.
- New service probe and match lines: new service probe and match lines have been added for adb, the Android Debug Bridge, which can enable remote code execution.
- Two new common error strings: two new common error strings have been added to improve MySQL detection by the script http-sql-injection.
- New script-arg http.host: allows users to force a particular value for the Host header in all HTTP requests.
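As an illustration of that last item, here is a hedged sketch of driving Nmap from a small Python wrapper to force the Host header. It assumes nmap 7.80+ is on PATH; the target address is a documentation placeholder, and http-title is just one of the HTTP NSE scripts that goes through the http library honoring this argument.

```python
"""Sketch: force the Host header via the new http.host script-arg.
Assumes nmap 7.80+ is installed; target and script are illustrative."""
import subprocess

target = "198.51.100.7"   # TEST-NET-2 placeholder address
result = subprocess.run(
    ["nmap", "-p", "80",
     "--script", "http-title",                       # any http-* NSE script
     "--script-args", "http.host=shop.example.com",  # forced Host header
     target],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```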
Users love the new improvements in Nmap 7.80.

https://twitter.com/ExtremePaperC/status/1160388567098515456
https://twitter.com/Jiab77/status/1160555015041363968
https://twitter.com/h4knet/status/1161367177708093442

For the full list of changes in Nmap 7.80, head over to the Nmap announcement.

Amazon adds UDP load balancing support for Network Load Balancer
Brute forcing HTTP applications and web applications using Nmap [Tutorial]
Discovering network hosts with ‘TCP SYN’ and ‘TCP ACK’ ping scans in Nmap [Tutorial]


MacStadium announces ‘Orka’ (Orchestration with Kubernetes on Apple)

Savia Lobo
13 Aug 2019
2 min read
Today, MacStadium, an enterprise-class cloud solution for Apple Mac infrastructure, announced ‘Orka’ (Orchestration with Kubernetes on Apple). Orka is a new virtualization layer for Mac build infrastructure based on Docker and Kubernetes technology. It offers a solution for orchestrating macOS in a cloud environment using Kubernetes on genuine Apple Mac hardware. With Orka, users can apply native Kubernetes commands to macOS virtual machines (VMs) on genuine Apple hardware.

“While Kubernetes and Docker are not new to full-stack developers, a solution like this has not existed in the Apple ecosystem before,” MacStadium wrote in an email statement to us.

“The reality is that most enterprises need to develop applications for Apple platforms, but these enterprises prefer to use nimble, software-defined build environments,” said Greg McGraw, Chief Executive Officer, MacStadium. “With Orka, MacStadium’s flagship orchestration platform, developers and DevOps teams now have access to a software-defined Mac cloud experience that treats infrastructure-as-code, similar to what they are accustomed to using everywhere else.”

Developers creating apps for Mac or iOS must build on genuine Apple hardware. However, until now, popular orchestration and container technologies like Kubernetes and Docker have been unable to leverage Mac operating systems. With Orka, Apple OS development teams can use container technology features in a Mac cloud, the same way they build on other cloud platforms like AWS, Azure or GCP.
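In practice, “native Kubernetes commands” means ordinary kubectl usage. The sketch below is purely illustrative and assumes a configured kubeconfig; the namespace and the idea that macOS VMs surface as pods are assumptions made for the example, not MacStadium's documented interface.

```python
"""Illustrative only: poking at a Mac cloud with standard kubectl.
The namespace and workload names are invented for this sketch."""
import subprocess

def kubectl(*args):
    return subprocess.run(["kubectl", *args], capture_output=True,
                          text=True, check=True).stdout

# List the nodes backing the cluster (genuine Mac hardware, per Orka) ...
print(kubectl("get", "nodes", "-o", "wide"))

# ... and the macOS build VMs, assuming they appear as pods in a
# dedicated namespace.
print(kubectl("get", "pods", "-n", "macos-builds"))
```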
As part of its initial release, Orka will ship with a plugin for Jenkins, an open-source automation tool that enables developers to build, test and deploy their software using continuous integration techniques. MacStadium will also present a session at DevOps World | Jenkins World in San Francisco (August 12-15) demonstrating how Orka integrates with Jenkins build pipelines and how it leverages the capability and power of Docker/Kubernetes in a Mac development environment.

To know more about Orka in detail, visit MacStadium’s official website.

CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
Implementing Horizontal Pod Autoscaling in Kubernetes [Tutorial]


Microsoft introduces public preview of Azure Dedicated Host and updates its licensing terms

Amrata Joshi
07 Aug 2019
3 min read
Last week, Microsoft introduced a preview of Azure Dedicated Host, which provides a physical server hosted on Azure that is not shared with other customers. The company has also made a few licensing changes that will make Microsoft software a bit more expensive for AWS, Google and Alibaba customers.

Currently, the dedicated host is available in two specifications. Type 1 is based on a 2.3GHz Intel Xeon E5-2673 v4 (Broadwell) and has 64 vCPUs (virtual CPUs); it costs $4.055 or $4.492 per hour depending on the RAM (256GB or 448GB). Type 2 is based on the Xeon Platinum 8168 (Skylake) and comes with 72 vCPUs and 144GB RAM; it costs $4.039 per hour. These prices don’t include licensing costs, and licensing is where Microsoft is making changes.
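For a rough sense of scale, here is a quick back-of-the-envelope check on those preview prices (licensing excluded), assuming roughly 730 hours in a month:

```python
"""Monthly cost estimates from the hourly preview prices quoted above.
Licensing is not included; 730 hours approximates one month."""
HOURS_PER_MONTH = 730

hosts = {
    "Type 1 (Broadwell, 64 vCPUs, 256GB)": 4.055,
    "Type 1 (Broadwell, 64 vCPUs, 448GB)": 4.492,
    "Type 2 (Skylake, 72 vCPUs, 144GB)": 4.039,
}

for name, hourly in hosts.items():
    print(f"{name}: ${hourly:.3f}/hr = ~${hourly * HOURS_PER_MONTH:,.0f}/month")
```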
Last week, the team at Microsoft announced that it will modify its licensing terms related to outsourcing rights and dedicated hosted cloud services on October 1, 2019. The team further stated that this particular change won’t impact the use of existing software versions under licenses purchased before October 1, 2019. The official post reads, “Currently, our outsourcing terms give on-premises customers the option to deploy Microsoft software on hardware leased from and managed by traditional outsourcers.”

The team is updating the outsourcing terms for Microsoft on-premises licenses in order to clarify the difference between on-premises/traditional outsourcing and cloud services. Additionally, it plans to create more consistent licensing terms across multi-tenant and dedicated hosted cloud services. Customers will either have to rent the software via the SPLA (Service Provider License Agreement) or purchase a license with Software Assurance, an annual service charge.

From October 1, on-premises licenses purchased without Software Assurance and mobility rights may no longer be deployed on dedicated hosted cloud services offered by public cloud providers such as Microsoft, Alibaba, Amazon (including VMware Cloud on AWS), and Google, which Microsoft will then refer to as “Listed Providers.” These changes won’t apply to other providers or to the Services Provider License Agreement (SPLA) program, nor to the License Mobility for Software Assurance benefit, except for expanding this benefit to cover dedicated hosted cloud services.

Customers will benefit from licensed Microsoft products on a dedicated cloud platform

Customers will be able to license Microsoft products on dedicated hosted cloud services from the Listed Providers. Users can continue to deploy and use the software under their existing licenses on Listed Providers’ servers that are dedicated to them, but they will not be able to add workloads under licenses acquired on or after October 1, 2019. After October 1, users will be able to use products through the purchase of cloud services directly from the Listed Provider. If they have licenses with Software Assurance, those can be used with the Listed Providers’ dedicated hosted cloud services under License Mobility or Azure Hybrid Benefit rights.

These changes aren’t applicable to the deployment and use of licenses outside of a Listed Provider’s data center, but they do apply to both first- and third-party offerings on a dedicated hosted cloud service from a Listed Provider.

To know more about this news, check out the official post by Microsoft.

CERN plans to replace Microsoft-based programs with an affordable open-source software
Softbank announces a second AI-focused Vision Fund worth $108 billion with Microsoft, Apple as major investors
Why are experts worried about Microsoft’s billion dollar bet in OpenAI’s AGI pipe dream?


Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

Bhagyashree R
01 Aug 2019
3 min read
At its Cloud Next 2019 conference happening in Tokyo, Google unveiled new security capabilities that are coming to its enterprise products: G Suite Enterprise, Google Cloud, and Cloud Identity. These capabilities are intended to help its enterprise customers protect their “users, data, and applications in the cloud.” Google is hosting this two-day event (July 31-Aug 1) to showcase its cloud products. Among the key announcements are Advanced Protection Program support for enterprise products, which is rolling out soon, expanded availability of Titan Security Keys, improved anomaly detection in G Suite Enterprise, and more.

Advanced Protection Program for high-risk employees

The Advanced Protection Program was launched in 2017 to protect the personal Google accounts of users who are at high risk of online threats like phishing. The program goes beyond traditional two-step verification by requiring a physical security key in addition to your password for signing in to your Google account.

The program will be available in beta in the coming days for G Suite, Google Cloud Platform (GCP) and Cloud Identity customers. It will enable enterprise admins to enforce a set of security policies for employees who are at high risk of targeted attacks, such as IT administrators and business executives. The policies include enforcing the use of Fast Identity Online (FIDO) keys like Titan Security Keys, automatically blocking access to non-trusted third-party apps, and enabling enhanced scanning of incoming emails.

Wider availability of Titan Security Keys

Seeing growing demand for Titan Security Keys in the US, Google has now expanded their availability to Canada, France, Japan, and the United Kingdom (UK). The keys are available as bundles of two: USB/NFC and Bluetooth. You can use them anywhere FIDO security keys are supported, including Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Anomalous activity alerts in G Suite

G Suite Enterprise and G Suite Enterprise for Education admins can now opt in to receive anomalous activity alerts in the G Suite alert center. G Suite uses machine learning to analyze security signals within Google Drive to detect potential security risks. These security risks include data exfiltration, policy violations when sharing and downloading files, and more.

Google also announced that it will be rolling out support for password-vaulted apps in Cloud Identity. Karthik Lakshminarayanan and Vidya Nagarajan from the Google Cloud team wrote in a blog post, “The combination of standards-based- and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins.”

You can read the official announcement by Google to know more in detail.

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Understanding security features in the Google Cloud Platform (GCP)


#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent out an internal email to the We Won’t Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump’s most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

Last year in June, an alliance of more than 500 Amazon employees signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy asking the company to abandon its contracts with government agencies. It seems those protests are ramping up again.

The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon’s cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees have also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders rejected a proposal to ban the sale of its facial recognition tech to governments. They also rejected eleven other proposals made by employees, including a climate resolution and salary transparency, among other issues.

"The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better,” the email read.

The protests broke out at Amazon’s AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon’s ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680
https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. “I’m more than willing to have a conversation, but maybe they should let me finish first,” Vogels said amidst protesters, whose audio was cut off on Amazon’s official livestream of the event, per ZDNet. “We’ll all get our voices heard,” he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants’ employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon’s reputation.

The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested to get their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week workers protested on Amazon Prime day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead the company argued that the government should determine what constitutes “acceptable use” of the technology it sells.
“As we’ve said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully,” Amazon said to BuzzFeed News. “There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we’ve provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions.”

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210
https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Slack was down for an hour yesterday, causing disruption during work hours

Fatema Patrawala
30 Jul 2019
2 min read
Yesterday, Slack reported an outage which started at 7:23 a.m. PDT and was fully resolved at 8:48 a.m. PDT. The Slack status page said that some people had issues sending messages while others couldn't access their channels at all. Slack said it was fully up and running again about an hour after the issues emerged.

https://twitter.com/SlackStatus/status/1155869112406437889

According to Business Insider, more than 2,000 users reported issues with Slack via Downdetector. Employees around the globe rely on Slack to communicate, organize tasks and share information. Downdetector’s live outage map showed a concentration of reports in the United States and a few in Europe and Japan.

Slack has not yet shared the cause of the disruption on its status page. Last month as well, Slack suffered an outage, which was caused by server unavailability.

Users took to Twitter, sending funny memes and gifs about how much they depend on Slack to communicate.

https://twitter.com/slabodnick/status/1155858811518930946
https://twitter.com/gbhorwood/status/1155864432527867905
https://twitter.com/envyvenus/status/1155857852625555456
https://twitter.com/nhetmalaluan/status/1155863456991436800

On Hacker News, users were annoyed and said that such issues have become quite common. One user commented, “This is becoming so often it's embarrassing really. The way it's handled in the app is also not ideal to say the least - only indication that something is wrong is that the text you are trying to send is greyed out.”

Why did Slack suffer an outage on Friday?
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services


Dropbox walks back its own decision; brings back support for ZFS, XFS, Btrfs, and eCryptFS on Linux

Vincy Davis
23 Jul 2019
3 min read
Today, Dropbox notified users that it has brought back support for ZFS and XFS on 64-bit Linux systems, and Btrfs and eCryptFS on all Linux systems, in its Beta Build 77.3.127. The support note in the Dropbox forum reads, “Add support for zfs (on 64-bit systems only), eCryptFS, xfs (on 64-bit systems only), and btrfs filesystems in Linux.”

Last year in November, Dropbox notified users that it was “ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems are Ext4 filesystem on Linux, NTFS for Windows, and HFS+ or APFS for Mac.”

Dropbox explained that a supported file system is necessary because Dropbox uses extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. The post also mentioned that Dropbox would support only the most common file systems that support X-attrs, to ensure stability and consistency for its users.
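For readers unfamiliar with extended attributes, here is a minimal sketch of what X-attrs look like from Python on Linux. The attribute name is invented for illustration, not the one Dropbox actually sets, and the file should live on an xattr-capable filesystem such as ext4, XFS or Btrfs.

```python
"""Demo of extended attributes (X-attrs), the feature Dropbox relies
on for sync. Linux-only; the attribute name here is made up."""
import os

path = "xattr-demo.txt"   # put this on ext4/XFS/Btrfs, not tmpfs
with open(path, "w") as f:
    f.write("hello\n")

# User-namespace xattrs travel with the file on filesystems that
# support them, which is why Dropbox cares about filesystem choice.
os.setxattr(path, "user.demo.file_id", b"12345")
print(os.getxattr(path, "user.demo.file_id"))   # b'12345'
print(os.listxattr(path))                       # ['user.demo.file_id']
```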
After Dropbox discontinued support for these Linux formats, many developers switched to other services such as Google Drive, Box, etc. This is speculated to be one of the reasons why Dropbox has reversed its earlier decision. However, no official statement from Dropbox explaining the reversal has been issued yet.

Many users have expressed resentment at Dropbox’s back-and-forth. A user on Hacker News says, “Too late. I have left Dropbox because of their stance on Linux filesystems, price bump with unnecessary features, and the continuous badgering to upgrade to its business. It's a great change though for those who are still on Dropbox. Their sync is top-notch.”

A Redditor comments, “So after I stopped using Dropbox they do care about me as a user after all? Linux users screamed about how nonsensical the original decision was. Maybe ignoring your users is not such a good idea after all? I moved to Cozy Drive - it's not perfect, but has native Linux client, is Europe based (so I am protected by EU privacy laws) and is pretty good as almost drop-in replacement.”

Another Redditor said, “Too late for me, I was a big dropbox user for years, they dropped support for modern file systems and I dropped them. I started using Syncthing to replace the functionality I lost with them.”

A few developers are still happy to see that Dropbox will again support these popular Linux filesystems. A user on Hacker News comments, “That's good news. Happy to see Dropbox thinking about the people who stuck with them from day 1. In the past few years they have been all over the place, trying to find their next big thing and in the process also neglecting their non-enterprise customers. Their core product is still the best in the market and an important alternative to Google.”

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!
Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range


LXD 3.15 releases with a switch to dqlite 1.0 branch, new hardware VLAN and MAC filtering on SR-IOV and more!

Vincy Davis
17 Jul 2019
5 min read
A few days ago, the Linux Daemon (LXD) team announced the release of LXD 3.15. The major highlight of the release is the transition of LXD to the dqlite 1.0 branch, which will yield better performance and reliability for cluster users and standalone installations.

LXD is a next-generation system container manager which uses Linux containers. It’s free software, written in Go and developed under the Apache 2 license. LXD 3.15 introduces new features including hardware VLAN and MAC filtering on SR-IOV, a new storage-size option for lxd-p2c, a Ceph FS storage backend for custom volumes, and more. It also includes major improvements to DHCP lease handling and cluster heartbeat handling, as well as bug fixes.

What’s new in LXD 3.15?

Hardware VLAN and MAC filtering on SR-IOV

The security.mac_filtering and vlan properties are now available on SR-IOV devices. This prevents MAC spoofing from the container, as LXD directly controls the matching SR-IOV options on the virtual function. In the case of VLANs, it also performs hardware filtering at the VF level.

New storage-size option for lxd-p2c

A new --storage-size option has been added in LXD 3.15. When used along with --storage, it allows specifying the desired volume size to use for the container.

Ceph FS storage backend for custom volumes

Ceph FS can now be used as a storage driver for LXD, with support limited to custom storage volumes. Its support includes size restrictions and native snapshots when the server, server configuration, and client kernel support those features. Ceph FS also allows attaching the same custom volume to multiple containers at the same time, even if they’re located on different hosts.

IPv4 and IPv6 filtering

IPv4 and IPv6 filtering (spoof protection) enables multiple containers to share the same underlying bridge without worrying about one container spoofing another’s address, hijacking traffic or causing connectivity issues.

Read Also: Internet governance project (IGP) survey on IPV6 adoption, initial reports

Major improvements in LXD 3.15

Switch to dqlite 1.0

After a year of running all LXD servers on the original implementation of its distributed SQLite database, LXD 3.15 has finally switched to the 1.0 branch. This transition reduces the number of external dependencies, as well as CPU and memory usage for the database. It also makes it easier to debug issues and to integrate more complex database operations when running clusters.

Reworked DHCP lease handling

In previous versions, LXD’s handling of DHCP was pretty limited. With LXD 3.15, LXD can itself issue DHCP release requests to the dnsmasq server based on what’s currently in the DHCP lease table. This allows a lease to be released when a container’s configuration is altered or a container is deleted, all without ever needing to restart dnsmasq.

Reworked cluster heartbeat handling

With LXD 3.15, the internal heartbeat (the list of database nodes) has been extended to include the most recent version information from the cluster as well as the status of all cluster members. This means that only the cluster leader has to retrieve the data, and the remaining members get a consistent view of everything within 10s.
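As a rough illustration of how the new options surface on the command line, here is a hedged sketch that enables MAC filtering and a VLAN on an SR-IOV NIC device and passes the new size flag to lxd-p2c. Container, device, and pool names are examples, and exact flag spellings should be checked against `lxc help` and `lxd-p2c --help` on your version.

```python
"""Hedged sketch of driving the LXD 3.15 features above via the CLI.
Names are placeholders; verify flags against your LXD version."""
import subprocess

def sh(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# MAC spoof protection plus a hardware VLAN on an SR-IOV NIC device
# (assumes container c1 already has an SR-IOV device named eth0).
sh("lxc", "config", "device", "set", "c1", "eth0",
   "security.mac_filtering", "true")
sh("lxc", "config", "device", "set", "c1", "eth0", "vlan", "100")

# lxd-p2c: choose the storage pool *and* the desired volume size.
sh("lxd-p2c", "https://lxd-host:8443", "migrated-ct", "/",
   "--storage", "default", "--storage-size", "10GB")
```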
Some of the bug fixes in LXD 3.15:

- Linker flags have been updated.
- Path to the host’s communication socket has been fixed: doc/devlxd
- Basic install instructions have been added: doc/README
- Translations from Weblate have been updated: i18n
- Unused arg from setNetworkRoutes has been removed: lxd/containers
- Unit tests have been updated: lxd/db

Developers are happy with the new features and improvements included in LXD 3.15. A user on Reddit says, “The IPv4 and IPv6 spoof protection filters is going to make a few people very happy. As well as ceph FS support as RBD doesn't like sharing volumes with multiple host.”

Some users compared LXD with Docker, with most preferring the former. A Redditor gave a detailed comparison of the two platforms: “The high-level difference is that Docker is for "application containers" and LXD is for "system containers". For Docker that means things like, say, your application process being PID 1, and generally being forced to do things the "Docker way".

“LXD, on the other hand, provides flexibility to use containers the way you want to. This means containers end up being closer to your development environment, e.g. by using systemd if you want it; they can be ephemeral like Docker, but only if you want to”, the user further added.

“So, LXD provides containers that are closer in feel to a regular installation or VM, but with the performance benefit of containers. You can even use LXD containers as Docker hosts, which is what I often do.”

For the complete list of updates, head over to the LXD 3.15 release notes.

LXD 3.11 releases with configurable snapshot expiry, progress reporting, and more
LXD 3.8 released with automated container snapshots, ZFS compression support and more!
Debian 10 codenamed ‘buster’ released, along with Debian GNU/Hurd 2019 as a port


Hello 'gg', a new OS framework to execute super-fast apps on "1000s of transient functional containers"

Bhagyashree R
15 Jul 2019
4 min read
Last week at the USENIX Annual Technical Conference (ATC) 2019, a team of researchers introduced ‘gg’, an open-source framework that helps developers execute applications using thousands of parallel threads on a cloud-function service to achieve near-interactive completion times. “In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy,” the paper reads.

At USENIX ATC, leading systems researchers present their cutting-edge systems research. The conference also offers insight into topics like virtualization, network management and troubleshooting, cloud and edge computing, security, privacy, and more.

Why the gg framework was introduced

Cloud functions, better known as serverless computing, provide developers finer granularity and lower latency. Though they were introduced for event handling and invoking web microservices, their granularity and scalability make them a good candidate for creating something called a “burstable supercomputer-on-demand”. These systems are capable of launching a burst-parallel swarm of thousands of cloud functions, all working on the same job. The goal is to provide results to an interactive user much faster than their own computer or a freshly booted cold cluster could, at a lower cost than maintaining a warm cluster for occasional tasks.

However, building applications on swarms of cloud functions poses various challenges. The paper lists some of them:

- Workers are stateless and may need to download large amounts of code and data on startup
- Workers have limited runtime before they are killed
- On-worker storage is limited but much faster than off-worker storage
- The number of available cloud workers depends on the provider's overall load and can't be known precisely upfront
- Worker failures occur when running at large scale
- Libraries and dependencies differ in a cloud function compared with a local machine
- Latency to the cloud makes roundtrips costly

How gg works

Researchers have previously addressed some of these challenges; the gg framework aims to address the principal ones faced by burst-parallel cloud-function applications. With gg, developers and users can build applications that burst from zero to thousands of parallel threads to achieve low latency for everyday tasks.

(Diagram: gg's composition. Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers)

The gg framework lets you build applications on an abstraction of transient, functional containers, also known as thunks. Applications express their jobs in terms of interrelated thunks (Linux containers) and then schedule, instantiate, and execute those thunks on a cloud-functions service. The framework is capable of containerizing and executing existing programs, such as software compilation, unit tests, and video encoding, with the help of short-lived cloud functions. In some cases this gives substantial performance gains, and depending on the frequency of the task it can also be less expensive than keeping a comparable cluster running continuously. The functional approach and fine-grained dependency management of gg give significant performance benefits when compiling large programs from a cold start.
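To make the thunk abstraction concrete, here is a conceptual sketch, not gg's actual API: a thunk is modeled as a pure function plus the names of the thunks it depends on, and a tiny scheduler runs whatever is ready in parallel, the way gg fans work out to cloud functions.

```python
"""Toy model of gg-style thunks: pure functions with declared
dependencies, executed in dependency order with parallelism.
This illustrates the idea only; it is not gg's interface."""
from concurrent.futures import ThreadPoolExecutor

# name -> (function, dependency names); a function sees only the
# results of its dependencies, like a thunk sees only its inputs.
thunks = {
    "preprocess": (lambda: "foo.i", []),
    "compile":    (lambda src: f"obj({src})", ["preprocess"]),
    "link":       (lambda obj: f"bin({obj})", ["compile"]),
}

results, pending = {}, dict(thunks)
with ThreadPoolExecutor() as pool:
    while pending:
        # Runnable thunks: every dependency already has a result.
        ready = {n: t for n, t in pending.items()
                 if all(d in results for d in t[1])}
        futures = {n: pool.submit(fn, *[results[d] for d in deps])
                   for n, (fn, deps) in ready.items()}
        for n, fut in futures.items():
            results[n] = fut.result()
            del pending[n]

print(results["link"])   # bin(obj(foo.i))
```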
(Table: summary of the results for compiling Inkscape. Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers)

When running “cold” on AWS Lambda, gg was nearly 5x faster than an existing icecc system running on a 48-core or 384-core cluster of running VMs.

To know more in detail, read the paper: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers. You can also check out gg's code on GitHub. Also, watch the talk in which Keith Winstein, an assistant professor of Computer Science at Stanford University, explains the purpose of gg and demonstrates how it works: https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s

Cloud computing trends in 2019
Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly
Serverless Computing 101

Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. These benchmarks are used by AI practitioners to adopt common standards for measuring the performance and speed of hardware used to train AI models. Per these benchmark results, Nvidia and Google Cloud set new AI training time records.

MLPerf v0.6 studies the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent) and reinforcement learning. MLPerf is an association of more than 40 companies and researchers from leading universities, and the MLPerf benchmark suites are becoming the industry standard for measuring machine learning performance.

Per the results, Nvidia’s Tesla V100 Tensor Core GPUs, in an Nvidia DGX SuperPOD, completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. In 2017, when Nvidia launched the DGX-1 server, the same model training took 8 hours to complete.

In a statement to ZDNet, Paresh Kharya, director of Accelerated Computing for Nvidia, said, “The progress made in just a few short years is staggering.” He further added, “The results are a testament to how fast this industry is moving.”
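A quick sanity check on the Nvidia numbers quoted above:

```python
"""ResNet-50 training time: 8 hours on DGX-1 in 2017 versus
80 seconds on a DGX SuperPOD in this round of MLPerf."""
then_s = 8 * 3600   # 2017, DGX-1
now_s = 80          # 2019, DGX SuperPOD
print(f"speedup: {then_s / now_s:.0f}x")   # 360x in about two years
```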
Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google’s latest generation of supercomputers built specifically for machine learning. The record-setting runs used less than two minutes of compute time. The TPU v3 Pods also showed record performance in machine translation from English to German, training the Transformer model in 51 seconds. Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods also achieved record performance in the image classification benchmark on the ResNet-50 model with the ImageNet data set, as well as model training in another object detection category, in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning." He further added, "All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."

Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh


Amazon EventBridge: An event bus with higher security and speed to boost AWS serverless ecosystem

Vincy Davis
15 Jul 2019
4 min read
Last week, Amazon had pretty huge news for its AWS serverless ecosystem, news being considered the biggest thing since AWS Lambda itself. With the aim of helping customers integrate their own AWS applications with Software as a Service (SaaS) applications, Amazon launched EventBridge. EventBridge is an asynchronous, fast, clean, and easy-to-use event bus that publishes events specific to each AWS customer. A SaaS application and code running on AWS no longer need a shared communication protocol, runtime environment, or programming language. This allows Lambda functions to handle events from a SaaS application as well as route events to other AWS targets.

Like CloudWatch Events, EventBridge has a default event bus that accepts events from AWS services and calls to PutEvents. One distinction between them is that in EventBridge, each partner application that a user subscribes to also creates an event source, which can then be associated with an event bus in an AWS account. AWS users can select any of their event buses, create EventBridge Rules, and select Targets to invoke when an incoming event matches a rule.

Important terms for understanding Amazon EventBridge

- Partner: an organization that has integrated its SaaS application with EventBridge.
- Customer: an organization that uses AWS and has subscribed to a partner’s SaaS application.
- Partner Name: a unique name that identifies an Amazon EventBridge partner.
- Partner Event Bus: an event bus that is used to deliver events from a partner to AWS.

How EventBridge works for partners and customers

A partner lets its customers enter an AWS account number and select an AWS region. Next, the partner calls CreatePartnerEventSource in the desired region and informs the customer of the event source name. After accepting the invitation to connect, the customer waits for the status of the event source to change to Active. Each time an event of interest to the customer occurs, the partner calls PutPartnerEvents, referencing the event source.

(Image source: Amazon)

It works much the same way on the customer side. The customer accepts the invitation to connect by calling CreateEventBus to create an event bus associated with the event source, then adds rules and targets to prepare the Lambda functions to process the events. Associating the event source with an event bus also activates the source and starts the flow of events; customers can use DeactivateEventSource and ActivateEventSource to control the flow.
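Here is a hedged sketch of the customer side of that flow using boto3. The partner event source name is invented; in practice the partner supplies the exact name after you share your account number, and the Lambda ARN is a placeholder.

```python
"""Sketch of the customer-side EventBridge calls described above.
The event source name and Lambda ARN are placeholders."""
import json
import boto3

events = boto3.client("events", region_name="us-east-1")
source = "aws.partner/example.com/acct-1/demo"   # hypothetical name

# 1. Accept the invitation: create an event bus bound to the partner
#    event source (for partner buses, the names must match). This
#    also activates the source and starts the flow of events.
events.create_event_bus(Name=source, EventSourceName=source)

# 2. Route matching events to a Lambda function via a rule.
events.put_rule(
    Name="demo-rule",
    EventBusName=source,
    EventPattern=json.dumps({"source": [source]}),
)
events.put_targets(
    Rule="demo-rule",
    EventBusName=source,
    Targets=[{"Id": "fn",
              "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handler"}],
)
```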
Amazon EventBridge launches with ten partner event sources, including Datadog, Zendesk, PagerDuty, Whispir, Segment and Symantec. This is pretty big news for anyone building serverless applications: with built-in partner integrations, these partners can trigger an event in EventBridge directly, without the need for a webhook. Thus “AWS is the mediator rather than HTTP”, notes Paul Johnston, the ServerlessDays cofounder. He also adds, “The security implications of partner integrations are the first thing that springs to mind. The speed implications will almost certainly be improved as well, with those partners almost certainly using AWS events at the other end as well.”

https://twitter.com/PaulDJohnston/status/1149629728065650693
https://twitter.com/PaulDJohnston/status/1149629729571397632

Users are excited about the kind of creative freedom Amazon EventBridge will bring to their products.

https://twitter.com/allPowerde/status/1149792437738622976
https://twitter.com/ShortJared/status/1149314506067255304
https://twitter.com/petrabarus/status/1149329981975040000
https://twitter.com/TobiM/status/1149911798256152576

Users with a SaaS application can integrate with an EventBridge Partner Integration. Visit the Amazon blog to learn more about implementing EventBridge.

Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon Aurora makes PostgreSQL Serverless generally available
Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic


Twitter experienced major outage yesterday due to an internal configuration issue

Fatema Patrawala
12 Jul 2019
4 min read
Yesterday Twitter went down across major parts of the world, including the US and the UK. Twitter users reported being unable to access the platform on web and mobile devices. The outage lasted approximately an hour.

According to DownDetector.com, the site began experiencing major issues at 2:46pm EST, with problems reported by users attempting to access Twitter through its website, its iPhone and iPad apps, and Android devices. The majority of the reported problems were website issues (51%), while nearly 30% came from iPhone and iPad app usage and another 18% from Android users, per the outage report.

Twitter acknowledged that the platform was experiencing issues on its status page shortly after the first outages were reported online. The company listed the status as “investigating” and noted a service disruption was causing the seemingly global issue. “We are currently investigating issues people are having accessing Twitter,” the statement read. “We will keep you updated on what's happening.”

This month has seen several high-profile outages among social networks. Facebook and Instagram experienced a day-long outage affecting large parts of the world on July 3rd. LinkedIn went down for several hours on Wednesday. Cloudflare suffered two major outages in the span of two weeks this month: one was due to an internal software glitch, and another was caused when Verizon accidentally rerouted IP packets after it wrongly accepted a network misconfiguration from a small ISP in Pennsylvania, USA. Reddit was experiencing outages on its website and app earlier in the day, but appeared to be back up and running for most users an hour before Twitter went down, according to DownDetector.com. In March, Facebook and its family of apps experienced a 14-hour outage, attributed to a server configuration change.

The Twitter site began operating normally again nearly an hour later, at approximately 3:45pm EST. Users joked that they had been "all censored for the last hour" when the site eventually came back up. On the status page, Twitter said the outage was caused by “an internal configuration change, which we're now fixing.”

“Some people may be able to access Twitter again and we're working to make sure Twitter is available to everyone as quickly as possible,” the company said in a follow-up statement.

https://twitter.com/TwitterSupport/status/1149412158121267200

On Hacker News, users discussed the number of outages at major tech companies and why this is happening. One comment reads, “Ok, this is too many high-profile, apparently unrelated outages in the last month to be completely a coincidence. Hypotheses: 1) software complexity is escalating over time, and logically will continue to until something makes it stop. It has now reached the point where even large companies cannot maintain high reliability. 2) internet volume is continually increasing over time, and periodically we hit a point where there are just too many pieces required to make it work (until some change the infrastructure solves that). We had such a point when dialup was no longer enough, and we solved that with fiber.
Now we have a chokepoint somewhere else in the system, and it will require a different infrastructure change 3) Russia or China or Iran or somebody is f*(#ing with us, to see what they are able to break if they needed to, if they need to apply leverage to, for example, get sanctions lifted 4) Just a series of unconnected errors at big companies 5) Other possibilities?”

In reply, another user adds, “I work at Facebook. I worked at Twitter. I worked at CloudFlare. The answer is nothing other than #4. #1 has the right premise but the wrong conclusion. Software complexity will continue escalating until it drops by either commoditization or redefining problems. Companies at the scale of FAANG(+T) continually accumulate tech debt in pockets and they eventually become the biggest threats to availability. Not the new shiny things. The sinusoidal pattern of exposure will continue.”

Facebook, Instagram and WhatsApp suffered a major outage yesterday; people had trouble uploading and sending media files
Facebook family of apps hits 14 hours outage, longest in its history
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others

ISPA nominated Mozilla in the “Internet Villain” category for DNS over HTTPs push, withdrew nominations and category after community backlash

Fatema Patrawala
11 Jul 2019
6 min read
On Tuesday, the Internet Services Providers’ Association (ISPA), the UK’s trade association for providers of internet services, announced that the nomination of Mozilla Firefox had been withdrawn from the “Internet Villain” category. The decision came after a global backlash to the nomination of Mozilla for its DNS-over-HTTPS (DoH) push. ISPA withdrew the Internet Villain category as a whole from the ISPA Awards 2019 ceremony, which will be held today in London.

https://twitter.com/ISPAUK/status/1148636700467453958

The official blog post reads, “Last week ISPA included Mozilla in our list of Internet Villain nominees for our upcoming annual awards. In the 21 years the event has been running it is probably fair to say that no other nomination has generated such strong opinion. We have previously given the award to the Home Secretary for pushing surveillance legislation, leaders of regimes limiting freedom of speech and ambulance-chasing copyright lawyers. The villain category is intended to draw attention to an important issue in a light-hearted manner, but this year has clearly sent the wrong message, one that doesn’t reflect ISPA’s genuine desire to engage in a constructive dialogue. ISPA is therefore withdrawing the Mozilla nomination and Internet Villain category this year.”

Mozilla Firefox, the preferred browser for a lot of users, encourages privacy protection and offers feature options to keep one’s internet activity as private as possible. One recently proposed feature, DoH (DNS-over-HTTPS), still in the testing phase, didn’t receive a good response from the ISPA trade association. Hence, the ISPA decided to nominate Mozilla as one of its “Internet Villains” for 2019. In its announcement, the ISPA named Mozilla an Internet Villain for supporting DoH (DNS-over-HTTPS).

https://twitter.com/ISPAUK/status/1146725374455373824

Mozilla responded by saying that this is one way to know they are fighting the good fight.

https://twitter.com/firefox/status/1147225563649564672

The announcement garnered a lot of criticism from the community, which rebuked the ISPA for promoting online censorship and enabling rampant surveillance; some commenters called the ISPA the real Internet Villain in this scenario. Some of the tweet responses are given below:

https://twitter.com/larik47/status/1146870658246352896
https://twitter.com/gon_dla/status/1147158886060908544
https://twitter.com/ultratethys/status/1146798475507617793

Along with Mozilla, the Article 13 Copyright Directive and United States President Donald Trump also appeared in the nominations list. Here’s how the ISPA explained the nominations in its announcement:

“Mozilla – for their proposed approach to introduce DNS-over-HTTPS in such a way as to bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK.
Article 13 Copyright Directive – for threatening freedom of expression online by requiring ‘content recognition technologies’ across platforms
President Donald Trump – for causing a huge amount of uncertainty across the complex, global telecommunications supply chain in the course of trying to protect national security”

Why are the ISPs pushing back against DNS-over-HTTPS?

DoH means that your DNS requests are encrypted inside an HTTPS connection. Traditionally, DNS requests are unencrypted, and your DNS provider or ISP can monitor or control your browsing activity.
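To make that concrete, here is a minimal DoH lookup using only the Python standard library and Cloudflare's public JSON resolver endpoint. The DNS question travels inside ordinary HTTPS, so an on-path observer sees only a TLS connection to the resolver, not the name being looked up.

```python
"""Minimal DNS-over-HTTPS query via Cloudflare's JSON API."""
import json
import urllib.request

req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
    headers={"accept": "application/dns-json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)

# Print the resolved A records; the query itself was encrypted.
for record in answer.get("Answer", []):
    print(record["name"], record["data"])
```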
Without DoH, blocking and content filtering is easy to enforce through your DNS provider, and the ISP can do so whenever it wants. DoH takes that out of the equation, and hence you get a more private browsing experience.

Admittedly, big broadband ISPs and politicians are concerned that large-scale third-party deployments of DoH, which encrypts DNS requests using the common HTTPS protocol for websites (i.e. turning human-readable domain names into IP addresses), could disrupt their ability to censor, track and control related internet services. That position, however, is a particularly narrow way of looking at the technology, because at its core DoH is about protecting user privacy and making internet connections more secure. As a result DoH is often praised and widely supported by the wider internet community. Mozilla is not alone in pushing DoH, but it found itself singled out by the ISPA because of its proposal to enable the feature by default within Firefox, which is yet to happen. Google is also planning to introduce its own DoH solution in its Chrome browser. The result could be that ISPs lose a lot of their control over DNS, breaking their internet censorship plans.

Is DoH useful for internet users? If so, how?

On one side of the coin, DoH lets users bypass any content filters enforced by the DNS provider or the ISP, putting a stop to that form of internet censorship. On the other side, if you are a parent, you can no longer set DNS-based content filters if your kid uses DoH in Mozilla Firefox; DoH could potentially be a way for some to bypass parental controls, which could be a bad thing. This is the very reason the ISPA gave for nominating Mozilla in the Internet Villain category: it says that DNS-over-HTTPS will bypass UK filtering obligations and parental controls, undermining internet safety standards in the UK. Also, using DoH means that you can no longer use the local hosts file, in case you are using it for ad blocking or any other reason.

The internet community criticized the way the ISPA handled the backlash and withdrew the category as a whole. One user comment on Hacker News reads, “You have to love how all their "thoughtful criticisms" of DNS over HTTPS have nothing to do with the things they cited in their nomination of Mozilla as villain. Their issue was explicitly "bypassing UK filtering obligations" not that load of flaming horseshit they just pulled out of their ass in response to the backlash.”

https://twitter.com/VModifiedMind/status/1148682124263866368

Highlights from Mary Meeker’s 2019 Internet trends report
How Verizon and a BGP Optimizer caused a major internet outage affecting Amazon, Facebook, CloudFlare among others
Time for data privacy: DuckDuckGo CEO Gabe Weinberg in an interview with Kara Swisher


Google suffers another Outage as Google Cloud servers in the us-east1 region are cut off

Amrata Joshi
03 Jul 2019
3 min read
Yesterday, Google Cloud servers in the us-east1 region were cut off from the rest of the world as an issue was reported with Cloud Networking and Load Balancing within us-east1. The disruption was traced to physical damage to multiple concurrent fiber bundles that serve network paths in us-east1.

At 10:25 am PT yesterday, the status was updated: “Customers may still observe traffic through Global Load-balancers being directed away from back-ends in us-east1 at this time.” It was later posted on the status dashboard that mitigation work was underway to address the issue with Google Cloud Networking and Load Balancing in us-east1. The rate of errors was decreasing by then, but a few users still faced elevated latency.

Around 4:05 pm PT, the status was updated again: “The disruptions with Google Cloud Networking and Load Balancing have been root caused to physical damage to multiple concurrent fiber bundles serving network paths in us-east1, and we expect a full resolution within the next 24 hours. In the meantime, we are electively rerouting traffic to ensure that customers' services will continue to operate reliably until the affected fiber paths are repaired. Some customers may observe elevated latency during this period. We will provide another status update either as the situation warrants or by Wednesday, 2019-07-03 12:00 US/Pacific tomorrow.”

This outage appears to be the second major one to hit Google's services in recent times. Last month, Google Calendar was down for nearly three hours around the world, and Google Cloud suffered a major outage that took down a number of Google services including YouTube, G Suite and Gmail.

According to a person who works on Google Cloud, the team is experiencing an issue with a subset of the fiber paths that supply the region and is working to resolve it. They have removed most Google.com traffic from the region to prioritize GCP customers. A Google employee commented on the Hacker News thread, “I work on Google Cloud (but I'm not in SRE, oncall, etc.). As the updates to [1] say, we're working to resolve a networking issue. The Region isn't (and wasn't) "down", but obviously network latency spiking up for external connectivity is bad. We are currently experiencing an issue with a subset of the fiber paths that supply the region. We're working on getting that restored. In the meantime, we've removed almost all Google.com traffic out of the Region to prefer GCP customers. That's why the latency increase is subsiding, as we're freeing up the fiber paths by shedding our traffic.”

Google Cloud users are tense about this outage and are waiting for services to be restored.

https://twitter.com/IanFortier/status/1146079092229529600
https://twitter.com/beckynagel/status/1146133614100221952
https://twitter.com/SeaWolff/status/1146116320926359552

Ritiko, a cloud-based EHR company, is also experiencing issues because of the Google Cloud outage, as it hosts its services there.

https://twitter.com/ritikoL/status/1146121314387857408

As of now there is no further update from Google on whether the outage is resolved, but Google expects a full resolution within the next 24 hours. Check this space for new updates and information.

Google Calendar was down for nearly three hours after a major outage
Do Google Ads secretly track Stack Overflow users?
Google open sources its robots.txt parser to make Robots Exclusion Protocol an official internet standard