
Tech News - Cloud Computing

175 Articles
VMworld 2019: VMware Tanzu on Kubernetes, new hybrid cloud offerings, collaboration with multi-cloud platforms and more!

Fatema Patrawala
30 Aug 2019
7 min read
VMware kicked off VMworld 2019 US in San Francisco last week on 25th August and wrapped up yesterday with a series of updates spanning Kubernetes, Azure, security and more. This year's event theme, “Make Your Mark”, aimed at empowering VMworld 2019 attendees to learn, connect and innovate in the world of IT and business. 20,000 attendees from more than 100 countries descended on San Francisco for the event.

VMware CEO Pat Gelsinger took the stage and articulated VMware's commitment and support for TechSoup, a one-stop IT shop for global nonprofits. Gelsinger also put emphasis on the company's 'any cloud, any application, any device, with intrinsic security' strategy. “VMware is committed to providing software solutions to enable customers to build, run, manage, connect and protect any app, on any cloud and any device,” said Gelsinger. “We are passionate about our ability to drive positive global impact across our people, products and the planet.” Let us take a look at the key highlights of the show.

VMworld 2019: CEO's take on shaping tech as a force for good

The opening keynote from Pat Gelsinger had everything one would expect: customer success stories, product announcements and a call for an ethical fix in tech. "As technologists, we can't afford to think of technology as someone else's problem," Gelsinger told attendees, adding, “VMware puts tremendous energy into shaping tech as a force for good.” Gelsinger cited three benefits of technology that ended up opening Pandora's box: free apps and services led to severely altered privacy expectations; ubiquitous online communities led to a crisis in misinformation; and the promise of blockchain has led to illicit uses of cryptocurrencies. "Bitcoin today is not okay, but the underlying technology is extremely powerful," said Gelsinger, who has previously gone on record regarding the detrimental environmental impact of crypto.

This prism of engineering for good, alongside good engineering, can be seen in how emerging technologies are being utilised. With edge, AI and 5G, and cloud as the "foundation... we're about to redefine the application experience," as the VMware CEO put it.

Read also: VMware reaches the goal of using 100% renewable energy in its operations, a year ahead of their 2020 vision

Gelsinger's 2018 keynote was built around the theme of tech 'superpowers': cloud, mobile, AI, and edge. This time, more focus was given to how the edge is developing. Whether it is a thin edge containing a few devices and an SD-WAN connection, a thick edge of a remote data centre with NFV, or something in between, VMware aims to have it all covered. "Telcos will play a bigger role in the cloud universe than ever before," said Gelsinger, referring to the rise of 5G. "The shift from hardware to software [in telco] is a great opportunity for US industry to step in and play a great role in the development of 5G."

VMworld 2019 introduces Tanzu to build, run and manage software on Kubernetes

VMware is moving away from virtual machines toward containerized applications. On the product side, VMware Tanzu was introduced: a new product portfolio that aims to enable enterprise-class building, running, and management of software on Kubernetes. In Swahili, 'tanzu' means the growing branch of a tree; in Japanese, 'tansu' refers to a modular form of cabinetry. For VMware, Tanzu is its growing portfolio of solutions that help build, run and manage modern apps.
Included in this is Project Pacific, a tech preview focused on transforming VMware vSphere into a Kubernetes-native platform. "With Project Pacific, we're bringing the largest infrastructure community, the largest set of operators, the largest set of customers directly to Kubernetes. We will be the leading enabler of Kubernetes," Gelsinger said.

Read also: VMware Essential PKS: Use upstream Kubernetes to build a flexible, cost-effective cloud-native platform

Other product launches included an update to the collaboration program Workspace ONE, including an AI-powered virtual assistant, as well as the launch of CloudHealth Hybrid by VMware. The latter, built on the cloud cost management tool CloudHealth, aims to help organisations save costs across an entire multi-cloud landscape and will be available by the end of Q3.

Collaborate, not compete, with the major cloud providers: Google Cloud, AWS and Microsoft Azure

VMware's extended partnership with Google Cloud, announced earlier this month, led the industry to consider the company's positioning amid the hyperscalers. VMware Cloud on AWS continues to gain traction; Gelsinger said Outposts, the hybrid tool announced at re:Invent last year, is being delivered upon. The company also has partnerships in place with IBM and Alibaba Cloud. Further, VMware on Microsoft Azure is now generally available, with the facility to gradually switch across Azure data centres; by the first quarter of 2020, the plan is to make it available across nine global regions.

Read also: Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

The company's decision not to compete with, but to collaborate with, the biggest public clouds has paid off. Gelsinger also admitted that the company may have contributed to some confusion over what hybrid cloud and multi-cloud truly mean, but his explanation was interesting. With organisations increasingly opting for different clouds for different workloads, and with environments changing, Gelsinger described a frequent pain point for customers nearer the start of their journeys: do they migrate their applications, or do they modernise? Increasingly, customers want both, which is the hybrid option. "We believe we have a unique opportunity for both of these," he said. "Moving to the hybrid cloud enables live migration, no downtime, no refactoring... this is the path to deliver cloud migration and cloud modernisation." As far as multi-cloud was concerned, Gelsinger argued: "We believe technologists who master the multi-cloud generation will own it for the next decade."

Collaboration with NVIDIA to accelerate GPU services on AWS

NVIDIA and VMware announced their intent to deliver accelerated GPU services for VMware Cloud on AWS to power modern enterprise applications, including AI, machine learning and data analytics workflows. These services will enable customers to seamlessly migrate VMware vSphere-based applications and containers to the cloud, unchanged, where they can be modernized to take advantage of high-performance computing, machine learning, data analytics and video processing applications. Through this partnership, VMware Cloud on AWS customers will gain access to a new, highly scalable and secure cloud service consisting of Amazon EC2 bare metal instances accelerated by NVIDIA T4 GPUs and the new NVIDIA Virtual Compute Server (vComputeServer) software.
“From operational intelligence to artificial intelligence, businesses rely on GPU-accelerated computing to make fast, accurate predictions that directly impact their bottom line,” said Jensen Huang, founder and CEO, NVIDIA. “Together with VMware, we’re designing the most advanced GPU infrastructure to foster innovation across the enterprise, from virtualization, to hybrid cloud, to VMware's new Bitfusion data center disaggregation.”

Read also: NVIDIA’s latest breakthroughs in conversational AI: Trains BERT in under an hour, launches Project Megatron to train transformer based models at scale

Apart from this, Gelsinger made special note of VMware's most recent acquisitions, Pivotal and Carbon Black, and discussed where they fit in the VMware stack.

VMware's hybrid cloud platform for next-gen hybrid IT

VMware introduced new and expanded cloud offerings to help customers meet the unique needs of traditional and modern applications. The company's hybrid cloud platform empowers IT operators, developers, desktop administrators, and security professionals to build, run, and manage workloads on a consistent infrastructure across their data center, public cloud, or edge infrastructure of choice. VMware uniquely enables a consistent hybrid cloud platform spanning all major public clouds (AWS, Azure, Google Cloud, IBM Cloud) and more than 60 VMware Cloud Verified partners worldwide. More than 70 million workloads run on VMware; of these, 10 million are in the cloud, running in more than 10,000 data centers operated by VMware Cloud providers.

Take a look at the full list of VMworld 2019 announcements here.

What's new in cloud and virtualization this week?
VMware signs definitive agreement to acquire Pivotal Software and Carbon Black
Pivotal open sources kpack, a Kubernetes-native image build service
Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Oracle directors support billion dollar lawsuit against Larry Ellison and Safra Catz for NetSuite deal

Fatema Patrawala
23 Aug 2019
5 min read
On Tuesday, Reuters reported that Oracle directors gave the go-ahead for a billion-dollar lawsuit filed against Larry Ellison and Safra Catz over the 2016 NetSuite deal. This was made possible by several board members who wrote an extraordinary letter to the Delaware court.

According to Reuters, in 2017, shareholders led by the Firemen’s Retirement System of St. Louis alleged that Oracle directors breached their duties when they approved a $9.3 billion acquisition of NetSuite, a company controlled by Oracle chairman Larry Ellison, at a huge premium above NetSuite’s trading price. Shareholders alleged that Oracle directors sanctioned Ellison’s self-dealing, and also claimed that Oracle’s board members were too entwined with Ellison to be entrusted with the decision of whether the company should sue him and other directors over the NetSuite deal. In an opinion published by Reuters in May 2018, Vice Chancellor Sam Glasscock of the Delaware Chancery Court agreed that shareholders had shown it would have been futile for them to demand action from the board itself.

Three years after closing the $9.3 billion deal to acquire NetSuite, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15th to Sam Glasscock III, Vice Chancellor for the Court of Chancery in Georgetown, Delaware, approving the lawsuit as members of a special board entity known as the Special Litigation Committee. In legal parlance, this kind of lawsuit is known as a derivative suit. According to Justia, “Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty.”

The letter went on to say there was an attempt to settle this suit, originally launched in 2017, through negotiation outside of court, but when that attempt failed, the directors wrote to the court stating that the suit should be allowed to proceed. As per the letter, the lawsuit could be worth billions: “One of the lead lawyers for the Firemen’s fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week’s letter, Oracle’s board effectively unleashed plaintiffs’ lawyers to seek ten-figure damages against its own members.”

Oracle struggled with its cloud footing and ended up buying NetSuite

TechCrunch noted that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder of NetSuite at the time of the acquisition. Oracle was struggling to find its cloud footing in 2016, and it was believed that by buying an established SaaS player like NetSuite, it could build out its cloud business much faster than by trying to develop something similar internally. On Hacker News, a few users commented that Oracle overpaid for NetSuite and enriched Larry Ellison. One comment reads, “As you know people, as you learn about things, you realize that these generalizations we have are, virtually to a generalization, false. Well, except for this one, as it turns out. What you think of Oracle, is even truer than you think it is.
There has been no entity in human history with less complexity or nuance to it than Oracle. And I gotta say, as someone who has seen that complexity for my entire life, it's very hard to get used to that idea. It's like, 'surely this is more complicated!' but it's like: Wow, this is really simple! This company is very straightforward, in its defense. This company is about one man, his alter-ego, and what he wants to inflict upon humanity -- that's it! ...Ship mediocrity, inflict misery, lie our asses off, screw our customers, and make a whole shitload of money. Yeah... you talk to Oracle, it's like, 'no, we don't fucking make dreams happen -- we make money!' ...You need to think of Larry Ellison the way you think of a lawnmower. You don't anthropomorphize your lawnmower, the lawnmower just mows the lawn, you stick your hand in there and it'll chop it off, the end. You don't think 'oh, the lawnmower hates me' -- lawnmower doesn't give a shit about you, lawnmower can't hate you. Don't anthropomorphize the lawnmower. Don't fall into that trap about Oracle.”

Oracle does “organizational restructuring” by laying off 100s of employees
IBM, Oracle under the scanner again for questionable hiring and firing policies
The tug of war between Google and Oracle over API copyright issue has the future of software development in the crossfires

Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users

Bhagyashree R
01 Aug 2019
3 min read
At its Cloud Next 2019 conference in Tokyo, Google unveiled new security capabilities coming to its enterprise products: G Suite Enterprise, Google Cloud, and Cloud Identity. These capabilities are intended to help enterprise customers protect their “users, data, and applications in the cloud.” Google is hosting the two-day event (July 31 - August 1) to showcase its cloud products. Among the key announcements are Advanced Protection Program support for enterprise products rolling out soon, expanded availability of Titan Security Keys, improved anomaly detection in G Suite Enterprise, and more.

Advanced Protection Program for high-risk employees

The Advanced Protection Program was launched in 2017 to protect the personal Google accounts of users who are at high risk of online threats like phishing. The program goes beyond traditional two-step verification by requiring a physical security key in addition to your password when signing in to your Google account. The program will be available in beta in the coming days for G Suite, Google Cloud Platform (GCP) and Cloud Identity customers. It will enable enterprise admins to enforce a set of security policies for employees who are at high risk of targeted attacks, such as IT administrators and business executives. The policies include enforcing the use of Fast Identity Online (FIDO) keys like Titan Security Keys, automatically blocking access to non-trusted third-party apps, and enabling enhanced scanning of incoming emails.

Wider availability of Titan Security Keys

Seeing growing demand for Titan Security Keys in the US, Google has now expanded their availability to Canada, France, Japan, and the United Kingdom (UK). The keys are available as bundles of two: USB/NFC and Bluetooth. You can use these keys anywhere FIDO security keys are supported, including Coinbase, Dropbox, Facebook, GitHub, Salesforce, Stripe, Twitter, and more.

Anomalous activity alerts in G Suite

G Suite Enterprise and G Suite Enterprise for Education admins can now opt in to receive anomalous activity alerts in the G Suite alert center. G Suite uses machine learning to analyze security signals within Google Drive to detect potential security risks, including data exfiltration and policy violations when sharing and downloading files. Google also announced that it will be rolling out support for password-vaulted apps in Cloud Identity. Karthik Lakshminarayanan and Vidya Nagarajan from the Google Cloud team wrote in a blog post, “The combination of standards-based- and password-vaulted app support will deliver one of the largest app catalogs in the industry, providing seamless one-click access for users and a single point of management, visibility, and control for admins.” You can read the official announcement by Google to know more in detail.

Google Project Zero reveals six “interactionless” bugs that can affect iOS via Apple’s iMessage
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless
Understanding security features in the Google Cloud Platform (GCP)

#WeWontBuildIt: Amazon workers demand company to stop working with Palantir and take a stand against ICE

Fatema Patrawala
30 Jul 2019
4 min read
On Monday, a group of Amazon employees sent an internal email to the We Won’t Build It mailing list, calling on Amazon to stop working with Palantir. Palantir, a data analytics company founded by Peter Thiel, one of President Trump’s most vocal supporters in Silicon Valley, has a strong association with Immigration and Customs Enforcement (ICE).

https://twitter.com/WeWontBuildIt/status/1155872860742664194

Last year in June, an alliance of more than 500 Amazon employees signed a petition addressed to CEO Jeff Bezos and AWS head Andy Jassy, asking Amazon to abandon its contracts with government agencies. It seems that those protests are ramping up again. The email sent to employee mailing lists within Amazon Web Services demanded that Palantir be removed from Amazon’s cloud for violating its terms of service. It also called on Amazon to take a stand against ICE by making a statement establishing its position against immigration raids, deportations and camps for migrants at the border. The employees have also demanded that Amazon stop selling its facial recognition tech to government agencies.

https://twitter.com/WeWontBuildIt/status/1155872862055485441

In May, Amazon shareholders had rejected a proposal to ban the sale of its facial recognition tech to governments. They had also rejected eleven other proposals made by employees, including a climate resolution, salary transparency and other issues. "The world is watching the abuses in ICE's concentration camps unfold. We know that our company should, and can do better,” the email read.

The protests broke out at Amazon’s AWS Summit, held in New York last week on Thursday. As Amazon CTO Werner Vogels gave a presentation, a group led by a man identified in a tweet as a tech worker interrupted to protest Amazon's ties with ICE.

https://twitter.com/altochulo/status/1149305189800775680
https://twitter.com/MaketheRoadNY/status/1149306940377448449

Vogels was caught off guard by the protests but continued on about the specifics of AWS, according to ZDNet. “I’m more than willing to have a conversation, but maybe they should let me finish first,” Vogels said amidst protesters, whose audio was cut off on Amazon’s official livestream of the event, per ZDNet. “We’ll all get our voices heard,” he said before returning to his planned speech.

According to Business Insider, Palantir has a $51 million contract with ICE, which entails providing software to gather data on undocumented immigrants' employment information, phone records, immigration history and similar information. Its software is hosted in the AWS cloud. The email states that Palantir enables ICE to violate the rights of others and that working with such a company is harmful to Amazon’s reputation. The employees also state that their protest is in the spirit of similar actions at companies including Wayfair, Microsoft and Salesforce, where workers have protested to get their employers to cut ties with ICE and US Customs and Border Protection (CBP).

Amazon has been facing increasing pressure from its employees. Last week, workers protested on Amazon Prime Day, demanding safe working conditions and fair wages. Amazon, which typically takes a cursory view of such employee outcry, has so far given no indication that it will reconsider providing services to Palantir and other law enforcement agencies. Instead, the company argued that the government should determine what constitutes “acceptable use” of the type of technology it sells.
“As we’ve said many times and continue to believe strongly, companies and government organizations need to use existing and new technology responsibly and lawfully,” Amazon said to BuzzFeed News. “There is clearly a need for more clarity from governments on what is acceptable use of AI and ramifications for its misuse, and we’ve provided a proposed legislative framework for this. We remain eager for the government to provide this additional clarity and legislation, and will continue to offer our ideas and specific suggestions.”

Other tech worker groups, like Google Walkout For Real Change and Ban Google for Pride, stand in solidarity with the Amazon workers on this protest.

https://twitter.com/GoogleWalkout/status/1155976287803998210
https://twitter.com/NoPrideForGoog/status/1155906615930806276

#TechWontBuildIt: Entropic maintainer calls for a ban on Palantir employees contributing to the project and asks other open source communities to take a stand on ethical grounds
Amazon workers protest on its Prime day, demand a safe work environment and fair wages
Amazon shareholders reject proposals to ban sale of facial recognition tech to govt and to conduct independent review of its human and civil rights impact

Dropbox walks back its own decision; brings back support for ZFS, XFS, Btrfs, and eCryptFS on Linux

Vincy Davis
23 Jul 2019
3 min read
Today, Dropbox notified users that it has brought back support for ZFS and XFS on 64-bit Linux systems, and Btrfs and eCryptFS on all Linux systems, in its Beta Build 77.3.127. The support note in the Dropbox forum reads, “Add support for zfs (on 64-bit systems only), eCryptFS, xfs (on 64-bit systems only), and btrfs filesystems in Linux.”

Last year in November, Dropbox notified users that it was “ending support for Dropbox syncing to drives with certain uncommon file systems. The supported file systems are Ext4 filesystem on Linux, NTFS for Windows, and HFS+ or APFS for Mac.” Dropbox explained that a supported file system is necessary because Dropbox uses extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync (see the sketch below). The post also mentioned that Dropbox would support only the most common file systems that support X-attrs, to ensure stability and consistency for its users.

After Dropbox discontinued support for these Linux filesystems, many developers switched to other services such as Google Drive and Box. This is speculated to be one of the reasons why Dropbox has reversed its earlier decision; however, no official statement from Dropbox about bringing the support back has been made yet.

Many users have expressed resentment over Dropbox's back-and-forth. A user on Hacker News says, “Too late. I have left Dropbox because of their stance on Linux filesystems, price bump with unnecessary features, and the continuous badgering to upgrade to its business. It's a great change though for those who are still on Dropbox. Their sync is top-notch.” A Redditor comments, “So after I stopped using Dropbox they do care about me as a user after all? Linux users screamed about how nonsensical the original decision was. Maybe ignoring your users is not such a good idea after all? I moved to Cozy Drive - it's not perfect, but has native Linux client, is Europe based (so I am protected by EU privacy laws) and is pretty good as almost drop-in replacement.” Another Redditor said, “Too late for me, I was a big dropbox user for years, they dropped support for modern file systems and I dropped them. I started using Syncthing to replace the functionality I lost with them.”

A few developers are still happy to see Dropbox supporting the popular Linux filesystems again. A user on Hacker News comments, “That's good news. Happy to see Dropbox thinking about the people who stuck with them from day 1. In the past few years they have been all over the place, trying to find their next big thing and in the process also neglecting their non-enterprise customers. Their core product is still the best in the market and an important alternative to Google.”

Fedora announces the first preview release of Fedora CoreOS as an automatically updating Linux OS for containerized workloads
Linux Mint 19.2 beta releases with Update Manager, improved menu and much more!
Linux kernel announces a patch to allow 0.0.0.0/8 as a valid address range
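For readers curious what the extended attributes at the heart of Dropbox's filesystem requirement look like in practice, here is a minimal sketch on Linux using Python's standard library; the attribute name is a made-up illustration, not Dropbox's actual identifier:

```python
import os

path = "example.txt"
open(path, "w").close()  # create an empty file to tag

# Attach a small key-value pair to the file itself. The "user." namespace
# is the one unprivileged processes may write on Linux, and this call only
# succeeds on filesystems with xattr support (ext4, XFS, Btrfs, ZFS, ...).
os.setxattr(path, "user.sync_id", b"abc123")

# Read it back; a sync client could use such an ID to recognize a file
# across renames and partial syncs.
print(os.getxattr(path, "user.sync_id"))  # b'abc123'

# List every extended attribute currently set on the file.
print(os.listxattr(path))  # ['user.sync_id']
```

A filesystem that silently drops these attributes would break exactly this kind of bookkeeping, which is why Dropbox tied its support policy to xattr-capable filesystems.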

Hello 'gg', a new OS framework to execute super-fast apps on "1000s of transient functional containers"

Bhagyashree R
15 Jul 2019
4 min read
Last week at the USENIX Annual Technical Conference (ATC) 2019, a team of researchers introduced 'gg', an open-source framework that helps developers execute applications using thousands of parallel threads on a cloud function service to achieve near-interactive completion times. "In the future, instead of running these tasks on a laptop, or keeping a warm cluster running in the cloud, users might push a button that spawns 10,000 parallel cloud functions to execute a large job in a few seconds from start. gg is designed to make this practical and easy," the paper reads. At USENIX ATC, leading systems researchers present their cutting-edge systems research and gain insight into topics like virtualization, network management and troubleshooting, cloud and edge computing, security, privacy, and more.

Why the gg framework was introduced

Cloud functions, better known as serverless computing, give developers finer granularity and lower latency. Though they were introduced for event handling and invoking web microservices, their granularity and scalability make them a good candidate for creating a “burstable supercomputer-on-demand”: a system that launches a burst-parallel swarm of thousands of cloud functions, all working on the same job. The goal is to deliver results to an interactive user much faster than their own computer or a freshly booted cold cluster could, at a lower cost than maintaining a warm cluster for occasional tasks. However, building applications on swarms of cloud functions poses various challenges. The paper lists some of them:

- Workers are stateless and may need to download large amounts of code and data on startup
- Workers have limited runtime before they are killed
- On-worker storage is limited but much faster than off-worker storage
- The number of available cloud workers depends on the provider's overall load and can't be known precisely upfront
- Worker failures occur when running at large scale
- Libraries and dependencies differ in a cloud function compared with a local machine
- Latency to the cloud makes roundtrips costly

How gg works

Previous research has addressed only some of these challenges; gg aims to address all of the principal ones faced by burst-parallel cloud-function applications. With gg, developers and users can build applications that burst from zero to thousands of parallel threads to achieve low latency for everyday tasks.

[Diagram: gg's composition. Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers]

The gg framework builds applications on an abstraction of transient, functional containers known as thunks (see the sketch below). Applications express their jobs in terms of interrelated thunks, or Linux containers, and then schedule, instantiate, and execute those thunks on a cloud-functions service. The framework is capable of containerizing and executing existing programs, such as software compilation, unit tests, and video encoding, with the help of short-lived cloud functions. In some cases this gives substantial performance gains, and depending on the frequency of the task it can be less expensive than keeping a comparable cluster running continuously. The functional approach and fine-grained dependency management of gg give significant performance benefits when compiling large programs from a cold start.
Here's a table showing a summary of the results for compiling Inkscape, an open-source software: Source: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers When running “cold” on AWS Lambda, gg was nearly 5x faster than an existing icecc system, running on a 48-core or 384-core cluster of running VMs. To know more in detail, read the paper: From Laptop to Lambda: Outsourcing Everyday Jobs to Thousands of Transient Functional Containers. You can also check out gg's code on GitHub. Also, watch the talk in which Keith Winstein, an assistant professor of Computer Science at Stanford University, explains the purpose of GG and demonstrates how it exactly works: https://www.youtube.com/watch?v=O9qqSZAny3I&t=55m15s Cloud computing trends in 2019 Cloudflare's Workers enable containerless cloud computing powered by V8 Isolates and WebAssembly Serverless Computing 101
Google Cloud and Nvidia Tesla set new AI training records with MLPerf benchmark results

Amrata Joshi
15 Jul 2019
3 min read
Last week, the MLPerf effort released the results for MLPerf Training v0.6, the second round of results from its machine learning training performance benchmark suite. These benchmarks are used by AI practitioners to adopt common standards for measuring the performance and speed of the hardware used to train AI models. As per the results, Nvidia and Google Cloud set new AI training time records.

MLPerf v0.6 studies the training performance of machine learning acceleration hardware in six categories: image classification, object detection (lightweight), object detection (heavyweight), translation (recurrent), translation (non-recurrent) and reinforcement learning. MLPerf is an association of more than 40 companies and researchers from leading universities, and the MLPerf benchmark suites are becoming the industry standard for measuring machine learning performance.

Nvidia's Tesla V100 Tensor Core GPUs, running on an Nvidia DGX SuperPOD, completed on-premise training of the ResNet-50 model for image classification in 80 seconds. Nvidia was also the only vendor to submit results in all six categories. In 2017, when Nvidia launched the DGX-1 server, the same model training took 8 hours. In a statement to ZDNet, Paresh Kharya, director of Accelerated Computing for Nvidia, said, “The progress made in just a few short years is staggering.” He further added, “The results are a testament to how fast this industry is moving.”

Google Cloud entered five categories and set three records for performance at scale with its Cloud TPU v3 Pods, Google's latest generation of supercomputers built specifically for machine learning. Each record-setting run used less than two minutes of compute time. The TPU v3 Pods showed record performance in machine translation from English to German, training the Transformer model in 51 seconds; Cloud TPU v3 Pods train models over 84% faster than the fastest on-premise systems in the MLPerf Closed Division. TPU Pods also achieved record performance in the image classification benchmark of the ResNet-50 model with the ImageNet data set, as well as model training in another object detection category, finishing in 1 minute and 12 seconds.

In a statement to ZDNet, Google Cloud's Zak Stone said, "There's a revolution in machine learning.” He further added, "All these workloads are performance-critical. They require so much compute, it really matters how fast your system is to train a model. There's a huge difference between waiting for a month versus a couple of days."

Google suffers another outage as Google Cloud servers in the us-east1 region are cut off
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google Cloud introduces Traffic Director Beta, a networking management tool for service mesh

Amazon EventBridge: An event bus with higher security and speed to boost AWS serverless ecosystem

Vincy Davis
15 Jul 2019
4 min read
Last week, Amazon had pretty big news for its AWS serverless ecosystem, one which is being considered the biggest thing since AWS Lambda itself. A few days ago, with an aim to help customers integrate their own AWS applications with Software as a Service (SaaS) applications, Amazon EventBridge was launched. EventBridge is an asynchronous, fast, clean, and easy-to-use event bus that publishes events specific to each AWS customer. The SaaS application and code running on AWS are now independent of a shared communication protocol, runtime environment, or programming language. This allows Lambda functions to handle events from a SaaS application as well as route events to other AWS targets.

Similar to CloudWatch Events, EventBridge has a default event bus that accepts events from AWS services and calls to PutEvents. One distinction is that in EventBridge, each partner application that a user subscribes to also creates an event source, which can then be associated with an event bus in an AWS account. AWS users can select any of their event buses, create EventBridge rules, and select targets to invoke when an incoming event matches a rule.

Important terms for understanding Amazon EventBridge:

- Partner: an organization that has integrated its SaaS application with EventBridge.
- Customer: an organization that uses AWS and has subscribed to a partner's SaaS application.
- Partner name: a unique name that identifies an Amazon EventBridge partner.
- Partner event bus: an event bus that is used to deliver events from a partner to AWS.

How EventBridge works for partners and customers

A partner lets a customer enter an AWS account number and select an AWS region. Next, the partner calls CreatePartnerEventSource in the desired region and informs the customer of the event source name. After accepting the invitation to connect, the customer waits for the status of the event source to change to Active. Each time an event of interest to the customer occurs, the partner calls PutPartnerEvents, referencing the event source.

[Image source: Amazon]

On the customer side, it works the same way. The customer accepts the invitation to connect by calling CreateEventBus to create an event bus associated with the event source, then adds rules and targets to prepare Lambda functions to process the events (a rough sketch of these calls follows the links below). Associating the event source with an event bus also activates the source and starts the flow of events; customers can use DeactivateEventSource and ActivateEventSource to control the flow.

Amazon EventBridge launches with ten partner event sources, including Datadog, Zendesk, PagerDuty, Whispir, Segment, Symantec and more. This is big news for anyone building serverless applications: with built-in partner integrations, these partners can directly trigger an event in EventBridge without the need for a webhook. Thus “AWS is the mediator rather than HTTP,” notes Paul Johnston, the ServerlessDays cofounder. He also adds, “The security implications of partner integrations are the first thing that springs to mind.
The speed implications will almost certainly be improved as well, with those partners almost certainly using AWS events at the other end as well.”

https://twitter.com/PaulDJohnston/status/1149629728065650693
https://twitter.com/PaulDJohnston/status/1149629729571397632

Users are excited about the kind of creative freedom Amazon EventBridge will bring to their products.

https://twitter.com/allPowerde/status/1149792437738622976
https://twitter.com/ShortJared/status/1149314506067255304
https://twitter.com/petrabarus/status/1149329981975040000
https://twitter.com/TobiM/status/1149911798256152576

Users with a SaaS application can integrate with EventBridge Partner Integration. Visit the Amazon blog to learn about implementing EventBridge.

Amazon’s partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon Aurora makes PostgreSQL Serverless generally available
Amazon launches VPC Traffic Mirroring for capturing and inspecting network traffic
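For a sense of what the customer-side calls look like, here is a hedged boto3 sketch of the flow described above; the partner event source name and the Lambda ARN are placeholders, not real resources:

```python
import json
import boto3

events = boto3.client("events")

# Placeholder partner event source name, following the aws.partner/...
# convention; the real name is shown in the EventBridge console.
SOURCE = "aws.partner/example.com/123456789012/my-app"

# 1. Accept the partner's invitation: create an event bus associated with
#    the partner event source (the bus name must match the source name).
events.create_event_bus(Name=SOURCE, EventSourceName=SOURCE)

# 2. Add a rule on that bus matching events from the partner source.
events.put_rule(
    Name="partner-events-rule",
    EventBusName=SOURCE,
    EventPattern=json.dumps({"source": [SOURCE]}),
)

# 3. Route matching events to a Lambda function target (placeholder ARN).
events.put_targets(
    Rule="partner-events-rule",
    EventBusName=SOURCE,
    Targets=[{
        "Id": "handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-partner-events",
    }],
)
```

These are the standard CreateEventBus, PutRule, and PutTargets operations; the partner-side PutPartnerEvents call happens in the partner's own account.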

Google launches beta version of Deep Learning Containers for developing, testing and deploying ML applications

Amrata Joshi
28 Jun 2019
3 min read
Yesterday, Google announced the beta availability of Deep Learning Containers, a new cloud service that provides environments for developing, testing and deploying machine learning applications. In March this year, Amazon launched a similar offering, AWS Deep Learning Containers, with Docker image support for easy deployment of custom machine learning (ML) environments. A major advantage of Deep Learning Containers is the ability to test machine learning applications on-premises and quickly move them to the cloud.

Support for PyTorch, TensorFlow, scikit-learn and R

Deep Learning Containers, launched by Google Cloud Platform (GCP), can run both in the cloud and on-premise. They support machine learning frameworks like PyTorch, TensorFlow 2.0, and TensorFlow 1.13. AWS's Deep Learning Containers support the TensorFlow and Apache MXNet frameworks; Google's ML containers don't support Apache MXNet, but come with PyTorch, TensorFlow, scikit-learn and R pre-installed.

Various tools and packages

GCP Deep Learning Containers consist of several performance-optimized Docker containers that come with various tools for running deep learning algorithms. These include preconfigured Jupyter Notebooks, interactive tools used to work with and share code, visualizations, equations and text, and Google Kubernetes Engine clusters, used for orchestrating multiple container deployments. The containers also ship with packages and tools such as Nvidia's CUDA, cuDNN, and NCCL.

Docker images work in the cloud and on-premises

The Docker images work in the cloud, on-premises, and across GCP products and services such as Google Kubernetes Engine (GKE), Compute Engine, AI Platform, Cloud Run, Kubernetes, and Docker Swarm (see the sketch below). Mike Cheng, software engineer at Google Cloud, said in a blog post, “If your development strategy involves a combination of local prototyping and multiple cloud tools, it can often be frustrating to ensure that all the necessary dependencies are packaged correctly and available to every runtime.” He further added, “Deep Learning Containers address this challenge by providing a consistent environment for testing and deploying your application across GCP products and services, like Cloud AI Platform Notebooks and Google Kubernetes Engine (GKE).” For more information, visit the AI Platform Deep Learning Containers documentation.

Do Google Ads secretly track Stack Overflow users?
CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks
Curl’s lead developer announces Google’s “plan to reimplement curl in Libcrurl”
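As a rough illustration of the local-prototyping workflow, here is a sketch using the Docker SDK for Python; the image path and the JupyterLab port are assumptions based on the announcement, so check the Deep Learning Containers documentation for the actual names:

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Assumed image path for the prebuilt TensorFlow CPU container; verify
# the current repository and tags in Google's documentation.
IMAGE = "gcr.io/deeplearning-platform-release/tf-cpu"

client.images.pull(IMAGE)  # fetch the prebuilt image locally

# Start the container in the background, exposing the in-container
# Jupyter environment (assumed to listen on port 8080) on localhost.
container = client.containers.run(
    IMAGE,
    detach=True,
    ports={"8080/tcp": 8080},
)
print(container.short_id)  # Jupyter should then be reachable on localhost:8080
```

The same image can later be deployed unchanged to GKE or AI Platform, which is the consistency benefit Cheng describes above.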

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!

Vincy Davis
20 Jun 2019
5 min read
Update: On July 23rd, the Enhancements Lead of Kubernetes 1.15 at VMware, Kenny Coleman, published a “What's New in Kubernetes 1.15” video with the Cloud Native Computing Foundation (CNCF). In the video, he explains in detail the three major new features in Kubernetes 1.15, namely Dynamic HA Clusters with kubeadm, Volume Cloning and CustomResourceDefinitions (CRDs), and highlights the importance of each to users. Watch the video below for the full talk. https://www.youtube.com/watch?v=eq7dgHjPpzc

On June 19th, the Kubernetes team announced the release of Kubernetes 1.15, which consists of 25 enhancements: 2 moving to stable, 13 in beta, and 10 in alpha. The key themes of this release are extensibility around core Kubernetes APIs, cluster lifecycle stability, and usability improvements. This is Kubernetes' second release this year. The previous version, Kubernetes 1.14, released three months ago, had 10 stable enhancements, the most stable features delivered in a single release. In an interview with The New Stack, Claire Laurence, the release team lead for Kubernetes, said, “We’ve had a fair amount of features progress to beta. I think what we’ve been seeing a lot with these alpha and beta features as they progress is a lot of continued focus on stability and overall improvement before indicating that those features are stable.” Let's have a brief look at the new features and updates.

#1 Extensibility around core Kubernetes APIs

The theme of the new developments around CustomResourceDefinitions is data consistency and native behavior: a user should not notice whether the interaction is with a CustomResource or with a Golang-native resource. From v1.15 onwards, Kubernetes checks each CRD schema against a restriction called “structural schema”, which enforces non-polymorphic and complete typing of each field in a CustomResource. Of the five enhancements in this area, 'CustomResourceDefinition Defaulting' is an alpha release. Defaults are specified using the default keyword in the OpenAPI validation schema and will be available as alpha in Kubernetes 1.15 for structural schemas (see the sketch at the end of this article). The other four enhancements are in beta:

- CustomResourceDefinition Webhook Conversion: CustomResourceDefinitions gain the ability to convert between different versions on the fly, just as users are used to from native resources.
- CustomResourceDefinition OpenAPI Publishing: OpenAPI publishing for CRDs is available with Kubernetes 1.15 as beta, but only for structural schemas.
- CustomResourceDefinitions Pruning: pruning is the automatic removal of unknown fields in objects sent to a Kubernetes API. A field is unknown if it is not specified in the OpenAPI validation schema. Pruning enforces that only data structures specified by the CRD developer are persisted to etcd. This matches the behaviour of native resources and is available for CRDs as beta in 1.15.
- Admission Webhook Reinvocation and Improvements: in earlier versions, mutating webhooks were called only once, in alphabetical order, so an earlier webhook could not react to the output of webhooks called later in the chain. With Kubernetes 1.15, mutating webhooks can opt in to at least one re-invocation by specifying reinvocationPolicy: IfNeeded. If a later mutating webhook modifies the object, the earlier webhook gets a second chance.
#2 Cluster lifecycle stability and usability improvements

The cluster lifecycle building block, kubeadm, continues to receive the features and stability work needed for bootstrapping production clusters efficiently. kubeadm has promoted high availability (HA) capability to beta, allowing users to use the familiar kubeadm init and kubeadm join commands to configure and deploy an HA control plane. Certificate management has become more robust in 1.15, with kubeadm seamlessly rotating all certificates before expiry. The kubeadm configuration file API moves from v1beta1 to v1beta2 in 1.15, and kubeadm now has its own new logo.

#3 Continued improvement of CSI

In Kubernetes 1.15, the Storage Special Interest Group (SIG Storage) enables migration of in-tree volume plugins to the Container Storage Interface (CSI). SIG Storage worked on bringing CSI to feature parity with in-tree functionality, including resizing and inline volumes, and introduces new alpha functionality in CSI that doesn't exist in the Kubernetes storage subsystem yet, like volume cloning. Volume cloning enables users to specify another PVC as a “DataSource” when provisioning a new volume. If the underlying storage system supports this functionality and implements the “CLONE_VOLUME” capability in its CSI driver, the new volume becomes a clone of the source volume.

Additional feature updates

- Support for Go modules in Kubernetes core
- Continued preparation for cloud provider extraction and code organization; the cloud provider code has been moved to kubernetes/legacy-cloud-providers for easier removal later and external consumption
- kubectl get and describe now work with extensions
- Nodes now support third-party monitoring plugins
- A new scheduling framework for scheduler plugins is now alpha
- The ExecutionHook API, designed to trigger hook commands in containers for different use cases, is now alpha
- The extensions/v1beta1, apps/v1beta1, and apps/v1beta2 APIs continue to be deprecated and will eventually be retired in the next version, 1.16

To know about the additional features in detail, check out the release notes.

https://twitter.com/markdeneve/status/1141135440336039936
https://twitter.com/IanColdwater/status/1141485648412651520

For more details on Kubernetes 1.15, check out the Kubernetes blog.

HAProxy 2.0 released with Kubernetes Ingress controller, layer 7 retries, polyglot extensibility, gRPC support and more
Red Hat releases OpenShift 4 with adaptability, Enterprise Kubernetes and more!
Linkerd 2.3 introduces Zero-Trust Networking for Kubernetes
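To illustrate the structural-schema and defaulting ideas from the extensibility section above, here is a hedged sketch of a CRD manifest expressed as a Python dict; the Widget resource is hypothetical, and in 1.15 defaulting is alpha and gated behind the relevant feature gate:

```python
# A hypothetical CRD with a structural schema and one defaulted field.
# "Structural" means every field is fully typed, with no polymorphism.
crd = {
    "apiVersion": "apiextensions.k8s.io/v1beta1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "names": {"kind": "Widget", "plural": "widgets"},
        "scope": "Namespaced",
        "versions": [{"name": "v1", "served": True, "storage": True}],
        "validation": {
            "openAPIV3Schema": {
                "type": "object",
                "properties": {
                    "spec": {
                        "type": "object",
                        "properties": {
                            # If a client omits replicas, the API server
                            # fills in 1, just as native resources do.
                            "replicas": {"type": "integer", "default": 1},
                        },
                    },
                },
            }
        },
    },
}

# Posting it with the official Python client (assumes a configured kubeconfig):
# from kubernetes import client, config
# config.load_kube_config()
# client.ApiextensionsV1beta1Api().create_custom_resource_definition(crd)
```

Pruning, described above, is the flip side of the same schema: any field a client sends that is not declared here would be dropped rather than persisted to etcd.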

MongoDB announces new cloud features, beta version of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search and more!

Amrata Joshi
19 Jun 2019
3 min read
Yesterday, the team at MongoDB announced new cloud services and features that offer a better way to work with data. The beta versions of MongoDB Atlas Data Lake and MongoDB Atlas Full-Text Search give users access to new features in a fully managed MongoDB environment.

MongoDB Charts adds embedded charts in web applications

The general availability of MongoDB Charts helps customers create charts and graphs, build and share dashboards, and embed these directly into web apps for better user experiences. MongoDB Charts is generally available to Atlas as well as on-premise customers, providing real-time visualization of MongoDB data. New features include embedded charts in external web applications, geospatial data visualization with new map charts, and built-in workload isolation to eliminate the impact of analytics queries on an operational application.

Dev Ittycheria, CEO and President, MongoDB, said, “Our new offerings radically expand the ways developers can use MongoDB to better work with data.” He further added, “We strive to help developers be more productive and remove infrastructure headaches --- with additional features along with adjunct capabilities like full-text search and data lake. IDC predicts that by 2025 global data will reach 175 Zettabytes and 49% of it will reside in the public cloud. It’s our mission to give developers better ways to work with data wherever it resides, including in public and private clouds.”

MongoDB Query Language comes to MongoDB Atlas Data Lake

MongoDB Atlas Data Lake lets customers quickly query data on S3 in any format, including BSON, CSV, JSON, TSV, Parquet and Avro, with the MongoDB Query Language (MQL). A major plus point of MQL is that it is expressive: developers can use the same query language across data on S3 as on their databases, making querying massive data sets easy and cost-effective (a hedged sketch follows below). With MQL added to Atlas Data Lake, users can run queries and explore their data by giving access to existing S3 storage buckets with a few clicks from the MongoDB Atlas console. Since Atlas Data Lake is completely serverless, there is no infrastructure to set up or manage, and customers pay only for the queries they run when actively working with the data. Availability of Atlas Data Lake on Google Cloud Storage and Azure Storage is planned for the future.

Atlas Full-Text Search offers rich text search capabilities

Atlas Full-Text Search offers rich text search capabilities based on Apache Lucene 8 against fully managed MongoDB databases, with no additional infrastructure or systems to manage. Full-Text Search helps end users filter, rank, and sort their data to bring out the most relevant results, so users are not required to pair their database with an external search engine. To know more about this news, check out the official press release.

12,000+ unsecured MongoDB databases deleted by Unistellar attackers
MongoDB is going to acquire Realm, the mobile database management system, for $39 million
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative’s approval process
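Because Atlas Data Lake exposes a MongoDB-compatible endpoint, querying S3-backed data should look like querying any MongoDB database with a standard driver. Here is a minimal pymongo sketch; the connection string, database, and collection names are placeholders, since Atlas generates the real URI when you map a bucket:

```python
from pymongo import MongoClient

# Placeholder URI; Atlas provides the actual Data Lake connection string
# once an S3 bucket has been mapped in the Atlas console.
client = MongoClient("mongodb://<user>:<password>@datalake0.example.mongodb.net/")

# Files in S3 (e.g. JSON or Parquet) surface as collections, so ordinary
# MQL filters and aggregations work against them unchanged.
sales = client["datalake"]["sales"]

# A simple filter, exactly as against a live MongoDB database.
for doc in sales.find({"year": 2019, "region": "EMEA"}).limit(5):
    print(doc)

# An aggregation pipeline over the same S3-backed collection.
totals = sales.aggregate([
    {"$match": {"year": 2019}},
    {"$group": {"_id": "$region", "revenue": {"$sum": "$amount"}}},
])
for row in totals:
    print(row)
```

The pay-per-query model mentioned above applies here: nothing runs until find or aggregate is issued.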

Microsoft finally makes Hyper-V Server 2019 available, after a delay of more than six months

Vincy Davis
18 Jun 2019
3 min read
Last week, Microsoft announced that Hyper-V Server, one of the variants in the Windows 10 October 2018/1809 release, is finally available on the Microsoft Evaluation Center. This release comes after a delay of more than six months since the re-release of Windows Server 1809/Server 2019 in early November. It has also been announced that Hyper-V Server 2019 will be available to Visual Studio Subscription customers by June 19, 2019.

Microsoft Hyper-V Server is a free product that includes all the great Hyper-V virtualization features of the Datacenter Edition. It is ideal for running Linux virtual machines or VDI VMs.

Microsoft had originally released Windows Server 2019 in October 2018, but had to pull both the client and server versions of 1809 down to investigate reports of users missing files after updating to the latest Windows 10 feature update. Microsoft then re-released Windows Server 1809/Server 2019 in early November 2018, but without Hyper-V Server 2019.

Read more: Microsoft fixes 62 security flaws on Patch Tuesday and re-releases Windows 10 version 1809 and Windows Server 2019

Early this year, Microsoft made Windows Server 2019 evaluation media available on the Evaluation Center, but Hyper-V Server 2019 was still missing. Though no official statement was provided, it is suspected the delay was due to errors with Remote Desktop Services (RDS). Later in April, Microsoft officials stated that they had found some issues with the media and would release an update soon. Now that Hyper-V Server 2019 is finally available, Windows Server 2019 users can be at ease. Users who managed to download the original release of Hyper-V Server 2019 while it was available are advised to delete it and install the new version when it becomes available on June 19, 2019.

Users are happy with this news, but are still wondering what took Microsoft so long to release Hyper-V Server 2019. https://twitter.com/ProvoSteven/status/1139926333839028224

People are also skeptical about the product quality. A user on Reddit states, “I'm shocked, shocked I tell you! Honestly, after nearly 9 months of MS being unable to release this, and two months after they said the only thing holding it back were 'problems with the media', I'm not sure I would trust this edition. They have yet to fully explain what it is that held it back all these months after every other Server 2019 edition was in production.”

Microsoft’s Xbox team at E3 2019: Project Scarlett, AI-powered Flight Simulator, Keanu Reeves in Cyberpunk 2077, and more
Microsoft quietly deleted 10 million faces from MS Celeb, the world’s largest facial recognition database
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]

Joyent Public Cloud to reach End-of-Life in November

Amrata Joshi
07 Jun 2019
4 min read
Yesterday, Joyent announced its departure from the public cloud space. The Joyent Public Cloud, including Triton Compute and Triton Object Storage (Manta), stopped accepting new customers as of June 6, 2019, and will discontinue serving existing customers when it reaches end-of-life (EOL) on November 9, 2019.

Samsung acquired Joyent in 2016 after exploring Manta, Joyent's object storage system, for implementation; Samsung liked the product and bought the company. In 2014, Joyent was even praised by Gartner in its IaaS Magic Quadrant for having a “unique vision.” The company went on to develop a single-tenant cloud offering for cloud-mature, hyperscale users such as Samsung, who also demand vastly improved cloud costs. Since more resources are required to expand that single-tenant cloud business, the company had to make this call. The official blog post reads, “As that single-tenant cloud business has expanded, the resources required to support it have grown as well, which has led us to a difficult decision.” The team will continue to build functionality for their open source Triton offering, complemented by commercial support options for running Triton-equivalent private clouds in a single-tenant model.

Current customers now have five months to find a new home. They need to migrate, back up, or retrieve data running or stored in the Joyent Cloud before November 9th; after that date, the company will be removing compute and data from the current public cloud and will not be capturing backups of any customer data. Joyent is working to assist its customers through the transition with the help of its partners. The primary partners involved include OVH, Microsoft Azure, and Redapt Attunix, with additional partners being finalized. Users might choose to deploy the same open source software that powers the Joyent Public Cloud in their own datacenter, or on a bare-metal-as-a-service (BMaaS) provider like SoftLayer, with the company's ongoing support. For those who don't have the scale to run their own datacenter or BMaaS, Joyent is evaluating different options to support this transition and make it as smooth as possible.

Steve Tuck, Joyent president and chief operating officer (COO), wrote in the blog post, “To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home.” He further added, “We are truly grateful for your business and the commitment that you have shown us over the years; thank you.”

All publicly available data centers, including US-West, US-Southwest, US-East 1/2/3/3b, EU-West, and Manta, will be impacted by the EOL. However, the company said there will be no impact to its Node.js Enterprise Support offering; it will invest heavily in the software support business for both Triton and Node.js, and will shortly release a new Node.js support portal for customers.

Some think Joyent's value proposition was hurt by its public interface. A user commented on Hacker News, “Joyent's value proposition was killed (for the most part) by the experience of using their public interface. It would've taken a great deal of bravery to try that and decide a local install would be better. The node thing also did a lot of damage - Joyent wrote a lot of the SmartOS/Triton command line tools in node so they were slow as hell.
Triton itself is a very non-trivial install although quite probably less so than a complete k8s rig.” Others have expressed remorse on Joyent Public Cloud EOL. https://twitter.com/mcavage/status/1136657172836708352 https://twitter.com/jamesaduncan/status/1136656364057612288 https://twitter.com/pborenstein/status/1136661813070827520 To know more about this news, check out EOL of Joyent Public Cloud. Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services Bryan Cantrill on the changing ethical dilemmas in Software Engineering Yuri Shkuro on Observability challenges in microservices and cloud-native applications
Google is looking to acquire Looker, a data analytics startup, for $2.6 billion even as antitrust concerns arise in Washington

Sugandha Lahoti
07 Jun 2019
5 min read
Google has entered into an agreement to acquire data analytics startup Looker and plans to add it to its Google Cloud division. The all-cash transaction will cost Google $2.6 billion. After the acquisition, the Looker organization will report to Looker CEO Frank Bien, who will in turn report to Thomas Kurian, CEO of Google Cloud. Looker is Google's biggest acquisition since it bought smart home company Nest for $3.2 billion in 2014.

Looker's analytics platform combines business intelligence and data visualization tools. Founded in 2011, Looker has grown rapidly and now helps more than 1,700 companies understand and analyze their data. The company had raised more than $280 million in funding, according to Crunchbase.

Looker spans the gap between two areas: data warehousing and business intelligence. Its platform includes a modeling layer where users codify their view of the data in a SQL-like proprietary modeling language (LookML), complemented by an end-user visualization tool that provides the self-service analytics portion.
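For readers unfamiliar with LookML, the sketch below is a minimal illustration of what such a model looks like; the view name, table, and fields are hypothetical examples invented for this article, not anything from the Looker announcement:

view: orders {
  sql_table_name: public.orders ;;

  # A dimension is a field that users can group and filter by.
  dimension: id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  # A measure is an aggregate computed over the selected dimensions.
  measure: total_revenue {
    type: sum
    sql: ${TABLE}.amount ;;
  }
}

Once a view like this is defined, analysts can explore the underlying table through Looker's visualization layer without writing SQL themselves.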
Primarily, Looker will help Google Cloud offer a complete analytics solution, taking customers from ingesting data to visualizing results and integrating data and insights into their daily workflows. Looker + Google Cloud will be used for:

• Connecting, analyzing and visualizing data across Google Cloud, Azure, AWS, on-premise databases or ISV SaaS applications
• Operationalizing BI for everyone with powerful data modeling
• Augmenting business intelligence from Looker with artificial intelligence from Google Cloud
• Creating collaborative, data-driven applications for industries with interactive data visualization and machine learning

Implications of Google + Looker

Google and Looker already have a strong existing partnership and 350 common customers (such as Buzzfeed, Hearst, King, Sunrun, WPP Essence, and Yahoo!), and this acquisition will only strengthen it. "We have many common customers we've worked with. One of the great things about this acquisition is that the two companies have known each other for a long time; we share a very common culture," Kurian said in a blog.

This is also a significant move by Google to gain market share from Amazon Web Services, which reported $7.7 billion in revenue for the last quarter. Google Cloud has been trailing behind Amazon and Microsoft in the cloud-computing market, and Looker's acquisition will hopefully make its service more attractive to corporations.

Looker's CEO Frank Bien described the partnership as a chance to gain the scale of the Google Cloud platform. "What we're really leveraging here, and I think the synergy with Google Cloud, is that this data infrastructure revolution and what really emerged out of the Big Data trend was very fast, scalable — and now in the cloud — easy to deploy data infrastructure," he said.

What is intriguing is the timing and the all-cash nature of this buyout. The FCC, DOJ, and Congress are currently looking at bringing potential antitrust action against Google and other big tech companies. According to widespread media reports, the US Department of Justice is readying an investigation into Google; the probe would reportedly examine whether the tech giant broke antitrust law in the operation of its online and advertising businesses. According to Paul Gallant, a tech analyst with Cowen who focuses on regulatory issues, "A few years ago, this deal would have been waved through without much scrutiny. We're in a different world today, and there might well be some buyer's remorse from regulators on prior tech deals like this."

Public reaction to the acquisition has been mixed. While some are happy:

https://twitter.com/robgo/status/1136628768968192001
https://twitter.com/holgermu/status/1136639110892810241

Others remain dubious. "With Looker out of the way, the question turns to 'What else is on Google's cloud shopping list?'" said Aaron Kessler, a Raymond James analyst, in a report. "While the breadth of public cloud makes it hard to list specific targets, vertical specific solutions appear to be a strategic priority for Mr. Kurian."

There are also questions about whether Google will limit Looker to BigQuery, or at least give it the newest features first.

https://twitter.com/DanVesset/status/1136672725060243457

Then there is the issue of whether Google will limit which clouds Looker can run on, although the company said it will continue to support Looker's multi-cloud strategy and will expand support for multiple analytics tools and data sources to provide customers choice. Google Cloud will also continue to expand Looker's investments in product development, go-to-market, and customer success capabilities.

Google is also known for killing off its own products and undermining some of its acquisitions. With Nest, for example, Google said it would be integrated with Google Assistant, and the decision was reversed only after a massive public backlash. Looker could be one such acquisition, eventually merging with Google Analytics, Google's proprietary web analytics service.

The deal is expected to close later this year, subject to regulatory approval.

Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services
Google and Binomial come together to open-source Basis Universal Texture Format
Ian Lance Taylor, Golang team member, adds another perspective to Go being Google's language
Google Cloud went offline taking with it YouTube, Snapchat, Gmail, and a number of other web services

Sugandha Lahoti
03 Jun 2019
4 min read
Update: This article has been updated to include Google's response to Sunday's service disruption.

Over the weekend, Google Cloud suffered a major outage, taking down a number of Google services including YouTube, G Suite, and Gmail. It also affected services dependent on Google, such as Snapchat, Nest, Discord, and Shopify. The problem was first reported by East Coast users in the U.S. around 3 PM ET / 12 PM PT, and the company resolved it after more than four hours. According to Downdetector, users in the UK, France, Austria, Spain, and Brazil also reported suffering from the outage.

https://twitter.com/DrKMhana/status/1135291239388143617

In a statement posted to its Google Cloud Platform status dashboard, the company said it was experiencing a multi-region issue with Google Compute Engine. "We are experiencing high levels of network congestion in the eastern USA, affecting multiple services in Google Cloud, G Suite, and YouTube. Users may see slow performance or intermittent errors. We believe we have identified the root cause of the congestion and expect to return to normal service shortly," the company said.

The issue was resolved four hours after Google acknowledged the downtime. "The network congestion issue in the eastern USA, affecting Google Cloud, G Suite, and YouTube, has been resolved for all affected users as of 4:00 pm US/Pacific," the company said in a follow-up statement. "We will conduct an internal investigation of this issue and make appropriate improvements to our systems to help prevent or minimize future recurrence. We will provide a detailed report of this incident once we have completed our internal investigation. This detailed report will contain information regarding SLA credits."

The outage caused some major suffering. Not only did it impact some of the internet's most-used apps (YouTube and Snapchat), people also reported being unable to operate their Nest-controlled devices, for example to turn on their AC or open their "smart" locks to let people into the house.

https://twitter.com/davidiach/status/1135302533151436800

Even Shopify experienced problems because of the Google outage, which prevented some stores (both brick-and-mortar and online) from processing credit card payments for hours.

https://twitter.com/LarryWeru/status/1135322080512270337

The dependency of the world's most popular applications on a single backend in the hands of one company is a bit startling, as is how many companies rely on just one hosting service. At the very least, companies should think about setting up a contingency plan in case these services go down again.

https://twitter.com/zeynep/status/1135308911643451392
https://twitter.com/SeverinAlexB/status/1135286351962812416
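A contingency plan does not have to be elaborate; even client-side failover between two providers helps. Below is a minimal, hypothetical Python sketch (the endpoint URLs are invented placeholders, not real services) that probes a primary health-check endpoint and falls back to a mirror hosted elsewhere when the primary stops responding:

# Minimal client-side failover sketch; the endpoints are hypothetical placeholders.
from urllib.request import urlopen
from urllib.error import URLError

ENDPOINTS = [
    "https://primary.example.com/healthz",   # e.g. hosted on one cloud provider
    "https://fallback.example.org/healthz",  # mirror on a second provider
]

def first_healthy(endpoints, timeout=2.0):
    """Return the first endpoint that answers its health check, or None."""
    for url in endpoints:
        try:
            with urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (URLError, OSError):
            continue  # unreachable or erroring, so try the next provider
    return None

active = first_healthy(ENDPOINTS)
print("serving from:", active or "all endpoints down")

Real deployments would typically push this logic into DNS failover or a load balancer rather than the client, but the principle is the same: never assume a single provider is always up.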
Another question that popped up was whether Google Cloud randomly going down is proof that cloud-based gaming isn't ready for mass audiences yet. At this year's Game Developers Conference (GDC), Google marked its entry into the game industry with Stadia, its new cloud-based platform for streaming games, which will launch later this year in select countries including the U.S., Canada, U.K., and Europe.

https://twitter.com/BrokenGamezHDR/status/1135318797068488712
https://twitter.com/soul_societyy/status/1135294007515500549

On Monday, Google released an apologetic update on the outage, outlining the incident, its detection, and the response. In essence, the root cause of Sunday's disruption was a configuration change that was intended for a small number of servers in a single region. The configuration was incorrectly applied to a larger number of servers across several neighboring regions, causing those regions to stop using more than half of their available network capacity. The network traffic to and from those regions then tried to fit into the remaining network capacity, but it did not fit. The network became congested, and Google's networking systems "correctly triaged the traffic overload and dropped larger, less latency-sensitive traffic in order to preserve smaller latency-sensitive traffic flows, much as urgent packages may be couriered by bicycle through even the worst traffic jam."
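To make that triage behaviour concrete, here is a toy Python model of priority-aware load shedding. It is purely illustrative, not Google's actual networking code; the flow names and capacity numbers are invented for the example. Latency-sensitive flows are admitted first, and large bulk transfers are dropped once the reduced capacity is exhausted.

# Toy model of priority-aware traffic shedding; illustrative only, not Google's system.
from dataclasses import dataclass

@dataclass
class Flow:
    name: str
    size_gbps: float          # bandwidth the flow wants
    latency_sensitive: bool   # small interactive traffic vs. bulk transfers

def shed_traffic(flows, capacity_gbps):
    """Admit latency-sensitive flows first (smallest first) until capacity runs out."""
    admitted, dropped = [], []
    for flow in sorted(flows, key=lambda f: (not f.latency_sensitive, f.size_gbps)):
        if flow.size_gbps <= capacity_gbps:
            admitted.append(flow)
            capacity_gbps -= flow.size_gbps
        else:
            dropped.append(flow)
    return admitted, dropped

flows = [
    Flow("video-upload", 40.0, False),
    Flow("batch-replication", 25.0, False),
    Flow("search-query", 0.5, True),
    Flow("api-call", 1.0, True),
]

# Pretend the region lost more than half its capacity and only 30 Gbps remains.
admitted, dropped = shed_traffic(flows, capacity_gbps=30.0)
print("admitted:", [f.name for f in admitted])  # small latency-sensitive flows survive
print("dropped:", [f.name for f in dropped])    # big bulk transfers get shed

With these numbers, the two latency-sensitive flows and the 25 Gbps replication job are admitted, while the 40 Gbps upload is dropped, mirroring the behaviour Google described.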
Next, Google's engineering teams are conducting a thorough post-mortem to understand all the contributing factors behind both the network capacity loss and the slow restoration.

Facebook family of apps hits 14 hours outage, longest in its history
Worldwide Outage: YouTube, Facebook, and Google Cloud goes down affecting thousands of users
YouTube went down, Twitter flooded with deep questions, YouTube back and everyone is back to watching cat videos.