Google is taking yet another step to make its artificial intelligence technology more accessible across a range of industries. Yesterday, in a blog post, Google's Director of Product Management for Cloud AI, Rajen Sheth, introduced a host of tools to "put AI in reach of all businesses". He noted that even though the company has more than 15,000 paying customers using its AI services, that is not enough. The upgrades are meant to make AI simpler, more useful, and faster to work with, driving broader adoption among businesses.
Released in alpha, the AI Hub is a "one-stop destination for plug-and-play ML content", including pipelines, Jupyter notebooks, TensorFlow modules, and more. The AI Hub is intended to address the scarcity of ML expertise in the workforce, which makes it difficult for organizations to build comprehensive ML resources on their own.
It aims to make high-quality ML resources developed by Google Cloud AI, Google Research, and other teams across Google publicly available to all businesses. The Hub will also provide a private, secure space where enterprises can upload and share ML resources within their own organizations. This will help businesses reuse pipelines and deploy them to production on GCP, or on hybrid infrastructures, using the Kubeflow Pipelines system in just a few steps.
In the beta release, Google plans to expand the types of assets made available through the AI Hub, including public contributions from third-party organizations and partners.
Kubeflow Pipelines will enable organizations to build and package ML resources so that they’re as useful as possible to the broadest range of internal users.
This new component of Kubeflow packages ML code much like building an app, so that it is reusable by other users across an organization. It enables organizations to compose, deploy, and manage end-to-end ML workflows.
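As a rough illustration of that packaging idea, a pipeline can be defined with the open-source kfp Python SDK and compiled into an archive that colleagues can re-run. This is a minimal sketch, not code from Google's announcement; the container images, arguments, and Cloud Storage path are hypothetical placeholders, and it assumes the v1 kfp SDK.

```python
# Minimal sketch of a reusable Kubeflow pipeline using the kfp v1 SDK.
# Container images, arguments, and the GCS path are hypothetical examples.
import kfp
from kfp import dsl


@dsl.pipeline(
    name='train-pipeline',
    description='Preprocess data, then train a model on the output.'
)
def train_pipeline(data_path: str = 'gs://my-bucket/raw-data'):
    # Each step runs as a container; steps can be chained and reused.
    preprocess = dsl.ContainerOp(
        name='preprocess',
        image='gcr.io/my-project/preprocess:latest',   # hypothetical image
        arguments=['--input', data_path],
    )
    train = dsl.ContainerOp(
        name='train',
        image='gcr.io/my-project/train:latest',        # hypothetical image
        arguments=['--data', data_path],
    )
    train.after(preprocess)


if __name__ == '__main__':
    # Compile the pipeline into a package that can be uploaded and shared.
    kfp.compiler.Compiler().compile(train_pipeline, 'train_pipeline.tar.gz')
```

The compiled package is what gets shared and deployed, which is how the same workflow can be reused across teams or run on GCP and hybrid infrastructures.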
Google has also released three features in the Cloud Video Intelligence API (in beta) that address common challenges for businesses that work extensively with video.
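The post does not list the three features here, but as a hedged illustration of how this kind of video analysis is typically invoked, the google-cloud-videointelligence Python client (v1) can annotate a video stored in Cloud Storage. The bucket path is a hypothetical placeholder, and text detection is used only as an example feature, not necessarily one of the three announced in beta.

```python
# Illustrative call to the Video Intelligence API with the v1 Python client.
# The GCS path is a hypothetical placeholder; TEXT_DETECTION is only an
# example feature, not necessarily one of the newly announced ones.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    input_uri='gs://my-bucket/launch-event.mp4',            # hypothetical video
    features=[videointelligence.enums.Feature.TEXT_DETECTION],
)

# Video annotation runs as a long-running operation; block until it finishes.
result = operation.result(timeout=600)
for annotation in result.annotation_results[0].text_annotations:
    print(annotation.text)
```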
Google’s Tensor Processing Units (TPUs) are custom ASICs designed to dramatically accelerate machine learning workloads, and they are easily accessed through the cloud.
Since July, Google has been adding features to its Cloud TPUs to make compute-intensive machine learning faster and more accessible to businesses worldwide.
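As a rough sketch of what "accessed through the cloud" looks like in practice, TensorFlow can connect to a Cloud TPU by name and run training under a TPU distribution strategy. This assumes a recent TensorFlow 2.x release; the TPU node name and the toy model below are hypothetical placeholders.

```python
# Sketch of connecting TensorFlow 2.x training to a Cloud TPU.
# 'my-tpu' is a hypothetical TPU node name created beforehand in GCP.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu')
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Any model built under this strategy is replicated across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu', input_shape=(64,)),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```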
In response to these upgrades, Kaustubh Das, vice president of data center product management at Cisco, stated, “Cisco is also delighted to see the emergence of Kubeflow Pipeline that promises a radical simplification of ML workflows which are critical for mainstream adoption. We look forward to bringing the benefits of this technology alongside our world class AI/ML product portfolio to our customers.” NVIDIA and Intel echoed similar sentiments.
Head over to Google’s official blog for full coverage of this announcement.