Google’s newly launched ML Kit SDK allows mobile developers to tap Google’s machine learning expertise when building Android and iOS apps. The kit lets mobile apps integrate a number of pre-built, Google-provided machine learning models supporting text recognition, face detection, barcode scanning, image labeling, and landmark recognition, among other tasks. What stands out here is that ML Kit works both on-device (offline) and in the cloud, depending on network availability and the developer’s preference.
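As a sketch of how this looks in practice, the on-device text recognition API can be called in a few lines. This assumes the ML Kit for Firebase dependency (`firebase-ml-vision`) is set up in the app; `bitmap` here stands in for a camera frame or loaded image:

```java
// Hedged sketch: requires the firebase-ml-vision dependency and an
// initialized Firebase app; runs only inside an Android project.
import android.graphics.Bitmap;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.text.FirebaseVisionText;
import com.google.firebase.ml.vision.text.FirebaseVisionTextRecognizer;

public class TextRecognitionSketch {
    void recognizeText(Bitmap bitmap) {
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
        // On-device recognizer works offline; a cloud-backed variant is
        // also available when network accuracy is preferred.
        FirebaseVisionTextRecognizer recognizer =
                FirebaseVision.getInstance().getOnDeviceTextRecognizer();
        recognizer.processImage(image)
                .addOnSuccessListener(result -> {
                    // Each TextBlock is a paragraph-level chunk of text.
                    for (FirebaseVisionText.TextBlock block : result.getTextBlocks()) {
                        System.out.println(block.getText());
                    }
                })
                .addOnFailureListener(Throwable::printStackTrace);
    }
}
```

The same pattern (build an image, pick a detector, process asynchronously) applies to the face detection, barcode scanning, and image labeling APIs.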
In the coming months, Google plans to add a smart reply API and a high-density face contour capability for the face detection API to the list of currently available APIs.
At the Google I/O conference, Google also announced several updates to its ARCore platform focused on overcoming the limitations of existing AR-enabled smartphones.
Multi-user and shared AR
New cloud anchor tools will enable developers to create new types of collaborative experiences, which can be shared with multiple users across both Android and iOS devices.
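In code, the shared-experience flow boils down to hosting an anchor on one device and resolving it on another. The sketch below is based on the ARCore Cloud Anchors Java API and assumes an active `Session` and a local `Anchor` obtained from a hit test; the id-sharing backend is left out:

```java
// Hedged sketch of the ARCore Cloud Anchors API (introduced in ARCore 1.2);
// assumes an active com.google.ar.core.Session inside an Android AR app.
import com.google.ar.core.Anchor;
import com.google.ar.core.Session;

public class CloudAnchorSketch {

    // Device A hosts a local anchor; once hosting succeeds, the returned
    // cloud anchor id can be shared with other users (e.g. via a backend).
    String hostAnchor(Session session, Anchor localAnchor) {
        Anchor hosted = session.hostCloudAnchor(localAnchor);
        // Hosting is asynchronous: check hosted.getCloudAnchorState() each
        // frame until it reports SUCCESS before reading the id.
        return hosted.getCloudAnchorId();
    }

    // Device B (Android or iOS) resolves the same anchor from the shared id,
    // placing virtual content at the same real-world position.
    Anchor resolveAnchor(Session session, String cloudAnchorId) {
        return session.resolveCloudAnchor(cloudAnchorId);
    }
}
```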
More surfaces to play around with
Vertical Plane Detection, a new feature of ARCore, allows users to place AR objects on more surfaces, like textured walls. Another capability, Augmented Images, brings images to life just by pointing a phone at them.
https://www.youtube.com/watch?v=uDs9rd7yD0I
Simple AR development
The new ARCore updates also simplify AR development for Java developers with the introduction of Sceneform. Developers can now build immersive 3D apps optimized for mobile without having to learn complicated APIs like OpenGL. They can use Sceneform to build AR apps from scratch as well as to add AR features to existing ones.
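To give a feel for this, here is a sketch of Sceneform's renderable API. It assumes an `ArFragment` in the activity layout and a hypothetical `model.sfb` asset produced by the Sceneform import tooling:

```java
// Hedged sketch of Sceneform usage; assumes an ArFragment in the layout
// and a "model.sfb" asset, and runs only inside an Android AR project.
import android.content.Context;
import android.net.Uri;
import com.google.ar.core.Anchor;
import com.google.ar.sceneform.AnchorNode;
import com.google.ar.sceneform.rendering.ModelRenderable;
import com.google.ar.sceneform.ux.ArFragment;

public class SceneformSketch {
    void placeModel(Context context, ArFragment arFragment, Anchor anchor) {
        ModelRenderable.builder()
                .setSource(context, Uri.parse("model.sfb"))
                .build()
                .thenAccept(renderable -> {
                    // Attach the loaded 3D model to the tapped anchor --
                    // no OpenGL calls required.
                    AnchorNode node = new AnchorNode(anchor);
                    node.setRenderable(renderable);
                    arFragment.getArSceneView().getScene().addChild(node);
                });
    }
}
```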
The name for the new version is yet to be decided, but judging by Google’s trend of naming the OS after desserts, it may be Pumpkin Pie, Peppermint Patty, or Popsicle. I’m voting for Popsicle! Apart from the name, here are the other major features of the new OS:
After over 100,000 SDK downloads during the Developer Preview, Google announced the release of Android Things 1.0 to developers, with long-term support for production devices.
App Library allows developers to manage APKs more easily, without having to package them together in a separate zipped bundle.
Visual storage layout helps in configuring the device storage allocated to apps and data for each build, and gives an overview of how much storage your apps require.
Group sharing extends product sharing to include support for Google Groups.
Updated permissions give developers more control over the permissions used by apps on their devices.
Developers can manage their Android Things devices via a cloud-based Android Things Console. Through the console, they can push OS and app updates, view analytics for device health and performance, and issue test builds of their software package.
A new update to Lighthouse, Google’s web optimization tool, was also announced at Google I/O. Lighthouse 3.0 offers shorter wait times and faster feedback, helping developers optimize their websites and audit their performance more efficiently. It uses simulated throttling, driven by a new Lighthouse internal auditing engine, which runs audits under normal network and CPU settings and then estimates how long the page would take to load under mobile conditions.
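For example, simulated throttling can be selected explicitly when running Lighthouse from the command line. This is a sketch of a typical invocation (flag names per the Lighthouse CLI; the audited URL is a placeholder):

```shell
# Install the Lighthouse CLI (requires Node.js).
npm install -g lighthouse

# Audit a page with simulated throttling (the Lighthouse 3.0 default):
# the page loads under normal conditions and mobile load times are estimated.
lighthouse https://example.com --throttling-method=simulate \
    --output=html --output-path=./report.html
```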
Lighthouse 3.0 also features a new report UI along with invocation, scoring, audit, and output changes.
There are still two more days of Google I/O left, and going by the day 1 announcements, I can’t wait to see what’s next. I am especially looking forward to learning more about Android Auto, Google’s Tour Creator, and Google Lens. You can view the livestream and other sessions on the Google I/O conference page.
Keep visiting Packt Hub for more updates on Google I/O, Microsoft Build and other key tech conferences happening this month.