
Google decouples ML Kit’s on-device APIs from Firebase and introduces Early Access APIs

Google uses artificial intelligence extensively to serve highly contextual and accurate web and image search results. Beyond Search, Google's machine learning models also power a variety of AI features on Android phones, ranging from visual search in Google Lens to the computational photography that Pixel devices are famous for. Apart from its own applications, Google also lets third-party developers integrate machine learning features into their apps seamlessly with ML Kit, an SDK (Software Development Kit) that is part of Firebase, Google's app development and analytics platform. As of today, Google is announcing a major change to ML Kit: its on-device APIs are becoming independent of Firebase.

ML Kit was announced at Google I/O 2018 to simplify the addition of machine learning features to apps. At launch, ML Kit consisted of text recognition, face detection, barcode scanning, image labeling, and landmark recognition APIs. In April 2019, Google added Natural Language Processing (NLP) support to the SDK, letting developers include APIs such as Smart Reply and Language Identification in their apps. A month later, at Google I/O 2019, Google introduced three new ML APIs: on-device translation, object detection and tracking, and the AutoML Vision Edge API for identifying specific objects, like types of flowers or food, using visual search.

ML Kit comprises both on-device and cloud-based APIs. As you would expect, the on-device APIs process data using machine learning models stored on the device itself, while the cloud-based APIs send data to machine learning models hosted on Google's Cloud Platform and receive the results over an internet connection. Since on-device APIs run without an internet connection, they process data faster and, because the data never leaves the device, more privately than their cloud-based counterparts. On-device machine learning APIs are also hardware accelerated on devices running Android 8.1 Oreo and above, where they run on top of Google's Neural Networks API (NNAPI), which can dispatch work to the dedicated compute blocks, or NPUs, found on the latest chipsets from Qualcomm, MediaTek, HiSilicon, and others.
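ML Kit opts into this acceleration internally, but the same path is visible when running a custom TensorFlow Lite model directly: TFLite's NNAPI delegate hands graph execution to whatever accelerator the device exposes. A minimal sketch under that assumption (the helper name and model buffer are placeholders for illustration):

```kotlin
import java.nio.ByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate

// Hypothetical helper: wrap a TensorFlow Lite model so that supported ops
// run through the Android Neural Networks API (Android 8.1 and above).
fun buildAcceleratedInterpreter(modelBytes: ByteBuffer): Interpreter {
    val options = Interpreter.Options().apply {
        // NNAPI routes supported operations to the device's NPU/DSP/GPU
        // when one is available, falling back to the CPU otherwise.
        addDelegate(NnApiDelegate())
    }
    return Interpreter(modelBytes, options)
}
```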

Google has now published a blog post announcing that the on-device APIs from ML Kit will be available as part of an independent service. This means the on-device APIs in ML Kit, including text recognition, barcode scanning, face detection, image labeling, object detection and tracking, language identification, smart reply, and on-device translation, will be available under a separate SDK that can be accessed without Firebase. Google does, however, recommend that developers with existing Firebase ML Kit projects migrate to the new standalone SDK. A new microsite has been launched with all the resources related to ML Kit.
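To illustrate the shape of the standalone SDK: a detection call no longer goes through any Firebase entry point, and the client comes straight from the com.google.mlkit packages. A minimal face detection sketch, assuming the com.google.mlkit:face-detection artifact from the new microsite is on the classpath:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection

// Sketch: run on-device face detection without initializing Firebase.
fun detectFaces(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val detector = FaceDetection.getClient()

    detector.process(image)
        .addOnSuccessListener { faces ->
            // Each Face carries a bounding box plus optional landmarks/contours.
            println("Detected ${faces.size} face(s)")
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```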

Other than the new SDK, Google has announced some changes that make it easier for developers to integrate machine learning models into their apps. Firstly, the face detection/contour model is now delivered via Google Play services, so developers no longer have to bundle the model alongside the API in their apps. This allows for a smaller app package and lets the same model be reused by other apps more seamlessly.
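In practice, the choice shows up in the app's Gradle file: one artifact bundles the model into the APK, while the Play services-backed artifact downloads and shares the model on-device. A sketch in Gradle's Kotlin DSL (version numbers are illustrative, not taken from the announcement):

```kotlin
dependencies {
    // Bundled: the face model ships inside your APK
    // (bigger download, but available immediately on first run).
    implementation("com.google.mlkit:face-detection:16.0.0")

    // Unbundled: the model is delivered and shared via Google Play services
    // (smaller APK; other apps on the device can reuse the same model).
    // implementation("com.google.android.gms:play-services-mlkit-face-detection:16.0.0")
}
```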

Secondly, Google has added Android Jetpack Lifecycle support to all APIs. This helps manage the teardown of the APIs when an app undergoes screen rotation or is closed by the user. It also makes it easier to integrate the CameraX Jetpack library in apps that use ML Kit.
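Concretely, this means an ML Kit client can be registered as an observer on the host Activity's or Fragment's lifecycle so its resources are released automatically. A sketch, assuming the detectors implement LifecycleObserver as the announcement describes:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.barcode.BarcodeScanning

class ScannerActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        val scanner = BarcodeScanning.getClient()
        // With Jetpack Lifecycle support, the scanner observes this
        // Activity's lifecycle and tears itself down when the Activity is
        // destroyed, e.g. after rotation or when the user closes the screen.
        lifecycle.addObserver(scanner)
    }
}
```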

Thirdly, Google has announced an early access program that gives developers access to upcoming APIs and features ahead of everyone else. The company is starting with two new ML Kit APIs that select developers can preview and give feedback on (a hypothetical usage sketch follows the list below). These APIs include:

  • Entity Extraction to detect things like phone numbers, addresses, payment numbers, tracking numbers, and date & time in text, and
  • Pose Detection for low-latency detection of 33 skeletal points, including hands and feet
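Since both APIs are in early access, their final surface may change. Purely as a hypothetical sketch of what a Pose Detection call could look like, with package, class, and option names assumed rather than confirmed by the announcement:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
// The pose packages below are assumed placeholders: the API was still in
// early access at the time of writing.
import com.google.mlkit.vision.pose.PoseDetection
import com.google.mlkit.vision.pose.defaults.PoseDetectorOptions

fun detectPose(bitmap: Bitmap) {
    val options = PoseDetectorOptions.Builder()
        .setDetectorMode(PoseDetectorOptions.SINGLE_IMAGE_MODE)
        .build()
    val detector = PoseDetection.getClient(options)

    detector.process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { pose ->
            // The API is described as returning 33 skeletal landmarks,
            // including hands and feet.
            println("Landmarks: ${pose.allPoseLandmarks.size}")
        }
}
```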

[Image: ML Kit SDK Pose Detection API]

Lastly, Google is now allowing developers to replace the models behind the existing Image Labeling as well as Object Detection and Tracking APIs in ML Kit with custom machine learning models built with TensorFlow Lite. The company will soon share more details on how to find or train TensorFlow Lite models using ML Kit or Android Studio's new ML integration features.
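As a sketch of what swapping in a custom model looks like with the standalone SDK (the com.google.mlkit:image-labeling-custom artifact, the asset file name, and the confidence threshold are assumptions for illustration):

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.custom.CustomImageLabelerOptions

// Sketch: replace the default Image Labeling model with a custom
// TensorFlow Lite classifier bundled in the app's assets.
fun labelWithCustomModel(bitmap: Bitmap) {
    val model = LocalModel.Builder()
        .setAssetFilePath("custom_labeler.tflite") // hypothetical asset name
        .build()
    val options = CustomImageLabelerOptions.Builder(model)
        .setConfidenceThreshold(0.7f) // drop low-confidence labels
        .build()

    ImageLabeling.getClient(options)
        .process(InputImage.fromBitmap(bitmap, 0))
        .addOnSuccessListener { labels ->
            labels.forEach { println("${it.text}: ${it.confidence}") }
        }
}
```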

Source: xda-developers (https://ift.tt/2CvYPFn)
