Use the open-source MLJAR AutoML library to build accurate models faster

The rise of automated machine learning tools has enabled developers to build accurate machine learning models faster. These tools reduce an engineer's workload by performing feature engineering, algorithm selection, and hyperparameter tuning, as well as documenting the model. One such library is the open-source MLJAR package. In this article, let’s take a look at how you can use the package for binary classification.

Getting started

The MLJAR package can be used to build a complete machine learning pipeline with feature engineering and hyperparameter tuning. The package also supports popular machine learning algorithms, including Xgboost, LightGBM, CatBoost, Random Forest, Extra Trees, and neural networks.

Generate TF Lite models from custom data using Model Maker

Photo by Samir Bouaked on Unsplash

In this article, let’s look at how you can use TensorFlow Lite Model Maker to create a custom text classification model. Currently, TF Lite Model Maker supports image classification, question answering, and text classification models. It uses transfer learning to shorten the time required to build TF Lite models.

Getting started

The first step is to install the TensorFlow Lite model maker.

pip install -q tflite-model-maker

Let’s use the IMDB movie reviews dataset, which has 50K reviews. Download and read it in.

Create TF Lite image classification models with TensorFlow Lite Model Maker

Photo by Paweł Czerwiński on Unsplash

The Model Maker library makes the process of developing TF Lite models quick and easy. It also allows you to adopt different architectures for your models. The process of exporting the TF Lite models is also straightforward. Let’s take a look at how you can do that for image classification models.

Create a model with default options

The first step is to install TensorFlow Lite Model Maker.

pip install -q tflite-model-maker

Obtaining the dataset

Let’s use the popular cats and dogs dataset to create a TF Lite model that classifies them. The first step is to download the dataset and then create the test and validation set paths.

Write your own JavaScript code in Lens Studio to enhance Lens interactivity

Photo by James Lee on Unsplash

Lens Studio’s functionalities can be extended by writing your own custom code in JavaScript. For instance, you can write your own code to target various events in your Lens and trigger specific actions.

Lens Studio is already powerful without custom code, but the ability to bring your own code to the platform is just amazing. Without further ado, let’s jump straight in and see how this can be done.

Mouth open and mouth closed events

You can write JavaScript code to trigger the appearance of an emoji, as shown below. The emoji appears when the user opens their mouth and disappears when they close it. …
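A minimal Lens Studio script for this behavior might look like the following. It runs inside Lens Studio rather than a standalone JavaScript engine, and `emoji` is a hypothetical `SceneObject` input that you would bind in the Inspector:

```javascript
// @input SceneObject emoji

// Hide the emoji until the mouth opens.
script.emoji.enabled = false;

// MouthOpenedEvent and MouthClosedEvent are built-in Lens Studio events.
script.createEvent("MouthOpenedEvent").bind(function (eventData) {
    script.emoji.enabled = true;
});

script.createEvent("MouthClosedEvent").bind(function (eventData) {
    script.emoji.enabled = false;
});
```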

Script machine learning models with SnapML to add interactivity to your ML-powered Lenses

Photo by Sai Kiran Anagani on Unsplash

Lens Studio allows you to configure your own custom machine learning models via SnapML with the ML Component.

However, you might want to write your own code to configure the model. This will give you the flexibility to tune your ML-powered Lenses as you like. In this article, you will learn how to write custom JavaScript code to configure your ML Components.

Getting Started

For this illustration, you will need to start a new blank Lens Studio project. No template is used in this case. The goal is to recreate a minimal version of this classification lens.
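As a heavily hedged sketch of what scripting the ML Component can look like: the snippet below assumes an `MLComponent` input bound in the Inspector, and the output name `"probs"` is a placeholder for whatever output your own model defines.

```javascript
// @input Component.MLComponent mlComponent

// Wait until the model configured on the ML Component finishes loading.
script.mlComponent.onLoadingFinished = function () {
    // Run inference once, synchronously, then read the output tensor.
    script.mlComponent.runImmediate(true);
    var output = script.mlComponent.getOutput("probs"); // placeholder name
    print("First score: " + output.data[0]);
};
```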

Use ML Kit to extract address, email, places and so much more

Photo by Rene Böhmer on Unsplash

Entity extraction can be useful when you want to add interactivity to your application based on the context of the text. For example, if the text is a phone number, you can prompt the user to make a call; if it’s an email address, you can prompt the user to open their email app. This is achieved by first extracting the various entities in the text. In this piece, let’s look at how that can be achieved using Google’s ML Kit.

Getting Started

Let’s start by adding the internet permission to the Android Manifest because it’s needed in order to download the model.
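That permission is a single line in `AndroidManifest.xml`:

```xml
<!-- Required so ML Kit can download the entity-extraction model. -->
<uses-permission android:name="android.permission.INTERNET" />
```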

Use ML Kit’s On-device API to translate text

Photo by Kelly Sikkema on Unsplash

Using Google’s ML Kit, you can build an application whose content can be consumed in more than 50 languages. For example, you can have users select the language they prefer and translate the app’s content into that language. The translation is fast because it happens on the user’s device. The ML Kit translation models are built to translate to and from English, so when translating between two other languages, English is used as an intermediate language. This can affect the quality of the translation. …

Use Firebase ML to deploy custom TF Lite model on Android

Photo by Max Kukurudziak on Unsplash

Once your machine learning model is ready, you have to deploy it to a device. One of the ways that can be done is by shipping the model with the application. A challenge with this method is that whenever your model changes, you will need to ship a new APK to the app stores. Obviously, this takes a long time because every app update needs to be verified by the app store. Now, imagine if it was possible to update the model over the air without the need to ship a new application. …

Identify the language of user-provided text using ML Kit

Photo by Dmitry Ratushny on Unsplash

Using ML Kit, the language of a string of text can be determined. The ML Kit API supports over 100 languages. It can also identify native and romanized text from languages such as Russian and Arabic. Given a string of text, the API provides the most likely languages as well as the confidence level. Let’s look at how that can be done.

Getting Started

Start by adding the dependencies for the ML Kit Android libraries. Add the following in your app/build.gradle file.
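For the language-identification API, that dependency block might look like the following (the version number is illustrative; check the ML Kit release notes for the current one):

```groovy
dependencies {
    // ML Kit on-device language identification.
    implementation 'com.google.mlkit:language-id:17.0.0'
}
```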

The App Elements

The application is made of a text input, a text view, and a button. The text input will be…

Generate smart replies with ML Kit on Android

Photo by Hannah Busing on Unsplash

Using Google’s ML Kit, relevant replies to messages can be generated. The Smart Reply model generates replies based on the context of a conversation. The model generates useful reply suggestions because it uses the entire conversation. Hosting the model on the device ensures that users get reply suggestions quickly since no remote server is involved. The model currently supports only English. Therefore, if the conversation is in another language, no reply suggestions are generated. In this article, let’s look at how we can use this model.

Getting Started

Since the model will download on the device, internet permission is a requirement. …

Derrick Mwiti

Google Developer Expert — Machine Learning
