The rise of automated machine learning tools has enabled developers to build accurate machine learning models faster. These tools reduce an engineer's workload by performing feature engineering, algorithm selection, and hyperparameter tuning, as well as documenting the model. One such library is the open-source MLJAR package. In this article, let's take a look at how you can use the package for binary classification.
The MLJAR package can be used to build a complete machine learning pipeline with feature engineering and hyperparameter tuning. The package also supports popular machine learning algorithms including:
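Setting MLJAR's internals aside, the core AutoML loop it automates — try several candidate algorithms, score each on the data, keep the best — can be illustrated with a toy, pure-Python sketch. The candidate "models" and data below are invented for illustration; this is not MLJAR code:

```python
# Toy illustration of algorithm selection: score several candidate
# "models" on labeled data and keep the best one. Real AutoML tools
# such as MLJAR do this with actual learners and hyperparameter search.

def always_positive(x):
    return 1

def threshold_rule(x):
    return 1 if x >= 5 else 0

# Hypothetical labeled data: (feature, label) pairs.
data = [(1, 0), (2, 0), (6, 1), (7, 1), (3, 0), (8, 1)]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

candidates = {"always_positive": always_positive,
              "threshold_rule": threshold_rule}
scores = {name: accuracy(m, data) for name, m in candidates.items()}
best = max(scores, key=scores.get)  # the "selected" algorithm
```

In practice MLJAR evaluates real learners with cross-validation and records the whole search in its generated reports, but the selection principle is the same.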
In this article, let’s look at how you can use TensorFlow Lite Model Maker to create a custom text classification model. Currently, the Model Maker library supports image classification, question answering, and text classification models. It uses transfer learning to shorten the time required to build TF Lite models.
The first step is to install the TensorFlow Lite model maker.
pip install -q tflite-model-maker
Let’s use the IMDB movie reviews dataset, which contains 50K reviews. Download and read it in.
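The exact loading code depends on the format you download. Assuming the reviews arrive as a two-column CSV (the `review` and `sentiment` column names here are an assumption for illustration), reading it with the standard library might look like this:

```python
import csv
import io

# Stand-in for the downloaded file; an assumed layout of one review
# per row with a sentiment label.
sample = io.StringIO(
    "review,sentiment\n"
    "A wonderful film,positive\n"
    "Terrible pacing,negative\n"
)

rows = list(csv.DictReader(sample))
reviews = [r["review"] for r in rows]
labels = [r["sentiment"] for r in rows]
```

With a real download you would pass the file path to `open()` instead of the in-memory `StringIO` stand-in.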
The Model Maker library makes the process of developing TF Lite models quick and easy. It also allows you to adopt different architectures for your models. The process of exporting the TF Lite models is also straightforward. Let’s take a look at how you can do that for image classification models.
The first step is to install TensorFlow Lite Model Maker.
pip install -q tflite-model-maker
Let’s use the popular cats and dogs dataset to create a TF Lite model that classifies the two. The first step is to download the dataset and then create the test and validation set paths.
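Before handing images to Model Maker, you typically end up with lists of file paths split into training and validation sets. A minimal stdlib sketch of such a split — the directory layout, file names, and 80/20 ratio below are assumptions for illustration:

```python
import random

# Hypothetical image paths; in practice you would collect these from
# the downloaded dataset directory, e.g. with pathlib glob patterns.
paths = [f"cats_and_dogs/cat.{i}.jpg" for i in range(8)] + \
        [f"cats_and_dogs/dog.{i}.jpg" for i in range(8)]

random.seed(0)            # reproducible shuffle for the example
random.shuffle(paths)

split = int(0.8 * len(paths))          # 80% train, 20% validation
train_paths, val_paths = paths[:split], paths[split:]
```

Shuffling before splitting matters: without it, one class could end up concentrated in the validation set.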
Lens Studio is already powerful without custom code, but the ability to bring your own code to the platform is just amazing. Without further ado, let’s jump straight in and see how this can be done.
Lens Studio allows you to configure your own custom machine learning models via SnapML with the ML Component.
For this illustration, you will need to start a new blank Lens Studio project. No template is used in this case. The goal is to recreate a minimal version of this classification lens. …
Entity extraction can be useful when you want to add interactivity to your application based on the context of the text. For example, if it’s a phone number, you can prompt the user to make a call, and if it’s an email address, you can prompt the user to open the email app. This is achieved by first extracting the various entities in the text. In this piece, let’s look at how that can be achieved using Google’s ML Kit.
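ML Kit's entity extractor does this with an on-device model; as a rough, library-free illustration of the idea, here is a toy regex-based sketch. The patterns and action names are simplified assumptions, not ML Kit's API:

```python
import re

# Deliberately simplified patterns; real entity extraction handles far
# more formats and entity types (addresses, dates, flight numbers, ...).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def suggest_action(text):
    """Map a detected entity in the text to a hypothetical app action."""
    if EMAIL.search(text):
        return "open_email_app"
    if PHONE.search(text):
        return "make_call"
    return None

action = suggest_action("Reach me at jane@example.com")
```

The point of the sketch is the mapping from detected entity type to an app action; the detection itself is what ML Kit's model does far more robustly.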
Let’s start by adding the internet permission to the Android Manifest because it’s needed in order to download the model.
Using Google’s ML Kit, you can build an application whose content can be consumed in more than 50 languages. For example, you can have users select the language they prefer and translate the app’s content into that language. The translation is quite fast because it happens on the user’s device. The ML Kit translation models are built to translate to and from English. When translating between two other languages, English is used as the intermediate language during the translation process. This can definitely affect the quality of the translation. …
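The pivot-through-English behavior can be sketched with a toy lookup-table "translator". The tables and helper below are invented for illustration and have nothing to do with ML Kit's actual API:

```python
# Toy pivot translation: source -> English -> target, mirroring how
# ML Kit chains two models when neither language is English.
FR_TO_EN = {"bonjour": "hello", "monde": "world"}
EN_TO_DE = {"hello": "hallo", "world": "welt"}

def pivot_translate(words, src_to_en, en_to_dst):
    english = [src_to_en.get(w, w) for w in words]   # first hop
    return [en_to_dst.get(w, w) for w in english]    # second hop

result = pivot_translate(["bonjour", "monde"], FR_TO_EN, EN_TO_DE)
```

Each hop can lose nuance, which is why translating between two non-English languages may be lower quality than translating to or from English directly.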
Once your machine learning model is ready, you have to deploy it to a device. One of the ways that can be done is by shipping the model with the application. A challenge with this method is that whenever your model changes, you will need to ship a new APK to the app stores. Obviously, this takes a long time because every app update needs to be verified by the app store. Now, imagine if it was possible to update the model over the air without the need to ship a new application. …
Using ML Kit, the language of a string of text can be determined. The ML Kit API supports over 100 languages. It can also identify text written in both native and romanized script for languages such as Russian and Arabic. Given a string of text, the API provides the most likely languages along with a confidence level for each. Let’s look at how that can be done.
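Whatever the client API looks like, the output is essentially a ranked list of (language tag, confidence) pairs. Here is a sketch of consuming such a list; the candidate data and 0.5 threshold are assumptions, while `und` mirrors ML Kit's convention of reporting "undetermined" when no language clears its confidence threshold:

```python
# Hypothetical identification result: BCP-47 tags with confidences.
candidates = [("fr", 0.82), ("en", 0.12), ("pt", 0.06)]

def most_likely(candidates, threshold=0.5):
    """Return the highest-confidence language tag, or "und" if none
    clears the threshold (ML Kit's "undetermined" convention)."""
    tag, confidence = max(candidates, key=lambda c: c[1])
    return tag if confidence >= threshold else "und"

language = most_likely(candidates)
```

Keeping the full candidate list around (rather than just the top tag) lets you fall back to a secondary language when the top guess is only marginally more confident.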
Start by adding the dependencies for the ML Kit Android libraries. Add the following in your
The application is made of a text input, a text view, and a button. The text input will be…
Using Google’s ML Kit, relevant replies to messages can be generated. The Smart Reply model generates replies based on the context of a conversation. The model generates useful reply suggestions because it uses the entire conversation. Hosting the model on the device ensures that users get reply suggestions quickly since no remote server is involved. The model currently supports only English. Therefore, if the conversation is in another language, no reply suggestions are generated. In this article, let’s look at how we can use this model.
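Conceptually, the model consumes an ordered conversation history, not just the latest message. A toy, library-free sketch of that input shape — the message fields and the canned replies are invented, and this is not the Smart Reply API:

```python
# A conversation is an ordered list of messages; each records who sent
# it and when, so a model can use the full context, not just the tail.
conversation = [
    {"user": "remote", "timestamp": 1, "text": "Are we still on for lunch?"},
    {"user": "local",  "timestamp": 2, "text": "Yes!"},
    {"user": "remote", "timestamp": 3, "text": "Great, see you at noon?"},
]

def toy_suggestions(conversation):
    """Crude stand-in: suggest canned replies only when the most recent
    message is a question. The real model scores learned responses."""
    last = conversation[-1]["text"]
    if last.rstrip().endswith("?"):
        return ["Yes", "No", "Sounds good"]
    return []

replies = toy_suggestions(conversation)
```

The structure is the interesting part: because the whole history is passed in, a real model can rank replies that fit the conversation, not just the last sentence.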
Since the model will download on the device, internet permission is a requirement. …