Friday, June 7, 2024

Using ML Kit in Flutter

Implementing Machine Learning into our Flutter application

Introduction

Machine learning, a subset of artificial intelligence, is becoming an essential technology because it can predict likely outcomes. By training a model on data, we can make an application intelligent enough to make decisions on its own.

With ML Kit we can integrate our app with various smart features such as:

  • Text Recognition
  • Face Detection
  • Image Labeling
  • Landmark Recognition
  • Barcode Scanning

ML Kit provides both on-device and cloud APIs. The on-device APIs process data without an internet connection, while the cloud APIs offer higher accuracy by using Google Cloud Platform’s machine learning technology.
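To make the on-device vs. cloud choice concrete, here is a minimal sketch of an image-labeling helper. It assumes the firebase_ml_vision plugin used in this blog; the cloudImageLabeler() factory is part of that plugin's API, but note that cloud detectors require a Firebase project with billing enabled.

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Label an image either on-device (works offline) or via the cloud API
// (needs a network connection, but is generally more accurate).
Future<List<ImageLabel>> labelImage(File imageFile, {bool useCloud = false}) async {
  final visionImage = FirebaseVisionImage.fromFile(imageFile);
  final ImageLabeler labeler = useCloud
      ? FirebaseVision.instance.cloudImageLabeler() // cloud API
      : FirebaseVision.instance.imageLabeler();     // on-device API
  final labels = await labeler.processImage(visionImage);
  labeler.close(); // release native resources when done
  return labels;
}
```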

In this blog, we shall discuss how to implement Image Labeling, Text Recognition, and Barcode Scanning using ML Kit. Image Labeling is a machine learning feature that identifies the content of an image.

So let’s start:

To use ML Kit, we first need to connect the app with Firebase. We will use two dependencies: firebase_ml_vision (latest version) for ML Kit, and image_picker (latest version) to get an image from the gallery or camera.


Table of Contents

:: Resources

:: Configure Your App

:: Creating ImagePicker

:: ImageLabeler function

:: Barcode Scanner function

:: Text Recognizer function

:: main.dart file


Resources :

firebase_ml_vision | Flutter Package
A Flutter plugin to use the capabilities of Firebase ML, which includes all of Firebase’s cloud-based ML features. (pub.dev)

image_picker | Flutter Package
A Flutter plugin for iOS and Android for picking images from the image library, and taking new pictures with the camera. (pub.dev)

Configure Your App

Add the dependencies in your pubspec.yaml

firebase_ml_vision: ^0.9.7
image_picker: ^0.6.7+11

Edit your app/build.gradle

dependencies {
  implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
  api 'com.google.firebase:firebase-ml-vision-image-label-model:17.0.2'
}

Edit your AndroidManifest.xml file

Add the following metadata inside the application section. The android:value lists which models should be downloaded at install time ("ica" is the image-classification/labeling model; append "ocr" and "barcode" as well if you use text recognition and barcode scanning).

<meta-data
    android:name="com.google.mlkit.vision.DEPENDENCIES"
    android:value="ica" />

Creating UserImagePicker Class:

ImagePicker is one of the most frequently used features when we want a user's image as input, for example for authentication or a profile-image upload. This widget gives us the flexibility to choose an image from the phone gallery or take one with the phone camera. Here we are trying to achieve the same.

  • Initializing the File and ImagePicker objects

File _pickedImage;
final ImagePicker picker = ImagePicker();

  • A _pickImage function that takes the ImageSource

void _pickImage(ImageSource imageSource) async {
  final pickedImageFile = await picker.getImage(
    source: imageSource,
  );
  if (pickedImageFile == null) return; // user cancelled the picker
  setState(() {
    _pickedImage = File(pickedImageFile.path);
  });
}

What will the Image Picker look like?

We will first create a Container that displays the loaded image, and a FlatButton.icon. On tapping the button, an AlertDialog box appears showing two options: complete the action using the Camera or the Gallery. We can choose either of them.

  • Creating the Image Container

This container with rounded corners shows the image picked by the user. If there is no image, the text "Please Add Image" is displayed in the center.

Padding(
  padding: const EdgeInsets.all(18.0),
  child: ClipRRect(
    borderRadius: BorderRadius.circular(10),
    child: Container(
      color: Colors.orangeAccent.withOpacity(0.3),
      width: MediaQuery.of(context).size.width,
      height: 300,
      child: _pickedImage != null
          ? Image(
              image: FileImage(_pickedImage),
            )
          : Center(
              child: Text("Please Add Image"),
            ),
    ),
  ),
),
  • Creating a FlatButton.icon

On pressing the FlatButton.icon, an AlertDialog will appear containing two options for choosing the image source.

FlatButton.icon(
  onPressed: () {
    showDialog(
      context: context,
      builder: (_) {
        return AlertDialog(
          title: Text(
            "Complete your action using..",
          ),
          actions: [
            FlatButton(
              onPressed: () {
                Navigator.of(context).pop();
              },
              child: Text(
                "Cancel",
              ),
            ),
          ],
          content: Container(
            height: 120,
            child: Column(
              children: [
                ListTile(
                  leading: Icon(Icons.camera),
                  title: Text(
                    "Camera",
                  ),
                  onTap: () {
                    _pickImage(ImageSource.camera);
                    Navigator.of(context).pop();
                  },
                ),
                Divider(
                  height: 1,
                  color: Colors.black,
                ),
                ListTile(
                  leading: Icon(Icons.image),
                  title: Text(
                    "Gallery",
                  ),
                  onTap: () {
                    _pickImage(ImageSource.gallery);
                    Navigator.of(context).pop();
                  },
                ),
              ],
            ),
          ),
        );
      },
    );
  },
  icon: Icon(Icons.add),
  label: Text(
    'Add Image',
  ),
)

Your UserImagePicker dart file will look like this :

https://gist.github.com/anmolseth06/9c52896eba3c7a43b2ea2dd200366f6a#file-userimagepicker-dart

ImageLabeler :

Creating an imageLabeler function

  • Creating objects for FirebaseVisionImage and ImageLabeler
FirebaseVisionImage myImage = FirebaseVisionImage.fromFile(_userImageFile);
ImageLabeler labeler = FirebaseVision.instance.imageLabeler();

FirebaseVisionImage is an object that represents an image used by both the on-device and cloud API detectors. The imageLabeler() method creates an on-device instance of ImageLabeler.
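As a side note, fromFile is not the only way to build a FirebaseVisionImage; the plugin also offers fromFilePath (and fromBytes, which additionally needs image metadata). A small sketch, assuming a hypothetical image path:

```dart
import 'dart:io';

import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Two equivalent ways to wrap the same image for the detectors.
final fromFile = FirebaseVisionImage.fromFile(File('/path/to/image.jpg'));
final fromPath = FirebaseVisionImage.fromFilePath('/path/to/image.jpg');
```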

  • Processing image
_imageLabels = await labeler.processImage(myImage);

ImageLabeler provides a processImage() method that takes a FirebaseVisionImage object. Here _imageLabels is a List<ImageLabel>.

Storing the labels in the result variable.

for (ImageLabel imageLabel in _imageLabels) {
  setState(() {
    result = result +
        imageLabel.text +
        ":" +
        imageLabel.confidence.toString() +
        "\n";
  });
}
  • processImageLabels() code
processImageLabels() async {
  FirebaseVisionImage myImage = FirebaseVisionImage.fromFile(_userImageFile);
  ImageLabeler labeler = FirebaseVision.instance.imageLabeler();
  _imageLabels = await labeler.processImage(myImage);
  result = "";
  for (ImageLabel imageLabel in _imageLabels) {
    setState(() {
      result = result +
          imageLabel.text +
          ":" +
          imageLabel.confidence.toString() +
          "\n";
    });
  }
}
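By default the on-device labeler returns labels above a fairly low confidence. If you only want high-confidence labels, the plugin lets you pass an ImageLabelerOptions; a sketch, with 0.75 as an arbitrary example threshold:

```dart
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Only labels with confidence >= 0.75 will be returned.
final ImageLabeler labeler = FirebaseVision.instance.imageLabeler(
  ImageLabelerOptions(confidenceThreshold: 0.75),
);
```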

Barcode Scanner function :

  • barCodeScanner() function
barCodeScanner() async {
  FirebaseVisionImage myImage = FirebaseVisionImage.fromFile(_userImageFile);
  BarcodeDetector barcodeDetector = FirebaseVision.instance.barcodeDetector();
  _barCode = await barcodeDetector.detectInImage(myImage);
  result = "";
  for (Barcode barcode in _barCode) {
    setState(() {
      // Append each barcode instead of overwriting the previous one.
      result = result + barcode.displayValue + "\n";
    });
  }
}
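A Barcode carries more than its display string: valueType tells you what kind of data was encoded, and typed fields like url and wifi expose the parsed content. A hedged sketch of branching on the type (field names are from the firebase_ml_vision Barcode API):

```dart
import 'package:firebase_ml_vision/firebase_ml_vision.dart';

// Build a human-readable summary depending on what the barcode encodes.
String describeBarcode(Barcode barcode) {
  switch (barcode.valueType) {
    case BarcodeValueType.url:
      return 'URL: ${barcode.url.url}';
    case BarcodeValueType.wifi:
      return 'Wi-Fi SSID: ${barcode.wifi.ssid}';
    default:
      // Fall back to the raw/display value for plain barcodes.
      return barcode.displayValue ?? barcode.rawValue;
  }
}
```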

Text Recognizer function :

  • recogniseText() function :
recogniseText() async {
  FirebaseVisionImage myImage = FirebaseVisionImage.fromFile(_userImageFile);
  TextRecognizer recognizeText = FirebaseVision.instance.textRecognizer();
  VisionText readText = await recognizeText.processImage(myImage);
  result = "";
  for (TextBlock block in readText.blocks) {
    for (TextLine line in block.lines) {
      setState(() {
        result = result + ' ' + line.text + '\n';
      });
    }
  }
}
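One housekeeping detail the functions above skip: each detector holds native resources and should be closed when no longer needed. A sketch, assuming the detectors are stored as fields on the State class (hypothetical field names):

```dart
// Close the detectors when the widget is disposed to free native resources.
@override
void dispose() {
  labeler.close();
  barcodeDetector.close();
  textRecognizer.close();
  super.dispose();
}
```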

Your main.dart file will look like this :

https://gist.github.com/anmolseth06/35cd7d3b4c5480145d4d9f6f1020d057#file-main-dart


🌸🌼🌸 Thank you for reading. 🌸🌼🌸

Did I get something wrong? Let me know in the comments. I would love to improve.

Clap 👏 If this article helps you.


FlutterDevs is a team of Flutter developers building high-quality and functionally rich apps. Hire a Flutter developer for your cross-platform Flutter mobile app project on an hourly or full-time basis as per your requirement! You can connect with us on Facebook, GitHub, Twitter, and LinkedIn for any Flutter-related queries.

We welcome feedback and hope that you share what you’re working on using #FlutterDevs. We truly enjoy seeing how you use Flutter to build beautiful, interactive experiences!
