Text Recognition with ML-Kit | Flutter
Hi everyone! In this article, you will learn how to implement ML Kit text recognition in your Flutter app.
What is ML-Kit?
ML Kit is Google's mobile SDK that brings machine learning features, such as text recognition, face detection, and barcode scanning, to Android and iOS apps through Firebase.
Implementation
Step 1: Add Firebase to Flutter
Follow the official guide, Add Firebase to your Flutter app, on firebase.google.com to set up Firebase in your project.
Step 2: Add the dependencies
Add the dependencies to the pubspec.yaml file.
dependencies:
  flutter:
    sdk: flutter
  firebase_ml_vision: "<newest version>"
  camera: "<newest version>"
firebase_ml_vision is a Flutter plugin, built by the Flutter team, that wraps the ML Kit Vision for Firebase API. We also need the camera plugin to stream frames from the device camera so we have something to scan for text.
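With the packages installed, the Dart file we build below will need these imports (a minimal sketch; the package paths come from the two plugins):

import 'package:flutter/material.dart';
import 'package:camera/camera.dart';
import 'package:firebase_ml_vision/firebase_ml_vision.dart';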
Step 3: Initialize the camera
CameraController _camera;
// Which camera to use; the back camera is assumed here.
CameraLensDirection _direction = CameraLensDirection.back;

@override
void initState() {
  super.initState();
  _initializeCamera();
}

void _initializeCamera() async {
  // Pick the device camera that matches the requested lens direction.
  final CameraDescription description =
      await ScannerUtils.getCamera(_direction);

  _camera = CameraController(
    description,
    ResolutionPreset.high,
  );
  await _camera.initialize();

  _camera.startImageStream((CameraImage image) {
    // Here we will scan the text from the image
    // which we are getting from the camera.
  });
}
We will use ScannerUtils, a prewritten class from the Flutter team's demo, which has utility methods for picking a camera and feeding camera images to the Firebase ML Kit detectors.
https://gist.github.com/ashishrawat2911/07c53c9a8eb16d36889ff8382dedbfae#file-scanner_utils-dart
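For reference, the getCamera helper used in Step 3 boils down to something like this (a sketch based on the demo's utility class; see the gist for the full version):

static Future<CameraDescription> getCamera(CameraLensDirection dir) async {
  // Return the first available camera facing the requested direction.
  final List<CameraDescription> cameras = await availableCameras();
  return cameras.firstWhere(
    (CameraDescription camera) => camera.lensDirection == dir,
  );
}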
Step 4: Scan the image
When we get an image from the camera and scan it, we get back a VisionText object.
// Holds the latest recognition result.
VisionText _textScanResults;

// The ML Kit detector that extracts text from a camera frame.
final TextRecognizer _textRecognizer = FirebaseVision.instance.textRecognizer();
https://gist.github.com/ashishrawat2911/b721a8e2312644767ac64be92e2f0796#file-camera_scan-dart
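The gist wires the image stream to the recognizer. The core idea, as a sketch (assuming ScannerUtils.detect keeps the signature used in the Flutter team's demo), is to skip frames while a detection is still in flight and store each result:

bool _isDetecting = false;

void _processCameraImage(CameraImage image) {
  // Drop this frame if the previous detection hasn't finished yet.
  if (_isDetecting) return;
  _isDetecting = true;

  ScannerUtils.detect(
    image: image,
    detectInImage: _textRecognizer.processImage,
    imageRotation: _camera.description.sensorOrientation,
  ).then((dynamic results) {
    setState(() {
      _textScanResults = results;
    });
  }).whenComplete(() => _isDetecting = false);
}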
Step 5: Get the result
Now _textScanResults holds the result. Looking at VisionText, we can get the whole text, as well as its blocks, lines, and individual words; which level we read depends on what kind of result we want.
To get the text blocks:
List<TextBlock> blocks = _textScanResults.blocks;
To get the lines in a block:
List<TextLine> lines = block.lines;
To get the words in a line:
List<TextElement> words = line.elements;
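Putting the three levels together, the getWords helper called later in _buildResults could simply walk the hierarchy and collect every recognized word (a sketch; the article doesn't show the original helper's body):

List<String> getWords(VisionText scanResults) {
  final List<String> words = <String>[];
  // Walk blocks -> lines -> elements and collect each word's text.
  for (TextBlock block in scanResults.blocks) {
    for (TextLine line in block.lines) {
      for (TextElement element in line.elements) {
        words.add(element.text);
      }
    }
  }
  return words;
}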
Step 6: Build the UI
To show the camera feed, we just need a CameraPreview widget and pass it the CameraController object.
@override
Widget build(BuildContext context) {
  return Scaffold(
    body: Stack(
      fit: StackFit.expand,
      children: <Widget>[
        _camera == null
            ? Container(
                color: Colors.black,
              )
            : Container(
                height: MediaQuery.of(context).size.height - 150,
                child: CameraPreview(_camera),
              ),
      ],
    ),
  );
}
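To overlay the scan results, the widget returned by _buildResults (shown in the next section) can be added as another child of the same Stack; as a fragment of the children list above:

children: <Widget>[
  // Camera preview at the bottom of the stack...
  CameraPreview(_camera),
  // ...with the painted text outlines layered on top.
  _buildResults(_textScanResults),
],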
Show scanned text outlines
To show the outlines, we can draw them with a CustomPainter: every part of the VisionText extends TextContainer, which provides a boundingBox, so we can take those coordinates and paint rectangles over the preview.
Widget _buildResults(VisionText scanResults) {
  CustomPainter painter;
  if (scanResults != null) {
    final Size imageSize = Size(
      _camera.value.previewSize.height - 100,
      _camera.value.previewSize.width,
    );
    painter = TextDetectorPainter(imageSize, scanResults);
    getWords(scanResults);
    return CustomPaint(
      painter: painter,
    );
  } else {
    return Container();
  }
}
https://gist.github.com/ashishrawat2911/167a980ce78726bf428eede41ab381d0#file-outline-dart
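The gist contains the full painter. As a sketch of the idea (assuming the scaling approach from the Flutter demo), the painter maps each boundingBox from image coordinates to canvas coordinates and strokes a rectangle:

class TextDetectorPainter extends CustomPainter {
  TextDetectorPainter(this.imageSize, this.visionText);

  final Size imageSize;
  final VisionText visionText;

  @override
  void paint(Canvas canvas, Size size) {
    // Scale a detected rectangle from camera-image space to canvas space.
    Rect scaleRect(TextContainer container) {
      return Rect.fromLTRB(
        container.boundingBox.left * size.width / imageSize.width,
        container.boundingBox.top * size.height / imageSize.height,
        container.boundingBox.right * size.width / imageSize.width,
        container.boundingBox.bottom * size.height / imageSize.height,
      );
    }

    final Paint paint = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 2.0
      ..color = Colors.red;

    // Outline every detected text block.
    for (TextBlock block in visionText.blocks) {
      canvas.drawRect(scaleRect(block), paint);
    }
  }

  @override
  bool shouldRepaint(TextDetectorPainter oldDelegate) {
    return oldDelegate.imageSize != imageSize ||
        oldDelegate.visionText != visionText;
  }
}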
Now, if we run the app, the recognized text is outlined on top of the camera preview.
Thanks for reading this article ❤
If I got something wrong 🙈, let me know in the comments. I would love to improve.
Clap 👏 If this article helps you.
Connect with me on LinkedIn and GitHub.
Feel free to connect with us, and read more articles from FlutterDevs.com.
FlutterDevs is a team of Flutter developers that builds high-quality and functionally rich apps. Hire a Flutter developer for your cross-platform Flutter mobile app project on an hourly or full-time basis as per your requirement! For any Flutter-related queries, you can connect with us on Facebook, GitHub, Twitter, and LinkedIn.
We welcome feedback and hope that you share what you’re working on using #FlutterDevs. We truly enjoy seeing how you use Flutter to build beautiful, interactive web experiences.