Implementing Voice Searching In Flutter

In today’s world, users expect the mobile applications they use to be more attractive and feature-rich. Voice searching is one of the key features that makes an application more responsive and intuitive for the user.

Speech recognition draws on a broad range of research in computer science, linguistics, and computer engineering. Many modern devices and text-focused programs include speech recognition features to allow easier, hands-free use of a device.

In this blog, we will explore implementing voice searching in Flutter. We will build a demo program in which we use our voice to enter text rather than typing it, using the flutter_speech package in your Flutter applications.

flutter_speech | Flutter Package
Flutter plugin to support voice recognition on Android, iOS, and macOS (pub.dev)

Table Of Contents:

Introduction

Implementation

Code Implement

Code File

Conclusion



Introduction:

Voice/speech recognition, or speech-to-text, is the ability of a machine or program to identify words spoken aloud and convert them into readable text. The flutter_speech package can recognize words and phrases when they are spoken clearly; more sophisticated systems can handle natural speech, varied accents, and different languages.

Demo Module:

The above demo video shows how to implement voice searching in Flutter and how it works using the flutter_speech package in your Flutter applications: speech is used to enter text rather than typing it, running on your device.

Implementation:

Step 1: Add the dependencies

Add the dependency to the pubspec.yaml file.

dependencies:
  flutter:
    sdk: flutter
  flutter_speech:
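
If you want to pin the package to a specific version, check pub.dev for the current release of flutter_speech and list it explicitly. The version below is only illustrative, not a recommendation:

dependencies:
  flutter:
    sdk: flutter
  flutter_speech: ^2.0.0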

Step 2: Import

import 'package:flutter_speech/flutter_speech.dart';

Step 3: Run flutter pub get (the newer form of flutter packages get) in the root directory of your app.

How to implement the code in the dart file:

You need to implement it in your code as follows:

Create a new dart file called main.dart inside the lib folder.

First, we will create a Language class in the main.dart file. In this class, we will add two final String fields, name and code.

class Language {
  final String name;
  final String code;

  const Language(this.name, this.code);
}

Now, we will create a constant languages list. Inside the list, we will add several Language() entries.

const languages = [
  Language('English', 'en_US'),
  Language('Hindi', 'hi'),
  Language('Francais', 'fr_FR'),
  Language('Pусский', 'ru_RU'),
  Language('Italiano', 'it_IT'),
  Language('Español', 'es_ES'),
];

In the main.dart file, we will create a new VoiceSearchingDemo class. In its state class, we will add a late SpeechRecognition variable called _speech, a bool _speechRecognitionAvailable initialized to false, a bool _isListening initialized to false, a String transcription initialized to an empty string, and a Language variable selectedLang initialized to languages.first.

late SpeechRecognition _speech;
bool _speechRecognitionAvailable = false;
bool _isListening = false;
String transcription = '';
Language selectedLang = languages.first;

Now, we will create an initState() method and call activateSpeechRecognizer() from it. Inside activateSpeechRecognizer(), we will register the handlers on _speech, such as setAvailabilityHandler, setRecognitionResultHandler, and setErrorHandler, and then call activate() with the default locale.

@override
void initState() {
  super.initState();
  activateSpeechRecognizer();
}

void activateSpeechRecognizer() {
  print('_MyAppState.activateSpeechRecognizer... ');
  _speech = SpeechRecognition();
  _speech.setAvailabilityHandler(onSpeechAvailability);
  _speech.setRecognitionStartedHandler(onRecognitionStarted);
  _speech.setRecognitionResultHandler(onRecognitionResult);
  _speech.setRecognitionCompleteHandler(onRecognitionComplete);
  _speech.setErrorHandler(errorHandler);
  _speech.activate('en_US').then((res) {
    setState(() => _speechRecognitionAvailable = res);
  });
}

Now, we need to create all of these handler methods, so let’s write them one by one. Each method below handles one of the recognition events; note that errorHandler() simply re-activates the recognizer.

void onSpeechAvailability(bool result) =>
    setState(() => _speechRecognitionAvailable = result);

void onRecognitionStarted() {
  setState(() => _isListening = true);
}

void onRecognitionResult(String text) {
  print('_MyAppState.onRecognitionResult... $text');
  setState(() => transcription = text);
}

void onRecognitionComplete(String text) {
  print('_MyAppState.onRecognitionComplete... $text');
  setState(() => _isListening = false);
}

void errorHandler() => activateSpeechRecognizer();

We will also create a custom _buildButton() widget. In this widget, we will add an ElevatedButton with an onPressed callback and a text label.

Widget _buildButton({required String label, VoidCallback? onPressed}) =>
    Padding(
      padding: const EdgeInsets.all(12.0),
      child: ElevatedButton(
        onPressed: onPressed,
        child: Text(
          label,
          style: const TextStyle(color: Colors.white),
        ),
      ),
    );

When we run the application, we should get output like the screen capture below.

You can see that, initially, the app will ask for record-audio permission. Allow it.
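
This permission prompt only appears when the platform permissions are declared. The original article does not show these entries, but as a general setup (standard platform keys, not specific to this demo), Android needs the RECORD_AUDIO permission in android/app/src/main/AndroidManifest.xml, and iOS needs microphone and speech-recognition usage descriptions in ios/Runner/Info.plist:

<!-- android/app/src/main/AndroidManifest.xml -->
<uses-permission android:name="android.permission.RECORD_AUDIO" />

<!-- ios/Runner/Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>This app uses the microphone for voice searching.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>This app converts your speech to text.</string>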

Now, we need to create the methods that we will call when the start, stop, and cancel buttons are clicked.

void start() => _speech.activate(selectedLang.code).then((_) {
      return _speech.listen().then((result) {
        print('_MyAppState.start => result $result');
        setState(() {
          _isListening = result;
        });
      });
    });

void cancel() =>
    _speech.cancel().then((_) => setState(() => _isListening = false));

void stop() => _speech.stop().then((_) {
      setState(() => _isListening = false);
    });

void onCurrentLocale(String locale) {
  print('_MyAppState.onCurrentLocale... $locale');
  setState(
      () => selectedLang = languages.firstWhere((l) => l.code == locale));
}

Now, wire these methods into your build method. In the AppBar, we will add an actions list containing a PopupMenuButton() for choosing the language. In the body, we will add a Column widget containing the transcription text and three custom _buildButton() widgets for start, cancel, and stop. The Column is wrapped in a Center widget, which in turn is wrapped in a Padding widget.

@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      automaticallyImplyLeading: false,
      title: const Text('Flutter Voice Searching Demo'),
      actions: [
        PopupMenuButton<Language>(
          onSelected: _selectLangHandler,
          itemBuilder: (BuildContext context) => _buildLanguagesWidgets,
        )
      ],
    ),
    body: Padding(
      padding: const EdgeInsets.all(8.0),
      child: Center(
        child: Column(
          mainAxisSize: MainAxisSize.min,
          crossAxisAlignment: CrossAxisAlignment.stretch,
          children: [
            Expanded(
                child: Container(
                    padding: const EdgeInsets.all(8.0),
                    color: Colors.grey.shade200,
                    child: Text(transcription))),
            _buildButton(
              onPressed: _speechRecognitionAvailable && !_isListening
                  ? () => start()
                  : null,
              label: _isListening
                  ? 'Listening...'
                  : 'Listen (${selectedLang.code})',
            ),
            _buildButton(
              onPressed: _isListening ? () => cancel() : null,
              label: 'Cancel',
            ),
            _buildButton(
              onPressed: _isListening ? () => stop() : null,
              label: 'Stop',
            ),
          ],
        ),
      ),
    ),
  );
}

List<CheckedPopupMenuItem<Language>> get _buildLanguagesWidgets => languages
    .map((l) => CheckedPopupMenuItem<Language>(
          value: l,
          checked: selectedLang == l,
          child: Text(l.name),
        ))
    .toList();

void _selectLangHandler(Language lang) {
  setState(() => selectedLang = lang);
}

When we run the application, we should get output like the screen capture below.

Final Output

Code File:

import 'package:flutter/foundation.dart';
import 'package:flutter/material.dart';
import 'package:flutter_speech/flutter_speech.dart';
import 'package:flutter_voice_search_demo/splash_screen.dart';

void main() {
  debugDefaultTargetPlatformOverride = TargetPlatform.android;
  runApp(const MyApp());
}

class Language {
  final String name;
  final String code;

  const Language(this.name, this.code);
}

const languages = [
  Language('English', 'en_US'),
  Language('Hindi', 'hi'),
  Language('Francais', 'fr_FR'),
  Language('Pусский', 'ru_RU'),
  Language('Italiano', 'it_IT'),
  Language('Español', 'es_ES'),
];

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      debugShowCheckedModeBanner: false,
      // Splash is defined in splash_screen.dart (not shown in this article).
      home: Splash(),
    );
  }
}

class VoiceSearchingDemo extends StatefulWidget {
  const VoiceSearchingDemo({Key? key}) : super(key: key);

  @override
  _VoiceSearchingDemoState createState() => _VoiceSearchingDemoState();
}

class _VoiceSearchingDemoState extends State<VoiceSearchingDemo> {
  late SpeechRecognition _speech;

  bool _speechRecognitionAvailable = false;
  bool _isListening = false;

  String transcription = '';

  //String _currentLocale = 'en_US';
  Language selectedLang = languages.first;

  @override
  void initState() {
    super.initState();
    activateSpeechRecognizer();
  }

  // Platform messages are asynchronous, so we initialize in an async method.
  void activateSpeechRecognizer() {
    print('_MyAppState.activateSpeechRecognizer... ');
    _speech = SpeechRecognition();
    _speech.setAvailabilityHandler(onSpeechAvailability);
    _speech.setRecognitionStartedHandler(onRecognitionStarted);
    _speech.setRecognitionResultHandler(onRecognitionResult);
    _speech.setRecognitionCompleteHandler(onRecognitionComplete);
    _speech.setErrorHandler(errorHandler);
    _speech.activate('en_US').then((res) {
      setState(() => _speechRecognitionAvailable = res);
    });
  }

  Widget _buildButton({required String label, VoidCallback? onPressed}) =>
      Padding(
        padding: const EdgeInsets.all(12.0),
        child: ElevatedButton(
          onPressed: onPressed,
          child: Text(
            label,
            style: const TextStyle(color: Colors.white),
          ),
        ),
      );

  void start() => _speech.activate(selectedLang.code).then((_) {
        return _speech.listen().then((result) {
          print('_MyAppState.start => result $result');
          setState(() {
            _isListening = result;
          });
        });
      });

  void cancel() =>
      _speech.cancel().then((_) => setState(() => _isListening = false));

  void stop() => _speech.stop().then((_) {
        setState(() => _isListening = false);
      });

  void onCurrentLocale(String locale) {
    print('_MyAppState.onCurrentLocale... $locale');
    setState(
        () => selectedLang = languages.firstWhere((l) => l.code == locale));
  }

  // Handlers registered in activateSpeechRecognizer().
  void onSpeechAvailability(bool result) =>
      setState(() => _speechRecognitionAvailable = result);

  void onRecognitionStarted() {
    setState(() => _isListening = true);
  }

  void onRecognitionResult(String text) {
    print('_MyAppState.onRecognitionResult... $text');
    setState(() => transcription = text);
  }

  void onRecognitionComplete(String text) {
    print('_MyAppState.onRecognitionComplete... $text');
    setState(() => _isListening = false);
  }

  void errorHandler() => activateSpeechRecognizer();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        automaticallyImplyLeading: false,
        title: const Text('Flutter Voice Searching Demo'),
        actions: [
          PopupMenuButton<Language>(
            onSelected: _selectLangHandler,
            itemBuilder: (BuildContext context) => _buildLanguagesWidgets,
          )
        ],
      ),
      body: Padding(
        padding: const EdgeInsets.all(8.0),
        child: Center(
          child: Column(
            mainAxisSize: MainAxisSize.min,
            crossAxisAlignment: CrossAxisAlignment.stretch,
            children: [
              Expanded(
                  child: Container(
                      padding: const EdgeInsets.all(8.0),
                      color: Colors.grey.shade200,
                      child: Text(transcription))),
              _buildButton(
                onPressed: _speechRecognitionAvailable && !_isListening
                    ? () => start()
                    : null,
                label: _isListening
                    ? 'Listening...'
                    : 'Listen (${selectedLang.code})',
              ),
              _buildButton(
                onPressed: _isListening ? () => cancel() : null,
                label: 'Cancel',
              ),
              _buildButton(
                onPressed: _isListening ? () => stop() : null,
                label: 'Stop',
              ),
            ],
          ),
        ),
      ),
    );
  }

  List<CheckedPopupMenuItem<Language>> get _buildLanguagesWidgets => languages
      .map((l) => CheckedPopupMenuItem<Language>(
            value: l,
            checked: selectedLang == l,
            child: Text(l.name),
          ))
      .toList();

  void _selectLangHandler(Language lang) {
    setState(() => selectedLang = lang);
  }
}
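
Note that main.dart imports splash_screen.dart and uses its Splash widget as the home screen, but the article does not show that file. A minimal sketch of what it could look like is below; the two-second delay and the FlutterLogo placeholder are assumptions for illustration, not the author's actual code:

import 'dart:async';

import 'package:flutter/material.dart';
import 'package:flutter_voice_search_demo/main.dart';

class Splash extends StatefulWidget {
  const Splash({Key? key}) : super(key: key);

  @override
  State<Splash> createState() => _SplashState();
}

class _SplashState extends State<Splash> {
  @override
  void initState() {
    super.initState();
    // Assumption: show the splash briefly, then open the demo screen.
    Timer(const Duration(seconds: 2), () {
      if (!mounted) return;
      Navigator.of(context).pushReplacement(
        MaterialPageRoute(builder: (_) => const VoiceSearchingDemo()),
      );
    });
  }

  @override
  Widget build(BuildContext context) {
    return const Scaffold(
      body: Center(child: FlutterLogo(size: 96)),
    );
  }
}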

Conclusion:

In this article, I have explained the basic structure of voice searching in Flutter; you can modify this code according to your needs. This was a small introduction to voice searching on user interaction from my side, and it works using Flutter.

I hope this blog provided you with sufficient information on implementing voice searching in your Flutter projects. We made a demo program for voice searching using the flutter_speech package in your Flutter applications, so please try it.

❤ ❤ Thanks for reading this article ❤❤

Did I get something wrong? Let me know in the comments; I would love to improve.

Clap 👏 If this article helps you.


Feel free to connect with us, and read more articles from FlutterDevs.com.

FlutterDevs employs a team of Flutter developers to build high-quality and functionally-rich apps. Hire a Flutter developer for your cross-platform Flutter mobile app project on an hourly or full-time basis as per your requirement! For any Flutter-related queries, you can connect with us on Facebook, GitHub, Twitter, and LinkedIn.

We welcome feedback and hope that you share what you’re working on using #FlutterDevs. We truly enjoy seeing how you use Flutter to build beautiful, interactive web experiences.

