Why Use Native Speech Recognition in React Native?
Before we dive into the implementation, let’s quickly discuss why using native speech recognition can be a game-changer for your app.
Benefits of Native Speech Recognition
Enhanced Accuracy: Native speech recognition tools are optimized for each device's hardware, ensuring better accuracy compared to web-based APIs.
Offline Capability: Unlike web APIs that rely on an internet connection, many native tools can function offline, ensuring that your app continues working in low-connectivity environments.
Improved Performance: By accessing native microphone APIs, your app avoids the performance bottlenecks that web-based APIs can face, providing smoother experiences.
React Native, which bridges native code and JavaScript, makes it easier to integrate these native features without diving deep into platform-specific code. With the right library, such as react-native-voice, you can unlock powerful speech recognition tools on both iOS and Android devices.
Setting Up Your React Native Environment
Prerequisites
Before we start adding speech recognition to your React Native app, ensure you have the following in place:
React Native Development Environment: Follow the official React Native environment setup guide to get all the tools you need.
React Native Project: If you haven’t already, create a new React Native app using the React Native CLI.
Once your environment is set up, you're ready to begin!
Step 1: Install the react-native-voice Library
The first thing you need to do is install the react-native-voice library, which provides a simple interface for integrating native speech recognition features into your app.
Install the Library
Run the following command to install the package:
npm install @react-native-voice/voice --save

# Or using yarn:
yarn add @react-native-voice/voice
Link the Native Code
React Native 0.60 and higher supports auto-linking, so if you’re using a compatible version, you won’t need to manually link the library. However, you may need to do some additional setup for iOS and Android.
For iOS:
After installing the library, navigate to your iOS project directory and run the following command:
cd ios && pod install && cd ..
Open Info.plist and add the following to request microphone access:
<key>NSMicrophoneUsageDescription</key>
<string>We need access to your microphone for speech recognition.</string>
For Android:
Open AndroidManifest.xml and include the following permissions:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
For Android versions 6.0 and above, ensure you request microphone permissions at runtime, which we’ll cover later.
Step 2: Setting Up the Speech Recognition Component
Now that the library is installed, let’s set up a simple speech-to-text component that uses the device’s native microphone for speech recognition.
Import the Required Modules
Create a new component, SpeechToText.js, and import the necessary libraries.
import React, { useEffect, useState } from 'react';
import { View, Text, Button, StyleSheet } from 'react-native';
import Voice from '@react-native-voice/voice';
Create the Component
Below is the basic structure for your speech-to-text component. It starts and stops the speech recognition, displays transcribed text, and handles errors.
const SpeechToText = () => {
  const [isListening, setIsListening] = useState(false);
  const [recognizedText, setRecognizedText] = useState('');
  const [errorMessage, setErrorMessage] = useState(null);

  useEffect(() => {
    Voice.onSpeechStart = () => {
      console.log('Speech recognition started');
      setErrorMessage(null);
    };
    Voice.onSpeechPartialResults = (e) => {
      setRecognizedText(e.value[0]);
    };
    Voice.onSpeechResults = (e) => {
      setRecognizedText(e.value[0]);
    };
    Voice.onSpeechError = (e) => {
      setErrorMessage(e.error.message);
    };
    Voice.onSpeechEnd = () => {
      setIsListening(false);
    };

    return () => {
      Voice.destroy().then(Voice.removeAllListeners);
    };
  }, []);

  const startListening = async () => {
    setRecognizedText('');
    try {
      await Voice.start('en-US'); // Specify the language code
      setIsListening(true);
    } catch (e) {
      console.error(e);
      setErrorMessage(e.message);
    }
  };

  const stopListening = async () => {
    try {
      await Voice.stop();
      setIsListening(false);
    } catch (e) {
      console.error(e);
      setErrorMessage(e.message);
    }
  };

  return (
    <View style={styles.container}>
      <Button
        title={isListening ? 'Stop Listening' : 'Start Listening'}
        onPress={isListening ? stopListening : startListening}
      />
      <Text style={styles.text}>Recognized Text: {recognizedText}</Text>
      {errorMessage && <Text style={styles.error}>Error: {errorMessage}</Text>}
    </View>
  );
};

const styles = StyleSheet.create({
  container: { padding: 20, marginTop: 50, alignItems: 'center' },
  text: { marginTop: 20, fontSize: 18 },
  error: { marginTop: 10, color: 'red' },
});

export default SpeechToText;
Explanation of Code
State Variables:
isListening: Tracks whether the app is currently listening for speech.
recognizedText: Stores the transcribed speech.
errorMessage: Stores any error message raised during speech recognition.
Speech Recognition Events:
onSpeechStart: Triggered when speech recognition starts.
onSpeechPartialResults: Updates the text as speech is being recognized.
onSpeechResults: Delivers the final transcribed text.
onSpeechError: Captures any errors that occur during recognition.
onSpeechEnd: Triggered when speech recognition ends.
Start/Stop Functions:
startListening: Begins speech recognition.
stopListening: Stops speech recognition.
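One caveat with the handlers above: they read e.value[0] directly, but the event payload can arrive without any results (for example, when recognition is cancelled). A small defensive helper avoids a crash in that case. Note that getBestTranscript is an illustrative helper of our own, not part of the react-native-voice API:

```javascript
// Illustrative helper (not part of react-native-voice): safely extract the
// top transcript from a speech event, since e.value may be missing or empty.
function getBestTranscript(event) {
  if (!event || !Array.isArray(event.value) || event.value.length === 0) {
    return '';
  }
  // Alternatives are ordered by confidence; the first entry is the best guess.
  return event.value[0];
}

// Inside the component you would then write, for example:
// Voice.onSpeechResults = (e) => setRecognizedText(getBestTranscript(e));
```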
Step 3: Handle Permissions
iOS Permissions
For iOS, ensure the Info.plist file contains the microphone permission:
<key>NSMicrophoneUsageDescription</key>
<string>We need access to your microphone for speech recognition.</string>
Android Permissions
For Android, open AndroidManifest.xml and ensure the following permissions are added:
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
For Android 6.0 and above, you'll need to request runtime permissions. Here’s how you can do it using the react-native-permissions library:
import { request, PERMISSIONS, RESULTS } from 'react-native-permissions';

const requestPermission = async () => {
  const result = await request(PERMISSIONS.ANDROID.RECORD_AUDIO);
  if (result === RESULTS.GRANTED) {
    startListening();
  } else {
    setErrorMessage('Microphone permission denied');
  }
};
Step 4: Customize the Speech Recognition Experience
You can enhance the speech recognition feature by adding:
Real-time Transcription: Use onSpeechPartialResults to display the transcription as it happens, offering more interactive feedback.
Multiple Languages: You can switch languages easily by passing the appropriate language code to the Voice.start() function (e.g., 'fr-FR' for French).
Error Handling: Display user-friendly messages when errors occur, such as microphone permissions being denied.
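For the multiple-languages case, it helps to validate the requested locale before passing it to Voice.start(). The helper below is a sketch: the list of supported codes is an assumption for illustration, and in a real app you would check which recognizers the device actually offers:

```javascript
// Illustrative locale handling. SUPPORTED_LOCALES is a hypothetical list for
// this example; real availability depends on the device's installed recognizers.
const SUPPORTED_LOCALES = ['en-US', 'en-GB', 'fr-FR', 'de-DE', 'es-ES'];

function resolveLocale(requested, fallback = 'en-US') {
  // Fall back to a safe default when the requested language is unsupported.
  return SUPPORTED_LOCALES.includes(requested) ? requested : fallback;
}

// Usage sketch inside the component:
// await Voice.start(resolveLocale(userLanguage));
```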
Step 5: Test Your Implementation
Testing is crucial when dealing with speech recognition. Here are some tips for smooth testing:
Physical Device Testing: Always test on an actual device, as emulators don’t provide reliable microphone input.
Cross-Platform Testing: Test on both iOS and Android devices to ensure consistent functionality across platforms.
Error Logs: Use console logs to debug issues. Keep an eye on the device logs to understand any errors or issues in real-time.
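To make those device logs easier to follow, you can route every speech event through a single logging helper. This is an illustrative pattern of our own, not something the library provides:

```javascript
// Illustrative debug helper: format speech events with a timestamp so device
// logs show the full recognition lifecycle while testing on hardware.
function logSpeechEvent(name, payload) {
  const entry = `[${new Date().toISOString()}] ${name}: ${JSON.stringify(payload)}`;
  console.log(entry);
  return entry;
}

// Wiring sketch inside the component's useEffect:
// Voice.onSpeechError = (e) => logSpeechEvent('onSpeechError', e);
```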

Conclusion
Integrating native speech recognition into your React Native app can significantly enhance the user experience by enabling voice-based interactions. By using the react-native-voice library, you gain access to powerful native speech recognition capabilities for both iOS and Android devices.
Whether you're building an assistant app, dictation tool, or any app that requires voice commands, this simple implementation will help you get started with speech-to-text functionality in your React Native app. Make sure to test thoroughly and optimize the user experience for the best results!