# React Native Voice

A speech-to-text library for React Native.
```shell
npm i @evgen74/react-native-voice --save
```
This is a clone of the original react-native-voice repo, but with a working volume-level reading and without conflicts with react-native-tts.
## Table of contents

- [Linking](#linking)
  - [Manually Link Android](#manually-link-android)
  - [Manually Link iOS](#manually-link-ios)
- [Usage](#usage)
  - [Example](#example)
- [API](#api)
- [Events](#events)
- [Permissions](#permissions)
  - [Android](#android)
  - [iOS](#ios)
- [Contributors](#contributors)

## Linking
### Manually Link Android

- In `android/settings.gradle`:

```gradle
...
include ':VoiceModule', ':app'
project(':VoiceModule').projectDir = new File(rootProject.projectDir, '../node_modules/@evgen74/react-native-voice/android')
```
- In `android/app/build.gradle` (the project name must match the one declared in `settings.gradle`):

```gradle
...
dependencies {
  ...
  compile project(':VoiceModule')
}
```
- In `MainApplication.java`:

```java
import com.facebook.react.ReactApplication;
import com.facebook.react.ReactPackage;
...
import com.wenkesj.voice.VoicePackage; // <------ Add this!
...
public class MainApplication extends Application implements ReactApplication {
  ...
  @Override
  protected List<ReactPackage> getPackages() {
    return Arrays.<ReactPackage>asList(
      new MainReactPackage(),
      new VoicePackage() // <------ Add this!
    );
  }
}
```
### Manually Link iOS

- Drag `Voice.xcodeproj` from the `@evgen74/react-native-voice/ios` folder into the Libraries group of your project in Xcode (manual linking).
- Click on your main project file (the one that represents the `.xcodeproj`), select Build Phases, and drag the static library, `libVoice.a`, from the `Libraries/Voice.xcodeproj/Products` folder to Link Binary With Libraries.
## Usage

Full example for Android and iOS.

### Example
```javascript
import Voice from '@evgen74/react-native-voice';
import React, {Component} from 'react';

class VoiceTest extends Component {
  constructor(props) {
    super(props);
    Voice.onSpeechStart = this.onSpeechStartHandler.bind(this);
    Voice.onSpeechEnd = this.onSpeechEndHandler.bind(this);
    Voice.onSpeechResults = this.onSpeechResultsHandler.bind(this);
  }

  onStartButtonPress(e) {
    Voice.start('en');
  }
  ...
}
```
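Because every method returns a promise, the recognition lifecycle can also be driven with `async`/`await`. The following is a minimal sketch, not a definitive implementation: the handler bodies, the `toggleRecognition` helper, and the `'en-US'` locale are illustrative choices, not part of the library's API.

```javascript
import Voice from '@evgen74/react-native-voice';

// Wire up event handlers before starting recognition.
Voice.onSpeechResults = (e) => console.log('final results:', e.value);
Voice.onSpeechPartialResults = (e) => console.log('partial:', e.value);
Voice.onSpeechVolumeLevel = (e) => console.log('volume (dB):', e.value);
Voice.onSpeechError = (e) => console.warn('error:', e.error);

// Hypothetical helper: start listening if idle, stop if already recording.
async function toggleRecognition(isRecording) {
  try {
    if (isRecording) {
      await Voice.stop();           // stop listening, keep the recognizer alive
    } else if (await Voice.isAvailable()) {
      await Voice.start('en-US');   // begin listening for US English
    }
  } catch (err) {
    console.warn(err);
  }
}

// On unmount, release the native recognizer and clear the handlers:
//   await Voice.destroy();
//   Voice.removeAllListeners();
```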
## API

Static access to the Voice API.

All methods now return a `new Promise` for `async`/`await` compatibility.

Method Name | Description | Platform
--------------------------- | ----------------------------------------------------------------------------------- | --------
`Voice.isAvailable()` | Checks whether a speech recognition service is available on the system. | Android, iOS
`Voice.start(locale)` | Starts listening for speech for a specific locale. Returns null if no error occurs. | Android, iOS
`Voice.stop()` | Stops listening for speech. Returns null if no error occurs. | Android, iOS
`Voice.cancel()` | Cancels the speech recognition. Returns null if no error occurs. | Android, iOS
`Voice.destroy()` | Destroys the current SpeechRecognizer instance. Returns null if no error occurs. | Android, iOS
`Voice.removeAllListeners()` | Cleans/nullifies overridden `Voice` static methods. | Android, iOS
`Voice.isRecognizing()` | Returns whether the SpeechRecognizer is currently recognizing. | Android, iOS

## Events

Callbacks that are invoked when a native event is emitted.
Event Name | Description | Event | Platform
----------------------------------- | ----------------------------------------------------------- | ----------------------------------------------- | --------
`Voice.onSpeechStart(event)` | Invoked when `.start()` is called without error. | `{ error: false }` | Android, iOS
`Voice.onSpeechRecognized(event)` | Invoked when speech is recognized. | `{ error: false }` | Android, iOS
`Voice.onSpeechEnd(event)` | Invoked when SpeechRecognizer stops recognition. | `{ error: false }` | Android, iOS
`Voice.onSpeechError(event)` | Invoked when an error occurs. | `{ error: Description of error as string }` | Android, iOS
`Voice.onSpeechResults(event)` | Invoked when SpeechRecognizer is finished recognizing. | `{ value: [..., 'Speech recognized'] }` | Android, iOS
`Voice.onSpeechPartialResults(event)` | Invoked when any results are computed. | `{ value: [..., 'Partial speech recognized'] }` | Android, iOS
`Voice.onSpeechVolumeChanged(event)` | Invoked when the recognized pitch changes. (do not use) | `{ value: pitch in dB }` | Android
`Voice.onSpeechVolumeLevel(event)` | Invoked when the recognized pitch changes. | `{ value: pitch in dB }` | Android, iOS

## Permissions

Arguably the most important part.
### Android

While the included `VoiceTest` app works without explicit permission checks and requests, it may be necessary to add a permission request for `RECORD_AUDIO` for some configurations.
Since Android M (6.0), the user needs to grant permissions at runtime (and not during app installation).
By default, calling the `startSpeech` method will show the `RECORD_AUDIO` permission popup to the user. This can be disabled by passing `REQUEST_PERMISSIONS_AUTO: true` in the options argument.

### iOS
You need to include the `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` keys inside `Info.plist` for iOS. See the included `VoiceTest` for how to handle these cases.

```xml
<dict>
  ...
  <key>NSMicrophoneUsageDescription</key>
  <string>Description of why you require the use of the microphone</string>
  <key>NSSpeechRecognitionUsageDescription</key>
  <string>Description of why you require the use of the speech recognition</string>
  ...
</dict>
```
Please see the documentation provided by React Native for this: `PermissionsAndroid`.
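As a sketch of what such a runtime check could look like with React Native's `PermissionsAndroid` API (the `requestMicrophonePermission` helper and the rationale strings are illustrative, not part of this library):

```javascript
import { PermissionsAndroid } from 'react-native';

// Ask the user for RECORD_AUDIO at runtime before starting recognition.
// The title/message strings below are placeholders for your own copy.
async function requestMicrophonePermission() {
  const granted = await PermissionsAndroid.request(
    PermissionsAndroid.PERMISSIONS.RECORD_AUDIO,
    {
      title: 'Microphone Permission',
      message: 'This app needs access to your microphone for speech recognition.',
      buttonPositive: 'OK',
    },
  );
  return granted === PermissionsAndroid.RESULTS.GRANTED;
}
```

If the helper resolves to `true`, it should then be safe to call `Voice.start(locale)`.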
## Contributors

@asafron @BrendanFDMoore @brudny @chitezh @ifsnow @jamsch @ohtangza & @hayanmind @rudiedev6 @wenkesj