| MODULE | TYPE | VERSION | STATUS | COVERAGE |
|---|---|---|---|---|
| demo | demo | Latest demo | Build Status | Coverage Status |
| vui-core | core | Latest version | Build Status | Coverage Status |
| addon-android-speech | addon | Latest version | Build Status | Coverage Status |
| addon-google-speech | addon | Latest version | Build Status | Coverage Status |
| addon-amazon-speech | addon | Latest version | Build Status | Coverage Status |
This library combines native built-in resources and cloud services into a single component that runs a Speech Synthesizer and a Voice Recognizer reliably and seamlessly.
It currently supports the following providers:
- Built-in Android (@FranRiadigos)
- Google Cloud (@FranRiadigos)
- Amazon (@xvelx)
Other providers you can contribute support for are:
- Microsoft Azure
- Watson (IBM)
- Wit.ai
- Temi
Apart from the providers mentioned above, it also helps you when:
- some devices don't have the resources your app needs for a conversation properly configured
- a developer would otherwise have to learn and test a great deal before even starting to code voice capabilities
- noise considerably impacts the communication
- older Android components force you to write a lot of boilerplate
- some countries don't allow Google services
The SDK works on Android 5.0 (Lollipop, API level 21) and above. (For lower versions, contact us.)

Add the following to your module's build.gradle file:
```groovy
repositories {
    // Optional. Access to early versions not yet published.
    maven { url "https://dl.bintray.com/chattylabs/maven" }
}

dependencies {
    // Required
    implementation 'chattylabs:vui-core:<latest version>'

    // You can use a single addon or combine several (see the sketch below),
    // e.g. the voice Synthesizer from Google with the SpeechRecognizer from Android.
    implementation 'chattylabs:addon-android-speech:<latest version>'
    implementation 'chattylabs:addon-google-speech:<latest version>'
}
```
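To combine addons, you point each service type at a class from a different addon when configuring the component (obtaining the component itself is shown further below). A minimal sketch: `AndroidSpeechRecognizer` is taken from the usage example below, while the `GoogleSpeechSynthesizer` class name is only inferred from the addon's naming and may differ.

```kotlin
// Sketch: mix addons by choosing a different class per service type.
// GoogleSpeechSynthesizer is an assumed class name from addon-google-speech;
// check the addon's package for the actual synthesizer class.
component.updateConfiguration { builder ->
    builder.setRecognizerServiceType { AndroidSpeechRecognizer::class.java }
        .setSynthesizerServiceType { GoogleSpeechSynthesizer::class.java }
        .build()
}
```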
You can use the component at any Context level, in both an Activity and a Service.
You create a set of VoiceNode objects, add them to the graph, and build a flow.
```kotlin
// Get the Component via the default provider
val component = ConversationalFlow.provide(...)

// Set up the Addons to use (typically done in your Application class)
component.updateConfiguration { builder ->
    builder.setRecognizerServiceType { AndroidSpeechRecognizer::class.java }
        .setSynthesizerServiceType { AndroidSpeechSynthesizer::class.java }
        .build()
}

// To record from the mic you have to request the permissions
val perms = component.requiredPermissions()
// requestPermissions(perms)

// You should check whether the addons are available
component.checkSpeechSynthesizerStatus(...)

val conversation: Conversation = component.create(context)

val question: VoiceMessage = ...
val answers: VoiceMatch = ...

with(conversation) {
    addNode(question)
    addNode(answers)

    with(prepare()) {
        from(question).to(answers)
    }

    start(question)
}
```
There are different Voice Nodes and Configurations; check the wiki page.
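As a rough illustration of what building the nodes from the example above could look like, here is a hypothetical sketch; the builder and method names (`newBuilder()`, `setText()`, `setExpectedResults()`, `setOnMatched()`) are illustrative assumptions, so refer to the wiki for the actual node API.

```kotlin
// Hypothetical sketch only: these builder methods are assumed, not the confirmed API.
val question: VoiceMessage = VoiceMessage.newBuilder()
    .setText("Do you want to proceed?")
    .build()

val answers: VoiceMatch = VoiceMatch.newBuilder()
    .setExpectedResults(listOf("yes", "sure", "ok"))
    .setOnMatched { results -> /* react to the matched answer */ }
    .build()
```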