This presentation uses speech-to-text capability as an example of how to integrate custom transformers into Alfresco. Presented as a tutorial, it will take your understanding of the repository one level deeper by looking at subsystems, transformers, and JMX.
The goals for this presentation are:
- Explain how the repository is divided into subsystems, and how they can be used to change the configuration dynamically.
- Integrate a speech-to-text transformer (e.g. PocketSphinx) into Alfresco, with a demo, making it possible to search for speech contained within audio and video content.
- Show that this requires no code, only configuration, and that it is therefore very simple to integrate additional transformers into the platform.
- Explain how subsystems allow the configuration to be changed at runtime. In this context, for example, that could be used to dynamically switch the dictionary or the acoustic and language models used to transform speech into text.
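As a rough sketch of what the configuration-only approach can look like, Alfresco's declarative transformer properties (the `content.transformer.<name>.extensions.<src>.<tgt>.supported` convention) can register a transformer for new source/target extensions. The transformer name `pocketSphinx` and the extension pairs below are illustrative assumptions, not taken from the presentation:

```properties
# Hypothetical sketch: declare that a transformer named "pocketSphinx"
# can turn WAV and MP3 audio into plain text (speech-to-text).
# Names and extension pairs here are assumptions for illustration.
content.transformer.pocketSphinx.extensions.wav.txt.supported=true
content.transformer.pocketSphinx.extensions.mp3.txt.supported=true
# Give it a priority so it is preferred over other candidate transformers
# for these extension pairs (lower number = higher priority).
content.transformer.pocketSphinx.priority=50
```

Because properties like these are owned by a repository subsystem, they can in principle be inspected and changed at runtime through JMX rather than requiring an edit-and-restart cycle, which is the dynamic reconfiguration aspect the talk covers.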