Rhasspy (pronounced RAH-SPEE) is an offline, multilingual voice assistant toolkit inspired by Jasper that works well with Home Assistant, Hass.io, and Node-RED. Designed so that nothing under the hood requires software you don't host yourself, from speech recognition to text-to-speech. Emits JSON events. Vocabulary can be expanded with the automated assistance feature. Will run on something as modest as a RasPi but doesn't treat x86(-64) like a second-class citizen. Commands/intents are specified in a fairly easy templating language, sketched below.
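A rough sketch of what those templates look like (modeled on Rhasspy's sentences.ini format; the intent, rule, and tag names here are made up for illustration):

    [ChangeLightState]
    light_name = (bedroom | living room | kitchen) {name}
    turn (on | off) {state} [the] <light_name> light

Each [section] is an intent, (a | b) is a set of alternatives, [the] is optional, and {name} tags the matched words so they show up in the emitted JSON event.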
Stanford has open sourced a self-hosted personal assistant system. Designed with privacy in mind. Speech recognition, analysis, task execution. They want to make it easy and genuinely useful for everyone to integrate into their own stuff. Can monitor data sources and filter for the events you care about; aims for composability (see the sketch after the link). Services (skills) are also open source and crowdsourced.
GitHub: https://github.com/stanford-oval
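The monitor/filter/compose idea is expressed in their ThingTalk language. A rough sketch of the shape of such a program, from memory of the published papers (the exact syntax may not match the current release):

    // Watch a Twitter timeline and notify on tweets mentioning "genie".
    monitor (@com.twitter.home_timeline(), text =~ "genie") => notify;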
Another F/OSS personal assistant. Skill-based, with both speech recognition and synthesis. Built with Node.js and Python.
A step-by-step process for setting up and using Mycroft for everyday tasks.
F/OSS speech recognition package. Supports over 100 languages and accents. Fairly lightweight in terms of dependencies. Can also output audio, and additional dictionaries can be added.
A Python module for interfacing with online and local speech recognition services. Comes with a set of examples that illustrate common use cases. Can access a local microphone directly.
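A minimal sketch of transcribing from the microphone (assuming the SpeechRecognition package from PyPI, with PyAudio for mic access and PocketSphinx for the offline engine):

    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                # default local microphone
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)          # record until silence
    try:
        print(recognizer.recognize_sphinx(audio))  # offline recognition
    except sr.UnknownValueError:
        print("Could not understand the audio.")

Swapping recognize_sphinx() for one of the module's other recognize_*() methods switches to an online service instead.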
Can be used with audio files (and probably a hot mic) to transcribe speech into text for later processing. Uses Git Large File Storage for the neural network objects. GPU acceleration is supported. Includes trained models as well as source code. Available on PyPI as deepspeech and deepspeech-gpu (usage sketch below). Supports the RasPi explicitly as a platform, interestingly.
Looking at the releases page is a good way to keep up with the project: https://github.com/mozilla/DeepSpeech/releases
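A rough sketch of transcribing a WAV file with the Python package (assuming a 0.x release where the acoustic model ships as a single .pbmm file; the file names here are illustrative):

    import wave
    import numpy as np
    from deepspeech import Model

    # Pretrained model and scorer downloaded from the releases page.
    ds = Model("deepspeech-0.9.3-models.pbmm")
    ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional language model

    # DeepSpeech expects 16 kHz, 16-bit mono PCM audio.
    with wave.open("utterance.wav", "rb") as w:
        audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    print(ds.stt(audio))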
Mozilla's open source speech recognition project. They're asking people to contribute recordings of themselves reading sentences shown on screen to grow their corpus.
GitHub ticket: how to make Mycroft respond to a physical button instead of a wake word.
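One possible shape of the trick, assuming Mycroft's websocket messagebus is on its default port and that sending a mycroft.mic.listen message starts a listening session (the ticket has the authoritative details):

    import json
    import websocket  # pip install websocket-client

    # Ask a local Mycroft instance to listen, as if the wake word had fired.
    ws = websocket.create_connection("ws://localhost:8181/core")
    ws.send(json.dumps({"type": "mycroft.mic.listen", "data": {}, "context": {}}))
    ws.close()

A GPIO button handler on a RasPi could fire this whenever the button is pressed.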
GitHub repo for a F/OSS speech recognition system that offers multiple models for speech interpretation.
Claims to be a F/OSS personal AI assistant called Stella. Built on top of Arch Linux. The demo appears to be both conversational and somewhat usable.
Voice assistant designed for privacy: runs on-device/on-prem and works without a network connection. There are community-contributed skills (called snips), provided skills, and you can develop your own with a web-based visual builder. Supports multiple human languages. Deploy to your own devices as long as they run Android or Linux. GitHub repos here: https://github.com/snipsco
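Internally the platform's components talk to each other over MQTT using the Hermes protocol, so catching recognized intents can be sketched like this (assuming a broker on localhost and the paho-mqtt client library; the topic layout follows Hermes conventions):

    import json
    import paho.mqtt.client as mqtt  # pip install paho-mqtt

    def on_message(client, userdata, msg):
        # Each recognized intent is published as JSON under hermes/intent/<name>.
        payload = json.loads(msg.payload.decode("utf-8"))
        print(msg.topic, payload.get("input"))

    client = mqtt.Client()
    client.on_message = on_message
    client.connect("localhost", 1883)
    client.subscribe("hermes/intent/#")
    client.loop_forever()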
F/OSS voice control system. Runs on a RasPi. Extensible. Uses speech synthesis to respond.
Personal assistant software, seemingly written in Python. Lets you build up actions/capabilities piece by piece. Customizable. Has a community of contributed features.