Willow is an ESP-IDF-based project primarily targeting the ESP BOX hardware from Espressif. Their goal is to provide performance, accuracy, cost, and functionality competitive with Amazon Echo/Google Home, working with Home Assistant and other platforms - 100% open source and completely self-hosted by the user, with "ready for the kitchen counter" low-cost, commercially available hardware.
Use the Willow Inference Server anywhere, or skip it entirely and rely on command recognition on the device. Have the results go anywhere you want. Integrate with whatever you want. Completely open source, so it does what you want, only what you want, and only how you want it. No more annoying extra prompts or sales pitches to upsell you. Supports multiple wake words, with more coming soon.
Approximately $50 hardware cost (plus USB-C power supply). Fully assembled. Done.
Rhasspy (pronounced RAH-SPEE) is an offline, multilingual voice assistant toolkit inspired by Jasper that works well with Home Assistant, Hass.io, and Node-RED. Designed so that nothing under the hood, from speech recognition to TTS, requires software you can't self-host. Emits JSON events. Vocabulary can be expanded with the automated assistance feature. Will run on something as simple as a RasPi but doesn't treat x86(-64) like a second-class citizen. Commands/intents are specified in a fairly easy templating language (sketched below).
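To give a flavor of that templating language, here is a hypothetical sentences.ini fragment in the style Rhasspy's documentation uses; the intent and slot names are made up. Optional words go in square brackets, alternatives in parentheses, named rules are referenced with angle brackets, and {tags} mark the values that come back in the JSON intent event:

```ini
[ChangeLightState]
light_state = (on | off)
turn (<light_state>){state} [the] (bedroom | living room){name} light

[GetTime]
what time is it
tell me the time
```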
Stanford has open sourced a self-hosted personal assistant system. Designed with privacy in mind. Speech recognition, analysis, task execution. They want to make it easy and highly useful for everyone to integrate into their own projects. Can monitor data sources and filter for specific events. Aims for composability. Services (skills) are also open source and crowdsourced.
GitHub: https://github.com/stanford-oval
Another F/OSS personal assistant. Skill-based. Speech recognition and synthesis. Uses Node.js and Python.
A step-by-step process for setting up and using Mycroft for everyday tasks.
FOSS speech recognition package. Supports over 100 languages and accents. Fairly lightweight in terms of dependencies. Can also output audio. Additional dictionaries can be added.
A Python module for interfacing with online and local speech recognition services. Comes with a set of examples that illustrate common use cases. Can access a local microphone directly.
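Assuming this is the SpeechRecognition package on PyPI (which matches the description), a minimal sketch of the local-microphone use case looks like this. The Microphone class needs PyAudio installed; recognize_sphinx() is the fully local PocketSphinx backend, while the other recognize_*() methods call online services:

```python
import speech_recognition as sr

recognizer = sr.Recognizer()

# Open the default local microphone and capture one phrase.
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    audio = recognizer.listen(source)

try:
    # PocketSphinx backend: runs entirely offline.
    print(recognizer.recognize_sphinx(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")
```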
Can be used with audio files, and probably a hot mic, to transcribe speech into text for later processing. Uses Git Large File Storage for the neural network objects. GPU acceleration is supported. Includes trained models as well as source code. Available on PyPI as deepspeech and deepspeech-gpu. Supports the RasPi explicitly as a platform, interestingly.
Looking at the releases page is a good way to keep up with the project: https://github.com/mozilla/DeepSpeech/releases
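A minimal transcription sketch against the deepspeech Python package; the model and scorer filenames below match one of the published releases but are illustrative, and the audio must be 16-bit mono at the model's sample rate (typically 16 kHz):

```python
import wave

import numpy as np
from deepspeech import Model  # pip install deepspeech (or deepspeech-gpu)

# Load a released acoustic model plus the optional external scorer.
ds = Model("deepspeech-0.9.3-models.pbmm")
ds.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# Read a 16-bit mono WAV file into the int16 buffer the API expects.
with wave.open("audio.wav", "rb") as wav:
    frames = wav.readframes(wav.getnframes())
audio = np.frombuffer(frames, dtype=np.int16)

print(ds.stt(audio))
```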
Mozilla's open source speech recognition project. They're asking people to contribute recordings of themselves reading sentences displayed on screen to grow their corpus.
GitHub ticket: how to make Mycroft respond to a physical button instead of a wake word.
GitHub repo for a FOSS speech recognition system that has multiple models for speech interpretation.
Claims to be a FOSS personal AI assistant, called Stella. Built on top of Arch Linux. The demo appears to be both conversational and somewhat usable.
Voice assistant with speech recognition. Designed for privacy: runs on-device/on-prem and works without a network connection. There are community-contributed skills (called snips), provided skills, and you can develop your own with a web-based visual builder. Supports multiple human languages. Deploy to your own devices as long as they run Android or Linux. GitHub repos here: https://github.com/snipsco
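Custom skill code talks to the platform over the device's MQTT broker; a sketch using the hermes_python helper library from the Snips ecosystem, with a made-up intent name, looks roughly like this:

```python
from hermes_python.hermes import Hermes

MQTT_ADDR = "localhost:1883"  # the on-device MQTT broker

def handle_lights_on(hermes, intent_message):
    # End the session with a spoken reply via the on-device TTS.
    hermes.publish_end_session(intent_message.session_id,
                               "Okay, turning the lights on.")

with Hermes(MQTT_ADDR) as h:
    # "user:turnOnLights" is a hypothetical intent name.
    h.subscribe_intent("user:turnOnLights", handle_lights_on).start()
```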
F/OSS voice control system. Runs on a RasPi. Extensible. Uses speech synthesis to respond.
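If this entry is Jasper (the project the Rhasspy note above cites as inspiration), extension modules are plain Python files exposing WORDS, isValid(), and handle(); a minimal hypothetical module, under that assumption:

```python
import re

# Words the speech recognizer should listen for to route text to this module.
WORDS = ["TIME"]

def handle(text, mic, profile):
    # mic.say() speaks a response through the configured TTS engine.
    mic.say("I have no idea what time it is, sorry.")

def isValid(text):
    # Return True if this module should handle the transcribed text.
    return bool(re.search(r"\btime\b", text, re.IGNORECASE))
```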
Personal assistant software. Seems to be written in Python. Lets you build actions/capabilities piece by piece. Customizable. Has a library of community-contributed features.