
StenoAI Logo

StenoAI

Your very own stenographer for every meeting


AI-powered meeting intelligence that runs entirely on your device: your private data never leaves it. Record, transcribe, summarize, and query your meetings using local AI models. Perfect for healthcare, legal, and finance professionals with confidential data needs.

Trusted by users at AWS, Deliveroo & Tesco.

StenoAI Interface


Disclaimer: This is an independent open-source project for meeting-notes productivity and is not affiliated with, endorsed by, or associated with any similarly named company.

Features

  • Local transcription using whisper.cpp
  • AI summarization with Ollama models
  • Ask Steno - Query your meetings with natural language questions
  • Multiple AI models - Choose from 4 models optimized for different use cases
  • Privacy-first - 100% local processing, your data never leaves your device
  • macOS desktop app with intuitive interface

Have questions or suggestions? Join our Discord to chat with the community.

Models & Performance

Transcription Models (Whisper):

  • small: Good balance of accuracy and speed on Apple Silicon (default)
  • base: Faster but lower accuracy for basic meetings
  • medium: High accuracy for important meetings (slower)

Summarization Models (Ollama):

  • llama3.2:3b (2GB): Fastest option for quick meetings (default)
  • gemma3:4b (2.5GB): Lightweight and efficient
  • qwen3:8b (4.7GB): Excellent at structured output and action items
  • deepseek-r1:8b (4.7GB): Strong reasoning and analysis capabilities

Switching Models:

  • Click the 🧠 AI Settings icon in the app
  • Select your preferred model
  • Models download automatically when selected
  • ⚠️ Note: Downloads will pause any active summarization
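Since the larger models are multi-gigabyte downloads, it can be worth checking free disk space first. This is a generic check, not a StenoAI command:

```shell
# Print free space on the home volume before pulling a 2-5GB model.
df -h "$HOME" | awk 'NR==2 {print "free:", $4}'
```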

Future Roadmap

Enhanced Features

  • Custom summarization templates
  • Speaker diarisation

Installation

Download the latest release for your Mac:

Installing on macOS

  1. Download and open the DMG file

  2. Drag the app to Applications

  3. When you first launch the app, macOS may show a security warning

  4. To fix this warning:

    • Go to System Settings > Privacy & Security and click "Open Anyway"

    Alternatively:

    • Right-click StenoAI in Applications and select "Open"
    • Or run in Terminal: xattr -cr /Applications/StenoAI.app
  5. The app will work normally on subsequent launches

You can also run it from source (see below) if you don't want to install the DMG.

Local Development

Prerequisites

  • Python 3.9+
  • Node.js 18+
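Before cloning, you can confirm both prerequisites are available. This is a quick generic check, not part of the repo:

```shell
# Check that the prerequisites are on PATH and print their versions.
for cmd in python3 node npm; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: $("$cmd" --version 2>&1 | head -n 1)"
  else
    echo "$cmd: not found"
  fi
done
```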

Setup

git clone https://github.com/ruzin/stenoai.git
cd stenoai

# Backend setup
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt

# Download bundled binaries (Ollama, ffmpeg)
./scripts/download-ollama.sh

# Build the Python backend
pip install pyinstaller
pyinstaller stenoai.spec --noconfirm

# Frontend
cd app
npm install
npm start

Note: Ollama and ffmpeg are bundled - no system installation needed. The setup wizard in the app will download the required AI models automatically.
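After running the setup steps, a quick sanity check can confirm the pieces landed where expected. The paths below are assumed from the commands above (the PyInstaller output directory may differ on your machine):

```shell
# Report whether each expected artifact from the setup steps exists.
for f in venv/bin/activate dist app/node_modules; do
  if [ -e "$f" ]; then echo "found: $f"; else echo "missing: $f"; fi
done
```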

Build

cd app
npm run build

Release Process

Simple Release Commands

cd app

# Bump the version: patch (bug fixes, 0.0.5 → 0.0.6), minor (new features, 0.0.6 → 0.1.0),
# or major (breaking changes, 0.0.6 → 1.0.0)
npm version patch   # or: npm version minor / npm version major

# Commit the bump, then tag and push the tag to trigger the release workflow
git add package.json package-lock.json
git commit -m "Version bump to $(node -p "require('./package.json').version")"
git push
git tag v$(node -p "require('./package.json').version")
git push origin v$(node -p "require('./package.json').version")

What happens:

  1. npm version updates package.json and package-lock.json locally
  2. Manual commit ensures version changes are saved to git
  3. git push sends the version commit to GitHub
  4. git tag creates the version tag locally
  5. Pushing the tag (git push origin v<version>) triggers the GitHub Actions workflow
  6. Workflow automatically builds DMGs for Intel & Apple Silicon
  7. Creates GitHub release with downloadable assets

Project Structure

stenoai/
├── app/                  # Electron desktop app
├── src/                  # Python backend
├── website/              # Marketing site
├── recordings/           # Audio files
├── transcripts/          # Text output
└── output/               # Summaries

Troubleshooting

Debug Logs

StenoAI includes a built-in debug panel for troubleshooting issues:

In-App Debug Panel:

  1. Launch StenoAI
  2. Click the 🔨 hammer icon (next to settings)
  3. The debug panel shows real-time logs of all operations

Terminal Logging (Advanced): For detailed system-level logs, run the app from Terminal:

# Launch StenoAI with full logging
/Applications/StenoAI.app/Contents/MacOS/StenoAI

This displays comprehensive logs including:

  • Python subprocess output
  • Whisper transcription details
  • Ollama API communication
  • HTTP requests and responses
  • Error stack traces
  • Performance timing
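When filing a bug report, it helps to trim that output down to the error lines. The sample log lines below are illustrative, not real StenoAI output:

```shell
# Filter a captured log down to errors (sample input is illustrative).
printf '%s\n' \
  'INFO  whisper: loaded model small' \
  'ERROR ollama: connection refused' \
  'INFO  http: POST /summarize 200' \
  | grep -i 'error'
```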

System Console Logs: For system-level debugging:

# View recent StenoAI-related logs
log show --last 10m --predicate 'process CONTAINS "StenoAI" OR eventMessage CONTAINS "ollama"' --info

# Monitor live logs
log stream --predicate 'eventMessage CONTAINS "ollama" OR process CONTAINS "StenoAI"' --level info

Common Issues:

  • Recording stops early: Check microphone permissions and available disk space
  • "Processing failed": Usually Ollama service or model issues - check terminal logs
  • Empty transcripts: Whisper couldn't detect speech - verify audio input levels
  • Slow processing: Normal for longer recordings - Ollama processing is CPU-intensive, especially on older Intel Macs

Logs Location

  • User Data: ~/Library/Application Support/stenoai/
  • Recordings: ~/Library/Application Support/stenoai/recordings/
  • Transcripts: ~/Library/Application Support/stenoai/transcripts/
  • Summaries: ~/Library/Application Support/stenoai/output/
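Recordings can add up over time; the same paths can be checked for disk usage (on a machine with no StenoAI data this just reports that nothing was found):

```shell
# Show how much space the StenoAI data directories use, if they exist.
du -sh "$HOME/Library/Application Support/stenoai/"* 2>/dev/null || echo "no StenoAI data found"
```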

License

This project is licensed under the MIT License.
