
meeting-minutes
A free and open-source, self-hosted, AI-based live meeting note taker and minutes summary generator that runs entirely on your local device (macOS and Windows support added; Linux support is in progress). https://meetily.zackriya.com/
Stars: 7452

An open-source AI assistant for taking meeting notes that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams to focus on discussions while automatically capturing and organizing meeting content without external servers or complex infrastructure. Features include modern UI, real-time audio capture, speaker diarization, local processing for privacy, and more. The tool also offers a Rust-based implementation for better performance and native integration, with features like live transcription, speaker diarization, and a rich text editor for notes. Future plans include database connection for saving meeting minutes, improving summarization quality, and adding download options for meeting transcriptions and summaries. The backend supports multiple LLM providers through a unified interface, with configurations for Anthropic, Groq, and Ollama models. System architecture includes core components like audio capture service, transcription engine, LLM orchestrator, data services, and API layer. Prerequisites for setup include Node.js, Python, FFmpeg, and Rust. Development guidelines emphasize project structure, testing, documentation, type hints, and ESLint configuration. Contributions are welcome under the MIT License.
README:
Get the latest product updates:
Website •
LinkedIn •
Meetily Discord •
Privacy-First AI •
Reddit
A privacy-first AI meeting assistant that captures, transcribes, and summarizes meetings entirely on your infrastructure. Built by expert AI engineers passionate about data sovereignty and open source solutions. Perfect for enterprises that need advanced meeting intelligence without compromising on privacy, compliance, or control.
For enterprise version: Sign up for early access
For Partnerships and Custom AI development: Let's chat
- Overview
- The Privacy Problem
- Features
- System Architecture
- Quick Start Guide
- Prerequisites
- Setup Instructions
- Whisper Model Selection
- LLM Integration
- Troubleshooting
- Developer Console
- Uninstallation
- Enterprise Solutions
- Partnerships & Referrals
- Development Guidelines
- Contributing
- License
- About Our Team
- Acknowledgments
- Star History
A privacy-first AI meeting assistant that captures, transcribes, and summarizes meetings entirely on your infrastructure. Built by expert AI engineers passionate about data sovereignty and open source solutions. Perfect for professionals and enterprises that need advanced meeting intelligence without compromising privacy or control.
While there are many meeting transcription tools available, this solution stands out by offering:
- Privacy First: All processing happens locally on your device
- Cost Effective: Uses open-source AI models instead of expensive APIs
- Flexible: Works offline, supports multiple meeting platforms
- Customizable: Self-host and modify for your specific needs
- Intelligent: Built-in knowledge graph for semantic search across meetings
Meeting AI tools create significant privacy and compliance risks across all sectors:
- $4.4M average cost per data breach (IBM 2024)
- €5.88 billion in GDPR fines issued by 2025
- 400+ unlawful recording cases filed in California this year
Whether you're a defense consultant, enterprise executive, legal professional, or healthcare provider, your sensitive discussions shouldn't live on servers you don't control. Cloud meeting tools promise convenience but deliver privacy nightmares with unclear data storage practices and potential unauthorized access.
Meetily solves this: Complete data sovereignty on your infrastructure, zero vendor lock-in, full control over your sensitive conversations.
✅ Modern, responsive UI with real-time updates
✅ Real-time audio capture (microphone + system audio)
✅ Live transcription using locally-running Whisper
✅ Local processing for privacy
✅ Packaged the app for macOS and Windows
✅ Rich text editor for notes
🚧 Export to Markdown/PDF/HTML
🚧 Obsidian Integration
🚧 Speaker diarization
Choose your setup method based on your needs:
Best for: Regular users wanting optimal performance
Time: 10-15 minutes
System Requirements: 8GB+ RAM, 4GB+ disk space
- Frontend: Download and run meetily-frontend_0.0.5_x64-setup.exe
- Backend: Download the backend zip from releases, extract it, run `Get-ChildItem -Path . -Recurse | Unblock-File`, then run `.\start_with_output.ps1`
For safety and to maintain proper user permissions for the frontend app:
- Go to Latest Releases
- Download the file ending with x64-setup.exe
- Important: Before running, right-click the file → Properties → check Unblock at the bottom → OK
- Double-click the installer to run it
- If Windows shows a security warning, click More info and choose Run anyway, or follow the permission dialog prompts
- Follow the installation wizard
✅ Success Check: You should see the Meetily application window open successfully when launched.
Complete Setup (Recommended):
# Install both frontend + backend
brew tap zackriya-solutions/meetily
brew install --cask meetily

# Start the backend server
meetily-server --language en --model medium
- Open Meetily from Applications folder
Best for: Developers, quick testing, or multi-environment deployment
Time: 5-10 minutes
System Requirements: 16GB+ RAM (8GB minimum for Docker), Docker Desktop
# Navigate to backend directory
cd backend
# Windows (PowerShell)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive
# macOS/Linux (Bash)
./build-docker.sh cpu
./run-docker.sh start --interactive
After setup, verify everything works:
- Whisper Server: Visit http://localhost:8178 (should show API interface)
- Backend API: Visit http://localhost:5167/docs (should show API documentation)
- Frontend App: Open Meetily application and test microphone access
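The two URL checks above can be scripted. A minimal sketch: the ports are the ones this README documents, but the probe is only a TCP connect (it confirms something is listening, not that the API is healthy):

```shell
#!/usr/bin/env bash
# Probe a local TCP port using bash's /dev/tcp; prints UP or DOWN.
check_port() {
  local name=$1 port=$2
  if (exec 3<>"/dev/tcp/127.0.0.1/${port}") 2>/dev/null; then
    exec 3>&-   # close the probe socket
    echo "${name}: UP (port ${port})"
  else
    echo "${name}: DOWN (port ${port})"
  fi
}

check_port "Whisper server" 8178
check_port "Backend API"    5167
```

If either service reports DOWN, visit the corresponding URL in a browser and check the backend logs before moving on.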
- Windows Defender blocking installer? → See Windows Defender Troubleshooting below
- Can't access localhost:8178 or 5167? → Check if backend is running and ports are available
- "Permission denied" errors? → Run `chmod +x` on script files (macOS/Linux) or check the execution policy (Windows)
- Docker containers crashing? → Increase Docker RAM allocation to 12GB+ and check available disk space
- Audio not working? → Grant microphone permissions to the app in system settings
👉 For detailed troubleshooting, see Troubleshooting Section
- Audio Capture Service
  - Real-time microphone/system audio capture
  - Audio preprocessing pipeline
  - Built with Rust (experimental) and Python
- Transcription Engine
  - Whisper.cpp for local transcription
  - Supports multiple model sizes (tiny → large)
  - GPU-accelerated processing
- LLM Orchestrator
  - Unified interface for multiple providers
  - Automatic fallback handling
  - Chunk processing with overlap
  - Model configuration
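The "chunk processing with overlap" step can be illustrated with a small sketch. This is a hypothetical stand-in, not Meetily's actual implementation: the real orchestrator presumably chunks transcript text by tokens, and the sizes here are made up.

```shell
#!/usr/bin/env bash
# Sketch: split a word stream into fixed-size chunks that overlap,
# so context at chunk boundaries is not lost between LLM calls.
# Assumes overlap < size (otherwise the loop would not advance).
chunk_with_overlap() {
  local size=$1 overlap=$2; shift 2
  local words=("$@")
  local step=$((size - overlap)) i
  for ((i = 0; i < ${#words[@]}; i += step)); do
    echo "${words[@]:i:size}"          # emit one chunk per line
    (( i + size >= ${#words[@]} )) && break
  done
}

chunk_with_overlap 4 1 one two three four five six seven
# → one two three four
# → four five six seven
```

Each chunk repeats the last word of the previous one, which is the point of overlap: a summary of chunk N can pick up where chunk N-1 ended without losing the boundary sentence.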
- Data Services
  - ChromaDB: Vector store for transcript embeddings
  - SQLite: Process tracking and metadata storage
- Frontend: Tauri app + Next.js (packaged executables)
- Backend: Python FastAPI
  - Transcript workers
  - LLM inference
Minimum:
- RAM: 8GB (16GB+ recommended)
- Storage: 4GB free space
- CPU: 4+ cores
- OS: Windows 10/11, macOS 10.15+, or Ubuntu 18.04+
Recommended:
- RAM: 16GB+ (for large Whisper models)
- Storage: 10GB+ free space
- CPU: 8+ cores or Apple Silicon Mac
- GPU: NVIDIA GPU with CUDA (optional, for faster processing)
Component | Windows | macOS | Purpose |
---|---|---|---|
Python | 3.9+ (python.org) | brew install python | Backend runtime |
Node.js | 18+ LTS (nodejs.org) | brew install node | Frontend build |
Git | (git-scm.com) | Pre-installed | Code download |
FFmpeg | winget install FFmpeg | brew install ffmpeg | Audio processing |
- Docker Desktop (docker.com)
- 16GB+ RAM allocated to Docker
- 4+ CPU cores allocated to Docker
# Install Visual Studio Build Tools (required for Whisper.cpp compilation)
# Download from: https://visualstudio.microsoft.com/downloads/#build-tools-for-visual-studio-2019
# Install Xcode Command Line Tools
xcode-select --install
# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install build-essential cmake git ffmpeg python3 python3-pip nodejs npm
- Ollama (ollama.com) - For local AI models
- API Keys - For Claude (Anthropic) or Groq services
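Cloud providers and Ollama are typically configured via environment variables. A hypothetical sketch: these variable names follow the providers' own SDK conventions (`ANTHROPIC_API_KEY`, `GROQ_API_KEY`, `OLLAMA_HOST`), but they are not confirmed Meetily settings, and the key values are placeholders.

```shell
# Hypothetical provider configuration (placeholder values).
export ANTHROPIC_API_KEY="sk-ant-your-key-here"      # for Claude models
export GROQ_API_KEY="gsk_your-key-here"              # for Groq-hosted models
export OLLAMA_HOST="http://localhost:11434"          # default local Ollama endpoint
```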
⏱️ Estimated Time: 10-15 minutes total
⏱️ Time: ~3-5 minutes
Manual Download (Recommended)
For safety and to maintain proper user permissions:
- Go to Latest Releases
- Download the file ending with x64-setup.exe
- Important: Before running, right-click the file → Properties → check Unblock at the bottom → OK
- Double-click the installer to run it
- If Windows shows a security warning, click More info and choose Run anyway, or follow the permission dialog prompts
- Follow the installation wizard
- The application will be available on your desktop
✅ Success Check: You should see the Meetily application window open successfully when launched.
Alternative: MSI Installer (Less likely to be blocked)
- Go to Latest Releases
- Download the file ending with x64_en-US.msi
- Double-click the MSI file to run it
- Follow the installation wizard to complete the setup
- The application will be installed and available on your desktop
Provide necessary permissions for audio capture and microphone access.
⏱️ Time: ~5-10 minutes
Step 2: Install and Start the Backend
📦 Option 1: Pre-built Release (Recommended - Easiest)
The simplest way to get started with the backend is to download the pre-built release:
- Download the backend:
  - From the same releases page, download the backend zip file (e.g., meetily_backend.zip)
  - Extract the zip to a folder like C:\meetily_backend\
- Prepare backend files:
  - Open PowerShell (search for it in the Start menu)
  - Navigate to your extracted backend folder: `cd C:\meetily_backend`
  - Unblock all files (Windows security requirement): `Get-ChildItem -Path . -Recurse | Unblock-File`
- Start the backend services: `.\start_with_output.ps1`
  - This script will:
    - Guide you through Whisper model selection (recommended: base or medium)
    - Ask for your language preference (default: English)
    - Download the selected model automatically
    - Start both the Whisper server (port 8178) and the Meeting app (port 5167)
What happens during startup:
- Model Selection: Choose from tiny (fastest, basic accuracy) to large (slowest, best accuracy)
- Language Setup: Select your preferred language for transcription
- Auto-download: Selected models are downloaded automatically (~150MB to 1.5GB depending on model)
- Service Launch: Both transcription and meeting services start automatically
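Since the auto-download can pull up to ~1.5GB, you may want to sanity-check free disk space first. A rough sketch: the sizes are the approximate figures from the model table later in this README, and the helper function itself is hypothetical, not part of the backend scripts.

```shell
#!/usr/bin/env bash
# Approximate on-disk sizes (MB) of the Whisper models listed in this README.
model_size_mb() {
  case "$1" in
    tiny)     echo 39 ;;
    base)     echo 142 ;;
    small)    echo 244 ;;
    medium)   echo 769 ;;
    large-v3) echo 1550 ;;
    *)        echo 0 ;;   # unknown model name
  esac
}

need=$(model_size_mb medium)
avail=$(df -Pm . | awk 'NR==2 {print $4}')  # free MB on the current filesystem
if [ "${avail}" -lt "${need}" ]; then
  echo "Not enough space: need ${need} MB, have ${avail} MB"
else
  echo "OK: ${need} MB needed, ${avail} MB free"
fi
```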
✅ Success Verification:
- Check services are running:
  - Open a browser and visit http://localhost:8178 (should show the Whisper API interface)
  - Visit http://localhost:5167/docs (should show the Meeting app API documentation)
- Test the application:
  - Launch Meetily from the desktop/Start menu
  - Grant microphone permissions when prompted
  - You should see the main interface ready to record meetings
🐳 Option 2: Docker (Alternative - Easier Dependency Management)
Docker provides easy setup with automatic dependency management, though it's slower than the pre-built release:
# Clone the repository and navigate to the backend directory
cd ~/Downloads
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/backend
# Build and start using Docker (CPU version)
.\build-docker.ps1 cpu
.\run-docker.ps1 start -Interactive
Prerequisites for Docker:
- Docker Desktop installed (docker.com)
- 8GB+ RAM allocated to Docker
- Internet connection for model downloads
✅ Success Check: Docker will automatically handle dependencies and you should see both Whisper server (port 8178) and Meeting app (port 5167) start successfully.
🛠️ Option 3: Local Build (Best Performance)
Local building provides the best performance but requires installing all dependencies manually. Choose this if you want optimal speed and don't mind the extra setup steps.
Click on the image to see installation video
Step 1: Install Dependencies
- Python 3.9+ (with pip)
- Visual Studio Build Tools (C++ workload)
- CMake
- Git
- Visual Studio Redistributables
Open PowerShell as administrator and run the dependency installer:
cd ~/Downloads
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/backend
Set-ExecutionPolicy Bypass -Scope Process -Force
.\install_dependancies_for_windows.ps1
The script will install:
- Chocolatey (package manager)
- Python 3.11 (if not already installed)
- Git, CMake, Visual Studio Build Tools
- Visual Studio Redistributables
- Required development tools
Once installation is complete, restart your terminal before proceeding.
Step 2: Build Whisper
Enter the following commands to build the backend:
cd meeting-minutes/backend
.\build_whisper.cmd
If the build fails, run the command again:
.\build_whisper.cmd
The build process will:
- Update git submodules (whisper.cpp)
- Compile whisper.cpp with server support
- Create Python virtual environment
- Install Python dependencies
- Download the specified Whisper model
Step 3: Start the Backend
Finally, when the installation is successful, run the backend using:
.\start_with_output.ps1
✅ Success Check: You should see both Whisper server (port 8178) and Meeting app (port 5167) start successfully with log messages indicating they're running.
- Warning: an existing Chocolatey installation is detected
  Either keep the currently installed Chocolatey version, or remove it with:
  rm C:\ProgramData\chocolatey
- Error: .\start_with_output.ps1 shows a security error
  After making sure the file is unblocked, run:
  Set-ExecutionPolicy Bypass -Scope Process -Force
  .\start_with_output.ps1
- Docker Desktop (Windows/Mac) or Docker Engine (Linux)
- 16GB+ RAM (8GB minimum allocated to Docker)
- 4+ CPU cores recommended
- For GPU: NVIDIA drivers + nvidia-container-toolkit (Windows/Linux only)
# Navigate to backend directory
cd backend
# Build and start services
.\build-docker.ps1 cpu # Build CPU version
.\run-docker.ps1 start -Interactive # Interactive setup (recommended)
# Navigate to backend directory
cd backend
# Build and start services
./build-docker.sh cpu # Build CPU version
./run-docker.sh start --interactive # Interactive setup (recommended)
- Whisper Server: http://localhost:8178
- Meeting App: http://localhost:5167 (with API docs at /docs)
# GPU acceleration (Windows/Linux only)
.\build-docker.ps1 gpu # Windows
./build-docker.sh gpu # Linux
# Custom configuration
.\run-docker.ps1 start -Model large-v3 -Language es -Detach
./run-docker.sh start --model large-v3 --language es --detach
# Check status and logs
.\run-docker.ps1 status # Windows
./run-docker.sh status # macOS/Linux
# Stop services
.\run-docker.ps1 stop # Windows
./run-docker.sh stop # macOS/Linux
⏱️ Estimated Time: 5-10 minutes total
Option 1: Using Homebrew (Recommended) - Complete Setup ⏱️ Time: ~5-7 minutes
Note: This single command installs both the frontend app and backend server.
# Install Meetily (frontend + backend)
brew tap zackriya-solutions/meetily
brew install --cask meetily
# Start the backend server
meetily-server --language en --model medium
How to use after installation:
- Run `meetily-server` in a terminal (keep it running)
- Open Meetily from the Applications folder or Spotlight
- Grant microphone and screen recording permissions when prompted
✅ Success Check: Meetily app should open and you should be able to start recording meetings immediately.
To update existing installation:
# Update Homebrew and get latest package information
brew update
# Update to latest version
brew upgrade --cask meetily
brew upgrade meetily-backend
⚠️ Data Backup Warning: You are upgrading from Meetily 0.0.4 to 0.0.5. This upgrade will automatically migrate your data to a new persistent location, but it's recommended to back up your data first.
Current Data Location (Version 0.0.4):
- Database: /opt/homebrew/Cellar/meetily-backend/0.0.4/backend/meeting_minutes.db
New Persistent Location (Version 0.0.5+):
- Database: /opt/homebrew/var/meetily/meeting_minutes.db
What Happens During Upgrade:
- ✅ Your data will be automatically migrated to the new persistent location
- ✅ Data will survive future upgrades
- ✅ The old data in the Cellar directory will be cleaned up
Backup Recommendation: The upgrade is designed to prevent data loss, but it's always safer to back up your data before proceeding.
Option 2: Manual Installation ⏱️ Time: ~8-12 minutes
- Download the latest dmg_darwin_arch64.zip file
- Extract the file
- Double-click the .dmg file inside the extracted folder
- Drag the application to your Applications folder
- Remove quarantine attribute:
xattr -c /Applications/meetily-frontend.app
- Grant necessary permissions for audio capture and microphone access
- Important: You'll need to install the backend separately (see Manual Backend Setup below)
Option 1: Using Homebrew Backend Only ⏱️ Time: ~3-5 minutes
# Install just the backend (if you manually installed frontend)
brew tap zackriya-solutions/meetily
brew install meetily-backend
# Start the backend server
meetily-server --language en --model medium
To update existing backend installation:
# Update Homebrew and get latest package information
brew update
# Update to latest version
brew upgrade meetily-backend
⚠️ Data Backup Warning: You are upgrading from Meetily 0.0.4 to 0.0.5. This upgrade will automatically migrate your data to a new persistent location, but it's recommended to back up your data first.
Current Data Location (Version 0.0.4):
- Database: /opt/homebrew/Cellar/meetily-backend/0.0.4/backend/meeting_minutes.db
New Persistent Location (Version 0.0.5+):
- Database: /opt/homebrew/var/meetily/meeting_minutes.db
What Happens During Upgrade:
- ✅ Your data will be automatically migrated to the new persistent location
- ✅ Data will survive future upgrades
- ✅ The old data in the Cellar directory will be cleaned up
Backup Recommendation: The upgrade is designed to prevent data loss, but it's always safer to back up your data before proceeding.
Option 2: Complete Manual Setup ⏱️ Time: ~10-15 minutes
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes/backend
# Create and activate virtual environment
python -m venv venv
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
# Build dependencies
chmod +x build_whisper.sh
./build_whisper.sh
# Start backend servers
./clean_start_backend.sh
# Navigate to frontend directory
cd frontend
# Give execute permissions to clean_build.sh
chmod +x clean_build.sh
# run clean_build.sh
./clean_build.sh
When setting up the backend (either via Homebrew, manual installation, or Docker), you can choose from various Whisper models based on your needs:
Model | Size | Accuracy | Speed | Best For |
---|---|---|---|---|
tiny | ~39 MB | Basic | Fastest | Testing, low resources |
base | ~142 MB | Good | Fast | General use (recommended) |
small | ~244 MB | Better | Medium | Better accuracy needed |
medium | ~769 MB | High | Slow | High accuracy requirements |
large-v3 | ~1550 MB | Best | Slowest | Maximum accuracy |
macOS (Metal acceleration):
- 8 GB RAM: small
- 16 GB RAM: medium
- 32 GB+ RAM: large-v3
Windows/Linux:
- 8 GB RAM: base or small
- 16 GB RAM: medium
- 32 GB+ RAM: large-v3
- Standard models (balance of accuracy and speed):
  - tiny, base, small, medium, large-v1, large-v2, large-v3, large-v3-turbo
- English-optimized models (faster for English content):
  - tiny.en, base.en, small.en, medium.en
- Quantized models (reduced size, slightly lower quality):
  - *-q5_1 (5-bit quantized), *-q8_0 (8-bit quantized)
  - Example: tiny-q5_1, base-q5_1, small-q5_1, medium-q5_0
Recommendation: Start with the base model for general use, or base.en if you're only transcribing English content.
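The RAM-based recommendations above can be collapsed into a small helper. A sketch following the macOS (Metal) thresholds from this section; the function itself is hypothetical, not part of the Meetily tooling:

```shell
#!/usr/bin/env bash
# Map available RAM (GB) to a Whisper model, per the macOS guidance
# in this README: 8GB→small, 16GB→medium, 32GB+→large-v3.
recommend_model() {
  local ram_gb=$1
  if   (( ram_gb >= 32 )); then echo "large-v3"
  elif (( ram_gb >= 16 )); then echo "medium"
  elif (( ram_gb >= 8 ));  then echo "small"
  else                          echo "base"   # fall back to the light default
  fi
}

recommend_model 16
# → medium
```

On Windows/Linux (no Metal acceleration) you might shift each threshold one model down, per the table above.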
- Smaller LLMs can hallucinate, making summarization quality poor; use a model larger than 32B parameters
- The backend build process requires CMake, a C++ compiler, and related tools, which makes it harder to build
- The backend build process requires Python 3.10 or newer
- The frontend build process requires Node.js
For those interested in using GPU for faster Whisper inference:
Windows/Linux GPU Setup:
- Modify build_whisper.cmd:
  - Locate line 55 in the build_whisper.cmd file
  - Replace it with:
    cmake .. -DBUILD_SHARED_LIBS=OFF -DWHISPER_BUILD_TESTS=OFF -DWHISPER_BUILD_SERVER=ON -DGGML_CUDA=1
- Clean rebuild requirement:
  - If you have previously compiled whisper.cpp for CPU inference, a clean rebuild is essential
  - Create a new directory, git clone Meetily into this new folder, then execute the build script
  - This ensures all components are compiled with GPU support from scratch
- CUDA Toolkit installation:
  - Verify that the CUDA Toolkit is correctly installed on your system
  - This toolkit provides the necessary libraries and tools for CUDA development
- Troubleshooting CMake errors:
  - If errors persist, refer to this Stack Overflow post
  - Copy required files to the Visual Studio folder if needed
For detailed GPU support discussion, see Issue #126
The backend supports multiple LLM providers through a unified interface. Current implementations include:
- Anthropic (Claude models)
- Groq (Llama 3.2 90B)
- Ollama (local models that support function calling)
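The "automatic fallback handling" mentioned in the architecture section can be sketched as trying providers in order and using the first one that succeeds. The function and the stand-in provider commands below are hypothetical, not Meetily's actual interface:

```shell
#!/usr/bin/env bash
# Sketch: try each provider command in order; report the first success.
summarize_with_fallback() {
  local provider
  for provider in "$@"; do
    if "$provider" >/dev/null 2>&1; then
      echo "used: ${provider}"
      return 0
    fi
  done
  echo "all providers failed" >&2
  return 1
}

# Stand-in provider commands for the demo
anthropic_summarize() { return 1; }  # e.g. no API key configured
ollama_summarize()    { return 0; }  # local model available

summarize_with_fallback anthropic_summarize ollama_summarize
# → used: ollama_summarize
```

The same shape works whether the providers are functions, CLI tools, or wrapped HTTP calls: the orchestrator only needs a uniform success/failure signal to fall through the list.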
Common issues and solutions organized by setup method:
# Stop services
./run-docker.sh stop # or .\run-docker.ps1 stop
# Check port usage
netstat -an | findstr :8178   # Windows
lsof -i :8178                 # macOS/Linux
- Enable WSL2 integration in Docker Desktop
- Install nvidia-container-toolkit
- Verify with: .\run-docker.ps1 gpu-test
# Manual download
./run-docker.sh models download base.en
# or
.\run-docker.ps1 models download base.en
If you see "Dropped old audio chunk X due to queue overflow" messages:
- Increase Docker resources (most important):
  - Memory: 8GB minimum (12GB+ recommended)
  - CPUs: 4+ cores recommended
  - Disk: 20GB+ available space
- Use a smaller Whisper model:
  ./run-docker.sh start --model base --detach
- Check container resource usage:
  docker stats
If Windows Defender or antivirus software blocks the installer with "virus or potentially unwanted software" error:
Option 1 - Unblock the installer:
- Download the installer from Latest Releases
- Right-click the downloaded .exe file → Properties
- Check the Unblock checkbox at the bottom → OK
- Double-click the installer to run it
- Follow the installation prompts
Option 2 - Add an antivirus exclusion:
- Open Windows Security → Virus & threat protection
- Under Virus & threat protection settings, click Manage settings
- Scroll to Exclusions and click Add or remove exclusions
- Add the downloaded installer file as an exclusion
- Run the installer manually
If Windows Defender continues to block:
- Use the MSI installer instead (often less flagged): download *x64_en-US.msi from releases
- Or use the manual backend installation only and access it via a web browser at http://localhost:5167
Why this happens: New software releases may trigger false positives in antivirus software until they build trust/reputation.
# CMake not found - install Visual Studio Build Tools
# PowerShell execution blocked:
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope Process
# Compilation errors
brew install cmake llvm libomp
export CC=/opt/homebrew/bin/clang
export CXX=/opt/homebrew/bin/clang++
# Permission denied
chmod +x build_whisper.sh
chmod +x clean_start_backend.sh
# Port conflicts
lsof -i :5167 # Find process using port
kill -9 PID # Kill process
- Check if ports 8178 (Whisper) and 5167 (Backend) are available
- Verify all dependencies are installed
- Check logs for specific error messages
- Ensure sufficient system resources (8GB+ RAM recommended)
If you encounter issues with the Whisper model:
# Try a different model size
meetily-download-model small
# Verify model installation
ls -la $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/models/
If the server fails to start:
- Check if ports 8178 and 5167 are available:
  lsof -i :8178
  lsof -i :5167
- Verify that FFmpeg is installed correctly:
  which ffmpeg
  ffmpeg -version
- Check the logs for specific error messages when running meetily-server
- Try running the Whisper server manually:
  cd $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/
  ./run-server.sh --model models/ggml-medium.bin
If the frontend application doesn't connect to the backend:
- Ensure the backend server is running (meetily-server)
- Check if the application can access localhost:5167
- Restart the application after starting the backend
If the application fails to launch:
# Clear quarantine attributes
xattr -cr /Applications/meetily-frontend.app
Build Docker images with GPU support and cross-platform compatibility.
Usage:
# Build Types
cpu, gpu, macos, both, test-gpu
# Options
-Registry/-r REGISTRY # Docker registry
-Push/-p # Push to registry
-Tag/-t TAG # Custom tag
-Platforms PLATFORMS # Target platforms
-BuildArgs ARGS # Build arguments
-NoCache/--no-cache # Build without cache
-DryRun/--dry-run # Show commands only
Examples:
# Basic builds
.\build-docker.ps1 cpu
./build-docker.sh gpu
# Multi-platform with registry
.\build-docker.ps1 both -Registry "ghcr.io/user" -Push
./build-docker.sh cpu --platforms "linux/amd64,linux/arm64" --push
Complete Docker deployment manager with interactive setup.
Commands:
start, stop, restart, logs, status, shell, clean, build, models, gpu-test, setup-db, compose
Start Options:
-Model/-m MODEL # Whisper model (default: base.en)
-Port/-p PORT # Whisper port (default: 8178)
-AppPort/--app-port # Meeting app port (default: 5167)
-Gpu/-g/--gpu # Force GPU mode
-Cpu/-c/--cpu # Force CPU mode
-Language/--language # Language code (default: auto)
-Translate/--translate # Enable translation
-Diarize/--diarize # Enable diarization
-Detach/-d/--detach # Run in background
-Interactive/-i # Interactive setup
Examples:
# Interactive setup
.\run-docker.ps1 start -Interactive
./run-docker.sh start --interactive
# Advanced configuration
.\run-docker.ps1 start -Model large-v3 -Gpu -Language es -Detach
./run-docker.sh start --model base --translate --diarize --detach
# Management
.\run-docker.ps1 logs -Service whisper -Follow
./run-docker.sh logs --service app --follow --lines 100
Service URLs:
- Whisper Server: http://localhost:8178 (transcription service)
- Meeting App: http://localhost:5167 (AI-powered meeting management)
- API Documentation: http://localhost:5167/docs
The developer console provides real-time logging and debugging information for Meetily. It's particularly useful for troubleshooting issues and monitoring application behavior.
When running in development mode, the console is always visible:
pnpm tauri dev
All logs appear in the terminal where you run this command.
- Navigate to Settings in the app
- Scroll to the Developer section
- Use the Developer Console toggle to show/hide the console
- Windows: Controls the console window visibility
- macOS: Opens Terminal with filtered app logs
macOS:
# View live logs
log stream --process meetily-frontend --level info --style compact
# View historical logs (last hour)
log show --process meetily-frontend --last 1h
Windows:
# Run the executable directly to see console output
./target/release/meetily-frontend.exe
The console displays:
- Application startup and initialization logs
- Recording start/stop events
- Real-time transcription progress
- API connection status
- Error messages and stack traces
- Debug information (when RUST_LOG=info is set)
The console is helpful for:
- Debugging audio issues: See which audio devices are detected and used
- Monitoring transcription: Track progress and identify bottlenecks
- Troubleshooting connectivity: Verify API endpoints and connection status
- Performance analysis: Monitor resource usage and processing times
- Error diagnosis: Get detailed error messages and context
Windows:
- In release builds, the console window is hidden by default
- Use the UI toggle or run from terminal to see console output
- Console can be shown/hidden at runtime without restarting
macOS:
- Uses the system's unified logging
- Console opens in Terminal.app with filtered logs
- Logs persist in the system and can be viewed later
To completely remove Meetily:
# Remove the frontend
brew uninstall --cask meetily
# Remove the backend
brew uninstall meetily-backend
# Optional: remove the taps
brew untap zackriya-solutions/meetily
brew untap zackriya-solutions/meetily-backend
# Optional: remove Ollama if no longer needed
brew uninstall ollama
We are a team of expert AI engineers building privacy-first AI applications and agents. With experience across 20+ product development projects, we understand the critical importance of protecting privacy while delivering cutting-edge AI solutions.
Our Mission: Build comprehensive privacy-first AI applications that enterprises and professionals can trust with their most sensitive data.
Our Values:
- Privacy First: Data sovereignty should never be compromised
- Open Source: Transparency and community-driven development
- Enterprise Ready: Solutions that scale and meet compliance requirements
Meetily represents the beginning of our vision - a full ecosystem of privacy-first AI tools ranging from meeting assistants to compliance report generators, auditing systems, case research assistants, patent agents, HR automation, and more.
Meetily Enterprise is available for on-premise deployment, giving organizations complete control over their meeting intelligence infrastructure. This enterprise version includes:
- 100% On-Premise Deployment: Your data never leaves your infrastructure
- Centralized Management: Support for 100+ users with administrative controls
- Zero Vendor Lock-in: Open source MIT license ensures complete ownership
- Compliance Ready: Meet GDPR, SOX, HIPAA, and industry-specific requirements
- Custom Integration: APIs and webhooks for enterprise systems
For enterprise solutions: https://meetily.zackriya.com
Help us grow the privacy-first AI ecosystem!
We're looking for partners and referrals for early adopters of privacy-first AI solutions:
Target Industries & Use Cases:
- Meeting note takers and transcription services
- Compliance report generators
- Auditing support systems
- Case research assistants
- Patent agents and IP professionals
- HR automation and talent management
- Legal document processing
- Healthcare documentation
How You Can Help:
- Refer clients who need privacy-first AI solutions
- Partner with us on custom AI application development
- Collaborate on revenue sharing opportunities
- Get early access to new privacy-first AI tools
Your referrals keep us in business and help us build the future of privacy-first AI. We believe in partnerships that benefit everyone.
For partnerships and custom AI development: https://www.zackriya.com/service-interest-form/
- Follow the established project structure
- Write tests for new features
- Document API changes
- Use type hints in Python code
- Follow ESLint configuration for JavaScript/TypeScript
- Fork the repository
- Create a feature branch
- Submit a pull request
MIT License - Feel free to use this project for your own purposes.
Thanks for all the contributions. Our community is what makes this project possible. Below is the list of contributors:
We welcome contributions from the community! If you have any questions or suggestions, please open an issue or submit a pull request. Please follow the established project structure and guidelines. For more details, refer to the CONTRIBUTING file.
- We borrowed some code from Whisper.cpp
- We borrowed some code from Screenpipe

opcode
opcode is a powerful desktop application built with Tauri 2 that serves as a command center for interacting with Claude Code. It offers a visual GUI for managing Claude Code sessions, creating custom agents, tracking usage, and more. Users can navigate projects, create specialized AI agents, monitor usage analytics, manage MCP servers, create session checkpoints, edit CLAUDE.md files, and more. The tool bridges the gap between command-line tools and visual experiences, making AI-assisted development more intuitive and productive.

OpenChat
OS Chat is a free, open-source AI personal assistant that combines 40+ language models with powerful automation capabilities. It allows users to deploy background agents, connect services like Gmail, Calendar, Notion, GitHub, and Slack, and get things done through natural conversation. With features like smart automation, service connectors, AI models, chat management, interface customization, and premium features, OS Chat offers a comprehensive solution for managing digital life and workflows. It prioritizes privacy by being open source and self-hostable, with encrypted API key storage.

zotero-mcp
Zotero MCP is an open-source project that integrates AI capabilities with Zotero using the Model Context Protocol. It consists of a Zotero plugin and an MCP server, enabling AI assistants to search, retrieve, and cite references from Zotero library. The project features a unified architecture with an integrated MCP server, eliminating the need for a separate server process. It provides features like intelligent search, detailed reference information, filtering by tags and identifiers, aiding in academic tasks such as literature reviews and citation management.
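The tool-call pattern an MCP-style server exposes — the assistant sends a named tool call with arguments, the server returns structured results — can be sketched in plain Python. This is a toy, in-memory version; the function and tool names are illustrative, not the real zotero-mcp API:

```python
# Toy sketch of an MCP-style tool-call handler over an in-memory "library".
# All identifiers here are hypothetical, not the actual zotero-mcp interface.
from typing import Any, Dict, List, Optional

LIBRARY = [
    {"title": "Attention Is All You Need", "year": 2017, "tags": ["nlp"]},
    {"title": "Deep Residual Learning", "year": 2016, "tags": ["vision"]},
]


def search_items(query: str, tag: Optional[str] = None) -> List[Dict[str, Any]]:
    """Case-insensitive title search, optionally filtered by tag."""
    q = query.lower()
    return [
        item for item in LIBRARY
        if q in item["title"].lower() and (tag is None or tag in item["tags"])
    ]


TOOLS = {"search_items": search_items}


def handle_tool_call(name: str, args: Dict[str, Any]) -> Dict[str, Any]:
    """Dispatch a named tool call the way a protocol server would."""
    if name not in TOOLS:
        return {"error": f"unknown tool: {name}"}
    return {"result": TOOLS[name](**args)}


print(handle_tool_call("search_items", {"query": "attention"}))
```

The point of the protocol layer is that the assistant only ever sees tool names and JSON-shaped arguments; the server owns the actual library access.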

DreamLayer
DreamLayer AI is an open-source Stable Diffusion WebUI designed for AI researchers, labs, and developers. It automates prompts, seeds, and metrics for benchmarking models, datasets, and samplers, enabling reproducible evaluations across multiple seeds and configurations. The tool integrates custom metrics and evaluation pipelines, providing a streamlined workflow for AI research. With features like automated benchmarking, reproducibility, built-in metrics, multi-modal readiness, and researcher-friendly interface, DreamLayer AI aims to simplify and accelerate the model evaluation process.

chat-ollama
ChatOllama is an open-source chatbot based on LLMs (Large Language Models). It supports a wide range of language models, including Ollama served models, OpenAI, Azure OpenAI, and Anthropic. ChatOllama supports multiple types of chat, including free chat with LLMs and chat with LLMs based on a knowledge base. Key features of ChatOllama include Ollama models management, knowledge bases management, chat, and commercial LLMs API keys management.

local-deep-research
Local Deep Research is a powerful AI-powered research assistant that performs deep, iterative analysis using multiple LLMs and web searches. It can be run locally for privacy or configured to use cloud-based LLMs for enhanced capabilities. The tool offers advanced research capabilities, flexible LLM support, rich output options, privacy-focused operation, enhanced search integration, and academic & scientific integration. It also provides a web interface, command line interface, and supports multiple LLM providers and search engines. Users can configure AI models, search engines, and research parameters for customized research experiences.

monoscope
Monoscope is an open-source monitoring and observability platform that uses artificial intelligence to understand and monitor systems automatically. It allows users to ingest and explore logs, traces, and metrics in S3 buckets, query in natural language via LLMs, and create AI agents to detect anomalies. Key capabilities include universal data ingestion, AI-powered understanding, natural language interface, cost-effective storage, and zero configuration. Monoscope is designed to reduce alert fatigue, catch issues before they impact users, and provide visibility across complex systems.

evi-run
evi-run is a powerful, production-ready multi-agent AI system built on Python using the OpenAI Agents SDK. It offers instant deployment, ultimate flexibility, built-in analytics, Telegram integration, and scalable architecture. The system features memory management, knowledge integration, task scheduling, multi-agent orchestration, custom agent creation, deep research, web intelligence, document processing, image generation, DEX analytics, and Solana token swap. It supports flexible usage modes like private, free, and pay mode, with upcoming features including NSFW mode, task scheduler, and automatic limit orders. The technology stack includes Python 3.11, OpenAI Agents SDK, Telegram Bot API, PostgreSQL, Redis, and Docker & Docker Compose for deployment.

Hacx-GPT
Hacx GPT is a cutting-edge AI tool developed by BlackTechX, inspired by WormGPT, designed to push the boundaries of natural language processing. It is an advanced, unrestricted AI model that facilitates seamless and powerful interactions, allowing users to ask questions and perform a wide range of tasks. The tool has been tested on platforms such as Kali Linux, Termux, and Ubuntu. Users can install and run Hacx GPT on their preferred platform to explore its capabilities.

paelladoc
PAELLADOC is an intelligent documentation system that uses AI to analyze code repositories and generate comprehensive technical documentation. It offers a modular architecture with MECE principles, interactive documentation process, key features like Orchestrator and Commands, and a focus on context for successful AI programming. The tool aims to streamline documentation creation, code generation, and product management tasks for software development teams, providing a definitive standard for AI-assisted development documentation.

AIPex
AIPex is a revolutionary Chrome extension that transforms your browser into an intelligent automation platform. Using natural language commands and AI-powered intelligence, AIPex can automate virtually any browser task - from complex multi-step workflows to simple repetitive actions. It offers features like natural language control, AI-powered intelligence, multi-step automation, universal compatibility, smart data extraction, precision actions, form automation, visual understanding, developer-friendly with extensive API, and lightning-fast execution of automation tasks.

gemini-cli
Gemini CLI is an open-source AI agent that provides lightweight access to Gemini, offering powerful capabilities like code understanding, generation, automation, integration, and advanced features. It is designed for developers who prefer working in the command line and offers extensibility through MCP support. The tool integrates directly into GitHub workflows and offers various authentication options for individual developers, enterprise teams, and production workloads. With features like code querying, editing, app generation, debugging, and GitHub integration, Gemini CLI aims to streamline development workflows and enhance productivity.
For similar tasks

amurex
Amurex is a powerful AI meeting assistant that integrates seamlessly into your workflow. It ensures you never miss details, stay on top of action items, and make meetings more productive. With real-time suggestions, smart summaries, and follow-up emails, Amurex acts as your personal copilot. It is open-source, transparent, secure, and privacy-focused, providing a seamless AI-driven experience to take control of your meetings and focus on what truly matters.

hyprnote
Hyprnote is a local-first AI notepad designed for people in back-to-back meetings. It listens to your meetings while you write, crafts smart summaries based on your quick notes, and runs completely offline using open-source models like Whisper or HyprLLM. With Hyprnote, users can have full control over their notes as not a single byte of data leaves their laptop/server.
For similar jobs

amurex
Amurex is a powerful AI meeting assistant that integrates seamlessly into your workflow. It ensures you never miss details, stay on top of action items, and make meetings more productive. With real-time suggestions, smart summaries, and follow-up emails, Amurex acts as your personal copilot. It is open-source, transparent, secure, and privacy-focused, providing a seamless AI-driven experience to take control of your meetings and focus on what truly matters.

hyprnote
Hyprnote is a local-first AI notepad designed for people in back-to-back meetings. It listens to your meetings while you write, crafts smart summaries based on your quick notes, and runs completely offline using open-source models like Whisper or HyprLLM. With Hyprnote, users can have full control over their notes as not a single byte of data leaves their laptop/server.

Omi
Omi is an open-source AI wearable that transforms the way conversations are captured and managed. By connecting Omi to your mobile device, you can effortlessly obtain high-quality transcriptions of meetings, chats, and voice memos on the go.

omi
Omi is an open-source AI wearable that provides automatic, high-quality transcriptions of meetings, chats, and voice memos. It revolutionizes how conversations are captured and managed by connecting to mobile devices. The tool offers features for seamless documentation and integration with third-party services.

Azure-Analytics-and-AI-Engagement
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (containing a demo web application, Power BI reports, Synapse resources, AML Notebooks, etc.) that can be deployed in a customer’s subscription using the CAPE tool within a matter of hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.

quivr
Quivr is a personal assistant powered by Generative AI, designed to be a second brain for users. It offers fast and efficient access to data, ensuring security and compatibility with various file formats. Quivr is open source and free to use, allowing users to share their brains publicly or keep them private. The marketplace feature enables users to share and utilize brains created by others, boosting productivity. Quivr's offline mode provides anytime, anywhere access to data. Key features include speed, security, OS compatibility, file compatibility, open source nature, public/private sharing options, a marketplace, and offline mode.

Avalonia-Assistant
Avalonia-Assistant is an open-source desktop intelligent assistant that aims to provide a user-friendly interactive experience based on the Avalonia UI framework and the integration of Semantic Kernel with OpenAI or other large LLM models. By utilizing Avalonia-Assistant, you can perform various desktop operations through text or voice commands, enhancing your productivity and daily office experience.