local-cocoa

A local AI assistant running on your device. It turns your files into actionable memory.

Stars: 53


Local Cocoa is a privacy-focused assistant that runs entirely on your device, turning files into memory that sparks insights and powers actions. Its features include fully local privacy, multimodal memory, vector-powered retrieval, intelligent indexing, vision understanding, hardware acceleration, a focused user experience, integrated notes, and auto-sync. The tool combines file ingestion, intelligent chunking, and local retrieval to build a private on-device knowledge system. The roadmap includes more connectors (such as Google Drive, Notion, and Slack), a voice mode for local speech-to-text interaction, and a plugin ecosystem for community tools and agents. Local Cocoa is built with Electron, React, TypeScript, FastAPI, llama.cpp, and Qdrant.

README:

Local Cocoa Banner

๐Ÿซ Local Cocoa: Your Personal Cowork, Fully Local ๐Ÿ’ป

License: MIT · macOS · Windows · Linux · Privacy


💻 Local Cocoa runs entirely on your device, not in the cloud.

🧠 Each file turns into memory. Memories form context. Context sparks insight. Insight powers action.

🔒 No external eyes. No data leaving your device. Just your computer learning you better and helping you work smarter.

🎬 Live Demos

๐Ÿ” File Retrieval ๐Ÿ“Š Year-End Report โŒจ๏ธ Global Shortcuts
File Retrieval Demo Year-End Report Demo Global Shortcuts Demo
Instantly chat with your local files Scan 2025 files for insights Access Synvo anywhere

Key Features

๐Ÿ›ก๏ธ Privacy First

  • ๐Ÿ” Fully Local Privacy: All inference, indexing, and retrieval run entirely on your device with zero data leaving.
    • *๐Ÿ’ก Pro Tip: If you verify network activity using tools like Little Snitch (macOS) or GlassWire (Windows), you'll confirm that no personal data leaves your device.

🧠 Core Intelligence

  • 🧠 Multimodal Memory: Turns documents, images, audio, and video into a persistent semantic memory space.
  • 🔍 Vector-Powered Retrieval: Local Qdrant search with semantic reranking for precise, high-recall answers (see the retrieval sketch after this list).
  • 📁 Intelligent Indexing: Smartly monitors folders to incrementally index, chunk, and embed files as vectors.
  • 🖼 Vision Understanding: Integrated OCR and VLM to extract text and meaning from screenshots and PDFs.
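
To make the retrieval piece concrete, here is a minimal, hypothetical sketch of chunk storage and lookup against a local Qdrant store. It is not Local Cocoa's actual code: it assumes the qdrant-client Python package, the embed() helper is a toy stand-in for the local embedding model, and the reranking step is omitted.

# Hypothetical sketch of local vector retrieval (not Local Cocoa's implementation).
# Requires: pip install qdrant-client
import hashlib
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

def embed(text: str) -> list[float]:
    # Toy stand-in for a real local embedding model: hash bytes -> 768 floats.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [digest[i % len(digest)] / 255.0 for i in range(768)]

client = QdrantClient(path="./qdrant_data")  # embedded, on-disk Qdrant, no server process needed
client.recreate_collection(
    collection_name="memory",
    vectors_config=VectorParams(size=768, distance=Distance.COSINE),
)

# Index a few text chunks as vectors, keeping the original text as payload.
chunks = ["Meeting notes: we agreed to ship the next release.", "Grocery list: cocoa, milk, oats."]
client.upsert(
    collection_name="memory",
    points=[PointStruct(id=i, vector=embed(c), payload={"text": c}) for i, c in enumerate(chunks)],
)

# Retrieve the chunks most similar to a question; a reranker would then refine these hits.
hits = client.search(collection_name="memory", query_vector=embed("What did we decide?"), limit=5)
for hit in hits:
    print(round(hit.score, 3), hit.payload["text"])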

⚡ Performance & Experience

  • ⚡ Hardware Accelerated: Optimized llama.cpp engine designed for Apple Silicon and consumer GPUs.
  • 🍫 Focused UX: A calm, responsive interface designed for clarity and seamless interaction.
  • ✍ Integrated Notes: Write notes that become part of your semantic memory for future recall.
  • 🔁 Auto-Sync: Automatically detects file changes and keeps your knowledge base fresh (see the watcher sketch after this list).

๐Ÿ—๏ธ Architecture Overview

Local Cocoa runs entirely on your device. It combines file ingestion, intelligent chunking, and local retrieval to build a private on-device knowledge system.

Local Cocoa Architecture Diagram

Frontend: Electron • React • TypeScript • TailwindCSS
Backend: FastAPI • llama.cpp • Qdrant
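
To illustrate the ingestion side of this pipeline (raw file text split into overlapping chunks ready for embedding), here is a generic sliding-window chunker. The chunk size and overlap are arbitrary illustrative values, not settings taken from Local Cocoa.

# Generic sliding-window chunker: one plausible shape for "intelligent chunking".
def chunk_text(text: str, size: int = 800, overlap: int = 120) -> list[str]:
    chunks, start = [], 0
    while start < len(text):
        end = min(start + size, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap  # the overlap preserves context across chunk boundaries
    return chunks

sample = "Local Cocoa turns files into memory. " * 60  # stand-in for extracted file text
for i, chunk in enumerate(chunk_text(sample)):
    print(i, len(chunk))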

🎯 The Ultimate Goal of Local Cocoa

Local Cocoa Vision Diagram
We're actively developing these features, and contributions are welcome!
  • [ ] 👑 More Connectors: Google Drive, Notion, and Slack integrations
  • [ ] 🎤 Voice Mode: Local speech-to-text for voice interaction
  • [ ] 🔌 Plugin Ecosystem: Open API for community tools and agents

✨ Contributors

💡 Core Contributors

EricFan2002
Jingkang50
Tom-TaoQin
choiszt
KairuiHu

๐ŸŒ Community Contributors

๐Ÿ› ๏ธ Quick Start

Local Cocoa uses a modern Electron + React + Python FastAPI hybrid architecture.

🚀 Prerequisites

Ensure the following are installed on your system:

  • Node.js v18.17 or higher
  • Python v3.10 or higher
  • CMake (for building the llama.cpp server)

Step 1: Clone the Repository

git clone https://github.com/synvo-ai/local-cocoa.git
cd local-cocoa

Step 2: Install Dependencies

# Frontend / Electron
npm install

# Backend / RAG Agent (macOS/Linux)
python3 -m venv .venv
source .venv/bin/activate
pip install -r services/app/requirements.txt

# Backend / RAG Agent (Windows PowerShell)
python -m venv .venv
.venv\Scripts\Activate.ps1
pip install -r services/app/requirements.txt

Step 3: Download Local Models

We provide a script to automatically download embedding, reranker, and vision models:

npm run models:download

Proxy Support (Clash / Shadowsocks / Corporate)

Model downloads support:

  • System proxy (recommended): If Clash/Shadowsocks is set as your OS proxy, downloads will use it automatically.
  • Environment variables: Set one of these (case-insensitive):
    • HTTPS_PROXY / HTTP_PROXY (e.g., http://127.0.0.1:7890)
    • ALL_PROXY (supports socks5://...)
    • NO_PROXY (comma-separated bypass list, e.g., localhost,127.0.0.1)

Windows PowerShell example:

$env:HTTPS_PROXY = "http://127.0.0.1:7890"
$env:NO_PROXY = "localhost,127.0.0.1"
npm run models:download

Step 4: Build Llama Server

Windows Users: If you have pre-compiled binaries, place llama-server.exe in runtime/llama-cpp/bin/.

Build llama-server using CMake:

mkdir -p runtime && cd runtime
git clone https://github.com/ggerganov/llama.cpp.git llama-cpp
cd llama-cpp
mkdir -p build && cd build
cmake .. -DLLAMA_BUILD_SERVER=ON
cmake --build . --target llama-server --config Release
cd ..

# Organize binaries (macOS/Linux)
mkdir -p bin
cp build/bin/llama-server bin/llama-server

# Windows: cp build/bin/Release/llama-server.exe bin/llama-server.exe

cd ../..
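
Optionally, you can smoke-test the freshly built binary by starting it yourself with any GGUF model (for example bin/llama-server -m <model.gguf> --port 8080) and sending it a request. The port, model path, and endpoints below follow upstream llama.cpp server defaults and are assumptions for this check only, not values from Local Cocoa's configuration.

# Optional smoke test for a manually started llama-server (assumes default port 8080).
# Requires: pip install requests
import requests

BASE = "http://127.0.0.1:8080"
health = requests.get(f"{BASE}/health", timeout=5)
print(health.status_code, health.text)  # expect 200 once the model has loaded

resp = requests.post(
    f"{BASE}/v1/chat/completions",  # OpenAI-compatible endpoint exposed by llama-server
    json={
        "messages": [{"role": "user", "content": "Say hello in five words."}],
        "max_tokens": 32,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])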

Step 5: Build Whisper Server (Speech-to-Text)

To enable transcriptions:

# In runtime folder
cd runtime
git clone https://github.com/ggml-org/whisper.cpp.git whisper-cpp
cd whisper-cpp
cmake -B build
cmake --build build -j --config Release
mv build/bin ./
# The app expects the binary at runtime/whisper-cpp/bin/whisper-server
# For Windows, check build/bin/Release/whisper-server.exe
cd ../..
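
As with the llama server, you can optionally smoke-test whisper-server by starting it with a Whisper GGML model (for example bin/whisper-server -m models/ggml-base.en.bin) and posting an audio file. The port and the /inference endpoint follow the upstream whisper.cpp server example and are assumptions for this check, not Local Cocoa settings; sample.wav is a placeholder file you supply.

# Optional smoke test for a manually started whisper-server (assumes default port 8080).
# Requires: pip install requests
import requests

with open("sample.wav", "rb") as audio:  # placeholder audio file
    resp = requests.post(
        "http://127.0.0.1:8080/inference",  # whisper.cpp server transcription endpoint
        files={"file": audio},
        data={"response_format": "json"},
        timeout=300,
    )
print(resp.json())  # the transcription is returned in the JSON body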

๐Ÿƒ Run in Development Mode

Ensure your Python virtual environment is active, then run:

# macOS/Linux
source .venv/bin/activate
npm run dev

# Windows PowerShell
.venv\Scripts\Activate.ps1
npm run dev

This launches the React Dev Server, Electron client, and FastAPI backend simultaneously.


๐Ÿค Contributing

We welcome contributions of all kinds: bug fixes, features, or documentation improvements.

Please read our Contribution Guidelines before submitting a Pull Request or Issue.

Quick Guide

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Make your changes
  4. Commit your changes (git commit -m 'feat: add amazing feature')
    • ๐Ÿ” Pre-commit hooks will automatically check your code for errors
    • Run npm run lint:fix to auto-fix common issues
  5. Push to the branch (git push origin feature/amazing-feature)
  6. Open a Pull Request

Code Quality

This project enforces code quality through automated pre-commit hooks:

  • ✅ ESLint checks for unused imports/variables and coding standards
  • ✅ TypeScript ensures type safety
  • ✅ Commits are blocked if errors are found

See CONTRIBUTING.md for details.

Thank you to everyone who has contributed to Local Cocoa! 🙏

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.
