tangent

Excalidraw meets ComfyUI for LLMs

Stars: 88


Tangent is a canvas for exploring AI conversations. It lets users resurrect and continue conversations, branch off to explore different ideas, organize conversations by topic, and import archived data exports from Claude or ChatGPT. It aims to make interaction with AI assistants a visual/textual/audio exploration rather than a plain chat, offering a 'thoughts workbench' for experimenting freely, reviving old threads, and diving into tangents. The backend is modular, with components for API routes, background task management, data processing, and more. Setup requires Whisper.cpp, Ollama, and exported archive data from Claude or ChatGPT: initialize the environment, install the Python packages, set up Ollama, configure local models, and start the backend and frontend.

README:

->->-> Discord <-<-<-

tangent

## What is this?

Tangent is a canvas for exploring AI conversations, treating each chat branch as an experiment you can merge, compare, and discard. It lets you resurrect conversations that hit context limits, pick up abandoned threads, and map the hidden connections between different discussions.

Core stuff it does:

  • 🌟 Resurrect & Continue: Seamlessly resume conversations after reaching a prior context limit.
  • 🌿 Branch & Explore: Effortlessly create conversation forks at any point to test multiple approaches or ideas.
  • 💻 Offline-First: Fully powered by local models, leveraging Ollama with plans to expand support.
  • 📂 Topic Clustering: Dynamically organize and filter conversations by their inferred topics, streamlining navigation.
  • 📜 Archive Support: Comprehensive compatibility with Claude and ChatGPT data exports, with additional integrations in development.

> The idea is to make your interaction with AI assistants more of a visual/textual/audio exploration rather than a plain chat interface. Think less "chat app" and more "thoughts workbench" where you can experiment freely, revive old threads that still have potential, or dive into tangents.

https://github.com/user-attachments/assets/69fac816-ebec-4506-af33-2d31bbe9419e

## Project Structure

The backend is organized into a clean, modular structure:

tangent-api
├── src
│   ├── app.py                # Entry point of the application
│   ├── config.py             # Configuration settings
│   ├── models.py             # Data models and structures
│   ├── tasks.py              # Background task management
│   ├── utils.py              # Utility functions
│   ├── routes                # API route definitions
│   │   ├── __init__.py
│   │   ├── api.py            # Main API routes
│   │   ├── chats.py          # Chat-related routes
│   │   ├── messages.py       # Message retrieval routes
│   │   ├── states.py         # State management routes
│   │   └── topics.py         # Topic-related routes
│   └── services              # Service layer for background processing and data handling
│       ├── __init__.py
│       ├── background_processor.py  # Background processing tasks
│       ├── clustering.py      # Clustering operations
│       ├── data_processing.py  # Data processing functions
│       ├── embedding.py       # Embedding functions
│       ├── reflection.py      # Reflection generation functions
│       └── topic_generation.py # Topic generation functions
├── requirements.txt           # Project dependencies
└── README.md                  # Project documentation

## Prerequisites

  • Whisper.cpp — clone the repo, download a ggml model, build, and start the bundled server (commands expanded below)
  • Ollama — the project is currently hardcoded for Ollama, but could be generalized to accept different backends
  • Exported archive data (from Claude or ChatGPT)
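
Expanded, the Whisper.cpp steps from the first bullet look like this (using the model-download script bundled with the repo; `base.en` is one model choice among several):

```bash
# Clone and enter the whisper.cpp repo
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp

# Download the base English model
sh ./models/download-ggml-model.sh base.en

# Build, then build and start the bundled server
make
make server
./server
```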
### Environment Setup

Initialize and activate a new venv (macOS):

cd tangent-api
python3 -m venv my_env
source my_env/bin/activate

Install Python packages:

pip install -r requirements.txt
### Ollama Setup

Install Ollama

Find the appropriate installer for your system at https://ollama.com/

Verify installation

ollama --version
# example output: ollama version is 0.4.4

Download models (embedding + LLM)

If you choose to swap these, please see the Configure local models step in the Backend Setup section below.

ollama pull all-minilm
ollama pull qwen2.5-coder:7b

Start the Ollama server:

ollama serve
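
To confirm Ollama is up and both models respond, you can hit its local API directly; a quick sanity check, assuming Ollama's default port 11434 (the prompts are throwaway examples):

```bash
# List the models you've pulled
ollama list

# Test the embedding model
curl http://localhost:11434/api/embeddings \
  -d '{"model": "all-minilm", "prompt": "hello world"}'

# Test the generation model (stream disabled to get one JSON response)
curl http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:7b", "prompt": "Say hi", "stream": false}'
```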
### Backend Setup

Configure local models:

cd src
export EMBEDDING_MODEL="custom-embedding-model"
export GENERATION_MODEL="custom-generation-model"

Then run with:

python3 app.py

Or all together:

python3 app.py --embedding-model "custom-embedding-model" --generation-model "custom-generation-model"

The backend will start up at http://localhost:5001/api.
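
As a quick smoke test that the server is reachable, you can curl one of the endpoints listed in the API Endpoints section below (before any archives are processed, an empty result is likely):

```bash
# Should respond once the backend is up
curl http://localhost:5001/api/topics
```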

### Frontend Setup
cd simplified-ui
npm i
npm start

If you get a missing-package error, install the package manually and restart the UI.
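
For example (`<package-name>` here is a placeholder for whichever module the error names, not a real package):

```bash
npm install <package-name>
npm start
```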

### API Endpoints

The backend exposes these main endpoints (example calls follow the list):

  • /api/process: Send your chat data for processing
  • /api/process/status/<task_id>: Check how your processing is going
  • /api/chats/save: Save chat data
  • /api/chats/load/<chat_id>: Load up specific chats
  • /api/topics: Get all the generated topics
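
A hypothetical end-to-end call sequence against these endpoints (the upload field name below is an illustration, not taken from the project; check the backend source for the actual expected payload):

```bash
# Kick off processing of an exported archive
curl -X POST http://localhost:5001/api/process \
  -F "file=@conversations.json"

# Poll the returned task id until processing completes
curl http://localhost:5001/api/process/status/<task_id>

# Then fetch the generated topics
curl http://localhost:5001/api/topics
```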

Feel free to contribute! Just submit a PR or open an issue for any cool features or fixes you've got in mind.

Licensed under Apache 2.0 - see the LICENSE file for the full details.
