topsha
Local Topsha 🐧 AI Agent for simple PC tasks, focused on local LLMs (GPT-OSS, Qwen, GLM)

Stars: 81


LocalTopSH is an AI agent framework for companies and developers who need 100% on-premise AI agents with full data privacy. It supports any OpenAI-compatible LLM backend, ships production-ready security features, and deploys with a single Docker Compose command. Because data never leaves the user's network, it gives teams full control and simplifies compliance, scales cost-effectively, and remains usable in regions where cloud LLM APIs are restricted.

README:

🐧 LocalTopSH

AI Agent Framework for Self-Hosted LLMs: deploy on your infrastructure, keep data private.

🎯 Built for companies and developers who need:

  • 100% on-premise AI agents (no data leaves your network)
  • Any OpenAI-compatible LLM (vLLM, Ollama, llama.cpp, text-generation-webui)
  • Production-ready security (battle-tested by 1500+ hackers)
  • Simple deployment (docker compose up and you're done)

Why LocalTopSH?

🏠 100% Self-Hosted

Unlike cloud-dependent solutions, LocalTopSH runs entirely on your infrastructure:

| Problem | Cloud Solutions | LocalTopSH |
|---|---|---|
| Data Privacy | Data sent to external APIs | ✅ Everything stays on-premise |
| Compliance | Hard to audit | ✅ Full control, easy audit |
| API Access | Need OpenAI/Anthropic account | ✅ Any OpenAI-compatible endpoint |
| Sanctions/Restrictions | Blocked in some regions | ✅ Works anywhere |
| Cost at Scale | $0.01-0.03 per 1K tokens | ✅ Only electricity costs |

🤖 Supported LLM Backends

| Backend | Example Models | Setup |
|---|---|---|
| vLLM | gpt-oss-120b, Qwen-72B, Llama-3-70B | `vllm serve model --api-key dummy` |
| Ollama | Llama 3, Mistral, Qwen, 100+ models | `ollama serve` |
| llama.cpp | Any GGUF model | `llama-server -m model.gguf` |
| text-generation-webui | Any HuggingFace model | Enable OpenAI API extension |
| LocalAI | Multiple backends | Docker compose included |
| LM Studio | Desktop-friendly | Built-in server mode |
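Every backend in the table exposes the same OpenAI-compatible REST surface, so a single probe verifies any of them. A minimal sketch, assuming the server listens on localhost:8000 as in the vLLM example (Ollama defaults to port 11434):

```shell
# List the models the backend serves; this standard OpenAI-compatible
# route works identically for vLLM, llama-server, LocalAI, etc.
BASE_URL="http://localhost:8000/v1"   # assumed address; change to match your setup
curl -s "$BASE_URL/models" -H "Authorization: Bearer dummy"
```

If the call returns a JSON model list, any OpenAI-style client (including LocalTopSH) should be able to talk to that endpoint.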

💰 Cost Comparison (1M tokens/day)

| Solution | Daily Cost | Monthly Cost |
|---|---|---|
| OpenAI GPT-4 | ~$30 | ~$900 |
| Anthropic Claude | ~$15 | ~$450 |
| Self-hosted (LocalTopSH) | Electricity only | ~$50-100 (GPU power) |
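The daily figures follow directly from per-token pricing. For example, at an assumed GPT-4-class rate of roughly $0.03 per 1K tokens:

```shell
# 1,000,000 tokens/day divided into 1K-token units, times $0.03 per unit
awk 'BEGIN { printf "%.2f\n", 1000000 / 1000 * 0.03 }'   # prints 30.00
```

The same arithmetic with ~$0.015 per 1K tokens gives the ~$15/day Claude estimate.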

🌍 Works Everywhere

  • ✅ Russia, Belarus, Iran: sanctions don't apply to self-hosted
  • ✅ China: no Great Firewall issues
  • ✅ Air-gapped networks: zero internet required
  • ✅ On-premise data centers: full compliance

Quick Start

1. Start your LLM backend

# Option A: vLLM (recommended for production)
vllm serve gpt-oss-120b --api-key dummy --port 8000

# Option B: Ollama (easy setup)
ollama serve  # Default port 11434

# Option C: llama.cpp (minimal resources)
llama-server -m your-model.gguf --port 8000

2. Configure LocalTopSH

git clone https://github.com/yourrepo/LocalTopSH
cd LocalTopSH

# Create secrets
mkdir secrets
echo "your-telegram-token" > secrets/telegram_token.txt
echo "http://your-llm-server:8000/v1" > secrets/base_url.txt
echo "dummy" > secrets/api_key.txt  # or real key if required
echo "gpt-oss-120b" > secrets/model_name.txt
echo "your-zai-key" > secrets/zai_api_key.txt

# Set permissions for Docker
chmod 644 secrets/*.txt
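Before deploying, it can help to verify that every secret file from the commands above exists and is non-empty. A small sketch (the file list mirrors step 2):

```shell
# Flag any required secret that is missing or empty before running docker compose
for name in telegram_token base_url api_key model_name zai_api_key; do
  f="secrets/$name.txt"
  [ -s "$f" ] || echo "WARNING: $f is missing or empty"
done
```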

3. Deploy

docker compose up -d

# Check status
docker compose ps

# View logs
docker compose logs -f

4. Access

The admin panel is served at http://localhost:3000 (see the Admin Panel section below), and the Telegram bot starts responding once your bot token is active.

5. Configure Admin Panel Auth (Important!)

# Change default admin password (REQUIRED for production!)
echo "your-secure-password" > secrets/admin_password.txt

# Optionally change admin username via environment variable
# Edit docker-compose.yml and set ADMIN_USER=your_username

# Rebuild admin container
docker compose up -d --build admin

⚠️ Default credentials: admin / changeme123. Change them before exposing the panel to your network!
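To confirm Basic Auth is actually enforced after the rebuild, a probe along these lines can help (sketch; assumes the panel is reachable on localhost:3000):

```shell
# Expect 401 without credentials and 200 with them
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000/
curl -s -o /dev/null -w "%{http_code}\n" \
  -u admin:"$(cat secrets/admin_password.txt)" http://localhost:3000/
```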


Architecture

```
┌──────────────────────────────────────────────────────────────────────────────────┐
│                           YOUR INFRASTRUCTURE                                    │
├──────────────────────────────────────────────────────────────────────────────────┤
│                                                                                  │
│  ┌─────────────────┐     ┌─────────────────┐     ┌─────────────────────────────┐ │
│  │   Telegram      │     │   LocalTopSH    │     │   Your LLM Backend          │ │
│  │   (optional)    │────▶│   Agent Stack   │────▶│   ────────────────────────  │ │
│  └─────────────────┘     │                 │     │   vLLM / Ollama / llama.cpp │ │
│                          │  ┌───────────┐  │     │   gpt-oss-120b              │ │
│  ┌─────────────────┐     │  │   core    │  │     │   Qwen-72B                  │ │
│  │   Admin Panel   │────▶│  │  (agent)  │  │     │   Llama-3-70B               │ │
│  │   :3000         │     │  └───────────┘  │     │   Mistral-22B               │ │
│  └─────────────────┘     │        │        │     │   Your fine-tuned model     │ │
│                          │        ▼        │     └─────────────────────────────┘ │
│                          │  ┌───────────┐  │                                     │
│                          │  │  sandbox  │  │     No data leaves your network!    │
│                          │  │ (per-user)│  │                                     │
│                          │  └───────────┘  │                                     │
│                          └─────────────────┘                                     │
│                                                                                  │
└──────────────────────────────────────────────────────────────────────────────────┘
```

Security (Battle-Tested)

🔥 Stress-tested by 1500+ hackers in @neuraldeepchat

Attack attempts: Token extraction, RAM exhaustion, container escapes

Result: 0 secrets leaked, 0 downtime

Five Layers of Protection

| Layer | Protection | Details |
|---|---|---|
| Access Control | DM Policy | admin/allowlist/pairing/public modes |
| Input Validation | Blocked patterns | 247 dangerous commands blocked |
| Injection Defense | Pattern matching | 19 prompt injection patterns |
| Sandbox Isolation | Docker per-user | 512MB RAM, 50% CPU, 100 PIDs |
| Secrets Protection | Proxy architecture | Agent never sees API keys |
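The sandbox limits in the table correspond to standard Docker resource flags. A hand-rolled equivalent of one per-user sandbox might look like this (illustrative sketch, not the framework's actual invocation):

```shell
# 512MB RAM, half a CPU, at most 100 processes, no network access
docker run --rm \
  --memory 512m \
  --cpus 0.5 \
  --pids-limit 100 \
  --network none \
  alpine:latest sh -c 'echo "sandboxed"'
```

Capping PIDs blocks fork bombs, the memory limit blocks RAM-exhaustion attacks, and `--network none` keeps exfiltration out of reach even if the agent is prompt-injected.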

Security Audit

# Run security doctor (46 checks)
python scripts/doctor.py

# Run E2E tests (10 checks)
python scripts/e2e_test.py --verbose

Features

💻 Agent Capabilities

| Category | Features |
|---|---|
| System | Shell execution, file operations, code execution |
| Web | Search (Z.AI), page fetching, link extraction |
| Memory | Persistent notes, task management, chat history |
| Automation | Scheduled tasks, background jobs |
| Telegram | Send files, DMs, message management |

🔧 Extensibility

| Feature | Description |
|---|---|
| Skills | Anthropic-compatible skill packages |
| MCP | Model Context Protocol for external tools |
| Tools API | Dynamic tool loading and management |
| Admin Panel | Web UI for configuration and monitoring |

📦 Services

| Container | Port | Role |
|---|---|---|
| core | 4000 | ReAct Agent, security, sandbox orchestration |
| bot | 4001 | Telegram Bot (aiogram) |
| proxy | 3200 | Secrets isolation, LLM proxy |
| tools-api | 8100 | Tool registry, MCP, skills |
| admin | 3000 | Web admin panel (React) |
| sandbox_{id} | 5000-5999 | Per-user isolated execution |
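Once the stack is up, the host-side ports from the table can be probed with bash's /dev/tcp (sketch; assumes bash on the Docker host and the default ports above):

```shell
# Report which LocalTopSH service ports are accepting connections
for port in 4000 4001 3200 8100 3000; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```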

Configuration

Secrets

| Secret | Required | Description |
|---|---|---|
| telegram_token.txt | ✅ | Bot token from @BotFather |
| base_url.txt | ✅ | LLM API URL (e.g. http://vllm:8000/v1) |
| api_key.txt | ✅ | LLM API key (use `dummy` if not required) |
| model_name.txt | ✅ | Model name (e.g. gpt-oss-120b) |
| zai_api_key.txt | ✅ | Z.AI search key |
| admin_password.txt | ✅ | Admin panel password (default: changeme123) |

Environment Examples

vLLM

echo "http://vllm-server:8000/v1" > secrets/base_url.txt
echo "dummy" > secrets/api_key.txt
echo "gpt-oss-120b" > secrets/model_name.txt

Ollama

echo "http://ollama:11434/v1" > secrets/base_url.txt
echo "ollama" > secrets/api_key.txt
echo "llama3:70b" > secrets/model_name.txt

OpenAI-compatible (any)

echo "http://your-server:8000/v1" > secrets/base_url.txt
echo "your-api-key" > secrets/api_key.txt
echo "your-model-name" > secrets/model_name.txt
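Whichever backend you point at, the configured values can be smoke-tested with a standard OpenAI-style chat-completions call before starting the stack (sketch reading the secrets files created above):

```shell
# Send a one-word prompt through the configured endpoint
BASE_URL=$(cat secrets/base_url.txt)
API_KEY=$(cat secrets/api_key.txt)
MODEL=$(cat secrets/model_name.txt)

curl -s "$BASE_URL/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "'"$MODEL"'", "messages": [{"role": "user", "content": "ping"}]}'
```

A JSON response with a `choices` array means the agent stack will be able to reach the model.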

Admin Panel

Web panel at :3000 for managing the system (protected by Basic Auth):

Authentication

# Default credentials
Username: admin
Password: (from secrets/admin_password.txt, default: changeme123)

# Change password
echo "your-secure-password" > secrets/admin_password.txt
docker compose up -d --build admin

# Change username (optional)
# In docker-compose.yml, set environment variable:
# ADMIN_USER=your_username

Pages

| Page | Features |
|---|---|
| Dashboard | Stats, active users, sandboxes |
| Services | Start/stop containers |
| Config | Agent settings, rate limits |
| Security | Blocked patterns management |
| Tools | Enable/disable tools |
| MCP | Manage MCP servers |
| Skills | Install/manage skills |
| Users | Sessions, chat history |
| Logs | Real-time service logs |

Remote Access (SSH Tunnel)

Admin panel is bound to 127.0.0.1:3000 for security. For remote access:

# On your local machine
ssh -L 3000:localhost:3000 user@your-server

# Then open http://localhost:3000 in browser
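For an unattended tunnel, ssh's `-f` (fork to background) and `-N` (no remote command) flags keep the forward alive without an interactive session (sketch; `user@your-server` as above):

```shell
# Background the tunnel: -N = no remote shell, -f = fork after authentication
ssh -f -N -L 3000:localhost:3000 user@your-server

# Tear it down later by matching the forwarding command line
pkill -f "ssh -f -N -L 3000:localhost:3000"
```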

Comparison with Alternatives

| Feature | LocalTopSH | OpenClaw | LangChain |
|---|---|---|---|
| Self-hosted LLM | ✅ Native | ⚠️ Limited | ✅ Yes |
| Security hardening | ✅ 247 patterns | Basic | ❌ None |
| Sandbox isolation | ✅ Docker per-user | ✅ Docker | ❌ None |
| Admin panel | ✅ React UI | ✅ React UI | ❌ None |
| Telegram integration | ✅ Native | ✅ Multi-channel | ❌ None |
| Setup complexity | Simple | Complex | Code-only |
| OAuth/subscription abuse | ❌ No | ✅ Yes | ❌ No |
| 100% on-premise | ✅ Yes | ⚠️ Partial | ✅ Yes |

Use Cases

🏢 Enterprise

  • Internal AI assistant with full data privacy
  • Code review bot that never leaks proprietary code
  • Document analysis without sending files to cloud

🔬 Research

  • Experiment with open models (Llama, Mistral, Qwen)
  • Fine-tuned model deployment with agent capabilities
  • Reproducible AI workflows in isolated environments

🌍 Restricted Regions

  • Russia/Belarus/Iran: no API access restrictions
  • China: no Great Firewall issues
  • Air-gapped networks: military, government, finance

💰 Cost Optimization

  • High-volume workloads: pay for GPU, not per-token
  • Predictable costs: no surprise API bills
  • Scale without limits: your hardware, your rules

Philosophy

We believe in building real infrastructure, not hacks.

| Approach | LocalTopSH ✅ | Subscription Abuse ❌ |
|---|---|---|
| LLM Access | Your own models/keys | Stolen browser sessions |
| Cost Model | Pay for hardware | Violate ToS, risk bans |
| Reliability | 100% uptime (your infra) | Breaks when UI changes |
| Security | Full control | Cookies stored who-knows-where |
| Ethics | Transparent & legal | Gray area at best |

License

MIT

