myclaw
Your own personal AI assistant. Any OS. Any Platform. mini clawbot.
Stars: 127
myclaw is a personal AI assistant built on agentsdk-go. It offers a CLI agent with single-message and interactive REPL modes; a gateway for full orchestration with channels, cron, and heartbeat; messaging channels for Telegram, Feishu, WeCom, and WhatsApp plus a web UI; multi-provider support for Anthropic and OpenAI models; image recognition and document processing; scheduled tasks with JSON persistence; long-term and daily memory storage; and custom skill loading. Together, these make it a comprehensive solution for interacting with AI models and managing tasks efficiently.
README:
Personal AI assistant built on agentsdk-go.
- CLI Agent - Single message or interactive REPL mode
- Gateway - Full orchestration: channels + cron + heartbeat
- Telegram Channel - Receive and send messages via Telegram bot (text + image + document)
- Feishu Channel - Receive and send messages via Feishu (Lark) bot
- WeCom Channel - Receive inbound messages and send markdown replies via WeCom intelligent bot API mode
- WhatsApp Channel - Receive and send messages via WhatsApp (QR code login)
- Web UI - Browser-based chat interface with WebSocket (responsive, PC + mobile)
- Multi-Provider - Support for Anthropic and OpenAI models
- Multimodal - Image recognition and document processing
- Cron Jobs - Scheduled tasks with JSON persistence
- Heartbeat - Periodic tasks from HEARTBEAT.md
- Memory - Long-term (MEMORY.md) + daily memories
- Skills - Custom skill loading from workspace
```sh
# Build
make build

# Interactive config setup
make setup

# Or initialize config and workspace manually
make onboard

# Set your API key
export MYCLAW_API_KEY=your-api-key

# Run agent (single message)
./myclaw agent -m "Hello"

# Run agent (REPL mode)
make run

# Start gateway (channels + cron + heartbeat)
make gateway
```

| Target | Description |
|---|---|
| `make build` | Build binary |
| `make run` | Run agent REPL |
| `make gateway` | Start gateway (channels + cron + heartbeat) |
| `make onboard` | Initialize config and workspace |
| `make status` | Show myclaw status |
| `make setup` | Interactive config setup (generates `~/.myclaw/config.json`) |
| `make tunnel` | Start cloudflared tunnel for Feishu webhook |
| `make test` | Run tests |
| `make test-race` | Run tests with race detection |
| `make test-cover` | Run tests with coverage report |
| `make docker-up` | Docker build and start |
| `make docker-up-tunnel` | Docker start with cloudflared tunnel |
| `make docker-down` | Docker stop |
| `make lint` | Run golangci-lint |
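For a first run, a minimal sketch chaining the targets above (assuming you export the API key rather than writing it into the config):

```sh
make build            # compile the ./myclaw binary
make setup            # interactive config, writes ~/.myclaw/config.json
export MYCLAW_API_KEY=your-api-key
make gateway          # start channels + cron + heartbeat
```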
┌─────────────────────────────────────────────────────────┐
│ CLI (cobra) │
│ agent | gateway | onboard | status │
└──────┬──────────────────┬───────────────────────────────┘
│ │
▼ ▼
┌──────────────┐ ┌───────────────────────────────────────┐
│ Agent Mode │ │ Gateway │
│ (single / │ │ │
│ REPL) │ │ ┌─────────┐ ┌──────┐ ┌─────────┐ │
└──────┬───────┘ │ │ Channel │ │ Cron │ │Heartbeat│ │
│ │ │ Manager │ │ │ │ │ │
│ │ └────┬────┘ └──┬───┘ └────┬────┘ │
│ │ │ │ │ │
▼ │ ▼ ▼ ▼ │
┌──────────────┐ │ ┌─────────────────────────────────┐ │
│ agentsdk-go │ │ │ Message Bus │ │
│ Runtime │◄─┤ │ Inbound ←── Channels │ │
│ │ │ │ Outbound ──► Channels │ │
└──────────────┘ │ └──────────────┬──────────────────┘ │
│ │ │
│ ▼ │
│ ┌──────────────────────────────────┐ │
│ │ agentsdk-go Runtime │ │
│ │ (ReAct loop + tool execution) │ │
│ └──────────────────────────────────┘ │
│ │
│ ┌──────────┐ ┌────────────────────┐ │
│ │ Memory │ │ Config │ │
│ │ (MEMORY │ │ (JSON + env vars) │ │
│ │ + daily)│ │ │ │
│ └──────────┘ └────────────────────┘ │
└───────────────────────────────────────┘
Data Flow (Gateway Mode):
Telegram/Feishu/WeCom/WhatsApp/WebUI ──► Channel ──► Bus.Inbound ──► processLoop
│
▼
Runtime.Run()
│
▼
Bus.Outbound ──► Channel ──► Telegram/Feishu/WeCom/WhatsApp/WebUI
cmd/myclaw/ CLI entry point (agent, gateway, onboard, status)
internal/
bus/ Message bus (inbound/outbound channels)
channel/ Channel interface + implementations
telegram.go Telegram bot (polling, text/image/document)
feishu.go Feishu/Lark bot (webhook)
wecom.go WeCom intelligent bot (webhook, encrypted)
whatsapp.go WhatsApp (whatsmeow, QR login)
webui.go Web UI (WebSocket, embedded HTML)
static/ Embedded web UI assets
config/ Configuration loading (JSON + env vars)
cron/ Cron job scheduling with JSON persistence
gateway/ Gateway orchestration (bus + runtime + channels)
heartbeat/ Periodic heartbeat service
memory/ Memory system (long-term + daily)
skills/ Custom skill loader
docs/
telegram-setup.md Telegram bot setup guide
feishu-setup.md Feishu bot setup guide
wecom-setup.md WeCom intelligent bot setup guide
scripts/
setup.sh Interactive config generator
workspace/
AGENTS.md Agent system prompt
SOUL.md Agent personality
Run `make setup` for interactive config, or copy `config.example.json` to `~/.myclaw/config.json`:

```json
{
"provider": {
"type": "anthropic",
"apiKey": "your-api-key",
"baseUrl": ""
},
"agent": {
"model": "claude-sonnet-4-5-20250929"
},
"channels": {
"telegram": {
"enabled": true,
"token": "your-bot-token",
"allowFrom": ["123456789"]
},
"feishu": {
"enabled": true,
"appId": "cli_xxx",
"appSecret": "your-app-secret",
"verificationToken": "your-verification-token",
"port": 9876,
"allowFrom": []
},
"wecom": {
"enabled": true,
"token": "your-token",
"encodingAESKey": "your-43-char-encoding-aes-key",
"receiveId": "",
"port": 9886,
"allowFrom": ["zhangsan"]
},
"whatsapp": {
"enabled": true,
"allowFrom": []
},
"webui": {
"enabled": true,
"allowFrom": []
}
}
}
```

| Type | Config | Env Vars |
|---|---|---|
| `anthropic` (default) | `"type": "anthropic"` | `MYCLAW_API_KEY`, `ANTHROPIC_API_KEY` |
| `openai` | `"type": "openai"` | `OPENAI_API_KEY` |
When using OpenAI, set the model to an OpenAI model name (e.g., gpt-4o).
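For example, a minimal switch to OpenAI might look like this (a sketch; `gpt-4o` is just the example model name mentioned above):

```sh
# Setting OPENAI_API_KEY auto-sets the provider type to openai (see the env var table below)
export OPENAI_API_KEY=your-openai-key
# In ~/.myclaw/config.json, point the agent at an OpenAI model:
#   "provider": { "type": "openai" },
#   "agent":    { "model": "gpt-4o" }
./myclaw agent -m "Hello"
```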
| Variable | Description |
|---|---|
| `MYCLAW_API_KEY` | API key (any provider) |
| `ANTHROPIC_API_KEY` | Anthropic API key |
| `OPENAI_API_KEY` | OpenAI API key (auto-sets type to `openai`) |
| `MYCLAW_BASE_URL` | Custom API base URL |
| `MYCLAW_TELEGRAM_TOKEN` | Telegram bot token |
| `MYCLAW_FEISHU_APP_ID` | Feishu app ID |
| `MYCLAW_FEISHU_APP_SECRET` | Feishu app secret |
| `MYCLAW_WECOM_TOKEN` | WeCom intelligent bot callback token |
| `MYCLAW_WECOM_ENCODING_AES_KEY` | WeCom intelligent bot callback EncodingAESKey |
| `MYCLAW_WECOM_RECEIVE_ID` | Optional receive ID for strict decrypt validation |
Prefer environment variables over config files for sensitive values like API keys.
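For instance, a CI/CD or production shell might supply every secret via the environment and keep `config.json` free of credentials (a sketch; values are placeholders):

```sh
export MYCLAW_API_KEY=your-api-key              # provider key
export MYCLAW_TELEGRAM_TOKEN=your-bot-token     # only if Telegram is enabled
export MYCLAW_FEISHU_APP_SECRET=your-app-secret # only if Feishu is enabled
make gateway
```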
See docs/telegram-setup.md for a detailed setup guide.
Quick steps:
- Create a bot via @BotFather on Telegram
- Set `token` in config or the `MYCLAW_TELEGRAM_TOKEN` env var
- Run `make gateway`
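Condensed into commands, a sketch (the token value is a placeholder in the @BotFather format):

```sh
export MYCLAW_TELEGRAM_TOKEN=123456789:AAExampleToken
make gateway   # the Telegram channel polls for messages once enabled in config
```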
See docs/feishu-setup.md for a detailed setup guide.
Quick steps:
- Create an app at Feishu Open Platform
- Enable Bot capability
- Add permissions: `im:message`, `im:message:send_as_bot`
- Configure Event Subscription URL: `https://your-domain/feishu/webhook`
- Subscribe to event: `im.message.receive_v1`
- Set `appId`, `appSecret`, `verificationToken` in config
- Run `make gateway` and `make tunnel` (for a public webhook URL)
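The last two steps in practice, as a sketch (the tunnel hostname is illustrative; the webhook port is 9876 per the example config):

```sh
# Terminal 1: gateway (Feishu webhook listens on the configured port)
make gateway
# Terminal 2: public tunnel for the webhook
make tunnel
# Then set <tunnel-url>/feishu/webhook as the Event Subscription URL, e.g.
#   https://random-name.trycloudflare.com/feishu/webhook
```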
See docs/wecom-setup.md for a detailed setup guide.
Quick steps:
- Create a WeCom intelligent bot in API mode and get `token`, `encodingAESKey`
- Configure callback URL: `https://your-domain/wecom/bot`
- Set `token` and `encodingAESKey` in both the WeCom console and the myclaw config
- Optionally set `receiveId` if you need strict decrypt receive-id validation
- Optional: set `allowFrom` to your user ID(s) as a whitelist (if unset/empty, inbound from all users is allowed)
- Run `make gateway`
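The same credentials can come from environment variables instead of config (names from the env var table above; values are placeholders):

```sh
export MYCLAW_WECOM_TOKEN=your-callback-token
export MYCLAW_WECOM_ENCODING_AES_KEY=your-43-char-encoding-aes-key
export MYCLAW_WECOM_RECEIVE_ID=your-receive-id   # optional, strict decrypt validation
make gateway
```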
WeCom notes:
- Outbound uses `response_url` and sends `markdown` payloads
- `response_url` is short-lived (often single-use); delayed or repeated replies may fail
- Outbound markdown content over 20480 bytes is truncated
Quick steps:
- Set `"whatsapp": {"enabled": true}` in config
- Run `make gateway`
- Scan the QR code displayed in the terminal with your WhatsApp
- Session is stored locally in SQLite (auto-reconnects on restart)
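If you'd rather flip the flag from the shell, a sketch using `jq` (assuming `jq` is installed and the config is at the default path):

```sh
cfg=~/.myclaw/config.json
jq '.channels.whatsapp.enabled = true' "$cfg" > "$cfg.tmp" && mv "$cfg.tmp" "$cfg"
make gateway   # then scan the QR code printed in the terminal
```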
Quick steps:
- Set `"webui": {"enabled": true}` in config
- Run `make gateway`
- Open `http://localhost:18790` in your browser (PC or mobile)
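A quick check that the UI is reachable once the gateway is running (port from the step above):

```sh
curl -I http://localhost:18790   # expect an HTTP response from the embedded web UI
```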
Features:
- Responsive design (PC + mobile)
- Dark mode (follows system preference)
- WebSocket real-time communication
- Markdown rendering (code blocks, bold, italic, links)
- Auto-reconnect on connection loss
```sh
docker build -t myclaw .
docker run -d \
-e MYCLAW_API_KEY=your-api-key \
-e MYCLAW_TELEGRAM_TOKEN=your-token \
-p 18790:18790 \
-p 9876:9876 \
-p 9886:9886 \
-v myclaw-data:/root/.myclaw \
  myclaw
```

```sh
# Create .env from example
cp .env.example .env
# Edit .env with your credentials
# Start gateway
docker compose up -d
# Start with cloudflared tunnel (for Feishu webhook)
docker compose --profile tunnel up -d
# View logs
docker compose logs -f myclaw
```

For Feishu webhooks, you need a public URL:

```sh
# Temporary tunnel (dev)
make tunnel
# Or via docker compose
docker compose --profile tunnel up -d
docker compose logs tunnel | grep trycloudflare
```

Set the output URL + `/feishu/webhook` as your Feishu event subscription URL.
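To wire that up in one go, a sketch (the hostname pattern is the usual trycloudflare format, but treat this as illustrative):

```sh
url=$(docker compose logs tunnel | grep -o 'https://[a-z0-9-]*\.trycloudflare\.com' | head -1)
echo "$url/feishu/webhook"   # paste this into the Feishu event subscription settings
```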
- `~/.myclaw/config.json` is set to `chmod 600` (owner read/write only)
- `.gitignore` excludes `config.json`, `.env`, and workspace memory files
- Use environment variables for sensitive values in CI/CD and production
- Never commit real API keys or tokens to version control
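A quick local check of the first two rules (run `git check-ignore` from inside the repo):

```sh
ls -l ~/.myclaw/config.json          # expect -rw------- (mode 600)
git check-ignore -v config.json .env # expect both paths to be reported as ignored
```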
```sh
make test          # Run all tests
make test-race     # Run with race detection
make test-cover    # Run with coverage report
make lint          # Run golangci-lint
```

License: MIT
Similar Open Source Tools
Shannon
Shannon is a battle-tested infrastructure for AI agents that solves problems at scale, such as runaway costs, non-deterministic failures, and security concerns. It offers features like intelligent caching, deterministic replay of workflows, time-travel debugging, WASI sandboxing, and hot-swapping between LLM providers. Shannon allows users to ship faster with zero configuration multi-agent setup, multiple AI patterns, time-travel debugging, and hot configuration changes. It is production-ready with features like WASI sandbox, token budget control, policy engine (OPA), and multi-tenancy. Shannon helps scale without breaking by reducing costs, being provider agnostic, observable by default, and designed for horizontal scaling with Temporal workflow orchestration.
vibium
Vibium is a browser automation infrastructure designed for AI agents, providing a single binary that manages browser lifecycle, WebDriver BiDi protocol, and an MCP server. It offers zero configuration, AI-native capabilities, and is lightweight with no runtime dependencies. It is suitable for AI agents, test automation, and any tasks requiring browser interaction.
solo-server
Solo Server is a lightweight server designed for managing hardware-aware inference. It provides seamless setup through a simple CLI and HTTP servers, an open model registry for pulling models from platforms like Ollama and Hugging Face, cross-platform compatibility for effortless deployment of AI models on hardware, and a configurable framework that auto-detects hardware components (CPU, GPU, RAM) and sets optimal configurations.
mesh
MCP Mesh is an open-source control plane for MCP traffic that provides a unified layer for authentication, routing, and observability. It replaces multiple integrations with a single production endpoint, simplifying configuration management. Built for multi-tenant organizations, it offers workspace/project scoping for policies, credentials, and logs. With core capabilities like MeshContext, AccessControl, and OpenTelemetry, it ensures fine-grained RBAC, full tracing, and metrics for tools and workflows. Users can define tools with input/output validation, access control checks, audit logging, and OpenTelemetry traces. The project structure includes apps for full-stack MCP Mesh, encryption, observability, and more, with deployment options ranging from Docker to Kubernetes. The tech stack includes Bun/Node runtime, TypeScript, Hono API, React, Kysely ORM, and Better Auth for OAuth and API keys.
mimiclaw
MimiClaw is a pocket AI assistant that runs on a $5 chip, specifically designed for the ESP32-S3 board. It operates without Linux or Node.js, using pure C language. Users can interact with MimiClaw through Telegram, enabling it to handle various tasks and learn from local memory. The tool is energy-efficient, running on USB power 24/7. With MimiClaw, users can have a personal AI assistant on a chip the size of a thumb, making it convenient and accessible for everyday use.
boxlite
BoxLite is an embedded, lightweight micro-VM runtime designed for AI agents running OCI containers with hardware-level isolation. It is built for high concurrency with no daemon required, offering features like lightweight VMs, high concurrency, hardware isolation, embeddability, and OCI compatibility. Users can spin up 'Boxes' to run containers for AI agent sandboxes and multi-tenant code execution scenarios where Docker alone is insufficient and full VM infrastructure is too heavy. BoxLite supports Python, Node.js, and Rust with quick start guides for each, along with features like CPU/memory limits, storage options, networking capabilities, security layers, and image registry configuration. The tool provides SDKs for Python and Node.js, with Go support coming soon. It offers detailed documentation, examples, and architecture insights for users to understand how BoxLite works under the hood.
openakita
OpenAkita is a self-evolving AI Agent framework that autonomously learns new skills, performs daily self-checks and repairs, accumulates experience from task execution, and persists until the task is done. It auto-generates skills, installs dependencies, learns from mistakes, and remembers preferences. The framework is standards-based, multi-platform, and provides a Setup Center GUI for intuitive installation and configuration. It features self-learning and evolution mechanisms, a Ralph Wiggum Mode for persistent execution, multi-LLM endpoints, multi-platform IM support, desktop automation, multi-agent architecture, scheduled tasks, identity and memory management, a tool system, and a guided wizard for setup.
claudex
Claudex is an open-source, self-hosted Claude Code UI that runs entirely on your machine. It provides multiple sandboxes, allows users to use their own plans, offers a full IDE experience with VS Code in the browser, and is extensible with skills, agents, slash commands, and MCP servers. Users can run AI agents in isolated environments, view and interact with a browser via VNC, switch between multiple AI providers, automate tasks with Celery workers, and enjoy various chat features and preview capabilities. Claudex also supports marketplace plugins, secrets management, integrations like Gmail, and custom instructions. The tool is configured through providers and supports various providers like Anthropic, OpenAI, OpenRouter, and Custom. It has a tech stack consisting of React, FastAPI, Python, PostgreSQL, Celery, Redis, and more.
starknet-agentic
Open-source stack for giving AI agents wallets, identity, reputation, and execution rails on Starknet. `starknet-agentic` is a monorepo with Cairo smart contracts for agent wallets, identity, reputation, and validation, TypeScript packages for MCP tools, A2A integration, and payment signing, reusable skills for common Starknet agent capabilities, and examples and docs for integration. It provides contract primitives + runtime tooling in one place for integrating agents. The repo includes various layers such as Agent Frameworks / Apps, Integration + Runtime Layer, Packages / Tooling Layer, Cairo Contract Layer, and Starknet L2. It aims for portability of agent integrations without giving up Starknet strengths, with a cross-chain interop strategy and skills marketplace. The repository layout consists of directories for contracts, packages, skills, examples, docs, and website.
blendsql
BlendSQL is a superset of SQLite designed for problem decomposition and hybrid question-answering with Large Language Models (LLMs). It allows users to blend operations over heterogeneous data sources like tables, text, and images, combining the structured and interpretable reasoning of SQL with the generalizable reasoning of LLMs. Users can oversee all calls (LLM + SQL) within a unified query language, enabling tasks such as building LLM chatbots for travel planning and answering complex questions by injecting 'ingredients' as callable functions.
gpt-all-star
GPT-All-Star is an AI-powered code generation tool designed for scratch development of web applications with team collaboration of autonomous AI agents. The primary focus of this research project is to explore the potential of autonomous AI agents in software development. Users can organize their team, choose leaders for each step, create action plans, and work together to complete tasks. The tool supports various endpoints like OpenAI, Azure, and Anthropic, and provides functionalities for project management, code generation, and team collaboration.
shell_gpt
ShellGPT is a command-line productivity tool powered by AI large language models (LLMs). This command-line tool offers streamlined generation of shell commands, code snippets, documentation, eliminating the need for external resources (like Google search). Supports Linux, macOS, Windows and compatible with all major Shells like PowerShell, CMD, Bash, Zsh, etc.
sandbox
AIO Sandbox is an all-in-one agent sandbox environment that combines Browser, Shell, File, MCP operations, and VSCode Server in a single Docker container. It provides a unified, secure execution environment for AI agents and developers, with features like unified file system, multiple interfaces, secure execution, zero configuration, and agent-ready MCP-compatible APIs. The tool allows users to run shell commands, perform file operations, automate browser tasks, and integrate with various development tools and services.
topsha
LocalTopSH is an AI Agent Framework designed for companies and developers who require 100% on-premise AI agents with data privacy. It supports various OpenAI-compatible LLM backends and offers production-ready security features. The framework allows simple deployment using Docker compose and ensures that data stays within the user's network, providing full control and compliance. With cost-effective scaling options and compatibility in regions with restrictions, LocalTopSH is a versatile solution for deploying AI agents on self-hosted infrastructure.
httpjail
httpjail is a cross-platform tool designed for monitoring and restricting HTTP/HTTPS requests from processes using network isolation and transparent proxy interception. It provides process-level network isolation, HTTP/HTTPS interception with TLS certificate injection, script-based and JavaScript evaluation for custom request logic, request logging, default deny behavior, and zero-configuration setup. The tool operates on Linux and macOS, creating an isolated network environment for target processes and intercepting all HTTP/HTTPS traffic through a transparent proxy enforcing user-defined rules.
For similar tasks
spring-ai
The Spring AI project provides a Spring-friendly API and abstractions for developing AI applications. It offers a portable client API for interacting with generative AI models, enabling developers to easily swap out implementations and access various models like OpenAI, Azure OpenAI, and HuggingFace. Spring AI also supports prompt engineering, providing classes and interfaces for creating and parsing prompts, as well as incorporating proprietary data into generative AI without retraining the model. This is achieved through Retrieval Augmented Generation (RAG), which involves extracting, transforming, and loading data into a vector database for use by AI models. Spring AI's VectorStore abstraction allows for seamless transitions between different vector database implementations.
ruby-nano-bots
Ruby Nano Bots is an implementation of the Nano Bots specification supporting various AI providers like Cohere Command, Google Gemini, Maritaca AI MariTalk, Mistral AI, Ollama, OpenAI ChatGPT, and others. It allows calling tools (functions) and provides a helpful assistant for interacting with AI language models. The tool can be used both from the command line and as a library in Ruby projects, offering features like REPL, debugging, and encryption for data privacy.
rag-chat
The `@upstash/rag-chat` package simplifies the development of retrieval-augmented generation (RAG) chat applications by providing Next.js compatibility with streaming support, built-in vector store, optional Redis compatibility for fast chat history management, rate limiting, and disableRag option. Users can easily set up the environment variables and initialize RAGChat to interact with AI models, manage knowledge base, chat history, and enable debugging features. Advanced configuration options allow customization of RAGChat instance with built-in rate limiting, observability via Helicone, and integration with Next.js route handlers and Vercel AI SDK. The package supports OpenAI models, Upstash-hosted models, and custom providers like TogetherAi and Replicate.
ryoma
Ryoma is an AI Powered Data Agent framework that offers a comprehensive solution for data analysis, engineering, and visualization. It leverages cutting-edge technologies like Langchain, Reflex, Apache Arrow, Jupyter Ai Magics, Amundsen, Ibis, and Feast to provide seamless integration of language models, build interactive web applications, handle in-memory data efficiently, work with AI models, and manage machine learning features in production. Ryoma also supports various data sources like Snowflake, Sqlite, BigQuery, Postgres, MySQL, and different engines like Apache Spark and Apache Flink. The tool enables users to connect to databases, run SQL queries, and interact with data and AI models through a user-friendly UI called Ryoma Lab.
shell-pilot
Shell-pilot is a simple, lightweight shell script designed to interact with various AI models such as OpenAI, Ollama, Mistral AI, LocalAI, ZhipuAI, Anthropic, Moonshot, and Novita AI from the terminal. It enhances intelligent system management without any dependencies, offering features like setting up a local LLM repository, using official models and APIs, viewing history and session persistence, passing input prompts with pipe/redirector, listing available models, setting request parameters, generating and running commands in the terminal, easy configuration setup, system package version checking, and managing system aliases.
awesome-llm-web-ui
Curating the best Large Language Model (LLM) Web User Interfaces that facilitate interaction with powerful AI models. Explore and catalogue intuitive, feature-rich, and innovative web interfaces for interacting with LLMs, ranging from simple chatbots to comprehensive platforms equipped with functionalities like PDF generation and web search.
java-sdk
The MCP Java SDK is a set of projects that provide Java SDK integration for the Model Context Protocol. It enables Java applications to interact with AI models and tools through a standardized interface, supporting both synchronous and asynchronous communication patterns.
For similar jobs
promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.
deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.
MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".
leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.
llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.
carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.
TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.
AI-YinMei
AI-YinMei is an AI virtual anchor (Vtuber) development tool for NVIDIA GPUs. It supports fastgpt knowledge-base chat via a complete LLM stack ([fastgpt] + [one-api] + [Xinference]); replying to bilibili live-stream barrage comments and greeting viewers who enter the stream; speech synthesis via Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS; expression control through VTube Studio; stable-diffusion-webui painting output to an OBS live room; NSFW image filtering (public-NSFW-y-distinguish); image search via duckduckgo (requires VPN access) and Baidu image search (no VPN required); an AI reply chat box [html plug-in]; AI singing (Auto-Convert-Music); a playlist [html plug-in]; dancing, expression video playback, head-patting, and gift-smashing actions; automatic dancing while singing and idle swaying during chat and song cycles; multi-scene switching with background music and automatic day/night scene changes; and open-ended singing and painting, letting the AI judge the content on its own.