botserver
Complete open-source AI collaboration suite and multi-agent platform featuring LLM orchestration, automation, and virtual assistants. Scales seamlessly from small deployments to large enterprise environments.
General Bots is a self-hosted AI automation platform and LLM conversational platform focused on convention over configuration and code-less approaches. It serves as the core API server handling LLM orchestration, business logic, database operations, and multi-channel communication. The platform offers features like multi-vendor LLM API, MCP + LLM Tools Generation, Semantic Caching, Web Automation Engine, Enterprise Data Connectors, and Git-like Version Control. It enforces a ZERO TOLERANCE POLICY for code quality and security, with strict guidelines for error handling, performance optimization, and code patterns. The project structure includes modules for core functionalities like Rhai BASIC interpreter, security, shared types, tasks, auto task system, file operations, learning system, and LLM assistance.
README:
Version: 6.2.0
Purpose: Main API server for General Bots (Axum + Diesel + Rhai BASIC)
General Bots is a self-hosted AI automation platform and strongly-typed LLM conversational platform focused on convention over configuration and code-less approaches. It serves as the core API server handling LLM orchestration, business logic, database operations, and multi-channel communication.
For comprehensive documentation, see docs.pragmatismo.com.br or the BotBook for detailed guides, API references, and tutorials.
- Rust (1.75+) - Install from rustup.rs
- Git - Download from git-scm.com
- Mold (fast linker) - sudo apt-get install mold
git clone https://github.com/GeneralBots/botserver
cd botserver
cargo install sccache
sudo apt-get install mold # or build from source
cargo run
On first run, botserver automatically:
- Installs required components (PostgreSQL, S3 storage, Redis cache, LLM)
- Sets up database with migrations
- Downloads AI models
- Starts HTTP server at http://localhost:8088
cargo run # Default: console UI + web server
cargo run -- --noconsole # Background service mode
cargo run -- --desktop # Desktop application (Tauri)
cargo run -- --tenant <name> # Specify tenant
cargo run -- --container # LXC container mode
- Multi-vendor LLM API - Unified interface for OpenAI, Groq, Anthropic (Claude), and local models.
- MCP + LLM Tools Generation - Instant tool creation from code and functions - no complex configuration.
- Semantic Caching - Intelligent response caching achieving up to 70% cost reduction on LLM calls (see the sketch below).
- Web Automation Engine - Browser automation combined with AI intelligence for complex workflows.
- Enterprise Data Connectors - Native integrations with CRM, ERP, databases, and external services.
- Git-like Version Control - Full history with rollback capabilities for all configurations and data.
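A minimal sketch of how a semantic cache of this kind can work - embed each prompt, then reuse a stored completion when a new prompt's embedding is close enough. All names here are illustrative, not the actual botserver API:

```rust
struct CacheEntry {
    embedding: Vec<f32>,
    response: String,
}

pub struct SemanticCache {
    entries: Vec<CacheEntry>,
    threshold: f32, // minimum cosine similarity to count as a hit
}

impl SemanticCache {
    pub fn new(threshold: f32) -> Self {
        Self { entries: Vec::new(), threshold }
    }

    fn cosine(a: &[f32], b: &[f32]) -> f32 {
        let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
        let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
        let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
        if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
    }

    // Return the best cached response whose prompt embedding is close enough
    pub fn lookup(&self, query: &[f32]) -> Option<&str> {
        self.entries
            .iter()
            .map(|e| (Self::cosine(&e.embedding, query), e))
            .filter(|(score, _)| *score >= self.threshold)
            .max_by(|(a, _), (b, _)| a.total_cmp(b))
            .map(|(_, e)| e.response.as_str())
    }

    pub fn insert(&mut self, embedding: Vec<f32>, response: String) {
        self.entries.push(CacheEntry { embedding, response });
    }
}
```

A production cache would persist entries (e.g., in the Redis instance botserver installs) and bound their number; the threshold trades hit rate against answer drift.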
USE KB "kb-name" ' Load knowledge base into vector database
CLEAR KB "kb-name" ' Remove KB from session
USE TOOL "tool-name" ' Make tool available to LLM
CLEAR TOOLS ' Remove all tools from session
' customer-support.bas
USE KB "support-docs"
USE TOOL "create-ticket"
USE TOOL "check-order"
SET CONTEXT "support" AS "You are a helpful customer support agent."
TALK "Welcome! How can I help you today?"src/
├── core/ # Bootstrap, config, routes
├── basic/ # Rhai BASIC interpreter
│ └── keywords/ # BASIC keyword implementations
├── security/ # Security modules
│ ├── command_guard.rs # Safe command execution
│ ├── error_sanitizer.rs # Error message sanitization
│ └── sql_guard.rs # SQL injection prevention
├── shared/ # Shared types, models
├── tasks/ # AutoTask system (2651 lines - NEEDS REFACTORING)
├── auto_task/ # App generator (2981 lines - NEEDS REFACTORING)
├── drive/ # File operations (1522 lines - NEEDS REFACTORING)
├── learn/ # Learning system (2306 lines - NEEDS REFACTORING)
└── attendance/ # LLM assistance (2053 lines - NEEDS REFACTORING)
migrations/ # Database migrations
botserver-stack/ # Stack deployment files
EVERY SINGLE WARNING MUST BE FIXED. NO EXCEPTIONS.
❌ NEVER use #![allow()] or #[allow()] in source code
❌ NEVER use .unwrap() - use ? or proper error handling
❌ NEVER use .expect() - use ? or proper error handling
❌ NEVER use panic!() or unreachable!()
❌ NEVER use todo!() or unimplemented!()
❌ NEVER leave unused imports or dead code
❌ NEVER add comments - code must be self-documenting
❌ NEVER use CDN links - all assets must be local
❌ NEVER build SQL queries with format! - use parameterized queries
❌ NEVER pass user input to Command::new() without validation
❌ NEVER log passwords, tokens, API keys, or PII
Current Status: 955 instances of unwrap()/expect() found in codebase
Target: 0 instances in production code (tests excluded)
// ❌ WRONG - Found 955 times in codebase
let value = something.unwrap();
let value = something.expect("msg");
// ✅ CORRECT - Required replacements
let value = something?;
let value = something.ok_or_else(|| Error::NotFound)?;
let value = something.unwrap_or_default();
let value = something.unwrap_or_else(|e| {
log::error!("Operation failed: {e}");
default_value
});
Current Status: 12,973 excessive clone()/to_string() calls
Target: Minimize allocations, use references where possible
// ❌ WRONG - Excessive allocations
let name = user.name.clone();
let msg = format!("Hello {}", name.to_string());
// ✅ CORRECT - Minimize allocations
let name = &user.name;
let msg = format!("Hello {name}");
// ✅ CORRECT - Use Cow for conditional ownership
use std::borrow::Cow;
fn process_name(name: Cow<'_, str>) -> String {
match name {
Cow::Borrowed(s) => s.to_uppercase(),
Cow::Owned(s) => s.to_uppercase(),
}
}
// ❌ WRONG
let query = format!("SELECT * FROM {}", table_name);
// ✅ CORRECT - whitelist validation
const ALLOWED_TABLES: &[&str] = &["users", "sessions"];
if !ALLOWED_TABLES.contains(&table_name) {
return Err(Error::InvalidTable);
}
// ❌ WRONG
Command::new("tool").arg(user_input).output()?;
// ✅ CORRECT - Use SafeCommand
use crate::security::command_guard::SafeCommand;
SafeCommand::new("allowed_command")?
.arg("safe_arg")?
.execute()
// ❌ WRONG
Json(json!({ "error": e.to_string() }))
format!("Database error: {}", e)
// ✅ CORRECT
use crate::security::error_sanitizer::log_and_sanitize;
let sanitized = log_and_sanitize(&e, "context", None);
(StatusCode::INTERNAL_SERVER_ERROR, sanitized)
// ❌ WRONG
format!("Hello {}", name)
// ✅ CORRECT
format!("Hello {name}")// ❌ WRONG
impl MyStruct {
fn new() -> MyStruct { MyStruct { } }
}
// ✅ CORRECT
impl MyStruct {
fn new() -> Self { Self { } }
}
// ❌ WRONG
#[derive(PartialEq)]
struct MyStruct { }
// ✅ CORRECT
#[derive(PartialEq, Eq)]
struct MyStruct { }
// ✅ CORRECT
opt.unwrap_or(default)
opt.unwrap_or_else(|| compute_default())
opt.map_or(default, |x| transform(x))
// ❌ WRONG
date.with_hour(9).unwrap().with_minute(0).unwrap()
// ✅ CORRECT
date.with_hour(9).and_then(|d| d.with_minute(0)).unwrap_or(date)
When a file grows beyond the 450-line limit:
- Identify logical groups - Find related functions
- Create subdirectory module - e.g., handlers/
- Split by responsibility:
  - types.rs - Structs, enums, type definitions
  - handlers.rs - HTTP handlers and routes
  - operations.rs - Core business logic
  - utils.rs - Helper functions
  - mod.rs - Re-exports and configuration
- Keep files focused - Single responsibility
- Update mod.rs - Re-export all public items (see the sketch after the table below)
NEVER let a single file exceed 450 lines - split proactively at 350 lines
| File | Lines | Target Split |
|---|---|---|
| auto_task/app_generator.rs | 2981 | → 7 files |
| tasks/mod.rs | 2651 | → 6 files |
| learn/mod.rs | 2306 | → 5 files |
| attendance/llm_assist.rs | 2053 | → 5 files |
| drive/mod.rs | 1522 | → 4 files |
See TODO-refactor1.md for detailed refactoring plans
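As an illustration of the split-by-responsibility layout, here is a minimal sketch with the submodules written inline; in the real tree each mod below would be its own file (types.rs, operations.rs, ...), and every name is hypothetical rather than taken from the actual refactor plan:

```rust
mod types {
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    pub enum TaskStatus {
        Pending,
        Done,
    }

    #[derive(Debug, Clone)]
    pub struct Task {
        pub id: u64,
        pub status: TaskStatus,
    }
}

mod operations {
    use super::types::{Task, TaskStatus};

    pub fn complete(task: &mut Task) {
        task.status = TaskStatus::Done;
    }
}

// mod.rs keeps only declarations and re-exports: callers keep one import path
pub use operations::complete;
pub use types::{Task, TaskStatus};
```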
- TABLES AND INDEXES ONLY (no stored procedures, no views, no triggers, no functions)
- JSON columns: use TEXT with a _json suffix (see the sketch below)
- ORM: Use diesel - no sqlx
- Migrations: Located in botserver/migrations/
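A short sketch of these rules in practice - a Diesel table! definition storing JSON as TEXT with the _json suffix, and a query where Diesel binds user input as a parameter so it never reaches format!(). Table and column names are hypothetical:

```rust
use diesel::prelude::*;

diesel::table! {
    bot_sessions (id) {
        id -> Int8,
        tenant -> Text,
        state_json -> Text, // JSON stored as TEXT, per the _json suffix rule
    }
}

// `tenant_name` is bound as a query parameter - never spliced into SQL text
fn session_states(conn: &mut PgConnection, tenant_name: &str) -> QueryResult<Vec<String>> {
    use self::bot_sessions::dsl::*;
    bot_sessions
        .filter(tenant.eq(tenant_name))
        .select(state_json)
        .load(conn)
}
```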
- Use HTMX - minimize JavaScript
- NO external CDN - all assets local
- Server-side rendering with Askama templates
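For instance, a hedged sketch of the pattern (not botserver's actual handlers - the template path and fields are made up): an Askama fragment rendered server-side, returned from an axum handler, and swapped into the page by HTMX:

```rust
use askama::Template;
use axum::response::{Html, IntoResponse};

// templates/ticket_row.html would hold the matching markup (hypothetical)
#[derive(Template)]
#[template(path = "ticket_row.html")]
struct TicketRow<'a> {
    id: i64,
    subject: &'a str,
}

// The page requests this fragment with e.g.
// <button hx-get="/tickets/42/row" hx-swap="outerHTML">
async fn ticket_row() -> impl IntoResponse {
    let row = TicketRow { id: 42, subject: "Example ticket" };
    Html(row.render().unwrap_or_default())
}
```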
| Library | Version | Purpose |
|---|---|---|
| axum | 0.7.5 | Web framework |
| diesel | 2.1 | PostgreSQL ORM |
| tokio | 1.41 | Async runtime |
| rhai | git | BASIC scripting |
| reqwest | 0.12 | HTTP client |
| serde | 1.0 | Serialization |
| askama | 0.12 | HTML Templates |
When configuring CI/CD pipelines (e.g., Forgejo Actions):
- Minimal Checkout: Clone only the root gb and the botlib submodule. Do NOT recursively clone everything.
- BotServer Context: Replace the empty botserver directory with the current set of files being tested.
Example Step:
- name: Setup Workspace
run: |
# 1. Clone only the root workspace configuration
git clone --depth 1 <your-git-repo-url> workspace
# 2. Setup only the necessary dependencies (botlib)
cd workspace
git submodule update --init --depth 1 botlib
cd ..
# 3. Inject current BotServer code
rm -rf workspace/botserver
mv botserver workspace/botserver
docs/
├── api/ # API documentation
│ ├── README.md # API overview
│ ├── rest-endpoints.md # HTTP endpoints
│ └── websocket.md # Real-time communication
├── guides/ # How-to guides
│ ├── getting-started.md # Quick start
│ ├── deployment.md # Production setup
│ └── templates.md # Using templates
└── reference/ # Technical reference
├── basic-language.md # BASIC keywords
├── configuration.md # Config options
└── architecture.md # System design
- docs.pragmatismo.com.br - Full online documentation
- BotBook - Local comprehensive guide with tutorials and examples
- API Reference - REST and WebSocket endpoints
- BASIC Language - Dialog scripting reference
| Project | Description |
|---|---|
| botui | Pure web UI (HTMX-based) |
| botapp | Tauri desktop wrapper |
| botlib | Shared Rust library |
| botbook | Documentation |
| bottemplates | Templates and examples |
- AGPL-3.0 License - True open source with contribution requirements
- Self-hosted - Your data stays on your infrastructure
- Enterprise-grade - 5+ years of stability
- No vendor lock-in - Open protocols and standards
Report security issues to: [email protected]
We welcome contributions! Please read our contributing guidelines before submitting PRs.
- ZERO WARNINGS - Fix every clippy warning
- ZERO COMMENTS - No comments, no doc comments
- NO ALLOW IN CODE - Configure exceptions in Cargo.toml only
- NO DEAD CODE - Delete unused code
- NO UNWRAP/EXPECT - Use ? or combinators (955 instances to fix)
- MINIMIZE CLONES - Avoid excessive allocations (12,973 instances to optimize)
- PARAMETERIZED SQL - Never format! for queries
- VALIDATE COMMANDS - Never pass raw user input
- INLINE FORMAT ARGS - format!("{name}") not format!("{}", name)
- USE SELF - In impl blocks, use Self not type name
- FILE SIZE LIMIT - Max 450 lines per file, refactor at 350 lines
- Version 6.2.0 - Do not change without approval
- GIT WORKFLOW - ALWAYS push to ALL repositories (github, pragmatismo)
- Replace 955 unwrap()/expect() calls with proper error handling
- Optimize 12,973 clone()/to_string() calls for performance
- Refactor 5 large files following TODO-refactor1.md
- Add missing error handling in critical paths
- Implement proper logging instead of panicking
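The last item might look like the following sketch - log the failure and fall back to defaults rather than panic. AppConfig and its parsing are hypothetical stand-ins:

```rust
#[derive(Default)]
struct AppConfig {
    port: u16,
}

fn parse_config(raw: &str) -> Result<AppConfig, std::num::ParseIntError> {
    Ok(AppConfig { port: raw.trim().parse()? })
}

fn load_config(path: &str) -> AppConfig {
    let loaded = std::fs::read_to_string(path)
        .map_err(|e| e.to_string())
        .and_then(|raw| parse_config(&raw).map_err(|e| e.to_string()));
    match loaded {
        Ok(cfg) => cfg,
        Err(e) => {
            // Log and degrade gracefully instead of panicking on bad input
            log::error!("config load failed for {path}: {e}");
            AppConfig::default()
        }
    }
}
```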
General Bot Copyright (c) pragmatismo.com.br. All rights reserved.
Licensed under the AGPL-3.0.
According to our dual licensing model, this program can be used either under the terms of the GNU Affero General Public License, version 3, or under a proprietary license.
- Website: pragmatismo.com.br
- Documentation: docs.pragmatismo.com.br
- GitHub: github.com/GeneralBots/botserver
- Stack Overflow: Tag questions with generalbots
- Video Tutorial: 7 AI General Bots LLM Templates
General Bots Code Name: Guaribas
"No one should have to do work that can be done by a machine." - Roberto Mangabeira Unger
