
dexto
The Orchestration Layer for AI agents. Connect your models, tools, and data into a smart interface to create agentic apps that can think, act, and talk to you.
Stars: 225

Dexto is a lightweight runtime for creating and running AI agents that turn natural language into real-world actions. It serves as the missing intelligence layer for building AI applications, whether as a standalone chatbot or as the reasoning engine inside larger products. Dexto offers a powerful CLI and Web UI for running agents, supports multiple interfaces, hot-swaps LLMs across providers, and connects to remote tool servers via the Model Context Protocol (MCP). It is config-driven with version-controlled YAML, ships production-ready core features, is extensible with custom services, and supports multi-agent collaboration via MCP and A2A.
README:
An all-in-one toolkit to build agentic applications that turn natural language into real-world actions.
Dexto is a universal agent interface for building agentic apps—software that understands natural language and takes real-world actions. It orchestrates LLMs, tools, and data into persistent, stateful systems with memory, so you can rapidly create AI assistants, copilots, and context-aware apps that think, act and feel alive.
- Autonomous Agents - Agents that plan, execute, and adapt to user goals.
- Digital Companions - AI assistants & copilots that remember context and anticipate needs.
- Multi-Agent Systems - Architect agents that collaborate, delegate, and solve complex tasks together.
- MCP Clients - Connect multiple tools, files, APIs, and data via MCP Servers.
- Agent-as-a-Service – Transform your existing SaaS products and APIs into dynamic, conversational experiences.
- Agentic Applications – Integrate Dexto as a reasoning engine to power interactive, multimodal, AI-native applications.
- Batteries Included – Session management, tool orchestration, multimodal support.
- 20+ LLMs – Instantly switch between OpenAI, Anthropic, Google, Groq, local models or bring your own.
- Run Anywhere – Local for privacy, cloud for reach, or hybrid. Same agent, any deployment.
- Native Multimodal – Text, images, files, and tools in a single conversation. Upload screenshots, ask questions, take actions.
- Persistent Sessions – Conversations, context, and memory are saved and can be exported, imported, or shared across environments.
- Flexible Interfaces – One agent, endless ways to interact: a ready-to-use CLI, Web UI, APIs, or your own UI.
- Production Ready – Observability and error handling built-in.
- Tooling & MCP – Integrate 100+ tools and connect to external servers via the Model Context Protocol (MCP).
- Customizable Agents – Define agent behavior, tools, and prompts in YAML or TypeScript.
- Pluggable Storage – Use Redis, PostgreSQL, SQLite, in-memory, and more for cache and database backends.
# NPM global
npm install -g dexto
# —or— build from source
# this sets up dexto CLI from the cloned code
git clone https://github.com/truffle-ai/dexto.git
cd dexto && pnpm install && pnpm install-cli
# 1. Run setup workflow - this prompts for your preferred LLM and API keys and starts the interactive CLI
dexto
# 2. Try a multi-step task
dexto "create a snake game in HTML/CSS/JS, then open it in the browser"
# 3. Launch the Dexto Web UI
dexto --mode web
In step 2, Dexto uses filesystem tools to write the code and browser tools to open it — all from a single prompt. The Web UI lets you revisit previous conversations and experiment with different models, tools, and more.
Dexto comes with pre-built agent recipes for common use cases. Install and use them instantly:
# List available agents
dexto list-agents
# Install specific agents
dexto install nano-banana-agent podcast-agent
# Use an agent
dexto --agent nano-banana-agent "create a futuristic cityscape with flying cars"
dexto --agent podcast-agent "generate a podcast intro with two hosts discussing AI"
Available Agents:
- Nano Banana Agent – Advanced image generation and editing using Google's Nano Banana (Gemini 2.5 Flash Image)
- Podcast Agent – Advanced podcast generation using Google Gemini TTS for multi-speaker audio content
- Database Agent – Demo agent for SQL queries and database operations
- Image Editor Agent – Image editing and manipulation
- Music Agent – Music creation and audio processing
- PDF Agent – Document analysis and conversation
- Product Researcher – Product naming and branding research
- Triage Agent – Demo multi-agent customer support routing system
Each agent is pre-configured with the right tools, prompts, and LLM settings for its domain. No setup required—just install and start building.
More ready-to-run recipes live in `agents/` and on the docs site.
Task: Can you go to amazon and add some snacks to my cart? I like trail mix, cheetos and maybe surprise me with something else?
# Default agent has browser tools
dexto
Task: Detect all faces in this image and draw bounding boxes around them.
dexto --agent image-editor-agent
Task: Generate an intro for a podcast about the latest in AI.
dexto --agent podcast-agent
Task: Generate a photo of a baby panda.
dexto --agent nano-banana-agent
You can add your own Model Context Protocol (MCP) servers to extend Dexto's capabilities with new tools or data sources. Just edit your agent YAML or add it directly in the WebUI.
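If you prefer code over YAML, the same extension point is available through the SDK. Here is a minimal sketch using the `MCPManager` API documented later in this README, reusing the brave-search server from the config example below:

```typescript
import { MCPManager } from '@dexto/core';

// Sketch: register an extra MCP tool server at runtime. The server package
// is the same brave-search server used in the YAML example further below.
const manager = new MCPManager();
await manager.connectServer('web', {
  type: 'stdio',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-brave-search']
});

// Inspect the tools the new server advertises, then clean up.
console.log(await manager.getAllTools());
await manager.disconnectAll();
```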
Task: Create a snake game in HTML/CSS/JS, then open it in the browser
dexto "create a snake game in HTML/CSS/JS, then open it in the browser"
Task: Summarize emails and send highlights to Slack
dexto --agent ./agents/examples/email_slack.yml
| Mode | Command | Best for |
|---|---|---|
| Interactive CLI | `dexto` | Everyday automation & quick tasks |
| Web UI | `dexto --mode web` | Friendly chat interface w/ image support |
| Headless Server | `dexto --mode server` | REST & WebSocket APIs for agent interaction |
| MCP Server (Agent) | `dexto --mode mcp` | Exposing your agent as a tool for others via stdio |
| MCP Server (Aggregator) | `dexto mcp --group-servers` | Re-exposing tools from multiple MCP servers via stdio |
| Discord Bot | `dexto --mode discord` | Community servers & channels (Requires Setup) |
| Telegram Bot | `dexto --mode telegram` | Mobile chat (Requires Setup) |
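For the headless server mode, a client is plain HTTP. The sketch below is purely illustrative: the port and the `/api/message` route are hypothetical assumptions, not the documented API; see the API Reference for the real REST and WebSocket endpoints.

```typescript
// Hypothetical client for `dexto --mode server`. The port and the
// /api/message route are assumptions for illustration only; check the
// API Reference for the actual endpoints and payload shapes.
const res = await fetch('http://localhost:3001/api/message', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'List the 5 largest files in this repo' })
});
console.log(await res.json());
```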
Run `dexto --help` for all flags, sub-commands, and environment variables.
Dexto treats each configuration as a unique agent, letting you define and save combinations of LLMs, servers, storage options, and more for easy portability. Define agents in version-controlled YAML. Change the file, reload, and chat—state, memory, and tools update automatically.
# agents/my-agent.yml
llm:
  provider: openai
  model: gpt-4.1-mini
  apiKey: $OPENAI_API_KEY

mcpServers:
  filesystem:
    type: stdio
    command: npx
    args: ['-y', '@modelcontextprotocol/server-filesystem', '.']
  web:
    type: stdio
    command: npx
    args: ['-y', '@modelcontextprotocol/server-brave-search']

systemPrompt: |
  You are a helpful AI assistant with access to files and web search.
Switch between providers instantly—no code changes required.
| Provider | Models | Setup |
|---|---|---|
| OpenAI | `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-4o`, `gpt-4o-mini`, `gpt-4o-audio-preview`, `o4-mini`, `o3`, `o3-mini`, `o1` | `export OPENAI_API_KEY=...` |
| Anthropic | `claude-opus-4-1-20250805`, `claude-4-opus-20250514`, `claude-4-sonnet-20250514`, `claude-3-7-sonnet-20250219`, `claude-3-5-sonnet-20240620`, `claude-3-5-haiku-20241022` | `export ANTHROPIC_API_KEY=...` |
| Google | `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-2.5-flash-lite`, `gemini-2.0-flash`, `gemini-2.0-flash-lite` | `export GOOGLE_GENERATIVE_AI_API_KEY=...` |
| Groq | `llama-3.3-70b-versatile`, `meta-llama/llama-4-scout-17b-16e-instruct`, `meta-llama/llama-4-maverick-17b-128e-instruct`, `qwen/qwen3-32b`, `gemma-2-9b-it`, `openai/gpt-oss-20b`, `openai/gpt-oss-120b`, `moonshotai/kimi-k2-instruct`, `deepseek-r1-distill-llama-70b` | `export GROQ_API_KEY=...` |
| xAI | `grok-4`, `grok-3`, `grok-3-mini`, `grok-code-fast-1` | `export XAI_API_KEY=...` |
| Cohere | `command-a-03-2025`, `command-r-plus`, `command-r`, `command-r7b` | `export COHERE_API_KEY=...` |
# Switch models via CLI
dexto -m claude-4-sonnet-20250514
dexto -m gemini-2.5-pro
See our Configuration Guide for complete setup instructions.
Install the `@dexto/core` library and build applications with the `DextoAgent` class. Everything the CLI can do, your code can too.
npm install @dexto/core
import { DextoAgent } from '@dexto/core';
// Create and start agent
const agent = new DextoAgent({
  llm: {
    provider: 'openai',
    model: 'gpt-4.1-mini',
    apiKey: process.env.OPENAI_API_KEY
  }
});
await agent.start();
// Run tasks
const response = await agent.run('List the 5 largest files in this repo');
console.log(response);
// Hold conversations
await agent.run('Write a haiku about TypeScript');
await agent.run('Make it funnier');
await agent.stop();
See our TypeScript SDK docs for complete examples with MCP tools, sessions, and advanced features.
Create and manage multiple conversation sessions with persistent storage.
const agent = new DextoAgent(config);
await agent.start();
// Create and manage sessions
const session = await agent.createSession('user-123');
await agent.run('Hello, how can you help me?', undefined, 'user-123');
// List and manage sessions
const sessions = await agent.listSessions();
const sessionHistory = await agent.getSessionHistory('user-123');
await agent.deleteSession('user-123');
// Search across conversations
const results = await agent.searchMessages('bug fix', { limit: 10 });
Switch between models and providers dynamically.
// Get current configuration
const currentLLM = agent.getCurrentLLMConfig();
// Switch models (provider inferred automatically)
await agent.switchLLM({ model: 'gpt-4.1-mini' });
await agent.switchLLM({ model: 'claude-4-sonnet-20250514' });
// Switch model for a specific session id 1234
await agent.switchLLM({ model: 'gpt-4.1-mini' }, '1234');
// Get supported providers and models
const providers = agent.getSupportedProviders();
const models = agent.getSupportedModels();
const openaiModels = agent.getSupportedModelsForProvider('openai');
For advanced MCP server management, use the MCPManager directly.
import { MCPManager } from '@dexto/core';
const manager = new MCPManager();
// Connect to MCP servers
await manager.connectServer('filesystem', {
  type: 'stdio',
  command: 'npx',
  args: ['-y', '@modelcontextprotocol/server-filesystem', '.']
});
// Access tools, prompts, and resources
const tools = await manager.getAllTools();
const prompts = await manager.getAllPrompts();
const resources = await manager.getAllResources();
// Execute tools
const result = await manager.executeTool('readFile', { path: './README.md' });
await manager.disconnectAll();
Configure storage backends for production-ready persistence and caching.
# agents/production-agent.yml
storage:
  cache:
    type: redis
    url: $REDIS_URL
    maxConnections: 100
  database:
    type: postgres
    connectionString: $POSTGRES_CONNECTION_STRING
    maxConnections: 25

sessions:
  maxSessions: 1000
  sessionTTL: 86400000  # 24 hours
Supported Backends:
- Cache: Redis, In-Memory (fast, ephemeral)
- Database: PostgreSQL, SQLite, In-Memory (persistent, reliable)
Use Cases:
- Development: In-memory for quick testing
- Production: Redis + PostgreSQL for scale
- Simple: SQLite for single-instance persistence
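If you drive Dexto from the SDK instead of YAML, the same storage section can presumably be passed in the `DextoAgent` config. The sketch below shows the "Simple" SQLite profile under two assumptions: that the TypeScript config mirrors the YAML `storage` schema, and that the sqlite backend takes a `path` option; verify both against the DextoAgent API documentation.

```typescript
import { DextoAgent } from '@dexto/core';

// Sketch of the "Simple" profile: SQLite for single-instance persistence.
// Assumes the TS config mirrors the YAML storage schema and that the
// sqlite backend accepts a `path` option (both unverified assumptions).
const agent = new DextoAgent({
  llm: { provider: 'openai', model: 'gpt-4.1-mini', apiKey: process.env.OPENAI_API_KEY },
  storage: {
    cache: { type: 'in-memory' },
    database: { type: 'sqlite', path: './dexto.db' }
  }
});
await agent.start();
```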
See the DextoAgent API Documentation for complete method references.
Full CLI reference (`dexto --help`):
Usage: dexto [options] [command] [prompt...]
Dexto CLI allows you to talk to Dexto, build custom AI Agents, build complex AI applications like Cursor, and more.
Run dexto interactive CLI with `dexto` or run a one-shot prompt with `dexto -p "<prompt>"` or `dexto "<prompt>"`
Start with a new session using `dexto --new-session [sessionId]`
Run dexto web UI with `dexto --mode web`
Run dexto as a server (REST APIs + WebSockets) with `dexto --mode server`
Run dexto as a discord bot with `dexto --mode discord`
Run dexto as a telegram bot with `dexto --mode telegram`
Run dexto agent as an MCP server with `dexto --mode mcp`
Run dexto as an MCP server aggregator with `dexto mcp --group-servers`
Check subcommands for more features. Check https://github.com/truffle-ai/dexto for documentation on how to customize dexto and other examples
Arguments:
prompt Natural-language prompt to run once. If not passed, dexto will start as an interactive CLI
Options:
-v, --version output the current version
-a, --agent <name|path> Agent name or path to agent config file
-p, --prompt <text> One-shot prompt text. Alternatively provide a single quoted string as positional argument.
-s, --strict Require all server connections to succeed
--no-verbose Disable verbose output
--no-interactive Disable interactive prompts and API key setup
-m, --model <model> Specify the LLM model to use
-r, --router <router> Specify the LLM router to use (vercel or in-built)
--new-session [sessionId] Start with a new session (optionally specify session ID)
--mode <mode> The application in which dexto should talk to you - cli | web | server | discord | telegram | mcp (default: "cli")
--web-port <port> optional port for the web UI (default: "3000")
--no-auto-install Disable automatic installation of missing agents from registry
-h, --help display help for command
Commands:
create-app Scaffold a new Dexto TypeScript app
init-app Initialize an existing TypeScript app with Dexto
setup [options] Configure global Dexto preferences
install [options] [agents...] Install agents from the registry
uninstall [options] [agents...] Uninstall agents from the local installation
list-agents [options] List available and installed agents
which <agent> Show the path to an agent
mcp [options] Start Dexto as an MCP server. Use --group-servers to aggregate and re-expose tools from configured MCP servers.
- Quick Start – Get up and running in minutes.
- Configuration Guide – Configure agents, LLMs, and tools.
- Building with Dexto – Developer guides and patterns.
- API Reference – REST APIs, WebSocket, and SDKs.
We collect anonymous usage data (no personal/sensitive info) to help improve Dexto. This includes:
- Commands used
- Command execution time
- Error occurrences
- System information (OS, Node version)
- LLM Models used
To opt out, set the environment variable `DEXTO_ANALYTICS_DISABLED=1`.
We welcome contributions! Refer to our Contributing Guide for more details.
Dexto is built by the team at Truffle AI.
Join our Discord to share projects, ask questions, or just say hi!
If you enjoy Dexto, please give us a ⭐ on GitHub—it helps a lot!
Thanks to all these amazing people for contributing to Dexto!
Elastic License 2.0. See LICENSE for full terms.
Similar Open Source Tools


LEANN
LEANN is an innovative vector database that democratizes personal AI, transforming your laptop into a powerful RAG system that can index and search through millions of documents using 97% less storage than traditional solutions without accuracy loss. It achieves this through graph-based selective recomputation and high-degree preserving pruning, computing embeddings on-demand instead of storing them all. LEANN allows semantic search of file system, emails, browser history, chat history, codebase, or external knowledge bases on your laptop with zero cloud costs and complete privacy. It is a drop-in semantic search MCP service fully compatible with Claude Code, enabling intelligent retrieval without changing your workflow.

chat
deco.chat is an open-source foundation for building AI-native software, providing developers, engineers, and AI enthusiasts with robust tools to rapidly prototype, develop, and deploy AI-powered applications. It empowers Vibecoders to prototype ideas and Agentic engineers to deploy scalable, secure, and sustainable production systems. The core capabilities include an open-source runtime for composing tools and workflows, MCP Mesh for secure integration of models and APIs, a unified TypeScript stack for backend logic and custom frontends, global modular infrastructure built on Cloudflare, and a visual workspace for building agents and orchestrating everything in code.

mcp-ts-template
The MCP TypeScript Server Template is a production-grade framework for building powerful and scalable Model Context Protocol servers with TypeScript. It features built-in observability, declarative tooling, robust error handling, and a modular, DI-driven architecture. The template is designed to be AI-agent-friendly, providing detailed rules and guidance for developers to adhere to best practices. It enforces architectural principles like 'Logic Throws, Handler Catches' pattern, full-stack observability, declarative components, and dependency injection for decoupling. The project structure includes directories for configuration, container setup, server resources, services, storage, utilities, tests, and more. Configuration is done via environment variables, and key scripts are available for development, testing, and publishing to the MCP Registry.

LLMTSCS
LLMLight is a novel framework that employs Large Language Models (LLMs) as decision-making agents for Traffic Signal Control (TSC). The framework leverages the advanced generalization capabilities of LLMs to engage in a reasoning and decision-making process akin to human intuition for effective traffic control. LLMLight has been demonstrated to be remarkably effective, generalizable, and interpretable against various transportation-based and RL-based baselines on nine real-world and synthetic datasets.

backend.ai
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs. It allocates and isolates the underlying computing resources for multi-tenant computation sessions on-demand or in batches with customizable job schedulers with its own orchestrator. All its functions are exposed as REST/GraphQL/WebSocket APIs.

orbit
ORBIT (Open Retrieval-Based Inference Toolkit) is a middleware platform that provides a unified API for AI inference. It acts as a central gateway, allowing you to connect various local and remote AI models with your private data sources like SQL databases, vector stores, and local files. ORBIT uses a flexible adapter architecture to connect your data to AI models, creating specialized 'agents' for specific tasks. It supports scenarios like Knowledge Base Q&A and Chat with Your SQL Database, enabling users to interact with AI models seamlessly. The tool offers a RESTful API for programmatic access and includes features like authentication, API key management, system prompts, health monitoring, and file management. ORBIT is designed to streamline AI inference tasks and facilitate interactions between users and AI models.

mcpd
mcpd is a tool developed by Mozilla AI to declaratively manage Model Context Protocol (MCP) servers, enabling consistent interface for defining and running tools across different environments. It bridges the gap between local development and enterprise deployment by providing secure secrets management, declarative configuration, and seamless environment promotion. mcpd simplifies the developer experience by offering zero-config tool setup, language-agnostic tooling, version-controlled configuration files, enterprise-ready secrets management, and smooth transition from local to production environments.

dive
Dive is an AI toolkit for Go that enables the creation of specialized teams of AI agents and seamless integration with leading LLMs. It offers a CLI and APIs for easy integration, with features like creating specialized agents, hierarchical agent systems, declarative configuration, multiple LLM support, extended reasoning, model context protocol, advanced model settings, tools for agent capabilities, tool annotations, streaming, CLI functionalities, thread management, confirmation system, deep research, and semantic diff. Dive also provides semantic diff analysis, unified interface for LLM providers, tool system with annotations, custom tool creation, and support for various verified models. The toolkit is designed for developers to build AI-powered applications with rich agent capabilities and tool integrations.

code-graph-rag
Graph-Code is an accurate Retrieval-Augmented Generation (RAG) system that analyzes multi-language codebases using Tree-sitter. It builds comprehensive knowledge graphs, enabling natural language querying of codebase structure and relationships, along with editing capabilities. The system supports various languages, uses Tree-sitter for parsing, Memgraph for storage, and AI models for natural language to Cypher translation. It offers features like code snippet retrieval, advanced file editing, shell command execution, interactive code optimization, reference-guided optimization, dependency analysis, and more. The architecture consists of a multi-language parser and an interactive CLI for querying the knowledge graph.

auto-engineer
Auto Engineer is a tool designed to automate the Software Development Life Cycle (SDLC) by building production-grade applications with a combination of human and AI agents. It offers a plugin-based architecture that allows users to install only the necessary functionality for their projects. The tool guides users through key stages including Flow Modeling, IA Generation, Deterministic Scaffolding, AI Coding & Testing Loop, and Comprehensive Quality Checks. Auto Engineer follows a command/event-driven architecture and provides a modular plugin system for specific functionalities. It supports TypeScript with strict typing throughout and includes a built-in message bus server with a web dashboard for monitoring commands and events.

evalchemy
Evalchemy is a unified and easy-to-use toolkit for evaluating language models, focusing on post-trained models. It integrates multiple existing benchmarks such as RepoBench, AlpacaEval, and ZeroEval. Key features include unified installation, parallel evaluation, simplified usage, and results management. Users can run various benchmarks with a consistent command-line interface and track results locally or integrate with a database for systematic tracking and leaderboard submission.

docs-mcp-server
The docs-mcp-server repository contains the server-side code for the documentation management system. It provides functionalities for managing, storing, and retrieving documentation files. Users can upload, update, and delete documents through the server. The server also supports user authentication and authorization to ensure secure access to the documentation system. Additionally, the server includes APIs for integrating with other systems and tools, making it a versatile solution for managing documentation in various projects and organizations.

dotclaude
A sophisticated multi-agent configuration system for Claude Code that provides specialized agents and command templates to accelerate code review, refactoring, security audits, tech-lead guidance, and UX evaluations. It offers essential commands, directory structure details, an agent system overview, command templates, usage patterns, a collaboration philosophy, sync management, advanced usage guidelines, and an FAQ. The tool aims to streamline development workflows, enhance code quality, and facilitate collaboration between developers and AI agents.

gpt-computer-assistant
GPT Computer Assistant (GCA) is an open-source framework designed to build vertical AI agents that can automate tasks on Windows, macOS, and Ubuntu systems. It leverages the Model Context Protocol (MCP) and its own modules to mimic human-like actions and achieve advanced capabilities. With GCA, users can empower themselves to accomplish more in less time by automating tasks like updating dependencies, analyzing databases, and configuring cloud security settings.

search_with_ai
Build your own conversation-based search with AI: a simple implementation with Node.js & Vue3. Features include built-in support for LLMs (OpenAI, Google, Lepton, Ollama (free)); built-in support for search engines (Bing, Sogou, Google, SearXNG (free)); a customizable, polished UI; dark mode; mobile display; local LLMs via Ollama; i18n; and continued Q&A with context.
For similar tasks

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.

danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and for any scale - on a laptop, on-premise, or to cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"

semantic-kernel
Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code. What makes Semantic Kernel _special_ , however, is its ability to _automatically_ orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.

floneum
Floneum is a graph editor that makes it easy to develop your own AI workflows. It uses large language models (LLMs) to run AI models locally, without any external dependencies or even a GPU. This makes it easy to use LLMs with your own data, without worrying about privacy. Floneum also has a plugin system that allows you to improve the performance of LLMs and make them work better for your specific use case. Plugins can be used in any language that supports web assembly, and they can control the output of LLMs with a process similar to JSONformer or guidance.

mindsdb
MindsDB is a platform for customizing AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data. MindsDB "enhances" SQL syntax with AI capabilities to make it accessible for developers worldwide. With MindsDB’s nearly 200 integrations, any developer can create AI customized for their purpose, faster and more securely. Their AI systems will constantly improve themselves — using companies’ own data, in real-time.

aiscript
AiScript is a lightweight scripting language that runs on JavaScript. It supports arrays, objects, and functions as first-class citizens, and is easy to write without the need for semicolons or commas. AiScript runs in a secure sandbox environment, preventing infinite loops from freezing the host. It also allows for easy provision of variables and functions from the host.

activepieces
Activepieces is an open source replacement for Zapier, designed to be extensible through a type-safe pieces framework written in Typescript. It features a user-friendly Workflow Builder with support for Branches, Loops, and Drag and Drop. Activepieces integrates with Google Sheets, OpenAI, Discord, and RSS, along with 80+ other integrations. The list of supported integrations continues to grow rapidly, thanks to valuable contributions from the community. Activepieces is an open ecosystem; all piece source code is available in the repository, and they are versioned and published directly to npmjs.com upon contributions. If you cannot find a specific piece on the pieces roadmap, please submit a request by visiting the following link: Request Piece Alternatively, if you are a developer, you can quickly build your own piece using our TypeScript framework. For guidance, please refer to the following guide: Contributor's Guide

superagent-js
Superagent is an open source framework that enables any developer to integrate production ready AI Assistants into any application in a matter of minutes.
For similar jobs

promptflow
**Prompt flow** is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aims to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, established benchmark, evaluation, and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm python package to help you assess the performance of your LLM in trustworthiness more quickly. For more details about TrustLLM, please refer to project website.

AI-YinMei
AI-YinMei is a development toolkit for AI virtual anchors (VTubers), built for NVIDIA GPUs. It supports knowledge-base chat through fastgpt, with a complete LLM stack of fastgpt + one-api + Xinference. It integrates with Bilibili live streams to reply to danmaku (bullet comments) and greet viewers entering the room, and offers speech synthesis via Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS. Expressions are controlled through VTube Studio, and stable-diffusion-webui paintings can be output to an OBS live room with NSFW image filtering. Search is available via DuckDuckGo for web and image search (requires a network proxy) and Baidu image search (no proxy needed). Additional features include an AI reply chat box and playlist as HTML plug-ins, AI singing via Auto-Convert-Music, dancing, expression video playback, head-patting and gift-reaction actions, automatic dancing when singing starts, idle swaying motions during chat and song loops, multi-scene and background-music switching with automatic day/night scene changes, and letting the AI decide on its own when to sing or paint.