AGiXT
AGiXT is a dynamic AI Agent Automation Platform that seamlessly orchestrates instruction management and complex task execution across diverse AI providers. Combining adaptive memory, smart features, and a versatile plugin system, AGiXT delivers efficient and comprehensive AI solutions.
Stars: 2627
README:
AGiXT is a dynamic Artificial Intelligence Automation Platform engineered to orchestrate efficient AI instruction management and task execution across a multitude of providers. Our solution infuses adaptive memory handling with a broad spectrum of commands to enhance AI's understanding and responsiveness, leading to improved task completion. The platform's smart features, like Smart Instruct and Smart Chat, seamlessly integrate web search, planning strategies, and conversation continuity, transforming the interaction between users and AI. By leveraging a powerful plugin system that includes web browsing and command execution, AGiXT stands as a versatile bridge between AI models and users. With an expanding roster of AI providers, code evaluation capabilities, comprehensive chain management, and platform interoperability, AGiXT is consistently evolving to drive a multitude of applications, affirming its place at the forefront of AI technology.
Embracing the spirit of extremity in every facet of life, we introduce AGiXT. This advanced AI Automation Platform is our bold step towards the realization of Artificial General Intelligence (AGI). Seamlessly orchestrating instruction management and executing complex tasks across diverse AI providers, AGiXT combines adaptive memory, smart features, and a versatile plugin system to maximize AI potential. With our unwavering commitment to innovation, we're dedicated to pushing the boundaries of AI and bringing AGI closer to reality.
- AGiXT
Please note that using some AI providers (such as OpenAI's GPT-4 API) can be expensive! Monitor your usage carefully to avoid incurring unexpected costs. We're NOT responsible for your usage under any circumstances.
- Context and Token Management: Adaptive handling of long-term and short-term memory for an optimized AI performance, allowing the software to process information more efficiently and accurately.
- Smart Instruct: An advanced feature enabling AI to comprehend, plan, and execute tasks effectively. The system leverages web search, planning strategies, and executes instructions while ensuring output accuracy.
- Interactive Chat & Smart Chat: User-friendly chat interface for dynamic conversational tasks. The Smart Chat feature integrates AI with web research to deliver accurate and contextually relevant responses.
- Task Execution & Smart Task Management: Efficient management and execution of complex tasks broken down into sub-tasks. The Smart Task feature employs AI-driven agents to dynamically handle tasks, optimizing efficiency and avoiding redundancy.
- Chain Management: Sophisticated handling of chains or a series of linked commands, enabling the automation of complex workflows and processes.
- Web Browsing & Command Execution: Advanced capabilities to browse the web and execute commands for a more interactive AI experience, opening a wide range of possibilities for AI assistance.
- Multi-Provider Compatibility: Seamless integration with leading AI providers such as OpenAI (as well as any that use OpenAI-style endpoints), ezLocalai, Hugging Face, GPT4Free, Google Gemini, and more.
- Versatile Plugin System & Code Evaluation: Extensible command support for various AI models along with robust support for code evaluation, providing assistance in programming tasks.
- Docker Deployment: Simplified setup and maintenance through Docker deployment.
- Audio-to-Text & Text-to-Speech Options: Integration with Hugging Face for seamless audio-to-text transcription, and multiple TTS choices, featuring Brian TTS, Mac OS TTS, and ElevenLabs.
- Platform Interoperability & AI Agent Management: Streamlined creation, renaming, deletion, and updating of AI agent settings along with easy interaction with popular platforms like Twitter, GitHub, Google, DALL-E, and more.
- Custom Prompts & Command Control: Granular control over agent abilities through enabling or disabling specific commands, and easy creation, editing, and deletion of custom prompts to standardize user inputs.
- RESTful API: FastAPI-powered RESTful API for seamless integration with external applications and services (see the example request after this list).
- Expanding AI Support: Continually updated to include new AI providers and services, ensuring the software stays at the forefront of AI technology.
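As an illustration of the RESTful API above, here is a minimal sketch of a request against a local AGiXT instance. It assumes the server is running on the default port 7437, that the API key is sent as a Bearer token, and that an OpenAI-style chat completions route is exposed with the agent name as the model; treat the route and payload shape as assumptions and confirm them against the API docs at http://localhost:7437.

```python
import requests

AGIXT_SERVER = "http://localhost:7437"
API_KEY = "your-agixt-api-key"  # generated by start.py if you did not set one

# Assumed OpenAI-style route; verify against the live API docs.
response = requests.post(
    f"{AGIXT_SERVER}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "AGiXT",  # the agent name stands in for the model
        "messages": [{"role": "user", "content": "Hello, agent!"}],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```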
AGiXT's features cover a wide range of services and tasks. Refer to the Processes and Frameworks documentation for more details about the services and framework.
Install the following prerequisites for your operating system:
- Git
- Docker
- Docker Compose
- Python 3.10.x
- NVIDIA Container Toolkit (if using local models on GPU)
If you're using Linux, you may need to prefix the `python` command with `sudo`, depending on your system configuration.
git clone https://github.com/Josh-XT/AGiXT
cd AGiXT
python start.py
The script will check for Docker and Docker Compose installation:
- On Linux, it will attempt to install them if missing (requires sudo privileges).
- On macOS and Windows, it will provide instructions to download and install Docker Desktop.
Run the script with Python:
python start.py
To run AGiXT with ezLocalai, use the `--with-ezlocalai` flag:
python start.py --with-ezlocalai true
You can also use command-line arguments to set specific environment variables to run in different ways. For example, to use the development branch and enable auto-updates, run:
python start.py --agixt-branch dev --agixt-auto-update true --with-ezlocalai true
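The flags are a convenience for setting environment variables (see the list below). Assuming the conventional mapping of flag names to variable names, for example `--agixt-auto-update` to `AGIXT_AUTO_UPDATE` (mirroring the documented `AGIXT_AGENT` and `AGIXT_API_KEY` variables), exporting the variable before launching should be equivalent:
AGIXT_AUTO_UPDATE=true python start.py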
- Access the AGiXT Management interface at http://localhost:8501 to create and manage your agents, prompts, chains, and configurations.
- Access the AGiXT Interactive interface at http://localhost:3437 to interact with your configured agents.
- Access the AGiXT API documentation at http://localhost:7437
The script supports setting any of the environment variables via command-line arguments. Here's a detailed list of available options:
- `--agixt-api-key`: Set the AGiXT API key (automatically generated if not provided)
- `--agixt-uri`: Set the AGiXT URI (default: `http://localhost:7437`)
- `--agixt-agent`: Set the default AGiXT agent (default: `AGiXT`)
- `--agixt-branch`: Choose between the `stable` and `dev` branches
- `--agixt-file-upload-enabled`: Enable or disable file uploads (default: `true`)
- `--agixt-voice-input-enabled`: Enable or disable voice input (default: `true`)
- `--agixt-footer-message`: Set the footer message (default: `Powered by AGiXT`)
- `--agixt-require-api-key`: Require an API key for access (default: `false`)
- `--agixt-rlhf`: Enable or disable reinforcement learning from human feedback (default: `true`)
- `--agixt-show-selection`: Set which selectors to show in the UI (default: `conversation,agent`)
- `--agixt-show-agent-bar`: Show or hide the agent bar in the UI (default: `true`)
- `--agixt-show-app-bar`: Show or hide the app bar in the UI (default: `true`)
- `--agixt-conversation-mode`: Set the conversation mode (default: `select`)
- `--allowed-domains`: Set allowed domains for API access (default: `*`)
- `--app-description`: Set the application description
- `--app-name`: Set the application name (default: `AGiXT Chat`)
- `--app-uri`: Set the application URI (default: `http://localhost:3437`)
- `--streamlit-app-uri`: Set the Streamlit app URI (default: `http://localhost:8501`)
- `--auth-web`: Set the authentication web URI (default: `http://localhost:3437/user`)
- `--auth-provider`: Set the authentication provider (options: `none`, `magicalauth`)
- `--create-agent-on-register`: Create an agent named from your `AGIXT_AGENT` environment variable if it differs from `AGiXT`, using settings from `default_agent.json` if defined (default: `true`)
- `--create-agixt-agent`: Create an agent called `AGiXT` and train it on the AGiXT documentation upon user registration (default: `true`)
- `--disabled-providers`: Set disabled providers (comma-separated list)
- `--disabled-extensions`: Set disabled extensions (comma-separated list)
- `--working-directory`: Set the working directory (default: `./WORKSPACE`)
- `--github-client-id`: Set the GitHub client ID for authentication
- `--github-client-secret`: Set the GitHub client secret for authentication
- `--google-client-id`: Set the Google client ID for authentication
- `--google-client-secret`: Set the Google client secret for authentication
- `--microsoft-client-id`: Set the Microsoft client ID for authentication
- `--microsoft-client-secret`: Set the Microsoft client secret for authentication
- `--tz`: Set the timezone (default: system timezone)
- `--interactive-mode`: Set the interactive mode (default: `chat`)
- `--theme-name`: Set the UI theme (options: `default`, `christmas`, `conspiracy`, `doom`, `easter`, `halloween`, `valentines`)
- `--allow-email-sign-in`: Allow email sign-in (default: `true`)
- `--database-type`: Set the database type (options: `sqlite`, `postgres`)
- `--database-name`: Set the database name (default: `models/agixt`)
- `--log-level`: Set the logging level (default: `INFO`)
- `--log-format`: Set the log format (default: `%(asctime)s | %(levelname)s | %(message)s`)
- `--uvicorn-workers`: Set the number of Uvicorn workers (default: `10`)
- `--agixt-auto-update`: Enable or disable auto-updates (default: `true`)
- `--with-streamlit`: Enable or disable the Streamlit UI (default: `true`)
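For example, to pin the stable branch, switch to Postgres, and turn up logging while reducing the worker count, combine several of the options above:
python start.py --agixt-branch stable --database-type postgres --log-level DEBUG --uvicorn-workers 4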
Options specific to ezLocalai:
- `--with-ezlocalai`: Start AGiXT with ezLocalai integration
- `--ezlocalai-uri`: Set the ezLocalai URI (default: `http://{local_ip}:8091`)
- `--default-model`: Set the default language model for ezLocalai (default: `QuantFactory/dolphin-2.9.2-qwen2-7b-GGUF`)
- `--vision-model`: Set the vision model for ezLocalai (default: `deepseek-ai/deepseek-vl-1.3b-chat`)
- `--llm-max-tokens`: Set the maximum number of tokens for language models (default: `32768`)
- `--whisper-model`: Set the Whisper model for speech recognition (default: `base.en`)
- `--gpu-layers`: Set the number of GPU layers to use; determined automatically based on available VRAM, but can be overridden (default: `-1` for all)
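For example, to start AGiXT with ezLocalai, a smaller context window, and explicit GPU offloading, using only the options listed above:
python start.py --with-ezlocalai true --llm-max-tokens 8192 --gpu-layers 20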
For a full list of options with their current values, run:
python start.py --help
After setting up the environment variables and ensuring Docker and Docker Compose are installed, the script will:
- Stop any running AGiXT Docker containers
- Pull the latest Docker images (if auto-update is enabled)
- Start the AGiXT services using Docker Compose
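If you prefer to drive Docker yourself, the manual equivalent is roughly the following, assuming the Compose file that ships at the repository root:
docker compose pull
docker compose up -d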
- If the script fails to run on Linux, run it with `sudo`.
- If you encounter any issues with Docker installation:
  - On Linux, ensure you have sudo privileges and that your system is up to date.
  - On macOS and Windows, follow the instructions to install Docker Desktop manually if the script cannot install it automatically.
- Check the Docker logs for any error messages if the containers fail to start.
- Verify that all required ports are available and not in use by other services.
- If the `python` command is not recognized, try using `python3` instead.
- The `AGIXT_API_KEY` is generated automatically if not provided. Keep this key secure and do not share it publicly.
- When using authentication providers (GitHub, Google, Microsoft), keep the client IDs and secrets confidential.
- Be cautious when enabling file uploads and voice input, as these features may introduce potential security risks if not properly managed.
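If you expose AGiXT beyond localhost, a sensible baseline using only the flags documented above is to require the API key and narrow the allowed domains; the exact value format for `--allowed-domains` is an assumption to verify for your deployment:
python start.py --agixt-require-api-key true --allowed-domains yourdomain.com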
Each AGiXT agent has its own settings for interfacing with AI providers, along with other configuration options. These settings can be set and modified through the web interface.
Need more information? Check out the documentation for more details to get a better understanding of the concepts and features of AGiXT.
Check out the other AGiXT repositories at https://github.com/orgs/AGiXT/repositories - these include the AGiXT Streamlit Web UI, AGiXT Python SDK, AGiXT TypeScript SDK, AGiXT Dart SDK, AGiXT C# SDK, and more!
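The Python SDK mentioned above can be sketched as follows; the package, class, and method names here (`agixtsdk`, `AGiXTSDK`, `prompt_agent`) are assumptions, so confirm them against the AGiXT Python SDK repository before use.

```python
# Sketch only: package, class, and method names are assumptions --
# consult the AGiXT Python SDK repository for the authoritative interface.
from agixtsdk import AGiXTSDK  # assumed: pip install agixtsdk

client = AGiXTSDK(base_uri="http://localhost:7437", api_key="your-agixt-api-key")

# Hypothetical call: run a prompt against the default agent and print the reply.
answer = client.prompt_agent(
    agent_name="AGiXT",
    prompt_name="Chat",
    prompt_args={"user_input": "Summarize what AGiXT chains do."},
)
print(answer)
```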
The following diagram outlines the AGiXT request-processing workflow:

```mermaid
graph TD
Start[Start] --> IA[Initialize Agent]
IA --> IM[Initialize Memories]
IM --> A[User Input]
A --> B[Multi-modal Input Handler]
B --> B1{Input Type?}
B1 -->|Text| C[Process Text Input]
B1 -->|Voice| STT[Speech-to-Text Conversion]
B1 -->|Image| VIS[Vision Processing]
B1 -->|File Upload| F[Handle file uploads]
STT --> C
VIS --> C
F --> C
C --> S[Log user input]
C --> T[Log agent activities]
C --> E[Override Agent settings if applicable]
E --> G[Handle URLs and Websearch if applicable]
G --> H[Data Analysis if applicable]
H --> K{Agent Mode?}
K -->|Command| EC[Execute Command]
K -->|Chain| EX[Execute Chain]
K -->|Prompt| RI[Run Inference]
EC --> O[Prepare response]
EX --> O
RI --> O
O --> Q[Format response]
Q --> R[Text Response]
R --> P[Calculate tokens]
P --> U[Log final response]
Q --> TTS[Text-to-Speech Conversion]
TTS --> VAudio[Voice Audio Response]
Q --> IMG_GEN[Image Generation]
IMG_GEN --> GImg[Generated Image]
subgraph HF[Handle File Uploads]
F1[Download files to workspace]
F2[Learn from files]
F3[Update Memories]
F1 --> F2 --> F3
end
subgraph HU[Handle URLs in User Input]
G1[Learn from websites]
G2[Handle GitHub Repositories if applicable]
G3[Update Memories]
G1 --> G2 --> G3
end
subgraph AC[Data Analysis]
H1[Identify CSV content in agent workspace or user input]
H2[Determine files or content to analyze]
H3[Generate and verify Python code for analysis]
H4[Execute Python code]
H5{Execution successful?}
H6[Update memories with results from data analysis]
H7[Attempt code fix]
H1 --> H2 --> H3 --> H4 --> H5
H5 -->|Yes| H6
H5 -->|No| H7
H7 --> H4
end
subgraph IA[Agent Initialization]
I1[Load agent config]
I2[Initialize providers]
I3[Load available commands]
I4[Initialize Conversation]
I5[Initialize agent workspace]
I1 --> I2 --> I3 --> I4 --> I5
end
subgraph IM[Initialize Memories]
J1[Initialize vector database]
J2[Initialize embedding provider]
J3[Initialize relevant memory collections]
J1 --> J2 --> J3
end
subgraph EC[Execute Command]
L1[Inject user settings]
L2[Inject agent extensions settings]
L3[Run command]
L1 --> L2 --> L3
end
subgraph EX[Execute Chain]
M1[Load chain data]
M2[Inject user settings]
M3[Inject agent extension settings]
M4[Execute chain steps]
M5[Handle dependencies]
M6[Update chain responses]
M1 --> M2 --> M3 --> M4 --> M5 --> M6
end
subgraph RI[Run Inference]
N1[Get prompt template]
N2[Format prompt]
N3[Inject relevant memories]
N4[Inject conversation history]
N5[Inject recent activities]
N6[Call inference method to LLM provider]
N1 --> N2 --> N3 --> N4 --> N5 --> N6
end
subgraph WS[Websearch]
W1[Initiate web search]
W2[Perform search query]
W3[Scrape websites]
W4[Recursive browsing]
W5[Summarize content]
W6[Update agent memories]
W1 --> W2 --> W3 --> W4 --> W5 --> W6
end
subgraph PR[Providers]
P1[LLM Provider]
P2[TTS Provider]
P3[STT Provider]
P4[Vision Provider]
P5[Image Generation Provider]
P6[Embedding Provider]
end
subgraph CL[Conversation Logging]
S[Log user input]
T[Log agent activities]
end
F --> HF
G --> HU
G --> WS
H --> AC
TTS --> P2
STT --> P3
VIS --> P4
IMG_GEN --> P5
J2 --> P6
N6 --> P1
F --> T
G --> T
H --> T
L3 --> T
M4 --> T
N6 --> T
style U fill:#0000FF,stroke:#333,stroke-width:4px
```
Alternative AI tools for AGiXT
Similar Open Source Tools
BentoML
BentoML is an open-source model serving library for building performant and scalable AI applications with Python. It comes with everything you need for serving optimization, model packaging, and production deployment.
react-native-fast-tflite
A high-performance TensorFlow Lite library for React Native that utilizes JSI for power, zero-copy ArrayBuffers for efficiency, and low-level C/C++ TensorFlow Lite core API for direct memory access. It supports swapping out TensorFlow Models at runtime and GPU-accelerated delegates like CoreML/Metal/OpenGL. Easy VisionCamera integration allows for seamless usage. Users can load TensorFlow Lite models, interpret input and output data, and utilize GPU Delegates for faster computation. The library is suitable for real-time object detection, image classification, and other machine learning tasks in React Native applications.
ragflow
RAGFlow is an open-source Retrieval-Augmented Generation (RAG) engine that combines deep document understanding with Large Language Models (LLMs) to provide accurate question-answering capabilities. It offers a streamlined RAG workflow for businesses of all sizes, enabling them to extract knowledge from unstructured data in various formats, including Word documents, slides, Excel files, images, and more. RAGFlow's key features include deep document understanding, template-based chunking, grounded citations with reduced hallucinations, compatibility with heterogeneous data sources, and an automated and effortless RAG workflow. It supports multiple recall paired with fused re-ranking, configurable LLMs and embedding models, and intuitive APIs for seamless integration with business applications.
chatllm.cpp
ChatLLM.cpp is a pure C++ implementation tool for real-time chatting with RAG on your computer. It supports inference of various models ranging from less than 1B to more than 300B. The tool provides accelerated memory-efficient CPU inference with quantization, optimized KV cache, and parallel computing. It allows streaming generation with a typewriter effect and continuous chatting with virtually unlimited content length. ChatLLM.cpp also offers features like Retrieval Augmented Generation (RAG), LoRA, Python/JavaScript/C bindings, web demo, and more possibilities. Users can clone the repository, quantize models, build the project using make or CMake, and run quantized models for interactive chatting.
bigcodebench
BigCodeBench is an easy-to-use benchmark for code generation with practical and challenging programming tasks. It aims to evaluate the true programming capabilities of large language models (LLMs) in a more realistic setting. The benchmark is designed for HumanEval-like function-level code generation tasks, but with much more complex instructions and diverse function calls. BigCodeBench focuses on the evaluation of LLM4Code with diverse function calls and complex instructions, providing precise evaluation & ranking and pre-generated samples to accelerate code intelligence research. It inherits the design of the EvalPlus framework but differs in terms of execution environment and test evaluation.
text-embeddings-inference
Text Embeddings Inference (TEI) is a toolkit for deploying and serving open source text embeddings and sequence classification models. TEI enables high-performance extraction for popular models like FlagEmbedding, Ember, GTE, and E5. It implements features such as no model graph compilation step, Metal support for local execution on Macs, small docker images with fast boot times, token-based dynamic batching, optimized transformers code for inference using Flash Attention, Candle, and cuBLASLt, Safetensors weight loading, and production-ready features like distributed tracing with Open Telemetry and Prometheus metrics.
elyra
Elyra is a set of AI-centric extensions to JupyterLab Notebooks that includes features like Visual Pipeline Editor, running notebooks/scripts as batch jobs, reusable code snippets, hybrid runtime support, script editors with execution capabilities, debugger, version control using Git, and more. It provides a comprehensive environment for data scientists and AI practitioners to develop, test, and deploy machine learning models and workflows efficiently.
evalscope
Eval-Scope is a framework designed to support the evaluation of large language models (LLMs) by providing pre-configured benchmark datasets, common evaluation metrics, model integration, automatic evaluation for objective questions, complex task evaluation using expert models, reports generation, visualization tools, and model inference performance evaluation. It is lightweight, easy to customize, supports new dataset integration, model hosting on ModelScope, deployment of locally hosted models, and rich evaluation metrics. Eval-Scope also supports various evaluation modes like single mode, pairwise-baseline mode, and pairwise (all) mode, making it suitable for assessing and improving LLMs.
jina
Jina is a tool that allows users to build multimodal AI services and pipelines using cloud-native technologies. It provides a Pythonic experience for serving ML models and transitioning from local deployment to advanced orchestration frameworks like Docker-Compose, Kubernetes, or Jina AI Cloud. Users can build and serve models for any data type and deep learning framework, design high-performance services with easy scaling, serve LLM models while streaming their output, integrate with Docker containers via Executor Hub, and host on CPU/GPU using Jina AI Cloud. Jina also offers advanced orchestration and scaling capabilities, a smooth transition to the cloud, and easy scalability and concurrency features for applications. Users can deploy to their own cloud or system with Kubernetes and Docker Compose integration, and even deploy to JCloud for autoscaling and monitoring.
litserve
LitServe is a high-throughput serving engine for deploying AI models at scale. It generates an API endpoint for a model, handles batching, streaming, autoscaling across CPU/GPUs, and more. Built for enterprise scale, it supports every framework like PyTorch, JAX, Tensorflow, and more. LitServe is designed to let users focus on model performance, not the serving boilerplate. It is like PyTorch Lightning for model serving but with broader framework support and scalability.
gpt-translate
Markdown Translation BOT is a GitHub action that translates markdown files into multiple languages using various AI models. It supports markdown, markdown-jsx, and json files only. The action can be executed by individuals with write permissions to the repository, preventing API abuse by non-trusted parties. Users can set up the action by providing their API key and configuring the workflow settings. The tool allows users to create comments with specific commands to trigger translations and automatically generate pull requests or add translated files to existing pull requests. It supports multiple file translations and can interpret any language supported by GPT-4 or GPT-3.5.
doc-comments-ai
doc-comments-ai is a tool designed to automatically generate code documentation using language models. It allows users to easily create documentation comment blocks for methods in various programming languages such as Python, Typescript, Javascript, Java, Rust, and more. The tool supports both OpenAI and local LLMs, ensuring data privacy and security. Users can generate documentation comments for methods in files, inline comments in method bodies, and choose from different models like GPT-3.5-Turbo, GPT-4, and Azure OpenAI. Additionally, the tool provides support for Treesitter integration and offers guidance on selecting the appropriate model for comprehensive documentation needs.
Vitron
Vitron is a unified pixel-level vision LLM designed for comprehensive understanding, generating, segmenting, and editing static images and dynamic videos. It addresses challenges in existing vision LLMs such as superficial instance-level understanding, lack of unified support for images and videos, and insufficient coverage across various vision tasks. The tool requires Python >= 3.8, Pytorch == 2.1.0, and CUDA Version >= 11.8 for installation. Users can deploy Gradio demo locally and fine-tune their models for specific tasks.
backend.ai
Backend.AI is a streamlined, container-based computing cluster platform that hosts popular computing/ML frameworks and diverse programming languages, with pluggable heterogeneous accelerator support including CUDA GPU, ROCm GPU, TPU, IPU and other NPUs. It allocates and isolates the underlying computing resources for multi-tenant computation sessions on-demand or in batches with customizable job schedulers with its own orchestrator. All its functions are exposed as REST/GraphQL/WebSocket APIs.
yomo
YoMo is an open-source LLM Function Calling Framework for building Geo-distributed AI applications. It is built atop QUIC Transport Protocol and Stateful Serverless architecture, making AI applications low-latency, reliable, secure, and easy. The framework focuses on providing low-latency, secure, stateful serverless functions that can be distributed geographically to bring AI inference closer to end users. It offers features such as low-latency communication, security with TLS v1.3, stateful serverless functions for faster GPU processing, geo-distributed architecture, and a faster-than-real-time codec called Y3. YoMo enables developers to create and deploy stateful serverless functions for AI inference in a distributed manner, ensuring quick responses to user queries from various locations worldwide.
For similar tasks
aiexe
aiexe is a cutting-edge command-line interface (CLI) and graphical user interface (GUI) tool that integrates powerful AI capabilities directly into your terminal or desktop. It is designed for developers, tech enthusiasts, and anyone interested in AI-powered automation. aiexe provides an easy-to-use yet robust platform for executing complex tasks with just a few commands. Users can harness the power of various AI models from OpenAI, Anthropic, Ollama, Gemini, and GROQ to boost productivity and enhance decision-making processes.
claude.vim
Claude.vim is a Vim plugin that integrates Claude, an AI pair programmer, into your Vim workflow. It allows you to chat with Claude about what to build or how to debug problems, and Claude offers opinions, proposes modifications, or even writes code. The plugin provides a chat/instruction-centric interface optimized for human collaboration, with killer features like access to chat history and vimdiff interface. It can refactor code, modify or extend selected pieces of code, execute complex tasks by reading documentation, cloning git repositories, and more. Note that it is early alpha software and expected to rapidly evolve.
mistreevous
Mistreevous is a library written in TypeScript for Node and browsers, used to declaratively define, build, and execute behaviour trees for creating complex AI. It allows defining trees with JSON or a minimal DSL, providing in-browser editor and visualizer. The tool offers methods for tree state, stepping, resetting, and getting node details, along with various composite, decorator, leaf nodes, callbacks, guards, and global functions/subtrees. Version history includes updates for node types, callbacks, global functions, and TypeScript conversion.
project_alice
Alice is an agentic workflow framework that integrates task execution and intelligent chat capabilities. It provides a flexible environment for creating, managing, and deploying AI agents for various purposes, leveraging a microservices architecture with MongoDB for data persistence. The framework consists of components like APIs, agents, tasks, and chats that interact to produce outputs through files, messages, task results, and URL references. Users can create, test, and deploy agentic solutions in a human-language framework, making it easy to engage with by both users and agents. The tool offers an open-source option, user management, flexible model deployment, and programmatic access to tasks and chats.
superagent-py
Superagent is an open-source framework that enables developers to integrate production-ready AI assistants into any application quickly and easily. It provides a Python SDK for interacting with the Superagent API, allowing developers to create, manage, and invoke AI agents. The SDK simplifies the process of building AI-powered applications, making it accessible to developers of all skill levels.
infra
E2B Infra is a cloud runtime for AI agents. It provides SDKs and CLI to customize and manage environments and run AI agents in the cloud. The infrastructure is deployed using Terraform and is currently only deployable on GCP. The main components of the infrastructure are the API server, daemon running inside instances (sandboxes), Nomad driver for managing instances (sandboxes), and Nomad driver for building environments (templates).
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.