ai-code-interface.el
Unified Emacs interface supporting Claude Code, OpenAI Codex, Gemini CLI, GitHub Copilot CLI, Opencode, and more
Stars: 139
AI Code Interface is an Emacs package designed for AI-assisted software development, providing a uniform interface for various AI backends. It offers context-aware AI coding actions and seamless integration with AI-driven agile development workflows. The package supports multiple AI coding CLIs such as Claude Code, Gemini CLI, OpenAI Codex, GitHub Copilot CLI, Opencode, Grok CLI, Cursor CLI, Kiro CLI, CodeBuddy Code CLI, and Aider CLI. It aims to streamline the use of different AI tools within Emacs while maintaining a consistent user experience.
README:
[[file:./icon.png]]
* AI Code Interface
[[https://melpa.org/#/ai-code][https://melpa.org/packages/ai-code-badge.svg]] [[https://stable.melpa.org/#/ai-code][https://stable.melpa.org/packages/ai-code-badge.svg]] [[https://github.com/tninja/ai-code.el/graphs/contributors][https://img.shields.io/github/contributors/tninja/ai-code.el.svg]]
An Emacs interface for AI-assisted software development. The purpose is to provide a uniform interface and experience for different AI backends, with context-aware AI coding actions, and integrating seamlessly with AI-driven agile development workflows.
Currently it supports these AI coding CLIs:
- [[https://github.com/anthropics/claude-code][Claude Code]]
- [[https://github.com/google-gemini/gemini-cli][Gemini CLI]]
- [[https://github.com/openai/codex][OpenAI Codex]]
- [[https://docs.github.com/en/copilot/how-tos/use-copilot-agents/use-copilot-cli][GitHub Copilot CLI]]
- [[https://opencode.ai/][Opencode]]
- [[https://grokcli.io/][Grok CLI]]
- [[https://docs.cursor.com/en/cli][Cursor CLI]]
- [[https://kiro.dev/cli/][Kiro CLI]]
- [[https://cnb.cool/codebuddy/codebuddy-code][CodeBuddy Code CLI]]
- [[https://aider.chat/][Aider CLI]]
I switch between different CLI-based AI tools in Emacs: Claude Code / OpenAI Codex / Gemini CLI / etc. If you also use different AI tools inside Emacs, but want to keep the same user interface and experience, this package is for you.
Lots of features and tools are ported from [[https://github.com/tninja/aider.el][aider.el]]. If you like the features in aider.el but wish to switch to a modern AI coding CLI, this package is also for you.
** Screenshot
[[./transient_menu.png]]
** Installation
Enable installation of packages from MELPA by adding an entry to =package-archives= after =(require 'package)= and before the call to =(package-initialize)= in your init.el or .emacs file:
#+begin_src emacs-lisp
(require 'package)
(add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t)
(package-initialize)
#+end_src
- Use =M-x package-refresh-contents= or =M-x package-list-packages= to ensure that Emacs has fetched the MELPA package list
- Use =M-x package-install= to install the =ai-code= package
- Import and configure =ai-code= in your init.el or .emacs file:
#+begin_src emacs-lisp
(use-package ai-code
  ;; :straight (:host github :repo "tninja/ai-code-interface.el") ;; if you use straight to install, the MELPA setup above is not needed
  :config
  ;; Use codex as backend; other options are 'claude-code, 'gemini, 'github-copilot-cli,
  ;; 'opencode, 'grok, 'cursor, 'kiro, 'codebuddy, 'aider, 'claude-code-ide, 'claude-code-el
  (ai-code-set-backend 'codex)
  ;; Enable global keybinding for the main menu
  (global-set-key (kbd "C-c a") #'ai-code-menu)
  ;; Optional: Use eat if you prefer; by default it is vterm. This is how to configure all
  ;; natively supported CLIs; for external backends such as claude-code-ide.el and
  ;; claude-code.el, please check their own configs.
  ;; (setq ai-code-backends-infra-terminal-backend 'eat)
  ;; Optional: Enable @ file completion in comments and AI sessions
  (ai-code-prompt-filepath-completion-mode 1)
  ;; Optional: Ask AI to run tests after code changes, for a tighter build-test loop
  (setq ai-code-auto-test-type 'test-after-change)
  ;; Optional: In AI session buffers, SPC in Evil normal state triggers the prompt-enter UI
  (with-eval-after-load 'evil
    (ai-code-backends-infra-evil-setup))
  ;; Optional: Turn on auto-revert, so that AI code changes automatically appear in buffers
  (global-auto-revert-mode 1)
  (setq auto-revert-interval 1) ;; set to 1 second for faster updates
  ;; (global-set-key (kbd "C-c a C") #'ai-code-toggle-filepath-completion)
  ;; Optional: Set up Magit integration for AI commands in Magit popups
  (with-eval-after-load 'magit
    (ai-code-magit-setup-transients)))
#+end_src
** Dependencies
*** Required Dependencies
- Emacs 28.1 or later
- org: Org-mode support
- magit: Git integration
- transient: For the menu system
- vterm (default) or eat: one of these must be installed to run the AI coding CLI backends
*** Optional Dependencies
- helm: For an enhanced auto-completion experience (ai-code-input.el).
- yasnippet: For snippet support in the prompt file. A library of snippets is included.
- gptel: For intelligent, AI-generated headlines in the prompt file.
- flycheck: To enable the =ai-code-flycheck-fix-errors-in-scope= command.
- projectile: For project root initialization.
- helm-gtags: For tags creation.
- python-pytest: For running Python tests in the TDD workflow.
- jest: For running JavaScript / TypeScript tests in the TDD workflow.
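None of these are required. As a minimal sketch, a few of the optional packages can be pulled in with =use-package= (package names as published on MELPA; install only what you use):
#+begin_src emacs-lisp
;; Optional integrations; each one only enhances the corresponding ai-code feature.
(use-package yasnippet            ;; snippet support in the prompt file
  :config (yas-global-mode 1))
(use-package flycheck)            ;; enables fixing Flycheck errors in scope
(use-package gptel)               ;; AI-generated prompt headlines
#+end_src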
** Key Features
- Transient-Driven Hub (=C-c a=): One keystroke opens a contextual transient menu that groups every capability (CLI control, code actions, agile workflows, utilities), so you never need to memorize scattered keybindings.
- AI CLI Session Management: Start (=a=), resume (=R=), or jump back into (=z=) the active AI CLI buffer; instantly swap backends (=s=), upgrade them (=u=), edit backend configs (=g=), and run prompts against the current file (=|=). It supports multiple sessions per project.
- Context-Aware Code Actions: The menu exposes dedicated entries for changing code (=c=), implementing TODOs (=i=), asking questions (=q=), explaining code (=x=), sending free-form commands (=SPC=), and refreshing AI context (=@=). Each command automatically captures the surrounding function, region, or clipboard contents (via =C-u=) to keep prompts precise.
- Agile Development Workflows: Use the refactoring navigator (=r=), the guided TDD cycle (=t=), and the pull/review diff helper (=v=) to keep AI-assisted work aligned with agile best practices. Prompt authoring is first-class through quick access to the prompt file (=p=), block sending (=b=), and AI-assisted shell/file execution (=!=).
- Productivity & Debugging Utilities: Initialize project navigation assets (=.=), investigate exceptions (=e=), auto-fix Flycheck issues in scope (=f=), copy or open file paths formatted for prompts (=k=, =o=), generate MCP inspector commands (=m=), capture session notes straight into Org (=n=), and toggle desktop notifications (=N=) to get alerted when AI responses are ready in background sessions.
- Seamless Prompt Management: Open the prompt file via =ai-code-open-prompt-file= (stored under =.ai.code.files/.ai.code.prompt.org= by default), send regions with =ai-code-prompt-send-block=, and reuse prompt snippets via =yasnippet= to keep conversations organized.
- Interactive Chat & Context Tools: Dedicated buffers hold long-running chats, automatically enriched with file paths, diffs, and history from Magit or Git commands for richer AI responses.
- AI-Assisted Bash Commands: From Dired, shell, eshell, or vterm, run =C-c a != and type natural-language commands prefixed with =:= (e.g., =:count lines of python code recursively=); the tool generates the shell command for review and executes it in a compile buffer.
*** Typical Workflow Examples
- Changing Code: Position the cursor on a function or select a region of code. Press =C-c a=, then =c= (=ai-code-code-change=). Describe the change you want to make in the prompt. The AI will receive the context of the function or region along with your instruction.
- Implementing a TODO: Write a comment in your code, like =;; TODO: Implement caching for this function=. Place your cursor on that line and press =C-c a=, then =i= (=ai-code-implement-todo=). The AI will generate the implementation based on the comment.
- Asking a Question: Place your cursor within a function, press =C-c a=, then =q= (=ai-code-ask-question=), type your question, and press Enter. The question, along with context, will be sent to the AI.
- Refactoring a Function: With the cursor in a function, press =C-c a=, then =r= (=ai-code-refactor-book-method=). Select a refactoring technique from the list, provide any required input (e.g., a new method name), and the prompt will be generated.
- Automatically Running Tests After a Change: When =ai-code-auto-test-type= is non-nil, the AI will automatically run tests after code changes and follow up on the results.
- Reviewing a Pull Request: Press =C-c a=, then =v= (=ai-code-pull-or-review-diff-file=). Choose to generate a diff between two branches. The diff will be created in a new buffer, and you'll be prompted to start a review.
- Multiple Sessions Support: Start more AI coding sessions with =C-c a a= after launching one. Select the active session with =C-c a z=. Prompts from the commands above will be sent to the selected session.
*** Context Engineering
Context engineering is the deliberate practice of selecting, structuring, and delivering the right information to an AI model so the output is specific, accurate, and actionable. For AI-assisted programming, the model cannot read your whole codebase by default, so the quality of the result depends heavily on the clarity and relevance of the provided context (file paths, functions, regions, related files, and repo-level notes). Good context engineering reduces ambiguity, prevents irrelevant suggestions, and keeps changes aligned with the current code.
This package makes context engineering easy by automatically assembling precise context blocks and letting you curate additional context on demand:
- Automatic file and window context: prompts can include the current file and other visible files (=ai-code--get-context-files-string=), so the AI sees related code without manual copying.
- Function or region scoping: most actions capture the current function or active region, keeping requests focused (e.g., =ai-code-code-change=, =ai-code-implement-todo=, =ai-code-ask-question=).
- Manual context curation: =C-c a @= (=ai-code-context-action=) stores file paths, function anchors, or region ranges in a repo-scoped list, which is appended to prompts via =ai-code--format-repo-context-info=.
- Optional clipboard context: prefix with =C-u= to append clipboard content to prompts for external snippets or logs.
- @-triggered filepath completion in comments and AI sessions: type =@= to open a completion list of recent and visible repo files, then select a path to insert.
- Prompt suffix guardrails: set =ai-code-prompt-suffix= to append persistent constraints to every prompt (when =ai-code-use-prompt-suffix= is non-nil). Example: =(setq ai-code-prompt-suffix "Only use English in code file, but Reply in Simplified Chinese language")=.
- Optional GPTel headline generation: set =ai-code-use-gptel-headline= to auto-generate prompt headings with GPTel.

Example (focused refactor with curated context):
- In a buffer, run =C-c a @= to add the current function or selected region to the stored repo context.
- Open another related file in a window so it is picked up by =ai-code--get-context-files-string=.
- Place the cursor in the target function and run =C-c a c= to request a change. The generated prompt will include the function/region scope, visible file list, and stored repo context entries, giving the AI exactly the surrounding information it needs.
MCP servers can provide critical context to the AI model. You can use =C-c a g= to open and add MCP config for the corresponding AI coding CLI. Example MCPs:
- [[https://github.com/github/github-mcp-server][Github MCP]]
- [[https://github.com/sooperset/mcp-atlassian][Atlassian MCP]]
- [[https://github.com/crystaldba/postgres-mcp][Postgresql MCP]] / Sqlite MCP
- [[https://github.com/docker/mcp-gateway][Docker]] / [[https://github.com/containers/kubernetes-mcp-server][Kubernetes]] MCP
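As an illustration, a typical MCP server entry (here the GitHub MCP server run via Docker, following that project's README) looks roughly like the sketch below; the exact file location and schema depend on which AI coding CLI you use, so open the config with =C-c a g= and adapt it:
#+begin_src json
{
  "mcpServers": {
    "github": {
      "command": "docker",
      "args": ["run", "-i", "--rm",
               "-e", "GITHUB_PERSONAL_ACCESS_TOKEN",
               "ghcr.io/github/github-mcp-server"]
    }
  }
}
#+end_src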
See also [[https://martinfowler.com/articles/exploring-gen-ai/context-engineering-coding-agents.html][Context Engineering for Coding Agents]], recommended by Martin Fowler.
*** Build / Test Feedback Loop
Use the prompt suffix and TDD helpers to keep a tight build + test loop. This reduces context switching, shortens the time between a change and verified feedback, and lets the AI work more independently with less human-in-the-loop effort:
- =ai-code-auto-test-type=: Selects how prompts request tests after code changes (=test-after-change=, TDD Red+Green, or off).
- =ai-code--tdd-red-green-stage=: Generates a single prompt for Red + Green with explicit test follow-up.
- =ai-code-build-or-test-project=: Runs the project build or test from the menu (=C-c a b=).
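For example, the tighter loop can be enabled with the same setting shown in the installation snippet (set it to =nil= to turn the behavior off; see the =ai-code-auto-test-type= docstring for the full set of values):
#+begin_src emacs-lisp
;; Ask the AI to run tests after each code change
(setq ai-code-auto-test-type 'test-after-change)
;; Or disable automatic test requests entirely
;; (setq ai-code-auto-test-type nil)
#+end_src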
*** Desktop Notifications (Experimental)
When working with multiple AI sessions, it can be useful to receive desktop notifications when AI responses are complete. This is especially helpful when you prompt an AI and then switch to other tasks while waiting for the response.
**** Enabling Notifications
- Notifications are disabled by default.
- Press =C-c a= then =N= to toggle notifications on/off.
- Alternatively, use =M-x ai-code-notifications-toggle=.
- To enable notifications in your config:
#+begin_src emacs-lisp
(setq ai-code-notifications-enabled t)
(setq ai-code-notifications-show-on-response t)
#+end_src
**** How It Works
- The package monitors terminal activity in AI session buffers.
- When the terminal has been idle for ~5 seconds (configurable via =ai-code-backends-infra-idle-delay=), the response is considered complete.
- If the AI session buffer is not currently visible/focused, a desktop notification is sent.
- Notifications are throttled to avoid spam (minimum 2 seconds between notifications).
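If the ~5 second default does not match your workflow, the idle threshold can be adjusted (value in seconds; variable name as referenced above):
#+begin_src emacs-lisp
;; Treat the response as complete only after 10 seconds of terminal silence
(setq ai-code-backends-infra-idle-delay 10)
#+end_src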
**** Platform Support
- On Linux with D-Bus, native desktop notifications are used.
- On other platforms, notifications appear in the Emacs minibuffer.
** AI coding CLI backend
*** Backend Configuration
This package acts as a generic interface that requires a backend AI assistant package to function. You can configure it to work with different backends.
- Press =C-c a= to open the AI menu, then =s= to "Select Backend".
- Pick one of the supported backends and the integration will switch immediately.
- The selection updates the start/switch/send commands and the CLI used by =ai-code-apply-prompt-on-current-file=.
Natively supported options:
- [[https://github.com/anthropics/claude-code][Claude Code]] ([[./ai-code-claude-code.el][ai-code-claude-code.el]])
- [[https://github.com/google-gemini/gemini-cli][Gemini CLI]] ([[./ai-code-gemini-cli.el][ai-code-gemini-cli.el]])
- [[https://github.com/openai/codex][OpenAI Codex CLI]] ([[./ai-code-codex-cli.el][ai-code-codex-cli.el]])
- [[https://docs.github.com/en/copilot/how-tos/use-copilot-agents/use-copilot-cli][GitHub Copilot CLI]] ([[./ai-code-github-copilot-cli.el][ai-code-github-copilot-cli.el]])
- [[https://opencode.ai/][Opencode]] ([[./ai-code-opencode.el][ai-code-opencode.el]])
- [[https://grokcli.io/][Grok CLI]] ([[./ai-code-grok-cli.el][ai-code-grok-cli.el]])
- [[https://docs.cursor.com/en/cli][Cursor CLI]] ([[./ai-code-cursor-cli.el][ai-code-cursor-cli.el]])
- [[https://kiro.dev/cli/][Kiro CLI]] ([[./ai-code-kiro-cli.el][ai-code-kiro-cli.el]])
- [[https://cnb.cool/codebuddy/codebuddy-code][CodeBuddy Code CLI]] ([[./ai-code-codebuddy-cli.el][ai-code-codebuddy-cli.el]])
- [[https://aider.chat/][Aider CLI]] ([[./ai-code-aider-cli.el][ai-code-aider-cli.el]])
It also supports external backends through customization of the =ai-code-backends= variable; currently this includes:
- Claude Code IDE ([[https://github.com/manzaltu/claude-code-ide.el][claude-code-ide.el]])
- Claude Code ([[https://github.com/stevemolitor/claude-code.el][claude-code.el]])
**** Grok CLI setup
Install [[https://grokcli.io/][grok-cli]] and ensure the =grok= executable is on your PATH.
Customize =grok-cli-program= or =grok-cli-program-switches= if you want to point at a different binary or pass additional flags (for example, selecting a profile). After that, select the backend through =ai-code-select-backend= or bind a helper in your config.
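A configuration sketch (the binary path and the =--profile= flag below are placeholders; check =grok --help= for the flags your installation actually accepts):
#+begin_src emacs-lisp
;; Point at a specific grok binary and pass extra switches
(setq grok-cli-program "/usr/local/bin/grok")             ;; placeholder path
;; (setq grok-cli-program-switches '("--profile" "work")) ;; hypothetical flag
(ai-code-set-backend 'grok)
#+end_src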
**** CodeBuddy Code CLI setup
Install CodeBuddy Code CLI via npm: =npm install -g @tencent-ai/codebuddy-code=, or via Homebrew: =brew install Tencent-CodeBuddy/tap/codebuddy-code=.
Ensure the =codebuddy= executable is on your PATH.
Customize =ai-code-codebuddy-cli-program= or =ai-code-codebuddy-cli-program-switches= if you want to point at a different binary or pass additional flags. After that, select the backend through =ai-code-select-backend= or bind a helper in your config.
To resume previous conversations, use the =-c= flag (automatically handled by the resume command).
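A configuration sketch (the Homebrew path below is a placeholder; adjust it to wherever =codebuddy= was installed):
#+begin_src emacs-lisp
;; Use a specific codebuddy binary when it is not on Emacs's PATH
(setq ai-code-codebuddy-cli-program "/opt/homebrew/bin/codebuddy")
(ai-code-set-backend 'codebuddy)
#+end_src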
You can add other backends by customizing the =ai-code-backends= variable.
**** Add a new AI coding CLI backend
- [[https://github.com/tninja/ai-code-interface.el/pull/2][This PR]] adds github-copilot-cli. It can serve as an example of adding basic support for other AI coding CLIs.
- Open an issue with information about the new AI coding CLI backend (e.g., Cursor CLI), providing at least the command-line name. You can also include the version upgrade command, how to resume sessions, where the configuration files are located, and so on. We can ask GitHub Copilot to add support based on the issue.
** [[https://github.com/tninja/aider.el/blob/main/appendix.org#be-careful-about-ai-generated-code][Why Agile development with AI?]]
** FAQ
*** Q: Using Opencode as the backend, there can be performance issues with eat.el in Doom Emacs. [[https://github.com/tninja/ai-code-interface.el/issues/9#issuecomment-3543277108][Issue]]
- A: Use vterm as the terminal backend; Opencode won't trigger mouse hover and will not cause Emacs flickering. Setting "theme" to "system" in the Opencode config can reduce glitches. From [[https://github.com/tninja/ai-code-interface.el/issues/9#issuecomment-3543335121][gkzhb's answer]]:
#+begin_src json
{
  "$schema": "https://opencode.ai/config.json",
  "theme": "system"
}
#+end_src
*** Q: Gemini CLI response is relatively slow, how to improve?
- A: Use the gemini-3-flash model; it is fast, has good quality (able to solve LeetCode hard problems), and is free. You can set the following in your Emacs config:
#+begin_src emacs-lisp
(setq ai-code-gemini-cli-program-switches '("--model" "gemini-3-flash-preview"))
#+end_src
*** Q: Codex CLI uses my API key instead of my ChatGPT Plus subscription and costs money. How do I fix that?
- A: Use =codex login= to log in with the OpenAI account that has your ChatGPT Plus subscription. After that, Codex CLI will use the subscription automatically. To confirm, run =/status= inside the Codex CLI buffer.
** AI Assisted Programming related books
The following books introduce how to use AI to assist programming and may be helpful to aider / aider.el users.
- [[https://learning.oreilly.com/library/view/beyond-vibe-coding/9798341634749/][Beyond Vibe Coding]], by Addy Osmani, August 2025
- [[https://learning.oreilly.com/library/view/critical-thinking-habits/0642572243326/][Critical Thinking Habits for Coding with AI]], by Andrew Stellman, October 2025
- [[https://github.com/tninja/aider.el?tab=readme-ov-file#ai-assisted-programming-related-books][More AI Assisted Programming related books]]
** Related Emacs packages
- Claude Code ([[https://github.com/stevemolitor/claude-code.el][claude-code.el]])
- Claude Code IDE ([[https://github.com/manzaltu/claude-code-ide.el][claude-code-ide.el]])
- Gemini CLI ([[https://github.com/linchen2chris/gemini-cli.el][gemini-cli.el]])
- [[https://github.com/xenodium/agent-shell][agent-shell]] ([[https://github.com/xenodium/acp.el][acp.el]])
** License
Apache-2.0 License
** Contributing
Contributions, issue reports, and improvement suggestions are welcome! Please open an issue or submit a pull request on the project's GitHub repository.