
ash_ai
Structured outputs, vectorization and tool calling for your Ash application

Ash AI is an extension for the Ash framework that provides structured outputs, prompt-backed actions, tool calling, vectorization, and a Model Context Protocol (MCP) server for exposing your tool definitions to MCP clients. It supports dev and production MCP servers, API-key authentication via AshAuthentication (with an OAuth2 flow planned), tool execution callbacks, and several strategies for keeping embeddings up to date. It can also generate a chat feature for your Ash & Phoenix application, backed by `ash_oban` and `ash_postgres`.
You can install AshAi using igniter. For example:

```sh
mix igniter.install ash_ai
```

Add AshAi to your list of dependencies:

```elixir
def deps do
  [
    {:ash_ai, "~> 0.2"}
  ]
end
```
Both the dev & production MCP servers can be installed with:

```sh
mix ash_ai.gen.mcp
```
To install the dev MCP server, add the `AshAi.Mcp.Dev` plug to your endpoint module, in the `code_reloading?` block. By default the MCP server will be available at http://localhost:4000/ash_ai/mcp.

```elixir
if code_reloading? do
  socket "/phoenix/live_reload/socket", Phoenix.LiveReloader.Socket

  plug AshAi.Mcp.Dev,
    # see the note below on protocol versions
    protocol_version_statement: "2024-11-05",
    otp_app: :your_app
end
```
We are still experimenting to see what tools (if any) are useful while developing with agents.
AshAi provides a pre-built MCP server that can be used to expose your tool definitions to an MCP client (typically some kind of IDE, or Claude Desktop for example).
The protocol version we implement is 2025-03-26. As of this writing, many tools have not yet been updated to support this version, so you will generally need to use some kind of proxy until they have been updated accordingly. We suggest the one provided by Tidewave: https://github.com/tidewave-ai/mcp_proxy_rust#installation. However, as of the writing of this guide, it requires setting a previous protocol version, as noted above.
- Implement OAuth2 flow with AshAuthentication (long term)
- Implement support for more than just tools, i.e. resources, etc.
- Implement sessions, and provide a session id context to tools (this code is just commented out and can be uncommented; it only needs timeout logic for inactive sessions)
We don't currently support the OAuth2 flow out of the box with AshAi, but the goal is to eventually support this with AshAuthentication. You can always implement it yourself, but the quickest way to value is to use the new `api_key` strategy.

If you haven't installed AshAuthentication yet, install it like so: `mix igniter.install ash_authentication --auth-strategy api_key`.

If it's already been installed and you haven't set up API keys, use `mix ash_authentication.add_strategy api_key`.
Then, create a separate pipeline for `:mcp`, and add the API key plug to it:

```elixir
pipeline :mcp do
  plug AshAuthentication.Strategy.ApiKey.Plug,
    resource: YourApp.Accounts.User,
    # Use `required?: false` to allow unauthenticated
    # users to connect, for example if some tools
    # are publicly accessible.
    required?: false
end

scope "/mcp" do
  pipe_through :mcp

  forward "/", AshAi.Mcp.Router,
    tools: [
      :list,
      :of,
      :tools
    ],
    # For many tools, you will need to set the `protocol_version_statement` to the older version.
    protocol_version_statement: "2024-11-05",
    otp_app: :my_app
end
```
This is a new and experimental tool to generate a chat feature for your Ash & Phoenix application. It is backed by `ash_oban` and `ash_postgres`, using `pub_sub` to stream messages to the client. This is primarily a tool to get started with chat features and is by no means intended to handle every case you can come up with.

To get started:

```sh
mix ash_ai.gen.chat --live
```

The `--live` flag indicates that you wish to generate LiveViews in addition to the chat resources.

It requires a `user` resource to exist. If your user resource is not called `<YourApp>.Accounts.User`, provide a custom user resource with the `--user` flag, as shown below.
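For example, a sketch assuming a hypothetical `MyApp.Accounts.Member` resource as your user resource (and assuming the flag takes a fully-qualified module name):

```sh
mix ash_ai.gen.chat --live --user MyApp.Accounts.Member
```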
To try it out from scratch:

```sh
mix igniter.new my_app \
  --with phx.new \
  --install ash,ash_postgres,ash_phoenix \
  --install ash_authentication_phoenix,ash_oban \
  --install ash_ai@github:ash-project/ash_ai \
  --auth-strategy password
```

and then run:

```sh
mix ash_ai.gen.chat --live
```
By default, it uses OpenAI as the LLM provider, so you need to specify your OpenAI API key as an environment variable (e.g. `OPENAI_API_KEY=sk-...`).

The chat UI LiveView templates assume you have Tailwind and DaisyUI installed for styling purposes. DaisyUI is included in Phoenix 1.8 and later, but if you generated your Phoenix app pre-1.8 you will need to install DaisyUI yourself.

You can then start your server and visit http://localhost:4000/chat to see the chat feature in action. You will be prompted to register and sign in the first time.

You should then be able to type chat messages, but until you have some tools registered (see below) and set a default system prompt, the LLM won't know anything about your app.
Tools are defined by adding the `AshAi` extension to a domain and listing actions in a `tools` block:

```elixir
defmodule MyApp.Blog do
  use Ash.Domain, extensions: [AshAi]

  tools do
    tool :read_posts, MyApp.Blog.Post, :read
    tool :create_post, MyApp.Blog.Post, :create
    tool :publish_post, MyApp.Blog.Post, :publish
    tool :read_comments, MyApp.Blog.Comment, :read
  end
end
```
This exposes those actions as tools. When you call `AshAi.setup_ash_ai(chain, opts)` or `AshAi.iex_chat/2`, they are added as tool calls to the agent, as in the sketch below.
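A minimal sketch of wiring those tools into a LangChain chain; the `actor` option is shown later in this guide, and the `otp_app` option here mirrors the `AshAi.iex_chat` example below (treat it as an assumption):

```elixir
# Sketch: build an LLMChain and add the tools defined in your domains to it.
chain =
  %{llm: LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"})}
  |> LangChain.Chains.LLMChain.new!()
  # `current_user` stands in for whatever actor your app has authenticated
  |> AshAi.setup_ash_ai(actor: current_user, otp_app: :my_app)
```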
Important: Tools have different access levels for different operations:

- Filtering/Sorting/Aggregation: Only public attributes (`public?: true`) can be used
- Arguments: Only public action arguments are exposed
- Response data: Public attributes are returned by default
- Loading data: Use the `load` option to include relationships, calculations, or additional attributes (including private ones) in responses
Example:

```elixir
tools do
  # Returns only public attributes
  tool :read_posts, MyApp.Blog.Post, :read

  # Returns public attributes AND loaded relationships/calculations
  # Note: loaded fields can include private attributes
  tool :read_posts_with_details, MyApp.Blog.Post, :read,
    load: [:author, :comment_count, :internal_notes]
end
```
Key distinction:

- Private attributes cannot be used for filtering, sorting, or aggregation
- Private attributes CAN be included in responses when using the `load` option
- The `load` option is primarily for loading relationships and calculations, but it also makes any loaded attributes (including private ones) visible
Monitor tool execution in real time by providing callbacks to `AshAi.setup_ash_ai/2`:

```elixir
chain
|> AshAi.setup_ash_ai(
  actor: current_user,
  on_tool_start: fn %AshAi.ToolStartEvent{} = event ->
    # event includes: tool_name, action, resource, arguments, actor, tenant
    IO.puts("Starting #{event.tool_name}...")
  end,
  on_tool_end: fn %AshAi.ToolEndEvent{} = event ->
    # event includes: tool_name, result ({:ok, ...} or {:error, ...})
    IO.puts("Completed #{event.tool_name}")
  end
)
```
This is useful for showing progress indicators, logging, metrics collection, or debugging tool execution.
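As one sketch of the progress-indicator use case, the callbacks could relay events over Phoenix PubSub; the `MyApp.PubSub` server name and `"tool_progress"` topic are assumptions, and a LiveView would need to subscribe to that topic itself:

```elixir
chain
|> AshAi.setup_ash_ai(
  actor: current_user,
  on_tool_start: fn event ->
    # Hypothetical topic/payload; a subscribed LiveView could show a spinner per tool.
    Phoenix.PubSub.broadcast(MyApp.PubSub, "tool_progress", {:tool_started, event.tool_name})
  end,
  on_tool_end: fn event ->
    Phoenix.PubSub.broadcast(MyApp.PubSub, "tool_progress", {:tool_finished, event.tool_name})
  end
)
```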
This allows defining an action, including input and output types, and delegating the implementation to an LLM. We use structured outputs to ensure that it always returns the correct data type. We also derive a default prompt from the action description and action inputs. See `AshAi.Actions.Prompt` for more information.
```elixir
action :analyze_sentiment, :atom do
  constraints one_of: [:positive, :negative]

  description """
  Analyzes the sentiment of a given piece of text to determine if it is overall positive or negative.
  """

  argument :text, :string do
    allow_nil? false
    description "The text for analysis"
  end

  run prompt(
    LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o"}),
    # setting `tools: true` allows it to use all exposed tools in your app
    tools: true
    # alternatively you can restrict it to only a set of tools
    # tools: [:list, :of, :tool, :names]
    # provide an optional prompt, which is an EEx template
    # prompt: "Analyze the sentiment of the following text: <%= @input.arguments.text %>",
    # adapter: {Adapter, [some: :opt]}
  )
end
```
The action's return type provides the JSON schema automatically. For complex structured outputs, you can use any Ash type:
```elixir
# Example using Ash.TypedStruct
defmodule JobListing do
  use Ash.TypedStruct

  typed_struct do
    field :title, :string, allow_nil?: false
    field :company, :string, allow_nil?: false
    field :location, :string
    field :requirements, {:array, :string}
  end
end

# Use it as the return type for your action
action :parse_job, JobListing do
  argument :raw_content, :string, allow_nil?: false

  run prompt(
    LangChain.ChatModels.ChatOpenAI.new!(%{model: "gpt-4o-mini"}),
    prompt: "Parse this job listing: <%= @input.arguments.raw_content %>",
    tools: false
  )
end
```
Adapters are used to determine how a given LLM fulfills a prompt-backed action. The adapter is guessed automatically from the model where possible. See `AshAi.Actions.Prompt.Adapter` for more information.

For any LangChain models you use, you will need to configure them. See https://hexdocs.pm/langchain/ for more information.
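For example, a minimal config sketch for the OpenAI models used in this guide, assuming the key lives in the `OPENAI_API_KEY` environment variable (check the LangChain docs for the authoritative option names):

```elixir
# config/runtime.exs (sketch)
import Config

config :langchain, openai_key: System.get_env("OPENAI_API_KEY")
```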
For AshAi-specific changes to use different models:
See the AshPostgres vector setup for the required steps: https://hexdocs.pm/ash_postgres/AshPostgres.Extensions.Vector.html

This extension creates a vector search action, and provides a few different strategies for how to update the embeddings when needed.
```elixir
# in a resource
vectorize do
  full_text do
    text(fn record ->
      """
      Name: #{record.name}
      Biography: #{record.biography}
      """
    end)

    # When used_attributes are defined, embeddings will only be rebuilt when
    # the listed attributes are changed in an update action.
    used_attributes [:name, :biography]
  end

  strategy :after_action

  attributes(name: :vectorized_name, biography: :vectorized_biography)

  # See the section below on defining an embedding model
  embedding_model MyApp.OpenAiEmbeddingModel
end
```
If you are using policies, add a bypass to allow us to update the vector embeddings:

```elixir
bypass action(:ash_ai_update_embeddings) do
  authorize_if AshAi.Checks.ActorIsAshAi
end
```
Currently there are three strategies to choose from:

- `:after_action` (default) - The embeddings will be updated synchronously after every create & update action.
- `:ash_oban` - Embeddings will be updated asynchronously through an `ash_oban` trigger when a record is created or updated.
- `:manual` - The embeddings will not be automatically updated in any way.
The `:after_action` strategy (the default) adds a global change on the resource that runs a generated action named `:ash_ai_update_embeddings` on every update that requires the embeddings to be rebuilt. The `:ash_ai_update_embeddings` action is run in the `after_transaction` phase of any create or update action that requires the embeddings to be rebuilt.

This will make your app incredibly slow, and is not recommended for any real production usage.
The `:ash_oban` strategy requires the `ash_oban` dependency to be installed, and the resource in question must use it as an extension, like this:

```elixir
defmodule MyApp.Artist do
  use Ash.Resource, extensions: [AshAi, AshOban]
end
```
Just like the `:after_action` strategy, this strategy creates an `:ash_ai_update_embeddings` update action, and adds a global change that runs an `ash_oban` trigger (also in the `after_transaction` phase) whenever embeddings need to be rebuilt.

You will need to define this trigger yourself, and then reference it in the `vectorize` section like this:
```elixir
defmodule MyApp.Artist do
  use Ash.Resource, extensions: [AshAi, AshOban]

  vectorize do
    full_text do
      ...
    end

    strategy :ash_oban
    # the default trigger name is :ash_ai_update_embeddings
    ash_oban_trigger_name :my_vectorize_trigger
    ...
  end

  oban do
    triggers do
      trigger :my_vectorize_trigger do
        action :ash_ai_update_embeddings
        queue :artist_vectorizer
        worker_read_action :read
        worker_module_name __MODULE__.AshOban.Worker.UpdateEmbeddings
        scheduler_module_name __MODULE__.AshOban.Scheduler.UpdateEmbeddings
        # change this to a cron expression if you want to rerun the embedding at specified intervals
        scheduler_cron false
        list_tenants MyApp.ListTenants
      end
    end
  end
end
```
You'll also need to create the queue in the Oban config by changing your `config.exs` file:

```elixir
config :my_app, Oban,
  engine: Oban.Engines.Basic,
  notifier: Oban.Notifiers.Postgres,
  queues: [
    default: 10,
    chat_responses: [limit: 10],
    conversations: [limit: 10],
    # set the limit of concurrent workers
    artist_vectorizer: [limit: 20]
  ],
  repo: MyApp.Repo,
  plugins: [{Oban.Plugins.Cron, []}]
```

If you don't set the queue through the `queue` option on the trigger, it defaults to the resource's short name plus the name of the trigger.
The `:manual` strategy will not automatically update the embeddings in any way, but by default it generates an update action named `:ash_ai_update_embeddings` that can be run on demand. If needed, you can also disable the generation of this action like this:

```elixir
vectorize do
  full_text do
    ...
  end

  strategy :manual
  define_update_action_for_manual_strategy? false
  ...
end
```
Embedding models are modules that define the dimensions of a given vector and how to generate one. This example uses `Req` to generate embeddings using OpenAI. To use it, you'd need to install `req` (`mix igniter.install req`).
```elixir
defmodule Tunez.OpenAIEmbeddingModel do
  use AshAi.EmbeddingModel

  @impl true
  def dimensions(_opts), do: 3072

  @impl true
  def generate(texts, _opts) do
    api_key = System.fetch_env!("OPEN_AI_API_KEY")

    headers = [
      {"Authorization", "Bearer #{api_key}"},
      {"Content-Type", "application/json"}
    ]

    body = %{
      "input" => texts,
      "model" => "text-embedding-3-large"
    }

    response =
      Req.post!("https://api.openai.com/v1/embeddings",
        json: body,
        headers: headers
      )

    case response.status do
      200 ->
        response.body["data"]
        |> Enum.map(fn %{"embedding" => embedding} -> embedding end)
        |> then(&{:ok, &1})

      _status ->
        {:error, response.body}
    end
  end
end
```
Opts can be used to make embedding models that are dynamic depending on the resource, e.g. `embedding_model {MyApp.OpenAiEmbeddingModel, model: "a-specific-model"}`. Those opts are available in the `_opts` argument to functions on your embedding model.
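For example, a sketch of reading such an opt inside `generate/2`, assuming the opts arrive as the keyword list from that tuple (the `:model` key and its default here are assumptions):

```elixir
@impl true
def generate(texts, opts) do
  # Pick the model passed via `embedding_model {Module, model: "..."}`,
  # falling back to an assumed default.
  model = Keyword.get(opts, :model, "text-embedding-3-large")

  body = %{"input" => texts, "model" => model}
  # ...same request/response handling as the example above...
end
```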
You can use expressions in filters and sorts like `vector_cosine_distance(full_text_vector, ^search_vector)`. For example:
```elixir
read :search do
  argument :query, :string, allow_nil?: false

  prepare before_action(fn query, context ->
    case YourEmbeddingModel.generate([query.arguments.query], []) do
      {:ok, [search_vector]} ->
        Ash.Query.filter(
          query,
          vector_cosine_distance(full_text_vector, ^search_vector) < 0.5
        )
        |> Ash.Query.sort(
          {calc(vector_cosine_distance(full_text_vector, ^search_vector), type: :float), :asc}
        )
        |> Ash.Query.limit(10)

      {:error, error} ->
        {:error, error}
    end
  end)
end
```
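A hedged usage sketch of calling a read action like this one, assuming it is defined on the `MyApp.Artist` resource from the earlier examples:

```elixir
# Run the :search read action with a query argument (names taken from the examples above).
MyApp.Artist
|> Ash.Query.for_read(:search, %{query: "jazz pianists"})
|> Ash.read!()
```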
If your database stores more than ~10,000 vectors, you may see search performance degrade. You can ameliorate this by building an index on the vector column. Vector indices come at the expense of write speeds and higher resource usage.
The example below uses an `hnsw` index, which trades higher memory usage and longer index build times for faster query speeds. An `ivfflat` index has different settings: faster build times and lower memory usage, but slower query speeds. Do your research and consider the tradeoffs for your use case.
```elixir
postgres do
  table "embeddings"
  repo MyApp.Repo

  custom_statements do
    statement :vector_idx do
      up "CREATE INDEX vector_idx ON embeddings USING hnsw (vectorized_body vector_cosine_ops) WITH (m = 16, ef_construction = 64)"
      down "DROP INDEX vector_idx;"
    end
  end
end
```
- more action types, like:
  - bulk updates
  - bulk destroys
  - bulk creates

Using with LangChain:

- Set up `LangChain`
- Modify a `LangChain` chain using `AshAi.setup_ash_ai/2`, or use `AshAi.iex_chat` (see below)
- Run `iex -S mix` and then run `AshAi.iex_chat` to start chatting with your app
- Build your own chat interface. See the implementation of `AshAi.iex_chat` to see how it's done.

Contributing:

- Make sure to run `mix test.create && mix test.migrate` to set up locally
- Ensure that `mix check` passes
For example, a module for chatting with your app from IEx:

```elixir
defmodule MyApp.ChatBot do
  alias LangChain.Chains.LLMChain
  alias LangChain.ChatModels.ChatOpenAI

  def iex_chat(actor \\ nil) do
    %{
      llm: ChatOpenAI.new!(%{model: "gpt-4o", stream: true}),
      verbose: true
    }
    |> LLMChain.new!()
    |> AshAi.iex_chat(actor: actor, otp_app: :my_app)
  end
end
```