
pgai
A suite of tools to develop RAG, semantic search, and other AI applications more easily with PostgreSQL
Stars: 4239

pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.
README:
Supercharge your PostgreSQL database with AI capabilities. Supports:
- Automatic creation and synchronization of vector embeddings for your data
- Seamless vector and semantic search
- Retrieval Augmented Generation (RAG) directly in SQL
- Ability to call out to leading LLMs like OpenAI, Ollama, Cohere, and more via SQL.
- Built-in utilities for dataset loading and processing
All with the reliability, scalability, and ACID compliance of PostgreSQL.
See the install via docker guide for docker compose files and detailed container instructions.
Try pgai in the cloud by creating a free trial account on Timescale Cloud.
See the install from source guide for instructions on how to install pgai from source.
This section walks you through getting started with pgai and Ollama using Docker, and shows you the major features of pgai.
Please note that using Ollama requires a large (>4GB) download of the Docker image and models. If you'd rather avoid a download that size, you may want to use the OpenAI quick start or VoyageAI quick start instead.
- Download the docker compose file:
curl -O https://raw.githubusercontent.com/timescale/pgai/main/examples/docker_compose_pgai_ollama/docker-compose.yml
- Start the docker compose file:
docker compose up -d
This will start Ollama and a PostgreSQL instance with the pgai extension installed.
- Download the Ollama models. We'll use the all-minilm model for embeddings and the tinyllama model for reasoning.
docker compose exec ollama ollama pull all-minilm
docker compose exec ollama ollama pull tinyllama
- Connect to the database from your local developer environment. The easiest way to connect to the database is with the following command:
docker compose exec -it db psql
Alternatively, you can connect to the database with the following connection string:
postgres://postgres:postgres@localhost:5432/postgres
- Enable pgai on your database:
CREATE EXTENSION IF NOT EXISTS ai CASCADE;
- Create a table with the data you want to embed from a huggingface dataset
We'll create a table named wiki from a few rows of the English-language wikimedia/wikipedia dataset.
First, we'll create the table:
CREATE TABLE wiki ( id TEXT PRIMARY KEY, url TEXT, title TEXT, text TEXT );
Then, we'll load the data from the huggingface dataset:
SELECT ai.load_dataset('wikimedia/wikipedia', '20231101.en', table_name=>'wiki', batch_size=>5, max_batches=>1, if_table_exists=>'append');
Related documentation: load dataset from huggingface.
- Create a vectorizer for wiki
To enable semantic search on the wiki table, we need to create vector embeddings for the text column. We use a vectorizer to automatically create these embeddings and keep them in sync with the data in the wiki table.
SELECT ai.create_vectorizer(
   'wiki'::regclass,
   destination => 'wiki_embeddings',
   embedding => ai.embedding_ollama('all-minilm', 384),
   chunking => ai.chunking_recursive_character_text_splitter('text')
);
Related documentation: vectorizer usage guide and vectorizer API reference.
- Check the progress of the vectorizer embedding creation
select * from ai.vectorizer_status;
The output looks like this:
| id | source_table | target_table | view | pending_items |
|---|---|---|---|---|
| 1 | public.wiki | public.wiki_embeddings_store | public.wiki_embeddings | 10000 |
All the embeddings have been created when the pending_items column is 0. This may take a few minutes as the model is running locally and not on a GPU.
- Search the embeddings
We'll search the embeddings for the concept of "properties of light" even though these words are not in the text of the articles. This is possible because vector embeddings capture the semantic meaning of the text.
Semantic search is a powerful feature in its own right, but it is also a key component of Retrieval Augmented Generation (RAG).
SELECT title, chunk FROM wiki_embeddings ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'properties of light') LIMIT 1;
The output looks like this:
| title | chunk |
|---|---|
| Albedo | Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.... |
This query selects from the wiki_embeddings view, which is created by the vectorizer and joins the embeddings with the original data in the wiki table. This lets us search using the embeddings while still being able to access (or filter on) all the data in the original table (e.g. the title column).
Note that the ai.ollama_embed function is used to call the all-minilm model. This is part of pgai's model calling capabilities.
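For example, here is a minimal sketch of combining the semantic search with an ordinary relational filter on a column from the source table (the ILIKE pattern is purely illustrative):
SELECT title, chunk
FROM wiki_embeddings
WHERE title ILIKE '%light%'  -- ordinary SQL filter on a column carried over from the wiki table
ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'properties of light')
LIMIT 5;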
- Modify your data and have the vectorizer automatically update the embeddings
We'll add a row about pgai to the wiki table and have the vectorizer automatically update the embeddings.
INSERT INTO wiki (id, url, title, text)
VALUES (
    11,
    'https://en.wikipedia.org/wiki/Pgai',
    'pgai - Power your AI applications with PostgreSQL',
    'pgai is a tool to make developing RAG and other AI applications easier. It makes it simple to give an LLM access to data in your PostgreSQL database by enabling semantic search on your data and using the results as part of the Retrieval Augmented Generation (RAG) pipeline. This allows the LLM to answer questions about your data without needing to be trained on your data.'
);
And now you don't need to do anything to update the embeddings. The vectorizer will automatically create the embeddings for the new row without any intervention from you. After a few seconds, you can run a search query related to the new entry and see it returned as part of the results:
SELECT title, chunk FROM wiki_embeddings ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'AI tools') LIMIT 1;
In this section, we'll have the LLM answer questions about pgai based on the wiki entry we added by using RAG. The LLM was never trained on the pgai wiki entry, and so it needs data in the database to answer questions about pgai.
You can perform RAG purely from within the database using SQL, or use a Python script to interact with the database and perform RAG. We often find that SQL is the easiest way to build a quick prototype and get started, but as a project matures it is easy to switch to Python for more control and access to Python tooling.
To perform RAG within SQL:
- Define a function to perform RAG
We'll create a function that uses RAG to allow an LLM to answer questions about pgai based on the wiki entry we added.
RAG involves two steps:
- Perform a similarity search to find the most relevant chunks of data.
- Use the LLM to generate a response using the relevant chunks as context.
CREATE OR REPLACE FUNCTION generate_rag_response(query_text TEXT)
RETURNS TEXT AS $$
DECLARE
    context_chunks TEXT;
    response JSONB;
BEGIN
    -- Perform similarity search to find relevant wiki article
    SELECT string_agg(title || ': ' || chunk, E'\n') INTO context_chunks
    FROM (
        SELECT title, chunk
        FROM wiki_embeddings
        ORDER BY embedding <=> ai.ollama_embed('all-minilm', query_text)
        LIMIT 3
    ) AS relevant_posts;

    raise notice 'Context provided to LLM: %', context_chunks;

    -- Generate a summary using tinyllama
    select ai.ollama_generate('tinyllama',
        query_text || E'\nUse the following context to respond.\n' || context_chunks)
    INTO response;

    RETURN response->>'response';
END;
$$ LANGUAGE plpgsql;
- Use the RAG function to answer questions about the wiki data
SELECT generate_rag_response('What can I use pgai for?') as response;
The output looks like this:
| response |
|---|
| PGAI is a tool that makes it easier for developers to create AI applications by providing access to data in a PostgreSQL database using Semantic Search and answering RAG (Recommendation and Answer Generation) questions. This allows the LLM (Language Model) to answer questions about unseen data without being trained on your data, making it an important tool for building accurate and efficient AI applications. The context suggests that PGAI can be useful in a variety of industries or use cases where data access is critical, such as healthcare, finance, or customer service. |
To perform RAG with Python:
- Install the Python dependencies
pip install psycopg pgvector ollama
- Run the Python script to perform RAG
import psycopg
from pgvector.psycopg import register_vector
from typing import Optional, List, NamedTuple
from ollama import Client
from dataclasses import dataclass


@dataclass
class ChunkData:
    """Represents a chunk of text with its title and content."""
    title: str
    chunk: str


def create_db_connection() -> psycopg.Connection:
    """Create and return a database connection."""
    conn = psycopg.connect(
        "postgres://postgres:postgres@localhost:5432/postgres"  # Modify connection string as needed for your setup
    )
    register_vector(conn)
    return conn


def get_embedding(client: Client, text: str) -> list[float]:
    """Get embeddings using Ollama's all-minilm model."""
    response = client.embeddings(model='all-minilm', prompt=text)
    return response['embedding']


def get_relevant_chunks(cur: psycopg.Cursor, embedding: list[float], limit: int = 1) -> List[ChunkData]:
    """
    Retrieve the most relevant chunks based on vector similarity.

    Args:
        cur: Database cursor
        embedding: Query embedding vector
        limit: Number of chunks to retrieve

    Returns:
        List of ChunkData objects containing relevant chunks
    """
    query = """
        SELECT title, chunk
        FROM wiki_embeddings
        ORDER BY embedding <=> %s::vector
        LIMIT %s
    """
    cur.execute(query, (embedding, limit))
    return [ChunkData(title=row[0], chunk=row[1]) for row in cur.fetchall()]


def format_context(chunks: List[ChunkData]) -> str:
    """
    Format the chunks into a single context string.

    Args:
        chunks: List of ChunkData objects

    Returns:
        Formatted context string
    """
    return "\n\n".join(f"{chunk.title}:\n{chunk.chunk}" for chunk in chunks)


def generate_rag_response(query_text: str) -> Optional[str]:
    """
    Generate a RAG response using pgai, Ollama embeddings, and database content.

    Args:
        query_text: The question or query to answer

    Returns:
        str: The generated response from the LLM
    """
    try:
        # Initialize Ollama client
        client = Client(host='http://localhost:11434')

        with create_db_connection() as conn:
            with conn.cursor() as cur:
                # Get embeddings for the query using Ollama SDK
                query_embedding = get_embedding(client, query_text)

                # Get relevant chunks
                relevant_chunks = get_relevant_chunks(cur, query_embedding)

                # Format context
                context = format_context(relevant_chunks)

                # Print context for debugging (optional)
                print("Context provided to LLM:")
                print("------------------------")
                print(context)
                print("------------------------")

                # Construct prompt with context
                prompt = f"""Question: {query_text}

Please use the following context to provide an accurate response:

{context}

Answer:"""

                # Generate response using Ollama SDK
                response = client.generate(
                    model='tinyllama',
                    prompt=prompt,
                    stream=False
                )

                return response['response']

    except Exception as e:
        print(f"Error generating RAG response: {e}")
        return None


def main():
    # Example usage
    questions = [
        "What can I use pgai for?",
    ]

    for question in questions:
        print("\n" + "="*50)
        print(f"Question: {question}")
        print("-"*50)

        response = generate_rag_response(question)
        if response:
            print("\nResponse:")
            print(response)
        else:
            print("Failed to generate response")


if __name__ == "__main__":
    main()
- Generate a summary of the article in the database
We'll generate a summary of the search results using the ai.ollama_generate function (this will take a few minutes).
SELECT answer->>'response' as summary
FROM ai.ollama_generate('tinyllama',
    'Summarize the following and output the summary in a single sentence: ' || (SELECT text FROM wiki WHERE title like 'pgai%')) as answer;
The output looks like this:
| summary |
|---|
| Pgai is a tool that simplifies the process of making AI applications easier by providing easy access to data in PostgreSQL and enabling semantic search on the data for the Retrieval Augmented Generation (RAG) pipeline. This allows the AI system to answer questions about unseen data without being trained on it, simplifying the entire process. |
This is just one example of pgai's model calling capabilities. Model calling can be used for a variety of tasks, including classification, summarization, moderation, and other forms of data enrichment.
- Automatically create and sync vector embeddings for your data (learn more)
- Search your data using vector and semantic search (learn more)
- Implement Retrieval Augmented Generation inside a single SQL statement (learn more)
- Perform high-performance, cost-efficient ANN search on large vector workloads with pgvectorscale, which complements pgvector.
- Retrieve LLM chat completions from models like Claude Sonnet 3.5, OpenAI GPT4o, Cohere Command, and Llama 3 (via Ollama). (learn more)
- Reason over your data and facilitate use cases like classification, summarization, and data enrichment on your existing relational data in PostgreSQL (see an example).
- Load datasets from Hugging Face into your database with ai.load_dataset.
- Use chunking algorithms to split text with SQL functions.
- Vector Databases Are the Wrong Abstraction
- pgai: Giving PostgreSQL Developers AI Engineering Superpowers
- How to Automatically Create & Update Embeddings in PostgreSQL—With One SQL Query
- [video] Auto Create and Sync Vector Embeddings in 1 Line of SQL
- Which OpenAI Embedding Model Is Best for Your RAG App With Pgvector?
- Which RAG Chunking and Formatting Strategy Is Best for Your App With Pgvector
- Parsing All the Data With Open-Source Tools: Unstructured and Pgai
- In-Database AI Agents: Teaching Claude to Use Tools With Pgai
- Build Search and RAG Systems on PostgreSQL Using Cohere and Pgai
- Use Open-Source LLMs in PostgreSQL With Ollama and Pgai
We welcome contributions to pgai! See the Contributing page for more information.
- Automatically create and sync vector embeddings for your data
- Search your data using vector and semantic search
- Implement Retrieval Augmented Generation inside a single SQL statement
The pgvector and pgvectorscale extensions allow you to store vector embeddings in your database and perform fast and efficient vector search. The pgai Vectorizer builds on top of these extensions to automatically create and synchronize embeddings for any text data in your database.
With one line of code, you can define a vectorizer that creates embeddings for data in a table:
SELECT ai.create_vectorizer(
<table_name>::regclass,
destination => <embedding_table_name>,
embedding => ai.embedding_ollama(<model_name>, <dimensions>),
chunking => ai.chunking_recursive_character_text_splitter(<column_name>)
);
This newly created vectorizer will automatically track any changes to the data in the source table and update the destination embedding table with the new embeddings asynchronously.
Automate AI embedding with pgai Vectorizer shows you how to implement embeddings on your own data. On a self-hosted Postgres installation, you use a Vectorizer Worker to asynchronously process your vectorizers. When you create vectorizers in a Timescale Cloud database, embeddings are automatically created and synchronized in the background.
Note: Timescale Cloud currently supports embedding natively with OpenAI. To use Ollama on the data in your Timescale Cloud service, set scheduling => ai.scheduling_none() in the configuration for your service, then install the vectorizer worker locally and configure it to connect to your Timescale Cloud service.
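As a rough sketch of what that configuration looks like (reusing the quick-start wiki table and all-minilm model purely as placeholders), the vectorizer definition simply adds the scheduling option:
SELECT ai.create_vectorizer(
   'wiki'::regclass,
   destination => 'wiki_embeddings',
   embedding => ai.embedding_ollama('all-minilm', 384),
   chunking => ai.chunking_recursive_character_text_splitter('text'),
   scheduling => ai.scheduling_none()  -- no cloud scheduling; a locally run vectorizer worker processes the queue instead
);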
pgai exposes a set of functions to directly interact with the LLM models through SQL, enabling you to do semantic search directly in your database:
SELECT
chunk,
embedding <=> ai.ollama_embed(<embedding_model>, 'some-query') as distance
FROM <embedding_table>
ORDER BY distance
LIMIT 5;
This is a perfectly normal SQL query. You can combine it with WHERE clauses and other SQL features to further refine your search. pgai solves the "missing WHERE clause in vector search" problem for real.
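For example, here is a minimal sketch that mixes a vector search with an ordinary predicate (the category column is hypothetical and stands in for any metadata carried over from your source table):
SELECT
    chunk,
    embedding <=> ai.ollama_embed(<embedding_model>, 'some-query') as distance
FROM <embedding_table>
WHERE category = 'some-category'  -- hypothetical metadata column from the source table
ORDER BY distance
LIMIT 5;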
Similar to semantic search, pgai LLM functions enable you to implement RAG directly in your database. For example:
- Create a RAG function:
CREATE OR REPLACE FUNCTION generate_rag_response(query_text TEXT)
RETURNS TEXT AS $$
DECLARE
    context_chunks TEXT;
    response TEXT;
BEGIN
    -- Perform similarity search to find relevant blog posts
    SELECT string_agg(title || ': ' || chunk, E'\n') INTO context_chunks
    FROM (
        SELECT title, chunk
        FROM blogs_embedding
        ORDER BY embedding <=> ai.ollama_embed('nomic-embed-text', query_text)
        LIMIT 3
    ) AS relevant_posts;

    -- Generate a summary using llama3
    SELECT ai.ollama_chat_complete(
        'llama3',
        jsonb_build_array(
            jsonb_build_object('role', 'system', 'content', 'you are a helpful assistant'),
            jsonb_build_object('role', 'user',
                'content', query_text || E'\nUse the following context to respond.\n' || context_chunks)
        )
    )->'message'->>'content' INTO response;

    RETURN response;
END;
$$ LANGUAGE plpgsql;
- Execute your function in a SQL query:
SELECT generate_rag_response('Give me some startup advice');
Model calling is a feature of pgai that allows you to call LLM models from SQL. This lets you leverage the power of LLMs for a variety of tasks, including classification, summarization, moderation, and other forms of data enrichment.
The following models are supported (click on the model to learn more):
| Model | Tokenize | Embed | Chat Complete | Generate | Moderate | Classify | Rerank |
|---|---|---|---|---|---|---|---|
| Ollama | | ✔️ | ✔️ | ✔️ | | | |
| OpenAI | ✔️ | ✔️ | ✔️ | | ✔️ | | |
| Anthropic | | | | ✔️ | | | |
| Cohere | ✔️ | ✔️ | ✔️ | | | ✔️ | ✔️ |
| Voyage AI | | ✔️ | | | | | |
| Huggingface (with LiteLLM) | | ✔️ | | | | | |
| Mistral (with LiteLLM) | | ✔️ | | | | | |
| Azure OpenAI (with LiteLLM) | | ✔️ | | | | | |
| AWS Bedrock (with LiteLLM) | | ✔️ | | | | | |
| Vertex AI (with LiteLLM) | | ✔️ | | | | | |
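As an illustration, here is a hedged sketch of model calling for data enrichment, reusing the quick-start wiki table and the ai.ollama_generate function shown earlier (the prompt wording and LIMIT are purely illustrative):
SELECT
    title,
    ai.ollama_generate(
        'tinyllama',
        'Classify the topic of the following article in one word: ' || text
    )->>'response' AS topic
FROM wiki
LIMIT 3;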
Some examples:
- Learn how to moderate content directly in the database using triggers and background jobs.
- Load datasets directly from Hugging Face into your database.
- Leverage LLMs for data processing tasks such as classification, summarization, and data enrichment (see the OpenAI example).
pgai is still at an early stage. Now is a great time to help shape the direction of this project; we are currently deciding priorities. Have a look at the list of features we're thinking of working on. Feel free to comment, expand the list, or hop on the Discussions forum.
To get started, take a look at how to contribute and how to set up a dev/test environment.
Timescale is a PostgreSQL database company. To learn more, visit timescale.com.
Timescale Cloud is a high-performance, developer-focused cloud platform that provides PostgreSQL services for the most demanding AI, time-series, analytics, and event workloads. Timescale Cloud is ideal for production applications and provides high availability, streaming backups, upgrades over time, roles and permissions, and great security.
Alternative AI tools for pgai
Similar Open Source Tools

pgai
pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.

basiclingua-LLM-Based-NLP
BasicLingua is a Python library that provides functionalities for linguistic tasks such as tokenization, stemming, lemmatization, and many others. It is based on the Gemini Language Model, which has demonstrated promising results in dealing with text data. BasicLingua can be used as an API or through a web demo. It is available under the MIT license and can be used in various projects.

humanlayer
HumanLayer is a Python toolkit designed to enable AI agents to interact with humans in tool-based and asynchronous workflows. By incorporating humans-in-the-loop, agentic tools can access more powerful and meaningful tasks. The toolkit provides features like requiring human approval for function calls, human as a tool for contacting humans, omni-channel contact capabilities, granular routing, and support for various LLMs and orchestration frameworks. HumanLayer aims to ensure human oversight of high-stakes function calls, making AI agents more reliable and safe in executing impactful tasks.

qlib
Qlib is an open-source, AI-oriented quantitative investment platform that supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and reinforcement learning. It covers the entire chain of quantitative investment, from alpha seeking to order execution. The platform empowers researchers to explore ideas and implement productions using AI technologies in quantitative investment. Qlib collaboratively solves key challenges in quantitative investment by releasing state-of-the-art research works in various paradigms. It provides a full ML pipeline for data processing, model training, and back-testing, enabling users to perform tasks such as forecasting market patterns, adapting to market dynamics, and modeling continuous investment decisions.

nous
Nous is an open-source TypeScript platform for autonomous AI agents and LLM based workflows. It aims to automate processes, support requests, review code, assist with refactorings, and more. The platform supports various integrations, multiple LLMs/services, CLI and web interface, human-in-the-loop interactions, flexible deployment options, observability with OpenTelemetry tracing, and specific agents for code editing, software engineering, and code review. It offers advanced features like reasoning/planning, memory and function call history, hierarchical task decomposition, and control-loop function calling options. Nous is designed to be a flexible platform for the TypeScript community to expand and support different use cases and integrations.

LLamaSharp
LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) on your local device. Based on llama.cpp, inference with LLamaSharp is efficient on both CPU and GPU. With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.

open-assistant-api
Open Assistant API is an open-source, self-hosted AI intelligent assistant API compatible with the official OpenAI interface. It supports integration with more commercial and private models, R2R RAG engine, internet search, custom functions, built-in tools, code interpreter, multimodal support, LLM support, and message streaming output. Users can deploy the service locally and expand existing features. The API provides user isolation based on tokens for SaaS deployment requirements and allows integration of various tools to enhance its capability to connect with the external world.

arbigent
Arbigent (Arbiter-Agent) is an AI agent testing framework designed to make AI agent testing practical for modern applications. It addresses challenges faced by traditional UI testing frameworks and AI agents by breaking down complex tasks into smaller, dependent scenarios. The framework is customizable for various AI providers, operating systems, and form factors, empowering users with extensive customization capabilities. Arbigent offers an intuitive UI for scenario creation and a powerful code interface for seamless test execution. It supports multiple form factors, optimizes UI for AI interaction, and is cost-effective by utilizing models like GPT-4o mini. With a flexible code interface and open-source nature, Arbigent aims to revolutionize AI agent testing in modern applications.

AgentLab
AgentLab is an open, easy-to-use, and extensible framework designed to accelerate web agent research. It provides features for developing and evaluating agents on various benchmarks supported by BrowserGym. The framework allows for large-scale parallel agent experiments using ray, building blocks for creating agents over BrowserGym, and a unified LLM API for OpenRouter, OpenAI, Azure, or self-hosted using TGI. AgentLab also offers reproducibility features, a unified LeaderBoard, and supports multiple benchmarks like WebArena, WorkArena, WebLinx, VisualWebArena, AssistantBench, GAIA, Mind2Web-live, and MiniWoB.

Reflection_Tuning
Reflection-Tuning is a project focused on improving the quality of instruction-tuning data through a reflection-based method. It introduces Selective Reflection-Tuning, where the student model can decide whether to accept the improvements made by the teacher model. The project aims to generate high-quality instruction-response pairs by defining specific criteria for the oracle model to follow and respond to. It also evaluates the efficacy and relevance of instruction-response pairs using the r-IFD metric. The project provides code for reflection and selection processes, along with data and model weights for both V1 and V2 methods.

AgentBench
AgentBench is a benchmark designed to evaluate Large Language Models (LLMs) as autonomous agents in various environments. It includes 8 distinct environments such as Operating System, Database, Knowledge Graph, Digital Card Game, and Lateral Thinking Puzzles. The tool provides a comprehensive evaluation of LLMs' ability to operate as agents by offering Dev and Test sets for each environment. Users can quickly start using the tool by following the provided steps, configuring the agent, starting task servers, and assigning tasks. AgentBench aims to bridge the gap between LLMs' proficiency as agents and their practical usability.

LLM-Pruner
LLM-Pruner is a tool for structural pruning of large language models, allowing task-agnostic compression while retaining multi-task solving ability. It supports automatic structural pruning of various LLMs with minimal human effort. The tool is efficient, requiring only 3 minutes for pruning and 3 hours for post-training. Supported LLMs include Llama-3.1, Llama-3, Llama-2, LLaMA, BLOOM, Vicuna, and Baichuan. Updates include support for new LLMs like GQA and BLOOM, as well as fine-tuning results achieving high accuracy. The tool provides step-by-step instructions for pruning, post-training, and evaluation, along with a Gradio interface for text generation. Limitations include issues with generating repetitive or nonsensical tokens in compressed models and manual operations for certain models.

OpenLLM
OpenLLM is a platform that helps developers run any open-source Large Language Models (LLMs) as OpenAI-compatible API endpoints, locally and in the cloud. It supports a wide range of LLMs, provides state-of-the-art serving and inference performance, and simplifies cloud deployment via BentoML. Users can fine-tune, serve, deploy, and monitor any LLMs with ease using OpenLLM. The platform also supports various quantization techniques, serving fine-tuning layers, and multiple runtime implementations. OpenLLM seamlessly integrates with other tools like OpenAI Compatible Endpoints, LlamaIndex, LangChain, and Transformers Agents. It offers deployment options through Docker containers, BentoCloud, and provides a community for collaboration and contributions.

codellm-devkit
Codellm-devkit (CLDK) is a Python library that serves as a multilingual program analysis framework bridging traditional static analysis tools and Large Language Models (LLMs) specialized for code (CodeLLMs). It simplifies the process of analyzing codebases across multiple programming languages, enabling the extraction of meaningful insights and facilitating LLM-based code analysis. The library provides a unified interface for integrating outputs from various analysis tools and preparing them for effective use by CodeLLMs. Codellm-devkit aims to enable the development and experimentation of robust analysis pipelines that combine traditional program analysis tools and CodeLLMs, reducing friction in multi-language code analysis and ensuring compatibility across different tools and LLM platforms. It is designed to seamlessly integrate with popular analysis tools like WALA, Tree-sitter, LLVM, and CodeQL, acting as a crucial intermediary layer for efficient communication between these tools and CodeLLMs. The project is continuously evolving to include new tools and frameworks, maintaining its versatility for code analysis and LLM integration.

axar
AXAR AI is a lightweight framework designed for building production-ready agentic applications using TypeScript. It aims to simplify the process of creating robust, production-grade LLM-powered apps by focusing on familiar coding practices without unnecessary abstractions or steep learning curves. The framework provides structured, typed inputs and outputs, familiar and intuitive patterns like dependency injection and decorators, explicit control over agent behavior, real-time logging and monitoring tools, minimalistic design with little overhead, model agnostic compatibility with various AI models, and streamed outputs for fast and accurate results. AXAR AI is ideal for developers working on real-world AI applications who want a tool that gets out of the way and allows them to focus on shipping reliable software.

bee-agent-framework
The Bee Agent Framework is an open-source tool for building, deploying, and serving powerful agentic workflows at scale. It provides AI agents, tools for creating workflows in Javascript/Python, a code interpreter, memory optimization strategies, serialization for pausing/resuming workflows, traceability features, production-level control, and upcoming features like model-agnostic support and a chat UI. The framework offers various modules for agents, llms, memory, tools, caching, errors, adapters, logging, serialization, and more, with a roadmap including MLFlow integration, JSON support, structured outputs, chat client, base agent improvements, guardrails, and evaluation.
For similar tasks

superagent
Superagent is an open-source AI assistant framework and API that allows developers to add powerful AI assistants to their applications. These assistants use large language models (LLMs), retrieval augmented generation (RAG), and generative AI to help users with a variety of tasks, including question answering, chatbot development, content generation, data aggregation, and workflow automation. Superagent is backed by Y Combinator and is part of YC W24.

Awesome-Segment-Anything
Awesome-Segment-Anything is a powerful tool for segmenting and extracting information from various types of data. It provides a user-friendly interface to easily define segmentation rules and apply them to text, images, and other data formats. The tool supports both supervised and unsupervised segmentation methods, allowing users to customize the segmentation process based on their specific needs. With its versatile functionality and intuitive design, Awesome-Segment-Anything is ideal for data analysts, researchers, content creators, and anyone looking to efficiently extract valuable insights from complex datasets.

simpletransformers
Simple Transformers is a library based on the Transformers library by HuggingFace, allowing users to quickly train and evaluate Transformer models with only 3 lines of code. It supports various tasks such as Information Retrieval, Language Models, Encoder Model Training, Sequence Classification, Token Classification, Question Answering, Language Generation, T5 Model, Seq2Seq Tasks, Multi-Modal Classification, and Conversational AI.

smile
Smile (Statistical Machine Intelligence and Learning Engine) is a comprehensive machine learning, NLP, linear algebra, graph, interpolation, and visualization system in Java and Scala. It covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc. Smile implements major machine learning algorithms and provides interactive shells for Java, Scala, and Kotlin. It supports model serialization, data visualization using SmilePlot and declarative approach, and offers a gallery showcasing various algorithms and visualizations.

pgai
pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.

NeMo-Curator
NeMo Curator is a GPU-accelerated open-source framework designed for efficient large language model data curation. It provides scalable dataset preparation for tasks like foundation model pretraining, domain-adaptive pretraining, supervised fine-tuning, and parameter-efficient fine-tuning. The library leverages GPUs with Dask and RAPIDS to accelerate data curation, offering customizable and modular interfaces for pipeline expansion and model convergence. Key features include data download, text extraction, quality filtering, deduplication, downstream-task decontamination, distributed data classification, and PII redaction. NeMo Curator is suitable for curating high-quality datasets for large language model training.

bootcamp_machine-learning
Bootcamp Machine Learning is a one-week program designed by 42 AI to teach the basics of Machine Learning. The curriculum covers topics such as linear algebra, statistics, regression, classification, and regularization. Participants will learn concepts like gradient descent, hypothesis modeling, overfitting detection, logistic regression, and more. The bootcamp is ideal for individuals with prior knowledge of Python who are interested in diving into the field of artificial intelligence.

embedJs
EmbedJs is a NodeJS framework that simplifies RAG application development by efficiently processing unstructured data. It segments data, creates relevant embeddings, and stores them in a vector database for quick retrieval.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.