
pgai
A suite of tools to develop RAG, semantic search, and other AI applications more easily with PostgreSQL
Stars: 4591

pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, and reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. Using the OpenAI-backed functionality requires an OpenAI API key and a PostgreSQL client. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers AI functionality such as tokenizing, detokenizing, embedding, chat completion, and content moderation.
README:
Supercharge your PostgreSQL database with AI capabilities. Supports:
- Automatic creation and synchronization of vector embeddings for your data
- Seamless vector and semantic search
- Retrieval Augmented Generation (RAG) directly in SQL
- Ability to call out to leading LLMs like OpenAI, Ollama, Cohere, and more via SQL.
- Built-in utilities for dataset loading and processing
All with the reliability, scalability, and ACID compliance of PostgreSQL.
See the install via docker guide for docker compose files and detailed container instructions.
Try pgai on cloud by creating a free trial account on Timescale Cloud.
See the install from source guide for instructions on how to install pgai from source.
This section will walk you through the steps to get started with pgai and Ollama using Docker, and show you the major features of pgai.
Please note that using Ollama requires a large (>4GB) download of the docker image and model. If you don't want to download so much data, you may want to use the OpenAI quick start or VoyageAI quick start instead.
- Download the docker compose file:

    curl -O https://raw.githubusercontent.com/timescale/pgai/main/examples/docker_compose_pgai_ollama/docker-compose.yml
- Start the docker compose file:

    docker compose up -d

  This will start Ollama and a PostgreSQL instance with the pgai extension installed.
- Download the Ollama models. We'll use the all-minilm model for embeddings and the tinyllama model for reasoning.

    docker compose exec ollama ollama pull all-minilm
    docker compose exec ollama ollama pull tinyllama
- Connect to the database in your local developer environment. The easiest way to connect is with the following command:

    docker compose exec -it db psql

  Alternatively, you can connect with the following connection string: postgres://postgres:postgres@localhost:5432/postgres
- Enable pgai on your database:

    CREATE EXTENSION IF NOT EXISTS ai CASCADE;
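  As an optional sanity check (not part of the original walkthrough), you can confirm the install by querying the Postgres catalog; the CASCADE should also have pulled in dependencies such as pgvector:

    -- List the ai extension and the pgvector dependency with their versions
    SELECT extname, extversion FROM pg_extension WHERE extname IN ('ai', 'vector');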
- Create a table with the data you want to embed from a Hugging Face dataset. We'll create a table named wiki from a few rows of the English-language wikimedia/wikipedia dataset.

  First, we'll create the table:

    CREATE TABLE wiki (
        id    TEXT PRIMARY KEY,
        url   TEXT,
        title TEXT,
        text  TEXT
    );

  Then, we'll load the data from the Hugging Face dataset:

    SELECT ai.load_dataset('wikimedia/wikipedia', '20231101.en', table_name=>'wiki', batch_size=>5, max_batches=>1, if_table_exists=>'append');

  Related documentation: load dataset from huggingface.
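  As an optional check (not in the original walkthrough), a plain count confirms the load; with batch_size=>5 and max_batches=>1, the call above should load 5 rows:

    -- The data lands in an ordinary table, so ordinary SQL applies
    SELECT count(*) FROM wiki;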
- Create a vectorizer for wiki. To enable semantic search on the wiki table, we need to create vector embeddings for the text column. We use a vectorizer to automatically create these embeddings and keep them in sync with the data in the wiki table.

    SELECT ai.create_vectorizer(
        'wiki'::regclass,
        loading     => ai.loading_column('text'),
        destination => 'wiki_embeddings',
        embedding   => ai.embedding_ollama('all-minilm', 384)
    );

  Related documentation: vectorizer usage guide and vectorizer API reference.
- Check the progress of the vectorizer embedding creation:

    select * from ai.vectorizer_status;

  Output:

  | id | source_table | target_table | view | pending_items |
  |----|--------------|--------------|------|---------------|
  | 1 | public.wiki | public.wiki_embeddings_store | public.wiki_embeddings | 10000 |

  All the embeddings have been created when the pending_items column is 0. This may take a few minutes as the model is running locally and not on a GPU.
- Search the embeddings. We'll search the embeddings for the concept of "properties of light" even though these words are not in the text of the articles. This is possible because vector embeddings capture the semantic meaning of the text.

  Semantic search is a powerful feature in its own right, but it is also a key component of Retrieval Augmented Generation (RAG).

    SELECT title, chunk
    FROM wiki_embeddings
    ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'properties of light')
    LIMIT 1;

  Output:

  | title | chunk |
  |-------|-------|
  | Albedo | Water reflects light very differently from typical terrestrial materials. The reflectivity of a water surface is calculated using the Fresnel equations.... |

  This query selects from the wiki_embeddings view, which is created by the vectorizer. The view joins the embeddings with the original data in the wiki table, so you can search using the embeddings while still being able to access (or filter on) all the data in the original table (e.g. the title column).

  Note that the ai.ollama_embed function is used to call the all-minilm model. This is part of pgai's model calling capabilities.
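  Since ai.ollama_embed is an ordinary SQL function, you can also call it on its own to inspect the raw vector it produces (a quick illustration using the same model as above; all-minilm returns 384 dimensions):

    -- Returns a 384-dimension vector for the input text
    SELECT ai.ollama_embed('all-minilm', 'properties of light');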
- Modify your data and have the vectorizer automatically update the embeddings. We'll add a row about pgai to the wiki table and let the vectorizer update the embeddings automatically.

    INSERT INTO wiki (id, url, title, text) VALUES
    (11, 'https://en.wikipedia.org/wiki/Pgai', 'pgai - Power your AI applications with PostgreSQL', 'pgai is a tool to make developing RAG and other AI applications easier. It makes it simple to give an LLM access to data in your PostgreSQL database by enabling semantic search on your data and using the results as part of the Retrieval Augmented Generation (RAG) pipeline. This allows the LLM to answer questions about your data without needing to be trained on your data.');
And now you don't need to do anything to update the embeddings. The vectorizer will automatically create the embeddings for the new row without any intervention from you. After a few seconds, you can run a search query related to the new entry and see it returned as part of the results:
    SELECT title, chunk
    FROM wiki_embeddings
    ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'AI tools')
    LIMIT 1;
In this section, we'll have the LLM answer questions about pgai based on the wiki entry we added by using RAG. The LLM was never trained on the pgai wiki entry, and so it needs data in the database to answer questions about pgai.
You can perform RAG purely from within the database using SQL, or use a Python script to interact with the database. We often find that SQL makes it easier to build a quick prototype and get started, but as a project matures, people switch to Python for more control and access to Python tooling.
To perform RAG within SQL:
- Define a function to perform RAG.

  We'll create a function that uses RAG to allow an LLM to answer questions about pgai based on the wiki entry we added.

  RAG involves two steps:

  - Perform a similarity search to find the most relevant chunks of data.
  - Use the LLM to generate a response using the relevant chunks as context.

    CREATE OR REPLACE FUNCTION generate_rag_response(query_text TEXT)
    RETURNS TEXT AS $$
    DECLARE
        context_chunks TEXT;
        response JSONB;
    BEGIN
        -- Perform similarity search to find relevant wiki articles
        SELECT string_agg(title || ': ' || chunk, E'\n') INTO context_chunks
        FROM (
            SELECT title, chunk
            FROM wiki_embeddings
            ORDER BY embedding <=> ai.ollama_embed('all-minilm', query_text)
            LIMIT 3
        ) AS relevant_posts;

        RAISE NOTICE 'Context provided to LLM: %', context_chunks;

        -- Generate a response using tinyllama
        SELECT ai.ollama_generate('tinyllama',
            query_text || E'\nUse the following context to respond.\n' || context_chunks)
        INTO response;

        RETURN response->>'response';
    END;
    $$ LANGUAGE plpgsql;
- Use the RAG function to answer questions about the wiki data:

    SELECT generate_rag_response('What can I use pgai for?') as response;

  Output:

  | response |
  |----------|
  | PGAI is a tool that makes it easier for developers to create AI applications by providing access to data in a PostgreSQL database using Semantic Search and answering RAG (Recommendation and Answer Generation) questions. This allows the LLM (Language Model) to answer questions about unseen data without being trained on your data, making it an important tool for building accurate and efficient AI applications. The context suggests that PGAI can be useful in a variety of industries or use cases where data access is critical, such as healthcare, finance, or customer service. |
To perform RAG with Python:
- Install the Python dependencies:

    pip install psycopg pgvector ollama
- Run the Python script to perform RAG:

    import psycopg
    from pgvector.psycopg import register_vector
    from typing import Optional, List
    from ollama import Client
    from dataclasses import dataclass

    @dataclass
    class ChunkData:
        """Represents a chunk of text with its title and content."""
        title: str
        chunk: str

    def create_db_connection() -> psycopg.Connection:
        """Create and return a database connection."""
        conn = psycopg.connect(
            "postgres://postgres:postgres@localhost:5432/postgres"  # Modify connection string as needed for your setup
        )
        register_vector(conn)
        return conn

    def get_embedding(client: Client, text: str) -> list[float]:
        """Get embeddings using Ollama's all-minilm model."""
        response = client.embeddings(model='all-minilm', prompt=text)
        return response['embedding']

    def get_relevant_chunks(cur: psycopg.Cursor, embedding: list[float], limit: int = 1) -> List[ChunkData]:
        """Retrieve the most relevant chunks based on vector similarity.

        Args:
            cur: Database cursor
            embedding: Query embedding vector
            limit: Number of chunks to retrieve

        Returns:
            List of ChunkData objects containing relevant chunks
        """
        query = """
            SELECT title, chunk
            FROM wiki_embeddings
            ORDER BY embedding <=> %s::vector
            LIMIT %s
        """
        cur.execute(query, (embedding, limit))
        return [ChunkData(title=row[0], chunk=row[1]) for row in cur.fetchall()]

    def format_context(chunks: List[ChunkData]) -> str:
        """Format the chunks into a single context string."""
        return "\n\n".join(f"{chunk.title}:\n{chunk.chunk}" for chunk in chunks)

    def generate_rag_response(query_text: str) -> Optional[str]:
        """Generate a RAG response using pgai, Ollama embeddings, and database content."""
        try:
            # Initialize Ollama client
            client = Client(host='http://localhost:11434')

            with create_db_connection() as conn:
                with conn.cursor() as cur:
                    # Get embeddings for the query using the Ollama SDK
                    query_embedding = get_embedding(client, query_text)

                    # Get relevant chunks and format them as context
                    relevant_chunks = get_relevant_chunks(cur, query_embedding)
                    context = format_context(relevant_chunks)

                    # Print context for debugging (optional)
                    print("Context provided to LLM:")
                    print("------------------------")
                    print(context)
                    print("------------------------")

                    # Construct prompt with context
                    prompt = (
                        f"Question: {query_text}\n\n"
                        "Please use the following context to provide an accurate response:\n\n"
                        f"{context}\n\n"
                        "Answer:"
                    )

                    # Generate response using the Ollama SDK
                    response = client.generate(
                        model='tinyllama',
                        prompt=prompt,
                        stream=False
                    )
                    return response['response']
        except Exception as e:
            print(f"Error generating RAG response: {e}")
            return None

    def main():
        # Example usage
        questions = [
            "What can I use pgai for?",
        ]
        for question in questions:
            print("\n" + "="*50)
            print(f"Question: {question}")
            print("-"*50)

            response = generate_rag_response(question)
            if response:
                print("\nResponse:")
                print(response)
            else:
                print("Failed to generate response")

    if __name__ == "__main__":
        main()
- Generate a summary of the article in the database. We'll generate a summary of the search results using the ai.ollama_generate function (this will take a few minutes).

    SELECT answer->>'response' AS summary
    FROM ai.ollama_generate('tinyllama',
        'Summarize the following and output the summary in a single sentence: ' || (SELECT text FROM wiki WHERE title LIKE 'pgai%')) AS answer;

  Output:

  | summary |
  |---------|
  | Pgai is a tool that simplifies the process of making AI applications easier by providing easy access to data in PostgreSQL and enabling semantic search on the data for the Retrieval Augmented Generation (RAG) pipeline. This allows the AI system to answer questions about unseen data without being trained on it, simplifying the entire process. |

  This is just one example of model calling capabilities. Model calling can be used for a variety of tasks, including classification, summarization, moderation, and other forms of data enrichment.
- Automatically create and sync vector embeddings for your data (learn more)
- Search your data using vector and semantic search (learn more)
- Implement Retrieval Augmented Generation inside a single SQL statement (learn more)
- Perform high-performance, cost-efficient ANN search on large vector workloads with pgvectorscale, which complements pgvector.
- Retrieve LLM chat completions from models like Claude 3.5 Sonnet, OpenAI GPT-4o, Cohere Command, and Llama 3 (via Ollama). (learn more)
- Reason over your data and facilitate use cases like classification, summarization, and data enrichment on your existing relational data in PostgreSQL (see an example).
- Load datasets from Hugging Face into your database with ai.load_dataset.
- Use chunking algorithms to split text with SQL functions (a short sketch follows this list).
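The chunking configuration lives on the vectorizer. Below is a minimal sketch of what that looks like, assuming the recursive character splitter accepts chunk_size and chunk_overlap arguments; the parameter values and the destination table name are illustrative, so check the vectorizer API reference for the exact signature:

    -- Sketch: a vectorizer that splits the text column into overlapping chunks
    -- before embedding. chunk_size/chunk_overlap values are illustrative.
    SELECT ai.create_vectorizer(
        'wiki'::regclass,
        loading     => ai.loading_column('text'),
        chunking    => ai.chunking_recursive_character_text_splitter(chunk_size => 800, chunk_overlap => 400),
        destination => 'wiki_chunked_embeddings',  -- hypothetical destination table name
        embedding   => ai.embedding_ollama('all-minilm', 384)
    );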
- Vector Databases Are the Wrong Abstraction
- pgai: Giving PostgreSQL Developers AI Engineering Superpowers
- How to Automatically Create & Update Embeddings in PostgreSQL—With One SQL Query
- [video] Auto Create and Sync Vector Embeddings in 1 Line of SQL
- Which OpenAI Embedding Model Is Best for Your RAG App With Pgvector?
- Which RAG Chunking and Formatting Strategy Is Best for Your App With Pgvector
- Parsing All the Data With Open-Source Tools: Unstructured and Pgai
- In-Database AI Agents: Teaching Claude to Use Tools With Pgai
- Build Search and RAG Systems on PostgreSQL Using Cohere and Pgai
- Use Open-Source LLMs in PostgreSQL With Ollama and Pgai
We welcome contributions to pgai! See the Contributing page for more information.
- Automatically create and sync vector embeddings for your data
- Search your data using vector and semantic search
- Implement Retrieval Augmented Generation inside a single SQL statement
The pgvector and pgvectorscale extensions allow you to store vector embeddings in your database and perform fast and efficient vector search. The pgai Vectorizer builds on top of these extensions to automatically create and synchronize embeddings for any text data in your database.
With one line of code, you can define a vectorizer that creates embeddings for data in a table:
SELECT ai.create_vectorizer(
<table_name>::regclass,
destination => <embedding_table_name>,
loading => ai.loading_column(<column_name>),
embedding => ai.embedding_ollama(<model_name>, <dimensions>)
);
This newly created vectorizer will automatically track any changes to the data in the source table and update the destination embedding table with the new embeddings asynchronously.
Automate AI embedding with pgai Vectorizer shows you how to implement embeddings in your own data. On a self-hosted Postgres installation, you use a Vectorizer Worker to asynchronously process your vectorizers. When you create vectorizers in a Timescale Cloud database, embeddings are automatically created and synchronized in the background.
Note: Timescale Cloud currently supports embedding natively with OpenAI. To use Ollama on the data in your Timescale Cloud service, set scheduling => ai.scheduling_none() in the configuration for your service, then install the vectorizer worker locally and configure it to connect to your Timescale Cloud service.
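For example, a vectorizer created on a Timescale Cloud service but processed by a local worker would look something like this (a sketch combining the note above with the quick-start vectorizer; all other arguments are as before):

    SELECT ai.create_vectorizer(
        'wiki'::regclass,
        loading     => ai.loading_column('text'),
        destination => 'wiki_embeddings',
        embedding   => ai.embedding_ollama('all-minilm', 384),
        scheduling  => ai.scheduling_none()  -- disable cloud scheduling; a local vectorizer worker processes this instead
    );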
pgai exposes a set of functions to directly interact with the LLM models through SQL, enabling you to do semantic search directly in your database:
SELECT
chunk,
embedding <=> ai.ollama_embed(<embedding_model>, 'some-query') as distance
FROM <embedding_table>
ORDER BY distance
LIMIT 5;
This is a perfectly normal SQL query. You can combine it with WHERE clauses and other SQL features to further refine your search. pgai solves the "missing WHERE clause in vector search" problem for real.
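For example, reusing the quick-start wiki_embeddings view, an ordinary predicate narrows the candidate set while the vector ordering ranks what remains (the title filter here is illustrative):

    SELECT title, chunk
    FROM wiki_embeddings
    WHERE title ILIKE '%light%'  -- plain SQL filter alongside vector search
    ORDER BY embedding <=> ai.ollama_embed('all-minilm', 'properties of light')
    LIMIT 5;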
Similar to semantic search, pgai LLM functions enable you to implement RAG directly in your database. For example:
- Create a RAG function:

    CREATE OR REPLACE FUNCTION generate_rag_response(query_text TEXT)
    RETURNS TEXT AS $$
    DECLARE
        context_chunks TEXT;
        response TEXT;
    BEGIN
        -- Perform similarity search to find relevant blog posts
        SELECT string_agg(title || ': ' || chunk, E'\n') INTO context_chunks
        FROM (
            SELECT title, chunk
            FROM blogs_embedding
            ORDER BY embedding <=> ai.ollama_embed('nomic-embed-text', query_text)
            LIMIT 3
        ) AS relevant_posts;

        -- Generate a response using llama3
        SELECT ai.ollama_chat_complete(
            'llama3',
            jsonb_build_array(
                jsonb_build_object('role', 'system', 'content', 'you are a helpful assistant'),
                jsonb_build_object('role', 'user',
                    'content', query_text || E'\nUse the following context to respond.\n' || context_chunks)
            )
        )->'message'->>'content' INTO response;

        RETURN response;
    END;
    $$ LANGUAGE plpgsql;
- Execute your function in a SQL query:

    SELECT generate_rag_response('Give me some startup advice');
Model calling is a feature of pgai that allows you to call LLM models from SQL. This lets you leverage the power of LLMs for a variety of tasks, including classification, summarization, moderation, and other forms of data enrichment.
The following models are supported (click on the model to learn more):
| Model | Tokenize | Embed | Chat Complete | Generate | Moderate | Classify | Rerank |
|---|---|---|---|---|---|---|---|
| Ollama | | ✔️ | ✔️ | ✔️ | | | |
| OpenAI | ✔️ | ✔️ | ✔️ | | ✔️ | | |
| Anthropic | | | | ✔️ | | | |
| Cohere | ✔️ | ✔️ | ✔️ | | | ✔️ | ✔️ |
| Voyage AI | | ✔️ | | | | | |
| Huggingface (with LiteLLM) | | ✔️ | | | | | |
| Mistral (with LiteLLM) | | ✔️ | | | | | |
| Azure OpenAI (with LiteLLM) | | ✔️ | | | | | |
| AWS Bedrock (with LiteLLM) | | ✔️ | | | | | |
| Vertex AI (with LiteLLM) | | ✔️ | | | | | |
Some examples (a short sketch follows this list):
- Learn how to moderate content directly in the database using triggers and background jobs.
- Load datasets directly from Hugging Face into your database.
- Leverage LLMs for data processing tasks such as classification, summarization, and data enrichment (see the OpenAI example).
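To give a flavor of the moderation example, here is a minimal sketch: ai.openai_moderate wraps OpenAI's moderation endpoint and returns the API's JSON response; the model name below is an assumption, so substitute whichever moderation model your account supports:

    -- Returns true if OpenAI's moderation endpoint flags the text
    SELECT ai.openai_moderate(
        'omni-moderation-latest',  -- assumed model name
        'I want to hurt them.'
    )->'results'->0->>'flagged' AS flagged;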
pgai is still at an early stage. Now is a great time to help shape the direction of this project; we are currently deciding priorities. Have a look at the list of features we're thinking of working on. Feel free to comment, expand the list, or hop on the Discussions forum.
To get started, take a look at how to contribute and how to set up a dev/test environment.
Timescale is a PostgreSQL database company. To learn more, visit timescale.com.
Timescale Cloud is a high-performance, developer-focused cloud platform that provides PostgreSQL services for the most demanding AI, time-series, analytics, and event workloads. Timescale Cloud is ideal for production applications and provides high availability, streaming backups, upgrades over time, roles and permissions, and great security.
Similar Open Source Tools

pgai
pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.

Trace
Trace is a new AutoDiff-like tool for training AI systems end-to-end with general feedback. It generalizes the back-propagation algorithm by capturing and propagating an AI system's execution trace. Implemented as a PyTorch-like Python library, users can write Python code directly and use Trace primitives to optimize certain parts, similar to training neural networks.

basiclingua-LLM-Based-NLP
BasicLingua is a Python library that provides functionalities for linguistic tasks such as tokenization, stemming, lemmatization, and many others. It is based on the Gemini Language Model, which has demonstrated promising results in dealing with text data. BasicLingua can be used as an API or through a web demo. It is available under the MIT license and can be used in various projects.

discord-llm-chatbot
llmcord.py enables collaborative LLM prompting in your Discord server. It works with practically any LLM, remote or locally hosted. Features:
- Reply-based chat system: just @ the bot to start a conversation and reply to continue. Build conversations with reply chains! You can build conversations together with your friends, "rewind" a conversation simply by replying to an older message, or @ the bot while replying to any message in your server to ask a question about it. Back-to-back messages from the same user are automatically chained together; just reply to the latest one and the bot will see all of them. You can also seamlessly move any conversation into a thread: create a thread from any message and @ the bot inside to continue.
- Choose any LLM: supports remote models from OpenAI API, Mistral API, Anthropic API and many more thanks to LiteLLM, or run a local model with ollama, oobabooga, Jan, LM Studio or any other OpenAI compatible API server.
- And more: supports image attachments when using a vision model, customizable system prompt, DM for private access (no @ required), user identity awareness (OpenAI API only), streamed responses (turns green when complete, automatically splits into separate messages when too long, throttled to prevent Discord ratelimiting), and helpful user warnings when appropriate (like "Only using last 20 messages" or "Max 5 images per message"). Caches message data in a size-managed (no memory leaks) and per-message mutex-protected (no race conditions) global dictionary to maximize efficiency and minimize Discord API calls. Fully asynchronous. 1 Python file, ~200 lines of code.

premsql
PremSQL is an open-source library designed to help developers create secure, fully local Text-to-SQL solutions using small language models. It provides essential tools for building and deploying end-to-end Text-to-SQL pipelines with customizable components, ideal for secure, autonomous AI-powered data analysis. The library offers features like Local-First approach, Customizable Datasets, Robust Executors and Evaluators, Advanced Generators, Error Handling and Self-Correction, Fine-Tuning Support, and End-to-End Pipelines. Users can fine-tune models, generate SQL queries from natural language inputs, handle errors, and evaluate model performance against predefined metrics. PremSQL is extendible for customization and private data usage.

LLamaSharp
LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) on your local device. Based on llama.cpp, inference with LLamaSharp is efficient on both CPU and GPU. With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.

evidently
Evidently is an open-source Python library designed for evaluating, testing, and monitoring machine learning (ML) and large language model (LLM) powered systems. It offers a wide range of functionalities, including working with tabular, text data, and embeddings, supporting predictive and generative systems, providing over 100 built-in metrics for data drift detection and LLM evaluation, allowing for custom metrics and tests, enabling both offline evaluations and live monitoring, and offering an open architecture for easy data export and integration with existing tools. Users can utilize Evidently for one-off evaluations using Reports or Test Suites in Python, or opt for real-time monitoring through the Dashboard service.

marly
Marly is a tool that allows users to search for and extract context-specific data from various types of documents such as PDFs, Word files, Powerpoints, and websites. It provides the ability to extract data in structured formats like JSON or Markdown, making it easy to integrate into workflows. Marly supports multi-schema and multi-document extraction, offers built-in caching for rapid repeat extractions, and ensures no vendor lock-in by allowing flexibility in choosing model providers.

LazyLLM
LazyLLM is a low-code development tool for building complex AI applications with multiple agents. It assists developers in building AI applications at a low cost and continuously optimizing their performance. The tool provides a convenient workflow for application development and offers standard processes and tools for various stages of application development. Users can quickly prototype applications with LazyLLM, analyze bad cases with scenario task data, and iteratively optimize key components to enhance the overall application performance. LazyLLM aims to simplify the AI application development process and provide flexibility for both beginners and experts to create high-quality applications.

OpenAdapt
OpenAdapt is an open-source software adapter between Large Multimodal Models (LMMs) and traditional desktop and web Graphical User Interfaces (GUIs). It aims to automate repetitive GUI workflows by leveraging the power of LMMs. OpenAdapt records user input and screenshots, converts them into tokenized format, and generates synthetic input via transformer model completions. It also analyzes recordings to generate task trees and replay synthetic input to complete tasks. OpenAdapt is model agnostic and generates prompts automatically by learning from human demonstration, ensuring that agents are grounded in existing processes and mitigating hallucinations. It works with all types of desktop GUIs, including virtualized and web, and is open source under the MIT license.

SheetCopilot
SheetCopilot is an assistant agent that manipulates spreadsheets by following user commands. It leverages Large Language Models (LLMs) to interact with spreadsheets like a human expert, enabling non-expert users to complete tasks on complex software such as Google Sheets and Excel via a language interface. The tool observes spreadsheet states, polishes generated solutions based on external action documents and error feedback, and aims to improve success rate and efficiency. SheetCopilot offers a dataset with diverse task categories and operations, supporting operations like entry & manipulation, management, formatting, charts, and pivot tables. Users can interact with SheetCopilot in Excel or Google Sheets, executing tasks like calculating revenue, creating pivot tables, and plotting charts. The tool's evaluation includes performance comparisons with leading LLMs and VBA-based methods on specific datasets, showcasing its capabilities in controlling various aspects of a spreadsheet.

EasyInstruct
EasyInstruct is a Python package proposed as an easy-to-use instruction processing framework for Large Language Models (LLMs) like GPT-4, LLaMA, ChatGLM in your research experiments. EasyInstruct modularizes instruction generation, selection, and prompting, while also considering their combination and interaction.

LLM-Pruner
LLM-Pruner is a tool for structural pruning of large language models, allowing task-agnostic compression while retaining multi-task solving ability. It supports automatic structural pruning of various LLMs with minimal human effort. The tool is efficient, requiring only 3 minutes for pruning and 3 hours for post-training. Supported LLMs include Llama-3.1, Llama-3, Llama-2, LLaMA, BLOOM, Vicuna, and Baichuan. Updates include support for new LLMs like GQA and BLOOM, as well as fine-tuning results achieving high accuracy. The tool provides step-by-step instructions for pruning, post-training, and evaluation, along with a Gradio interface for text generation. Limitations include issues with generating repetitive or nonsensical tokens in compressed models and manual operations for certain models.

job-llm
ResumeFlow is an automated system utilizing Large Language Models (LLMs) to streamline the job application process. It aims to reduce human effort in various steps of job hunting by integrating LLM technology. Users can access ResumeFlow as a web tool, install it as a Python package, or download the source code. The project focuses on leveraging LLMs to automate tasks such as resume generation and refinement, making job applications smoother and more efficient.

UFO
UFO is a UI-focused dual-agent framework to fulfill user requests on Windows OS by seamlessly navigating and operating within individual or spanning multiple applications.

sec-parser
The `sec-parser` project simplifies extracting meaningful information from SEC EDGAR HTML documents by organizing them into semantic elements and a tree structure. It helps in parsing SEC filings for financial and regulatory analysis, analytics and data science, AI and machine learning, causal AI, and large language models. The tool is especially beneficial for AI, ML, and LLM applications by streamlining data pre-processing and feature extraction.
For similar tasks

superagent
Superagent is an open-source AI assistant framework and API that allows developers to add powerful AI assistants to their applications. These assistants use large language models (LLMs), retrieval augmented generation (RAG), and generative AI to help users with a variety of tasks, including question answering, chatbot development, content generation, data aggregation, and workflow automation. Superagent is backed by Y Combinator and is part of YC W24.

Awesome-Segment-Anything
Awesome-Segment-Anything is a powerful tool for segmenting and extracting information from various types of data. It provides a user-friendly interface to easily define segmentation rules and apply them to text, images, and other data formats. The tool supports both supervised and unsupervised segmentation methods, allowing users to customize the segmentation process based on their specific needs. With its versatile functionality and intuitive design, Awesome-Segment-Anything is ideal for data analysts, researchers, content creators, and anyone looking to efficiently extract valuable insights from complex datasets.

simpletransformers
Simple Transformers is a library based on the Transformers library by HuggingFace, allowing users to quickly train and evaluate Transformer models with only 3 lines of code. It supports various tasks such as Information Retrieval, Language Models, Encoder Model Training, Sequence Classification, Token Classification, Question Answering, Language Generation, T5 Model, Seq2Seq Tasks, Multi-Modal Classification, and Conversational AI.

smile
Smile (Statistical Machine Intelligence and Learning Engine) is a comprehensive machine learning, NLP, linear algebra, graph, interpolation, and visualization system in Java and Scala. It covers every aspect of machine learning, including classification, regression, clustering, association rule mining, feature selection, manifold learning, multidimensional scaling, genetic algorithms, missing value imputation, efficient nearest neighbor search, etc. Smile implements major machine learning algorithms and provides interactive shells for Java, Scala, and Kotlin. It supports model serialization, data visualization using SmilePlot and declarative approach, and offers a gallery showcasing various algorithms and visualizations.

pgai
pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.

NeMo-Curator
NeMo Curator is a GPU-accelerated open-source framework designed for efficient large language model data curation. It provides scalable dataset preparation for tasks like foundation model pretraining, domain-adaptive pretraining, supervised fine-tuning, and parameter-efficient fine-tuning. The library leverages GPUs with Dask and RAPIDS to accelerate data curation, offering customizable and modular interfaces for pipeline expansion and model convergence. Key features include data download, text extraction, quality filtering, deduplication, downstream-task decontamination, distributed data classification, and PII redaction. NeMo Curator is suitable for curating high-quality datasets for large language model training.

bootcamp_machine-learning
Bootcamp Machine Learning is a one-week program designed by 42 AI to teach the basics of Machine Learning. The curriculum covers topics such as linear algebra, statistics, regression, classification, and regularization. Participants will learn concepts like gradient descent, hypothesis modeling, overfitting detection, logistic regression, and more. The bootcamp is ideal for individuals with prior knowledge of Python who are interested in diving into the field of artificial intelligence.

geoai
geoai is a Python package designed for utilizing Artificial Intelligence (AI) in the context of geospatial data. It allows users to visualize various types of geospatial data such as vector, raster, and LiDAR data. Additionally, the package offers functionalities for segmenting remote sensing imagery using the Segment Anything Model and classifying remote sensing imagery with deep learning models. With a focus on geospatial AI applications, geoai provides a versatile tool for processing and analyzing spatial data with the power of AI.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.