spiceai
A portable accelerated data query and LLM-inference engine, written in Rust, for data-grounded AI apps and agents.
Stars: 2025
Spice is a portable runtime written in Rust that offers developers a unified SQL interface to materialize, accelerate, and query data from any database, data warehouse, or data lake. It connects, fuses, and delivers data to applications, machine-learning models, and AI backends, functioning as an application-specific, tier-optimized Database CDN. It is built with industry-leading technologies such as Apache DataFusion, Apache Arrow, Apache Arrow Flight, SQLite, and DuckDB. Spice makes it fast and easy to query data from one or more sources using SQL, co-locating a managed dataset with applications or machine-learning models and accelerating it with in-memory Arrow, SQLite/DuckDB, or attached PostgreSQL for fast, high-concurrency, low-latency queries.
README:
Spice is a SQL query and AI compute engine, written in Rust, for data-driven apps and agents.
Spice provides three industry standard APIs in a lightweight, portable runtime (single ~140 MB binary):
- SQL Query APIs: Arrow Flight, Arrow Flight SQL, ODBC, JDBC, and ADBC.
- OpenAI-Compatible APIs: HTTP APIs compatible with the OpenAI SDK and AI SDK, with local model serving (CUDA/Metal accelerated) and a gateway to hosted models.
- Iceberg Catalog REST APIs: A unified Iceberg Catalog API.
🎯 Goal: Developers can focus on building data apps and AI agents confidently, knowing they are grounded in data.
Spice is primarily used for:
- Data Federation: SQL query across any database, data warehouse, or data lake. Learn More.
- Data Materialization and Acceleration: Materialize, accelerate, and cache database queries. Read the MaterializedView interview - Building a CDN for Databases
- AI apps and agents: An AI-database powering retrieval-augmented generation (RAG) and intelligent agents. Learn More.
If you want to build with DataFusion or DuckDB, Spice provides a simple, flexible, and production-ready engine you can just use.
📣 Read the Spice.ai OSS announcement blog post.
Spice is built on industry-leading technologies including Apache DataFusion, Apache Arrow, Arrow Flight, SQLite, and DuckDB.
🎥 Watch the CMU Databases talk: Accelerating Data and AI with Spice.ai Open-Source.
Spice simplifies building data-driven AI applications and agents by making it fast and easy to query, federate, and accelerate data from one or more sources using SQL, while grounding AI in real-time, reliable data. Co-locate datasets with apps and AI models to power AI feedback loops, enable RAG and search, and deliver fast, low-latency data-query and AI-inference with full control over cost and performance.
- AI-Native Runtime: Spice combines data query and AI inference in a single engine, for data-grounded, accurate AI.
- Application-Focused: Designed to run distributed at the application and agent level, often as a 1:1 or 1:N mapping between app and Spice instance, unlike traditional data systems built for many apps on one centralized database. It's common to spin up multiple Spice instances, even one per tenant or customer.
- Dual-Engine Acceleration: Supports both OLAP (Arrow/DuckDB) and OLTP (SQLite/PostgreSQL) engines at the dataset level, providing flexible performance across analytical and transactional workloads. See the example spicepod after this list.
- Disaggregated Storage: Separates compute from disaggregated storage, co-locating local, materialized working sets of data with applications, dashboards, or ML pipelines while accessing source data in its original storage.
- Edge to Cloud Native: Deploy as a standalone instance, Kubernetes sidecar, microservice, or cluster across edge/POP, on-prem, and public clouds. Chain multiple Spice instances for tier-optimized, distributed deployments.
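A minimal sketch of dual-engine acceleration in a spicepod (the sources, table names, and omitted connection parameters are hypothetical, not defaults):

```yaml
version: v1
kind: Spicepod
name: dual_engine_example
datasets:
  # Analytical dataset: OLAP acceleration with embedded DuckDB, persisted to a file
  - from: s3://my-bucket/events/ # hypothetical source
    name: events
    acceleration:
      enabled: true
      engine: duckdb
      mode: file
  # Transactional-style dataset: OLTP acceleration with embedded SQLite
  - from: postgres:public.orders # hypothetical source; connection params omitted
    name: orders
    acceleration:
      enabled: true
      engine: sqlite
      mode: memory
```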
Feature | Spice | Trino / Presto | Dremio | ClickHouse | Materialize |
---|---|---|---|---|---|
Primary Use-Case | Data & AI apps/agents | Big data analytics | Interactive analytics | Real-time analytics | Real-time analytics |
Primary deployment model | Sidecar | Cluster | Cluster | Cluster | Cluster |
Federated Query Support | ✅ | ✅ | ✅ | ❌ | ❌ |
Acceleration/Materialization | ✅ (Arrow, SQLite, DuckDB, PostgreSQL) | Intermediate storage | Reflections (Iceberg) | Materialized views | ✅ (Real-time views) |
Catalog Support | ✅ (Iceberg, Unity Catalog) | ✅ | ✅ | ❌ | ❌ |
Query Result Caching | ✅ | ❌ | ✅ | ✅ | Limited |
Multi-Modal Acceleration | ✅ (OLAP + OLTP) | ❌ | ❌ | ❌ | ❌ |
Change Data Capture (CDC) | ✅ (Debezium) | ❌ | ❌ | ❌ | ✅ (Debezium) |
Feature | Spice | LangChain | LlamaIndex | AgentOps.ai | Ollama |
---|---|---|---|---|---|
Primary Use-Case | Data & AI apps | Agentic workflows | RAG apps | Agent operations | LLM apps |
Programming Language | Any language (HTTP interface) | JavaScript, Python | Python | Python | Any language (HTTP interface) |
Unified Data + AI Runtime | ✅ | ❌ | ❌ | ❌ | ❌ |
Federated Data Query | ✅ | ❌ | ❌ | ❌ | ❌ |
Accelerated Data Access | ✅ | ❌ | ❌ | ❌ | ❌ |
Tools/Functions | ✅ | ✅ | ✅ | Limited | Limited |
LLM Memory | ✅ | ✅ | ❌ | ✅ | ❌ |
Evaluations (Evals) | ✅ | Limited | ❌ | Limited | ❌ |
Search | ✅ (VSS) | ✅ | ✅ | Limited | Limited |
Caching | ✅ (Query and results caching) | Limited | ❌ | ❌ | ❌ |
Embeddings | ✅ (Built-in & pluggable models/DBs) | ✅ | ✅ | Limited | ❌ |
✅ = Fully supported ❌ = Not supported Limited = Partial or restricted support
- OpenAI-compatible API: Connect to hosted models (OpenAI, Anthropic, xAI) or deploy locally (Llama, NVIDIA NIM); see the example spicepod after this list. AI Gateway Recipe
- Federated Data Access: Query using SQL and NSQL (text-to-SQL) across databases, data warehouses, and data lakes with advanced query push-down for fast retrieval across disparate data sources. Federated SQL Query Recipe
- Search and RAG: Search and retrieve context with accelerated embeddings for retrieval-augmented generation (RAG) workflows. Vector Search over GitHub Files
- LLM Memory and Observability: Store and retrieve history and context for AI agents while gaining deep visibility into data flows, model performance, and traces. LLM Memory Recipe | Monitoring Features Documentation
- Data Acceleration: Co-locate materialized datasets in Arrow, SQLite, and DuckDB with applications for sub-second queries. DuckDB Data Accelerator Recipe
- Resiliency and Local Dataset Replication: Maintain application availability with local replicas of critical datasets. Local Dataset Replication Recipe
- Responsive Dashboards: Enable fast, real-time analytics by accelerating data for frontends and BI tools. Sales BI Dashboard Demo
- Simplified Legacy Migration: Use a single endpoint to unify legacy systems with modern infrastructure, including federated SQL querying across multiple sources. Federated SQL Query Recipe
- Unified Search with Vector Similarity: Perform efficient vector similarity search across structured and unstructured data sources. Vector Search over GitHub Files
- Semantic Knowledge Layer: Define a semantic context model to enrich data for AI. Semantic Model Feature Documentation
- Text-to-SQL: Convert natural language queries into SQL using built-in NSQL and sampling tools for accurate queries. Text-to-SQL Recipe
- Model and Data Evaluations: Assess model performance and data quality with integrated evaluation tools. Language Model Evaluations Recipe
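To make a few of these concrete, here is a hypothetical spicepod sketch (the model IDs, secret names, and dataset source are assumptions, not defaults) registering a hosted model behind the OpenAI-compatible API, an embedding model, and an accelerated dataset with an embedded column for vector search:

```yaml
version: v1
kind: Spicepod
name: ai_app_example
models:
  # Served through Spice's OpenAI-compatible HTTP API
  - name: gpt-4o
    from: openai:gpt-4o
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
embeddings:
  - name: embed
    from: openai:text-embedding-3-small
    params:
      openai_api_key: ${ secrets:OPENAI_API_KEY }
datasets:
  - from: github:github.com/spiceai/spiceai/files/trunk # hypothetical source
    name: files
    params:
      github_token: ${ secrets:GITHUB_TOKEN }
    acceleration:
      enabled: true
    columns:
      - name: content
        embeddings:
          - from: embed # enables vector similarity search over this column
```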
- Is Spice a cache? Not specifically; you can think of Spice data acceleration as an active cache, materialization, or data prefetcher. A cache fetches data on a cache miss, while Spice prefetches and materializes filtered data on an interval, on a trigger, or as data changes using CDC. In addition to acceleration, Spice supports results caching. A refresh-configuration sketch follows this list.
- Is Spice a CDN for databases? Yes, a common use-case for Spice is as a CDN for different data sources. Using CDN concepts, Spice enables you to ship (load) a working set of your database (or data lake, or data warehouse) to where it's most frequently accessed, such as a data-intensive application or AI context.
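A sketch of how that prefetching is configured per dataset (the source and values are illustrative assumptions):

```yaml
datasets:
  - from: s3://my-bucket/taxi_trips/ # hypothetical source
    name: taxi_trips
    acceleration:
      enabled: true
      engine: duckdb
      refresh_check_interval: 10s # prefetch on an interval
      refresh_mode: full # or `append`; `changes` applies CDC updates
      # refresh_sql materializes only a filtered working set
      refresh_sql: |
        SELECT * FROM taxi_trips WHERE trip_distance > 1.0
```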
https://github.com/spiceai/spiceai/assets/80174/7735ee94-3f4a-4983-a98e-fe766e79e03a
See more demos on YouTube.
Name | Description | Status | Protocol/Format |
---|---|---|---|
`databricks` (mode: delta_lake) | Databricks | Stable | S3/Delta Lake |
`delta_lake` | Delta Lake | Stable | Delta Lake |
`dremio` | Dremio | Stable | Arrow Flight |
`duckdb` | DuckDB | Stable | Embedded |
`file` | File | Stable | Parquet, CSV |
`github` | GitHub | Stable | GitHub API |
`postgres` | PostgreSQL | Stable | |
`s3` | S3 | Stable | Parquet, CSV |
`mysql` | MySQL | Stable | |
`graphql` | GraphQL | Release Candidate | JSON |
`databricks` (mode: spark_connect) | Databricks | Beta | Spark Connect |
`flightsql` | FlightSQL | Beta | Arrow Flight SQL |
`iceberg` | Apache Iceberg | Beta | Parquet |
`mssql` | Microsoft SQL Server | Beta | Tabular Data Stream (TDS) |
`odbc` | ODBC | Beta | ODBC |
`snowflake` | Snowflake | Beta | Arrow |
`spark` | Spark | Beta | Spark Connect |
`spice.ai` | Spice.ai | Beta | Arrow Flight |
`abfs` | Azure BlobFS | Alpha | Parquet, CSV |
`clickhouse` | Clickhouse | Alpha | |
`debezium` | Debezium CDC | Alpha | Kafka + JSON |
`dynamodb` | Amazon DynamoDB | Alpha | |
`ftp`, `sftp` | FTP/SFTP | Alpha | Parquet, CSV |
`http`, `https` | HTTP(s) | Alpha | Parquet, CSV |
`localpod` | Local dataset replication | Alpha | |
`sharepoint` | Microsoft SharePoint | Alpha | Unstructured UTF-8 documents |
`mongodb` | MongoDB | Coming Soon | |
`elasticsearch` | ElasticSearch | Roadmap | |
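A connector is selected by the `from:` prefix of a dataset definition. As an illustrative sketch (the host, database, table, and secret names are hypothetical):

```yaml
datasets:
  - from: postgres:public.customers # `postgres` connector, hypothetical table
    name: customers
    params:
      pg_host: db.internal.example.com
      pg_port: "5432"
      pg_db: appdb
      pg_user: spice
      pg_pass: ${ secrets:PG_PASS }
```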
Name | Description | Status | Engine Modes |
---|---|---|---|
`arrow` | In-Memory Arrow Records | Stable | `memory` |
`duckdb` | Embedded DuckDB | Stable | `memory`, `file` |
`postgres` | Attached PostgreSQL | Release Candidate | N/A |
`sqlite` | Embedded SQLite | Release Candidate | `memory`, `file` |
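The embedded engines need no connection settings; attaching PostgreSQL as an accelerator takes connection parameters. A hypothetical sketch (host, database, and secret names are assumptions):

```yaml
datasets:
  - from: s3://my-bucket/orders/ # hypothetical source
    name: orders
    acceleration:
      enabled: true
      engine: postgres # attached accelerator running externally
      params:
        pg_host: localhost
        pg_port: "5432"
        pg_db: acceleration
        pg_user: spice
        pg_pass: ${ secrets:PG_PASS }
```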
Model providers:

Name | Description | Status | ML Format(s) | LLM Format(s) |
---|---|---|---|---|
`openai` | OpenAI (or compatible) LLM endpoint | Release Candidate | - | OpenAI-compatible HTTP endpoint |
`file` | Local filesystem | Beta | ONNX | GGUF, GGML, SafeTensor |
`huggingface` | Models hosted on HuggingFace | Beta | ONNX | GGUF, GGML, SafeTensor |
`spice.ai` | Models hosted on the Spice.ai Cloud Platform | | ONNX | OpenAI-compatible HTTP endpoint |
`azure` | Azure OpenAI | | - | OpenAI-compatible HTTP endpoint |
`anthropic` | Models hosted on Anthropic | Alpha | - | OpenAI-compatible HTTP endpoint |
`xai` | Models hosted on xAI | Alpha | - | OpenAI-compatible HTTP endpoint |
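A sketch of loading a local or HuggingFace-hosted model (the file path and model IDs are illustrative assumptions):

```yaml
models:
  # Hypothetical local GGUF file served via the `file` provider
  - name: local-llama
    from: file://models/llama-3.2-1b-instruct-q4.gguf
  # Model pulled from HuggingFace
  - name: phi3
    from: huggingface:huggingface.co/microsoft/Phi-3-mini-4k-instruct
```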
Embedding providers:

Name | Description | Status | ML Format(s) | LLM Format(s)* |
---|---|---|---|---|
`openai` | OpenAI (or compatible) LLM endpoint | Release Candidate | - | OpenAI-compatible HTTP endpoint |
`file` | Local filesystem | Release Candidate | ONNX | GGUF, GGML, SafeTensor |
`huggingface` | Models hosted on HuggingFace | Release Candidate | ONNX | GGUF, GGML, SafeTensor |
`azure` | Azure OpenAI | Alpha | - | OpenAI-compatible HTTP endpoint |
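A sketch of defining an embedding model (the model ID is an illustrative assumption); embeddings can then be referenced by name from dataset columns or called through the OpenAI-compatible embeddings endpoint:

```yaml
embeddings:
  - name: minilm
    from: huggingface:huggingface.co/sentence-transformers/all-MiniLM-L6-v2
```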
Catalog Connectors connect to external catalog providers and make their tables available for federated SQL query in Spice. Configuring accelerations for tables in external catalogs is not supported. The schema hierarchy of the external catalog is preserved in Spice. A configuration sketch follows the table below.
Name | Description | Status | Protocol/Format |
---|---|---|---|
`unity_catalog` | Unity Catalog | Stable | Delta Lake |
`databricks` | Databricks | Beta | Spark Connect, S3/Delta Lake |
`iceberg` | Apache Iceberg | Beta | Parquet |
`spice.ai` | Spice.ai Cloud Platform | Alpha | Arrow Flight |
`glue` | AWS Glue | Coming Soon | JSON, Parquet, Iceberg |
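A hypothetical catalog sketch (the endpoint, catalog name, and auth parameter are assumptions, not defaults):

```yaml
catalogs:
  - name: uc
    from: unity_catalog:https://my-workspace.example.com/api/2.1/unity-catalog/catalogs/my_catalog
    # Auth parameter name is an assumption; consult the connector docs
    params:
      token: ${ secrets:UC_TOKEN }
```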
https://github.com/spiceai/spiceai/assets/88671039/85cf9a69-46e7-412e-8b68-22617dcbd4e0
Step 1. Install the Spice CLI:
On macOS, Linux, and WSL:
```bash
curl https://install.spiceai.org | /bin/bash
```
Or using `brew`:

```bash
brew install spiceai/spiceai/spice
```
On Windows:
```bash
curl -L "https://install.spiceai.org/Install.ps1" -o Install.ps1 && PowerShell -ExecutionPolicy Bypass -File ./Install.ps1
```
Step 2. Initialize a new Spice app with the `spice init` command:

```bash
spice init spice_qs
```
A `spicepod.yaml` file is created in the `spice_qs` directory. Change to that directory:

```bash
cd spice_qs
```
Step 3. Start the Spice runtime:

```bash
spice run
```
Example output:
```
2025/01/20 11:26:10 INFO Spice.ai runtime starting...
2025-01-20T19:26:10.679068Z INFO runtime::init::dataset: No datasets were configured. If this is unexpected, check the Spicepod configuration.
2025-01-20T19:26:10.679716Z INFO runtime::flight: Spice Runtime Flight listening on 127.0.0.1:50051
2025-01-20T19:26:10.679786Z INFO runtime::metrics_server: Spice Runtime Metrics listening on 127.0.0.1:9090
2025-01-20T19:26:10.680140Z INFO runtime::http: Spice Runtime HTTP listening on 127.0.0.1:8090
2025-01-20T19:26:10.682080Z INFO runtime::opentelemetry: Spice Runtime OpenTelemetry listening on 127.0.0.1:50052
2025-01-20T19:26:10.879126Z INFO runtime::init::results_cache: Initialized results cache; max size: 128.00 MiB, item ttl: 1s
```
The runtime is now started and ready for queries.
Step 4. In a new terminal window, add the `spiceai/quickstart` Spicepod. A Spicepod is a package of configuration defining datasets and ML models.

```bash
spice add spiceai/quickstart
```
The `spicepod.yaml` file will be updated with the `spiceai/quickstart` dependency.

```yaml
version: v1
kind: Spicepod
name: spice_qs
dependencies:
  - spiceai/quickstart
```
The `spiceai/quickstart` Spicepod will add a `taxi_trips` data table to the runtime, which is then available to query with SQL.
```
2025-01-20T19:26:30.011633Z INFO runtime::init::dataset: Dataset taxi_trips registered (s3://spiceai-demo-datasets/taxi_trips/2024/), acceleration (arrow), results cache enabled.
2025-01-20T19:26:30.013002Z INFO runtime::accelerated_table::refresh_task: Loading data for dataset taxi_trips
2025-01-20T19:26:40.312839Z INFO runtime::accelerated_table::refresh_task: Loaded 2,964,624 rows (399.41 MiB) for dataset taxi_trips in 10s 299ms
```
Step 5. Start the Spice SQL REPL:

```bash
spice sql
```
The SQL REPL interface will be shown:

```
Welcome to the Spice.ai SQL REPL! Type 'help' for help.

show tables; -- list available tables
sql>
```
Enter `show tables;` to display the available tables for query:
```
sql> show tables;
+---------------+--------------+---------------+------------+
| table_catalog | table_schema | table_name    | table_type |
+---------------+--------------+---------------+------------+
| spice         | public       | taxi_trips    | BASE TABLE |
| spice         | runtime      | query_history | BASE TABLE |
| spice         | runtime      | metrics       | BASE TABLE |
+---------------+--------------+---------------+------------+

Time: 0.022671708 seconds. 3 rows.
```
Enter a query to display the longest taxi trips:
```sql
SELECT trip_distance, total_amount FROM taxi_trips ORDER BY trip_distance DESC LIMIT 10;
```
Output:
```
+---------------+--------------+
| trip_distance | total_amount |
+---------------+--------------+
| 312722.3      | 22.15        |
| 97793.92      | 36.31        |
| 82015.45      | 21.56        |
| 72975.97      | 20.04        |
| 71752.26      | 49.57        |
| 59282.45      | 33.52        |
| 59076.43      | 23.17        |
| 58298.51      | 18.63        |
| 51619.36      | 24.2         |
| 44018.64      | 52.43        |
+---------------+--------------+

Time: 0.045150667 seconds. 10 rows.
```
Using the Docker image locally:

```bash
docker pull spiceai/spiceai
```
In a Dockerfile:

```dockerfile
FROM spiceai/spiceai:latest
```
Using Helm:

```bash
helm repo add spiceai https://helm.spiceai.org
helm install spiceai spiceai/spiceai
```
The Spice.ai Cookbook is a collection of recipes and examples for using Spice. Find it at https://github.com/spiceai/cookbook.
Access ready-to-use Spicepods and datasets hosted on the Spice.ai Cloud Platform using the Spice runtime. A list of public Spicepods is available on Spicerack: https://spicerack.org/.
To use public datasets, create a free account on Spice.ai:
- Visit spice.ai and click Try for Free.
- After creating an account, create an app to generate an API key.
Once set up, you can access ready-to-use Spicepods, including datasets. For this demonstration, use the `taxi_trips` dataset from the Spice.ai Quickstart.
Step 1. Initialize a new project.

```bash
# Initialize a new Spice app
spice init spice_app

# Change to app directory
cd spice_app
```
Step 2. Log in and authenticate from the command line using the `spice login` command. A pop-up browser window will prompt you to authenticate:

```bash
spice login
```
Step 3. Start the runtime:

```bash
# Start the runtime
spice run
```
Step 4. Configure the dataset:

In a new terminal window, configure a new dataset using the `spice dataset configure` command:

```bash
spice dataset configure
```
Enter a dataset name that will be used to reference the dataset in queries. This name does not need to match the name in the dataset source.

```
dataset name: (spice_app) taxi_trips
```
Enter the description of the dataset:

```
description: Taxi trips dataset
```
Enter the location of the dataset:

```
from: spice.ai/spiceai/quickstart/datasets/taxi_trips
```
Select `y` when prompted whether to accelerate the data:

```
Locally accelerate (y/n)? y
```
You should see the following output from your runtime terminal:

```
2024-12-16T05:12:45.803694Z INFO runtime::init::dataset: Dataset taxi_trips registered (spice.ai/spiceai/quickstart/datasets/taxi_trips), acceleration (arrow, 10s refresh), results cache enabled.
2024-12-16T05:12:45.805494Z INFO runtime::accelerated_table::refresh_task: Loading data for dataset taxi_trips
2024-12-16T05:13:24.218345Z INFO runtime::accelerated_table::refresh_task: Loaded 2,964,624 rows (8.41 GiB) for dataset taxi_trips in 38s 412ms.
```
Step 5. In a new terminal window, use the Spice SQL REPL to query the dataset:

```bash
spice sql
```

```sql
SELECT tpep_pickup_datetime, passenger_count, trip_distance FROM taxi_trips LIMIT 10;
```
The output displays the results of the query along with the query execution time:
```
+----------------------+-----------------+---------------+
| tpep_pickup_datetime | passenger_count | trip_distance |
+----------------------+-----------------+---------------+
| 2024-01-11T12:55:12  | 1               | 0.0           |
| 2024-01-11T12:55:12  | 1               | 0.0           |
| 2024-01-11T12:04:56  | 1               | 0.63          |
| 2024-01-11T12:18:31  | 1               | 1.38          |
| 2024-01-11T12:39:26  | 1               | 1.01          |
| 2024-01-11T12:18:58  | 1               | 5.13          |
| 2024-01-11T12:43:13  | 1               | 2.9           |
| 2024-01-11T12:05:41  | 1               | 1.36          |
| 2024-01-11T12:20:41  | 1               | 1.11          |
| 2024-01-11T12:37:25  | 1               | 2.04          |
+----------------------+-----------------+---------------+

Time: 0.00538925 seconds. 10 rows.
```
You can experiment with how long queries take against non-accelerated datasets by changing the acceleration setting from `true` to `false` in the datasets.yaml file.
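A sketch of that toggle in the generated dataset configuration (the exact file layout may differ):

```yaml
datasets:
  - from: spice.ai/spiceai/quickstart/datasets/taxi_trips
    name: taxi_trips
    description: Taxi trips dataset
    acceleration:
      enabled: false # set back to true to re-enable local acceleration
```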
Comprehensive documentation is available at spiceai.org/docs.
Over 45 quickstarts and samples available in the Spice Cookbook.
Spice.ai is designed to be extensible with extension points documented at EXTENSIBILITY.md. Build custom Data Connectors, Data Accelerators, Catalog Connectors, Secret Stores, Models, or Embeddings.
🚀 See the Roadmap to v1.0-stable for upcoming features.
We greatly appreciate and value your support! You can help Spice in a number of ways:
- Build an app with Spice.ai and send us feedback and suggestions at [email protected] or on Discord, X, or LinkedIn.
- File an issue if you see something not quite working correctly.
- Join our team (We're hiring!)
- Contribute code or documentation to the project (see CONTRIBUTING.md).
- Follow our blog at blog.spiceai.org
⭐️ star this repo! Thank you for your support! 🙏
Similar Open Source Tools
PredictorLLM
PredictorLLM is an advanced trading agent framework that utilizes large language models to automate trading in financial markets. It includes a profiling module to establish agent characteristics, a layered memory module for retaining and prioritizing financial data, and a decision-making module to convert insights into trading strategies. The framework mimics professional traders' behavior, surpassing human limitations in data processing and continuously evolving to adapt to market conditions for superior investment outcomes.
EVE
EVE is an official PyTorch implementation of Unveiling Encoder-Free Vision-Language Models. The project aims to explore the removal of vision encoders from Vision-Language Models (VLMs) and transfer LLMs to encoder-free VLMs efficiently. It also focuses on bridging the performance gap between encoder-free and encoder-based VLMs. EVE offers a superior capability with arbitrary image aspect ratio, data efficiency by utilizing publicly available data for pre-training, and training efficiency with a transparent and practical strategy for developing a pure decoder-only architecture across modalities.
flute
FLUTE (Flexible Lookup Table Engine for LUT-quantized LLMs) is a tool designed for uniform quantization and lookup table quantization of weights in lower-precision intervals. It offers flexibility in mapping intervals to arbitrary values through a lookup table. FLUTE supports various quantization formats such as int4, int3, int2, fp4, fp3, fp2, nf4, nf3, nf2, and even custom tables. The tool also introduces new quantization algorithms like Learned Normal Float (NFL) for improved performance and calibration data learning. FLUTE provides benchmarks, model zoo, and integration with frameworks like vLLM and HuggingFace for easy deployment and usage.
EasyEdit
EasyEdit is a Python package for editing Large Language Models (LLMs) such as `GPT-J`, `Llama`, `GPT-NEO`, `GPT2`, and `T5` (supporting models from **1B** to **65B** parameters). The objective is to alter the behavior of LLMs efficiently within a specific domain without negatively impacting performance across other inputs. It is designed to be easy to use and easy to extend.
refact
This repository contains Refact WebUI for fine-tuning and self-hosting of code models, which can be used inside Refact plugins for code completion and chat. Users can fine-tune open-source code models, self-host them, download and upload LoRAs, use models for code completion and chat inside Refact plugins, shard models, host multiple small models on one GPU, and connect GPT models for chat using OpenAI and Anthropic keys. The repository provides a Docker container for running the self-hosted server and supports various models for completion, chat, and fine-tuning. Refact is free for individuals and small teams under the BSD-3-Clause license, with custom installation options available for GPU support. The community and support include contributing guidelines, GitHub issues for bugs, a community forum, Discord for chatting, and Twitter for product news and updates.
llm4regression
This project explores the capability of Large Language Models (LLMs) to perform regression tasks using in-context examples. It compares the performance of LLMs like GPT-4 and Claude 3 Opus with traditional supervised methods such as Linear Regression and Gradient Boosting. The project provides preprints and results demonstrating the strong performance of LLMs in regression tasks. It includes datasets, models used, and experiments on adaptation and contamination. The code and data for the experiments are available for interaction and analysis.
agentic
Agentic is a standard AI functions/tools library optimized for TypeScript and LLM-based apps, compatible with major AI SDKs. It offers a set of thoroughly tested AI functions that can be used with favorite AI SDKs without writing glue code. The library includes various clients for services like Bing web search, calculator, Clearbit data resolution, Dexa podcast questions, and more. It also provides compound tools like SearchAndCrawl and supports multiple AI SDKs such as OpenAI, Vercel AI SDK, LangChain, LlamaIndex, Firebase Genkit, and Dexa Dexter. The goal is to create minimal clients with strongly-typed TypeScript DX, composable AIFunctions via AIFunctionSet, and compatibility with major TS AI SDKs.
portkey-python-sdk
The Portkey Python SDK is a control panel for AI apps that allows seamless integration of Portkey's advanced features with OpenAI methods. It provides features such as AI gateway for unified API signature, interoperability, automated fallbacks & retries, load balancing, semantic caching, virtual keys, request timeouts, observability with logging, requests tracing, custom metadata, feedback collection, and analytics. Users can make requests to OpenAI using Portkey SDK and also use async functionality. The SDK is compatible with OpenAI SDK methods and offers Portkey-specific methods like feedback and prompts. It supports various providers and encourages contributions through Github issues or direct contact via email or Discord.
crabml
Crabml is a llama.cpp compatible AI inference engine written in Rust, designed for efficient inference on various platforms with WebGPU support. It focuses on running inference tasks with SIMD acceleration and minimal memory requirements, supporting multiple models and quantization methods. The project is hackable, embeddable, and aims to provide high-performance AI inference capabilities.
DownEdit
DownEdit is a powerful program that allows you to download videos from various social media platforms such as TikTok, Douyin, Kuaishou, and more. With DownEdit, you can easily download videos from user profiles and edit them in bulk. You have the option to flip the videos horizontally or vertically throughout the entire directory with just a single click. Stay tuned for more exciting features coming soon!
ollama-gui
Ollama GUI is a web interface for ollama.ai, a tool that enables running Large Language Models (LLMs) on your local machine. It provides a user-friendly platform for chatting with LLMs and accessing various models for text generation. Users can easily interact with different models, manage chat history, and explore available models through the web interface. The tool is built with Vue.js, Vite, and Tailwind CSS, offering a modern and responsive design for seamless user experience.
gollama
Gollama is a tool designed for managing Ollama models through a Text User Interface (TUI). Users can list, inspect, delete, copy, and push Ollama models, as well as link them to LM Studio. The application offers interactive model selection, sorting by various criteria, and actions using hotkeys. It provides features like sorting and filtering capabilities, displaying model metadata, model linking, copying, pushing, and more. Gollama aims to be user-friendly and useful for managing models, especially for cleaning up old models.
phoenix
Phoenix is a tool that provides MLOps and LLMOps insights at lightning speed with zero-config observability. It offers a notebook-first experience for monitoring models and LLM Applications by providing LLM Traces, LLM Evals, Embedding Analysis, RAG Analysis, and Structured Data Analysis. Users can trace through the execution of LLM Applications, evaluate generative models, explore embedding point-clouds, visualize generative application's search and retrieval process, and statistically analyze structured data. Phoenix is designed to help users troubleshoot problems related to retrieval, tool execution, relevance, toxicity, drift, and performance degradation.
jailbreak_llms
This is the official repository for the ACM CCS 2024 paper 'Do Anything Now': Characterizing and Evaluating In-The-Wild Jailbreak Prompts on Large Language Models. The project employs a new framework called JailbreakHub to conduct the first measurement study on jailbreak prompts in the wild, collecting 15,140 prompts from December 2022 to December 2023, including 1,405 jailbreak prompts. The dataset serves as the largest collection of in-the-wild jailbreak prompts. The repository contains examples of harmful language and is intended for research purposes only.
OpenAI-CLIP-Feature
This repository provides code for extracting image and text features using OpenAI CLIP models, supporting both global and local grid visual features. It aims to facilitate multi visual-and-language downstream tasks by allowing users to customize input and output grid resolution easily. The extracted features have shown comparable or superior results in image captioning tasks without hyperparameter tuning. The repo supports various CLIP models and provides detailed information on supported settings and results on MSCOCO image captioning. Users can get started by setting up experiments with the extracted features using X-modaler.
For similar tasks
Pathway-AI-Bootcamp
Welcome to the μLearn x Pathway Initiative, an exciting adventure into the world of Artificial Intelligence (AI)! This comprehensive course, developed in collaboration with Pathway, will empower you with the knowledge and skills needed to navigate the fascinating world of AI, with a special focus on Large Language Models (LLMs).
LLM-Agent-Survey
Autonomous agents are designed to achieve specific objectives through self-guided instructions. With the emergence and growth of large language models (LLMs), there is a growing trend in utilizing LLMs as fundamental controllers for these autonomous agents. This repository conducts a comprehensive survey study on the construction, application, and evaluation of LLM-based autonomous agents. It explores essential components of AI agents, application domains in natural sciences, social sciences, and engineering, and evaluation strategies. The survey aims to be a resource for researchers and practitioners in this rapidly evolving field.
genkit
Firebase Genkit (beta) is a framework with powerful tooling to help app developers build, test, deploy, and monitor AI-powered features with confidence. Genkit is cloud optimized and code-centric, integrating with many services that have free tiers to get started. It provides unified API for generation, context-aware AI features, evaluation of AI workflow, extensibility with plugins, easy deployment to Firebase or Google Cloud, observability and monitoring with OpenTelemetry, and a developer UI for prototyping and testing AI features locally. Genkit works seamlessly with Firebase or Google Cloud projects through official plugins and templates.
vector-cookbook
The Vector Cookbook is a collection of recipes and sample application starter kits for building AI applications with LLMs using PostgreSQL and Timescale Vector. Timescale Vector enhances PostgreSQL for AI applications by enabling the storage of vector, relational, and time-series data with faster search, higher recall, and more efficient time-based filtering. The repository includes resources, sample applications like TSV Time Machine, and guides for creating, storing, and querying OpenAI embeddings with PostgreSQL and pgvector. Users can learn about Timescale Vector, explore performance benchmarks, and access Python client libraries and tutorials.
cogai
The W3C Cognitive AI Community Group focuses on advancing Cognitive AI through collaboration on defining use cases, open source implementations, and application areas. The group aims to demonstrate the potential of Cognitive AI in various domains such as customer services, healthcare, cybersecurity, online learning, autonomous vehicles, manufacturing, and web search. They work on formal specifications for chunk data and rules, plausible knowledge notation, and neural networks for human-like AI. The group positions Cognitive AI as a combination of symbolic and statistical approaches inspired by human thought processes. They address research challenges including mimicry, emotional intelligence, natural language processing, and common sense reasoning. The long-term goal is to develop cognitive agents that are knowledgeable, creative, collaborative, empathic, and multilingual, capable of continual learning and self-awareness.
ai-hub
The Enterprise Azure OpenAI Hub is a comprehensive repository designed to guide users through the world of Generative AI on the Azure platform. It offers a structured learning experience to accelerate the transition from concept to production in an Enterprise context. The hub empowers users to explore various use cases with Azure services, ensuring security and compliance. It provides real-world examples and playbooks for practical insights into solving complex problems and developing cutting-edge AI solutions. The repository also serves as a library of proven patterns, aligning with industry standards and promoting best practices for secure and compliant AI development.
earth2studio
Earth2Studio is a Python-based package designed to enable users to quickly get started with AI weather and climate models. It provides access to pre-trained models, diagnostic tools, data sources, IO utilities, perturbation methods, and sample workflows for building custom weather prediction workflows. The package aims to empower users to explore AI-driven meteorology through modular components and seamless integration with other Nvidia packages like Modulus.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.