
graphiti
Build Real-Time Knowledge Graphs for AI Agents
Stars: 17737

Graphiti is a framework for building and querying temporally-aware knowledge graphs, tailored for AI agents in dynamic environments. It continuously integrates user interactions, structured and unstructured data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
README:
⭐ Help us reach more developers and grow the Graphiti community. Star this repo!
[!TIP] Check out the new MCP server for Graphiti! Give Claude, Cursor, and other MCP clients powerful Knowledge Graph-based memory.
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
Use Graphiti to:
- Integrate and maintain dynamic user interactions and business data.
- Facilitate state-based reasoning and task automation for agents.
- Query complex, evolving data with semantic, keyword, and graph-based search methods.
A knowledge graph is a network of interconnected facts, such as "Kendra loves Adidas shoes." Each fact is a "triplet" represented by two entities, or nodes ("Kendra", "Adidas shoes"), and their relationship, or edge ("loves"). Knowledge Graphs have been explored extensively for information retrieval. What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
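For illustration, here is a minimal sketch of that triplet as plain Python data structures; it only shows the node/edge shape and is not Graphiti's internal representation:
from dataclasses import dataclass

# Hypothetical illustration only: the fact "Kendra loves Adidas shoes" as a triplet.
# Graphiti extracts and maintains these nodes and edges automatically.
@dataclass
class Node:
    name: str

@dataclass
class Edge:
    source: Node
    relation: str
    target: Node

fact = Edge(source=Node("Kendra"), relation="LOVES", target=Node("Adidas shoes"))
print(f"{fact.source.name} -[{fact.relation}]-> {fact.target.name}")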
Graphiti powers the core of Zep, a turn-key context engineering platform for AI Agents. Zep offers agent memory, Graph RAG for dynamic data, and context retrieval and assembly.
Using Graphiti, we've demonstrated Zep is the State of the Art in Agent Memory.
Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.
We're excited to open-source Graphiti, believing its potential reaches far beyond AI memory applications.
Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges by providing:
- Real-Time Incremental Updates: Immediate integration of new data episodes without batch recomputation.
- Bi-Temporal Data Model: Explicit tracking of event occurrence and ingestion times, allowing accurate point-in-time queries.
- Efficient Hybrid Retrieval: Combines semantic embeddings, keyword (BM25), and graph traversal to achieve low-latency queries without reliance on LLM summarization.
- Custom Entity Definitions: Flexible ontology creation and support for developer-defined entities through straightforward Pydantic models (a sketch follows the comparison table below).
- Scalability: Efficiently manages large datasets with parallel processing, suitable for enterprise environments.
Aspect | GraphRAG | Graphiti
---|---|---
Primary Use | Static document summarization | Dynamic data management
Data Handling | Batch-oriented processing | Continuous, incremental updates
Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities
Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search
Adaptability | Low | High
Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking
Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation
Query Latency | Seconds to tens of seconds | Typically sub-second latency
Custom Entity Types | No | Yes, customizable
Scalability | Moderate | High, optimized for large datasets
Graphiti is specifically designed to address the challenges of dynamic and frequently updated datasets, making it particularly suitable for applications requiring real-time interaction and precise historical queries.
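As a rough illustration of the custom entity definitions mentioned above, the sketch below declares a hypothetical Pydantic entity type; the class and field names are invented for this example, and the exact way entity types are registered with Graphiti may vary between versions, so check the documentation for the current API:
from pydantic import BaseModel, Field

# Hypothetical developer-defined entity type used to extend the ontology.
class Customer(BaseModel):
    """A person who purchases products."""
    preferred_brand: str | None = Field(None, description="Brand the customer prefers")
    loyalty_tier: str | None = Field(None, description="Loyalty program tier, if any")

# Models like this are passed to episode ingestion so extracted entities
# can be mapped onto your own types.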
Requirements:
- Python 3.10 or higher
- A graph database: Neo4j 5.26, FalkorDB 1.1.2, Kuzu 0.11.2, or Amazon Neptune (a Neptune Database cluster or Neptune Analytics graph) plus an Amazon OpenSearch Serverless collection, which serves as the full-text search backend
- OpenAI API key (Graphiti defaults to OpenAI for LLM inference and embedding)
[!IMPORTANT] Graphiti works best with LLM services that support Structured Output (such as OpenAI and Gemini). Using other services may result in incorrect output schemas and ingestion failures. This is particularly problematic when using smaller models.
Optional:
- Google Gemini, Anthropic, or Groq API key (for alternative LLM providers)
[!TIP] The simplest way to install Neo4j is via Neo4j Desktop. It provides a user-friendly interface to manage Neo4j instances and databases. Alternatively, you can use FalkorDB on-premises via Docker and instantly start with the quickstart example:
docker run -p 6379:6379 -p 3000:3000 -it --rm falkordb/falkordb:latest
pip install graphiti-core
or
uv add graphiti-core
If you plan to use FalkorDB as your graph database backend, install with the FalkorDB extra:
pip install graphiti-core[falkordb]
# or with uv
uv add graphiti-core[falkordb]
If you plan to use Kuzu as your graph database backend, install with the Kuzu extra:
pip install graphiti-core[kuzu]
# or with uv
uv add graphiti-core[kuzu]
If you plan to use Amazon Neptune as your graph database backend, install with the Amazon Neptune extra:
pip install graphiti-core[neptune]
# or with uv
uv add graphiti-core[neptune]
# Install with Anthropic support
pip install graphiti-core[anthropic]
# Install with Groq support
pip install graphiti-core[groq]
# Install with Google Gemini support
pip install graphiti-core[google-genai]
# Install with multiple providers
pip install graphiti-core[anthropic,groq,google-genai]
# Install with FalkorDB and LLM providers
pip install graphiti-core[falkordb,anthropic,google-genai]
Graphiti's ingestion pipelines are designed for high concurrency, which is controlled by the SEMAPHORE_LIMIT environment variable. By default, SEMAPHORE_LIMIT is set to 10 concurrent operations to help prevent 429 rate limit errors from your LLM provider. If you encounter such errors, try lowering this value. If your LLM provider allows higher throughput, you can increase SEMAPHORE_LIMIT to boost episode ingestion performance; if Graphiti feels slow, this is the first setting to tune.
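For example, assuming your provider tolerates more parallel requests, you could raise the limit before initializing Graphiti (the value below is purely illustrative):
import os

# Illustrative value only; pick a limit your LLM provider's rate limits can handle.
# Setting it before importing graphiti_core is the safest order.
os.environ["SEMAPHORE_LIMIT"] = "20"

from graphiti_core import Graphiti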
[!IMPORTANT] Graphiti defaults to using OpenAI for LLM inference and embedding. Ensure that an OPENAI_API_KEY is set in your environment. Support for Anthropic and Groq LLM inference is also available, and other LLM providers may be supported via OpenAI-compatible APIs.
For a complete working example, see the Quickstart Example in the examples directory. The quickstart demonstrates:
- Connecting to a Neo4j, Amazon Neptune, FalkorDB, or Kuzu database
- Initializing Graphiti indices and constraints
- Adding episodes to the graph (both text and structured JSON)
- Searching for relationships (edges) using hybrid search
- Reranking search results using graph distance
- Searching for nodes using predefined search recipes
The example is fully documented with clear explanations of each functionality and includes a comprehensive README with setup instructions and next steps.
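The condensed sketch below shows the general shape of that flow against Neo4j; it is adapted from the quickstart, and exact parameter names may differ slightly between releases:
import asyncio
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

async def main():
    # Connect to Neo4j using the default OpenAI LLM and embedder (OPENAI_API_KEY must be set)
    graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")
    try:
        # One-time setup of indices and constraints
        await graphiti.build_indices_and_constraints()

        # Ingest a text episode; Graphiti extracts entities and relationships from it
        await graphiti.add_episode(
            name="shoe_preferences",
            episode_body="Kendra loves Adidas shoes.",
            source=EpisodeType.text,
            source_description="example conversation",
            reference_time=datetime.now(timezone.utc),
        )

        # Hybrid search over the resulting edges (facts)
        results = await graphiti.search("What shoes does Kendra like?")
        for edge in results:
            print(edge.fact)
    finally:
        await graphiti.close()

asyncio.run(main())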
The mcp_server directory contains a Model Context Protocol (MCP) server implementation for Graphiti. This server allows AI assistants to interact with Graphiti's knowledge graph capabilities through the MCP protocol.
Key features of the MCP server include:
- Episode management (add, retrieve, delete)
- Entity management and relationship handling
- Semantic and hybrid search capabilities
- Group management for organizing related data
- Graph maintenance operations
The MCP server can be deployed using Docker with Neo4j, making it easy to integrate Graphiti into your AI assistant workflows.
For detailed setup instructions and usage examples, see the MCP server README.
The server directory contains an API service for interacting with Graphiti. It is built using FastAPI. Please see the server README for more information.
In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a few optional environment variables. If you are using one of the supported providers, such as Anthropic or Voyage models, the corresponding environment variables must be set.
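For example, if you use Anthropic for inference or Voyage for embeddings, the conventional variable names are ANTHROPIC_API_KEY and VOYAGE_API_KEY; treat this as an illustrative assumption and confirm against the provider client you configure:
import os

# Illustrative only: set the key expected by whichever provider client you configure.
os.environ["ANTHROPIC_API_KEY"] = "<your-anthropic-key>"
os.environ["VOYAGE_API_KEY"] = "<your-voyage-key>"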
Database names are configured directly in the driver constructors:
- Neo4j: Database name defaults to neo4j (hardcoded in Neo4jDriver)
- FalkorDB: Database name defaults to default_db (hardcoded in FalkorDriver)
As of v0.17.0, if you need to customize your database configuration, you can instantiate a database driver and pass it to the Graphiti constructor using the graph_driver parameter.
from graphiti_core import Graphiti
from graphiti_core.driver.neo4j_driver import Neo4jDriver
# Create a Neo4j driver with custom database name
driver = Neo4jDriver(
    uri="bolt://localhost:7687",
    user="neo4j",
    password="password",
    database="my_custom_database"  # Custom database name
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
from graphiti_core import Graphiti
from graphiti_core.driver.falkordb_driver import FalkorDriver
# Create a FalkorDB driver with custom database name
driver = FalkorDriver(
    host="localhost",
    port=6379,
    username="falkor_user",  # Optional
    password="falkor_password",  # Optional
    database="my_custom_graph"  # Custom database name
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
from graphiti_core import Graphiti
from graphiti_core.driver.kuzu_driver import KuzuDriver
# Create a Kuzu driver
driver = KuzuDriver(db="/tmp/graphiti.kuzu")
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
from graphiti_core import Graphiti
from graphiti_core.driver.neptune_driver import NeptuneDriver
# Create an Amazon Neptune driver
driver = NeptuneDriver(
    host="<neptune-endpoint>",
    aoss_host="<amazon-opensearch-serverless-host>",
    port=8182,      # Optional, defaults to 8182
    aoss_port=443,  # Optional, defaults to 443
)
# Pass the driver to Graphiti
graphiti = Graphiti(graph_driver=driver)
Graphiti supports Azure OpenAI for both LLM inference and embeddings. Azure deployments often require different endpoints for LLM and embedding services, and separate deployments for default and small models.
[!IMPORTANT] Azure OpenAI v1 API Opt-in Required for Structured Outputs
Graphiti uses structured outputs via the client.beta.chat.completions.parse() method, which requires Azure OpenAI deployments to opt into the v1 API. Without this opt-in, you'll encounter 404 Resource not found errors during episode ingestion. To enable v1 API support in your Azure OpenAI deployment, follow Microsoft's guide: Azure OpenAI API version lifecycle.
from openai import AsyncAzureOpenAI
from graphiti_core import Graphiti
from graphiti_core.llm_client import LLMConfig, OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
# Azure OpenAI configuration - use separate endpoints for different services
api_key = "<your-api-key>"
api_version = "<your-api-version>"
llm_endpoint = "<your-llm-endpoint>" # e.g., "https://your-llm-resource.openai.azure.com/"
embedding_endpoint = "<your-embedding-endpoint>" # e.g., "https://your-embedding-resource.openai.azure.com/"
# Create separate Azure OpenAI clients for different services
llm_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=llm_endpoint
)
embedding_client_azure = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=embedding_endpoint
)
# Create LLM Config with your Azure deployment names
azure_llm_config = LLMConfig(
    small_model="gpt-4.1-nano",
    model="gpt-4.1-mini",
)
# Initialize Graphiti with Azure OpenAI clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=OpenAIClient(
        config=azure_llm_config,
        client=llm_client_azure
    ),
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            embedding_model="text-embedding-3-small-deployment"  # Your Azure embedding deployment name
        ),
        client=embedding_client_azure
    ),
    cross_encoder=OpenAIRerankerClient(
        config=LLMConfig(
            model=azure_llm_config.small_model  # Use small model for reranking
        ),
        client=llm_client_azure
    )
)
# Now you can use Graphiti with Azure OpenAI
Make sure to replace the placeholder values with your actual Azure OpenAI credentials and deployment names that match your Azure OpenAI service configuration.
Graphiti supports Google's Gemini models for LLM inference, embeddings, and cross-encoding/reranking. To use Gemini, you'll need to configure the LLM client, embedder, and the cross-encoder with your Google API key.
Install Graphiti:
uv add "graphiti-core[google-genai]"
# or
pip install "graphiti-core[google-genai]"
from graphiti_core import Graphiti
from graphiti_core.llm_client.gemini_client import GeminiClient, LLMConfig
from graphiti_core.embedder.gemini import GeminiEmbedder, GeminiEmbedderConfig
from graphiti_core.cross_encoder.gemini_reranker_client import GeminiRerankerClient
# Google API key configuration
api_key = "<your-google-api-key>"
# Initialize Graphiti with Gemini clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=GeminiClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.0-flash"
        )
    ),
    embedder=GeminiEmbedder(
        config=GeminiEmbedderConfig(
            api_key=api_key,
            embedding_model="embedding-001"
        )
    ),
    cross_encoder=GeminiRerankerClient(
        config=LLMConfig(
            api_key=api_key,
            model="gemini-2.5-flash-lite-preview-06-17"
        )
    )
)
# Now you can use Graphiti with Google Gemini for all components
The Gemini reranker uses the gemini-2.5-flash-lite-preview-06-17 model by default, which is optimized for cost-effective, low-latency classification tasks. It uses the same boolean classification approach as the OpenAI reranker, leveraging Gemini's log probabilities feature to rank passage relevance.
Graphiti supports Ollama for running local LLMs and embedding models via Ollama's OpenAI-compatible API. This is ideal for privacy-focused applications or when you want to avoid API costs.
Install the models:
ollama pull deepseek-r1:7b # LLM
ollama pull nomic-embed-text # embeddings
from graphiti_core import Graphiti
from graphiti_core.llm_client.config import LLMConfig
from graphiti_core.llm_client.openai_generic_client import OpenAIGenericClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient
# Configure Ollama LLM client
llm_config = LLMConfig(
    api_key="ollama",  # Ollama doesn't require a real API key, but a placeholder is needed
    model="deepseek-r1:7b",
    small_model="deepseek-r1:7b",
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)
llm_client = OpenAIGenericClient(config=llm_config)
# Initialize Graphiti with Ollama clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=llm_client,
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            api_key="ollama",  # Placeholder API key
            embedding_model="nomic-embed-text",
            embedding_dim=768,
            base_url="http://localhost:11434/v1",
        )
    ),
    cross_encoder=OpenAIRerankerClient(client=llm_client, config=llm_config),
)
# Now you can use Graphiti with local Ollama models
Ensure Ollama is running (ollama serve) and that you have pulled the models you want to use.
Graphiti collects anonymous usage statistics to help us understand how the framework is being used and improve it for everyone. We believe transparency is important, so here's exactly what we collect and why.
When you initialize a Graphiti instance, we collect:
- Anonymous identifier: A randomly generated UUID stored locally in ~/.cache/graphiti/telemetry_anon_id
- System information: Operating system, Python version, and system architecture
- Graphiti version: The version you're using
- Configuration choices:
  - LLM provider type (OpenAI, Azure, Anthropic, etc.)
  - Database backend (Neo4j, FalkorDB, Kuzu, Amazon Neptune Database or Neptune Analytics)
  - Embedder provider (OpenAI, Azure, Voyage, etc.)
We are committed to protecting your privacy. We never collect:
- Personal information or identifiers
- API keys or credentials
- Your actual data, queries, or graph content
- IP addresses or hostnames
- File paths or system-specific information
- Any content from your episodes, nodes, or edges
This information helps us:
- Understand which configurations are most popular to prioritize support and testing
- Identify which LLM and database providers to focus development efforts on
- Track adoption patterns to guide our roadmap
- Ensure compatibility across different Python versions and operating systems
By sharing this anonymous information, you help us make Graphiti better for everyone in the community.
The Telemetry code may be found here.
Telemetry is opt-out and can be disabled at any time. To disable telemetry collection:
Option 1: Environment Variable
export GRAPHITI_TELEMETRY_ENABLED=false
Option 2: Set in your shell profile
# For bash users (~/.bashrc or ~/.bash_profile)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.bashrc
# For zsh users (~/.zshrc)
echo 'export GRAPHITI_TELEMETRY_ENABLED=false' >> ~/.zshrc
Option 3: Set for a specific Python session
import os
os.environ['GRAPHITI_TELEMETRY_ENABLED'] = 'false'
# Then initialize Graphiti as usual
from graphiti_core import Graphiti
graphiti = Graphiti(...)
Telemetry is automatically disabled during test runs (when pytest is detected).
- Telemetry uses PostHog for anonymous analytics collection
- All telemetry operations are designed to fail silently - they will never interrupt your application or affect Graphiti functionality
- The anonymous ID is stored locally and is not tied to any personal information
Graphiti is under active development. We aim to maintain API stability while working on:
- [x] Supporting custom graph schemas:
- Allow developers to provide their own defined node and edge classes when ingesting episodes
- Enable more flexible knowledge representation tailored to specific use cases
- [x] Enhancing retrieval capabilities with more robust and configurable options
- [x] Graphiti MCP Server
- [ ] Expanding test coverage to ensure reliability and catch edge cases
We encourage and appreciate all forms of contributions, whether it's code, documentation, addressing GitHub Issues, or answering questions in the Graphiti Discord channel. For detailed guidelines on code contributions, please refer to CONTRIBUTING.
Join the Zep Discord server and make your way to the #Graphiti channel!
Similar Open Source Tools


Biomni
Biomni is a general-purpose biomedical AI agent designed to autonomously execute a wide range of research tasks across diverse biomedical subfields. By integrating cutting-edge large language model (LLM) reasoning with retrieval-augmented planning and code-based execution, Biomni helps scientists dramatically enhance research productivity and generate testable hypotheses.

exospherehost
Exosphere is an open source infrastructure designed to run AI agents at scale for large data and long running flows. It allows developers to define plug and playable nodes that can be run on a reliable backbone in the form of a workflow, with features like dynamic state creation at runtime, infinite parallel agents, persistent state management, and failure handling. This enables the deployment of production agents that can scale beautifully to build robust autonomous AI workflows.

RainbowGPT
RainbowGPT is a versatile tool that offers a range of functionalities, including Stock Analysis for financial decision-making, MySQL Management for database navigation, and integration of AI technologies like GPT-4 and ChatGlm3. It provides a user-friendly interface suitable for all skill levels, ensuring seamless information flow and continuous expansion of emerging technologies. The tool enhances adaptability, creativity, and insight, making it a valuable asset for various projects and tasks.

mmore
MMORE is an open-source, end-to-end pipeline for ingesting, processing, indexing, and retrieving knowledge from various file types such as PDFs, Office docs, images, audio, video, and web pages. It standardizes content into a unified multimodal format, supports distributed CPU/GPU processing, and offers hybrid dense+sparse retrieval with an integrated RAG service through CLI and APIs.

browser-use
Browser Use is a tool designed to make websites accessible for AI agents. It provides an easy way to connect AI agents with the browser, enabling users to perform tasks such as extracting vision and HTML elements, managing multiple tabs, and executing custom actions. The tool supports various language models and allows users to parallelize multiple agents for efficient processing. With features like self-correction and the ability to register custom actions, Browser Use offers a versatile solution for interacting with web content using AI technology.

sec-parser
The `sec-parser` project simplifies extracting meaningful information from SEC EDGAR HTML documents by organizing them into semantic elements and a tree structure. It helps in parsing SEC filings for financial and regulatory analysis, analytics and data science, AI and machine learning, causal AI, and large language models. The tool is especially beneficial for AI, ML, and LLM applications by streamlining data pre-processing and feature extraction.

ResumeFlow
ResumeFlow is an automated system that leverages Large Language Models (LLMs) to streamline the job application process. By integrating LLM technology, the tool aims to automate various stages of job hunting, making it easier for users to apply for jobs. Users can access ResumeFlow as a web tool, install it as a Python package, or download the source code from GitHub. The tool requires Python 3.11.6 or above and an LLM API key from OpenAI or Gemini Pro for usage. ResumeFlow offers functionalities such as generating curated resumes and cover letters based on job URLs and user's master resume data.

MetaGPT
MetaGPT is a multi-agent framework that enables GPT to work in a software company, collaborating to tackle more complex tasks. It assigns different roles to GPTs to form a collaborative entity for complex tasks. MetaGPT takes a one-line requirement as input and outputs user stories, competitive analysis, requirements, data structures, APIs, documents, etc. Internally, MetaGPT includes product managers, architects, project managers, and engineers. It provides the entire process of a software company along with carefully orchestrated SOPs. MetaGPT's core philosophy is "Code = SOP(Team)", materializing SOP and applying it to teams composed of LLMs.

koog
Koog is a Kotlin-based framework for building and running AI agents entirely in idiomatic Kotlin. It allows users to create agents that interact with tools, handle complex workflows, and communicate with users. Key features include pure Kotlin implementation, MCP integration, embedding capabilities, custom tool creation, ready-to-use components, intelligent history compression, powerful streaming API, persistent agent memory, comprehensive tracing, flexible graph workflows, modular feature system, scalable architecture, and multiplatform support.

evalverse
Evalverse is an open-source project designed to support Large Language Model (LLM) evaluation needs. It provides a standardized and user-friendly solution for processing and managing LLM evaluations, catering to AI research engineers and scientists. Evalverse supports various evaluation methods, insightful reports, and no-code evaluation processes. Users can access unified evaluation with submodules, request evaluations without code via Slack bot, and obtain comprehensive reports with scores, rankings, and visuals. The tool allows for easy comparison of scores across different models and swift addition of new evaluation tools.

weave
Weave is a toolkit for developing Generative AI applications, built by Weights & Biases. With Weave, you can log and debug language model inputs, outputs, and traces; build rigorous, apples-to-apples evaluations for language model use cases; and organize all the information generated across the LLM workflow, from experimentation to evaluations to production. Weave aims to bring rigor, best-practices, and composability to the inherently experimental process of developing Generative AI software, without introducing cognitive overhead.

lihil
Lihil is a performant, productive, and professional web framework designed to make Python the mainstream programming language for web development. It is 100% test covered and strictly typed, offering fast performance, ergonomic API, and built-in solutions for common problems. Lihil is suitable for enterprise web development, delivering robust and scalable solutions with best practices in microservice architecture and related patterns. It features dependency injection, OpenAPI docs generation, error response generation, data validation, message system, testability, and strong support for AI features. Lihil is ASGI compatible and uses starlette as its ASGI toolkit, ensuring compatibility with starlette classes and middlewares. The framework follows semantic versioning and has a roadmap for future enhancements and features.

midscene
Midscene.js is an AI-powered automation SDK that allows users to control web pages, perform assertions, and extract data in JSON format using natural language. It offers features such as natural language interaction, understanding UI and providing responses in JSON, intuitive assertion based on AI understanding, compatibility with public multimodal LLMs like GPT-4o, visualization tool for easy debugging, and a brand new experience in automation development.

gpt-translate
Markdown Translation BOT is a GitHub action that translates markdown files into multiple languages using various AI models. It supports markdown, markdown-jsx, and json files only. The action can be executed by individuals with write permissions to the repository, preventing API abuse by non-trusted parties. Users can set up the action by providing their API key and configuring the workflow settings. The tool allows users to create comments with specific commands to trigger translations and automatically generate pull requests or add translated files to existing pull requests. It supports multiple file translations and can interpret any language supported by GPT-4 or GPT-3.5.

any-llm
The `any-llm` repository provides a unified API to access different LLM (Large Language Model) providers. It offers a simple and developer-friendly interface, leveraging official provider SDKs for compatibility and maintenance. The tool is framework-agnostic, actively maintained, and does not require a proxy or gateway server. It addresses challenges in API standardization and aims to provide a consistent interface for various LLM providers, overcoming limitations of existing solutions like LiteLLM, AISuite, and framework-specific integrations.
For similar tasks

Magick
Magick is a groundbreaking visual AIDE (Artificial Intelligence Development Environment) for no-code data pipelines and multimodal agents. Magick can connect to other services and comes with nodes and templates well-suited for intelligent agents, chatbots, complex reasoning systems and realistic characters.

danswer
Danswer is an open-source Gen-AI Chat and Unified Search tool that connects to your company's docs, apps, and people. It provides a Chat interface and plugs into any LLM of your choice. Danswer can be deployed anywhere and for any scale - on a laptop, on-premise, or to cloud. Since you own the deployment, your user data and chats are fully in your own control. Danswer is MIT licensed and designed to be modular and easily extensible. The system also comes fully ready for production usage with user authentication, role management (admin/basic users), chat persistence, and a UI for configuring Personas (AI Assistants) and their Prompts. Danswer also serves as a Unified Search across all common workplace tools such as Slack, Google Drive, Confluence, etc. By combining LLMs and team specific knowledge, Danswer becomes a subject matter expert for the team. Imagine ChatGPT if it had access to your team's unique knowledge! It enables questions such as "A customer wants feature X, is this already supported?" or "Where's the pull request for feature Y?"

semantic-kernel
Semantic Kernel is an SDK that integrates Large Language Models (LLMs) like OpenAI, Azure OpenAI, and Hugging Face with conventional programming languages like C#, Python, and Java. Semantic Kernel achieves this by allowing you to define plugins that can be chained together in just a few lines of code. What makes Semantic Kernel _special_ , however, is its ability to _automatically_ orchestrate plugins with AI. With Semantic Kernel planners, you can ask an LLM to generate a plan that achieves a user's unique goal. Afterwards, Semantic Kernel will execute the plan for the user.

floneum
Floneum is a graph editor that makes it easy to develop your own AI workflows. It uses large language models (LLMs) to run AI models locally, without any external dependencies or even a GPU. This makes it easy to use LLMs with your own data, without worrying about privacy. Floneum also has a plugin system that allows you to improve the performance of LLMs and make them work better for your specific use case. Plugins can be used in any language that supports web assembly, and they can control the output of LLMs with a process similar to JSONformer or guidance.

mindsdb
MindsDB is a platform for customizing AI from enterprise data. You can create, serve, and fine-tune models in real-time from your database, vector store, and application data. MindsDB "enhances" SQL syntax with AI capabilities to make it accessible for developers worldwide. With MindsDB’s nearly 200 integrations, any developer can create AI customized for their purpose, faster and more securely. Their AI systems will constantly improve themselves — using companies’ own data, in real-time.

aiscript
AiScript is a lightweight scripting language that runs on JavaScript. It supports arrays, objects, and functions as first-class citizens, and is easy to write without the need for semicolons or commas. AiScript runs in a secure sandbox environment, preventing infinite loops from freezing the host. It also allows for easy provision of variables and functions from the host.

activepieces
Activepieces is an open source replacement for Zapier, designed to be extensible through a type-safe pieces framework written in Typescript. It features a user-friendly Workflow Builder with support for Branches, Loops, and Drag and Drop. Activepieces integrates with Google Sheets, OpenAI, Discord, and RSS, along with 80+ other integrations. The list of supported integrations continues to grow rapidly, thanks to valuable contributions from the community. Activepieces is an open ecosystem; all piece source code is available in the repository, and they are versioned and published directly to npmjs.com upon contributions. If you cannot find a specific piece on the pieces roadmap, please submit a request by visiting the following link: Request Piece Alternatively, if you are a developer, you can quickly build your own piece using our TypeScript framework. For guidance, please refer to the following guide: Contributor's Guide

superagent-js
Superagent is an open source framework that enables any developer to integrate production ready AI Assistants into any application in a matter of minutes.
For similar jobs

Awesome-LLM-RAG-Application
Awesome-LLM-RAG-Application is a repository that provides resources and information about applications based on Large Language Models (LLM) with Retrieval-Augmented Generation (RAG) pattern. It includes a survey paper, GitHub repo, and guides on advanced RAG techniques. The repository covers various aspects of RAG, including academic papers, evaluation benchmarks, downstream tasks, tools, and technologies. It also explores different frameworks, preprocessing tools, routing mechanisms, evaluation frameworks, embeddings, security guardrails, prompting tools, SQL enhancements, LLM deployment, observability tools, and more. The repository aims to offer comprehensive knowledge on RAG for readers interested in exploring and implementing LLM-based systems and products.

ChatGPT-On-CS
ChatGPT-On-CS is an intelligent chatbot tool based on large models, supporting various platforms like WeChat, Taobao, Bilibili, Douyin, Weibo, and more. It can handle text, voice, and image inputs, access external resources through plugins, and customize enterprise AI applications based on proprietary knowledge bases. Users can set custom replies, utilize ChatGPT interface for intelligent responses, send images and binary files, and create personalized chatbots using knowledge base files. The tool also features platform-specific plugin systems for accessing external resources and supports enterprise AI applications customization.

call-gpt
Call GPT is a voice application that utilizes Deepgram for Speech to Text, elevenlabs for Text to Speech, and OpenAI for GPT prompt completion. It allows users to chat with ChatGPT on the phone, providing better transcription, understanding, and speaking capabilities than traditional IVR systems. The app returns responses with low latency, allows user interruptions, maintains chat history, and enables GPT to call external tools. It coordinates data flow between Deepgram, OpenAI, ElevenLabs, and Twilio Media Streams, enhancing voice interactions.

awesome-LLM-resourses
A comprehensive repository of resources for Chinese large language models (LLMs), including data processing tools, fine-tuning frameworks, inference libraries, evaluation platforms, RAG engines, agent frameworks, books, courses, tutorials, and tips. The repository covers a wide range of tools and resources for working with LLMs, from data labeling and processing to model fine-tuning, inference, evaluation, and application development. It also includes resources for learning about LLMs through books, courses, and tutorials, as well as insights and strategies from building with LLMs.

tappas
Hailo TAPPAS is a set of full application examples that implement pipeline elements and pre-trained AI tasks. It demonstrates Hailo's system integration scenarios on predefined systems, aiming to accelerate time to market, simplify integration with Hailo's runtime SW stack, and provide a starting point for customers to fine-tune their applications. The tool supports both Hailo-15 and Hailo-8, offering various example applications optimized for different common hosts. TAPPAS includes pipelines for single network, two network, and multi-stream processing, as well as high-resolution processing via tiling. It also provides example use case pipelines like License Plate Recognition and Multi-Person Multi-Camera Tracking. The tool is regularly updated with new features, bug fixes, and platform support.

cloudflare-rag
This repository provides a fullstack example of building a Retrieval Augmented Generation (RAG) app with Cloudflare. It utilizes Cloudflare Workers, Pages, D1, KV, R2, AI Gateway, and Workers AI. The app features streaming interactions to the UI, hybrid RAG with Full-Text Search and Vector Search, switchable providers using AI Gateway, per-IP rate limiting with Cloudflare's KV, OCR within Cloudflare Worker, and Smart Placement for workload optimization. The development setup requires Node, pnpm, and wrangler CLI, along with setting up necessary primitives and API keys. Deployment involves setting up secrets and deploying the app to Cloudflare Pages. The project implements a Hybrid Search RAG approach combining Full Text Search against D1 and Hybrid Search with embeddings against Vectorize to enhance context for the LLM.

pixeltable
Pixeltable is a Python library designed for ML Engineers and Data Scientists to focus on exploration, modeling, and app development without the need to handle data plumbing. It provides a declarative interface for working with text, images, embeddings, and video, enabling users to store, transform, index, and iterate on data within a single table interface. Pixeltable is persistent, acting as a database unlike in-memory Python libraries such as Pandas. It offers features like data storage and versioning, combined data and model lineage, indexing, orchestration of multimodal workloads, incremental updates, and automatic production-ready code generation. The tool emphasizes transparency, reproducibility, cost-saving through incremental data changes, and seamless integration with existing Python code and libraries.

wave-apps
Wave Apps is a directory of sample applications built on H2O Wave, allowing users to build AI apps faster. The apps cover various use cases such as explainable hotel ratings, human-in-the-loop credit risk assessment, mitigating churn risk, online shopping recommendations, and sales forecasting EDA. Users can download, modify, and integrate these sample apps into their own projects to learn about app development and AI model deployment.