
graphiti
Build Real-Time Knowledge Graphs for AI Agents
Stars: 2774

README:
⭐ Help us reach more developers and grow the Graphiti community. Star this repo!
Graphiti is a framework for building and querying temporally-aware knowledge graphs, specifically tailored for AI agents operating in dynamic environments. Unlike traditional retrieval-augmented generation (RAG) methods, Graphiti continuously integrates user interactions, structured and unstructured enterprise data, and external information into a coherent, queryable graph. The framework supports incremental data updates, efficient retrieval, and precise historical queries without requiring complete graph recomputation, making it suitable for developing interactive, context-aware AI applications.
Use Graphiti to:
- Integrate and maintain dynamic user interactions and business data.
- Facilitate state-based reasoning and task automation for agents.
- Query complex, evolving data with semantic, keyword, and graph-based search methods.
A knowledge graph is a network of interconnected facts, such as "Kendra loves Adidas shoes." Each fact is a "triplet" represented by two entities, or nodes ("Kendra", "Adidas shoes"), and their relationship, or edge ("loves"). Knowledge graphs have been explored extensively for information retrieval. What makes Graphiti unique is its ability to autonomously build a knowledge graph while handling changing relationships and maintaining historical context.
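To make the triplet idea concrete, here is a minimal, library-agnostic sketch in plain Python (illustrative only; this is not Graphiti's internal data model):

```python
from dataclasses import dataclass

@dataclass
class Triplet:
    source: str        # entity node, e.g. "Kendra"
    relationship: str  # edge, e.g. "loves"
    target: str        # entity node, e.g. "Adidas shoes"

fact = Triplet("Kendra", "loves", "Adidas shoes")
```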
Graphiti powers the core of Zep's memory layer for AI Agents.
Using Graphiti, we've demonstrated Zep is the State of the Art in Agent Memory.
Read our paper: Zep: A Temporal Knowledge Graph Architecture for Agent Memory.
We're excited to open-source Graphiti, believing its potential reaches far beyond AI memory applications.
Traditional RAG approaches often rely on batch processing and static data summarization, making them inefficient for frequently changing data. Graphiti addresses these challenges by providing:
- Real-Time Incremental Updates: Immediate integration of new data episodes without batch recomputation.
- Bi-Temporal Data Model: Explicit tracking of event occurrence and ingestion times, allowing accurate point-in-time queries.
- Efficient Hybrid Retrieval: Combines semantic embeddings, keyword (BM25), and graph traversal to achieve low-latency queries without reliance on LLM summarization.
- Custom Entity Definitions: Flexible ontology creation and support for developer-defined entities through straightforward Pydantic models (see the sketch after this list).
- Scalability: Efficiently manages large datasets with parallel processing, suitable for enterprise environments.
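As a hedged sketch of what a developer-defined entity might look like — this assumes the `entity_types` parameter accepted by `add_episode` in recent graphiti-core releases; check the current docs for the exact signature:

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field

class Politician(BaseModel):
    """A person who holds or has held public office."""
    office: str | None = Field(None, description="Office held, e.g. 'Attorney General'")
    jurisdiction: str | None = Field(None, description="Where the office applies, e.g. 'California'")

# Assumes an initialized Graphiti instance named `graphiti` (see the Quick Start below).
await graphiti.add_episode(
    name="news-1",
    episode_body="Kamala Harris is the Attorney General of California.",
    source_description="news",
    reference_time=datetime.now(timezone.utc),
    entity_types={"Politician": Politician},  # assumption: keyword name per current docs
)
```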
| Aspect | GraphRAG | Graphiti |
|---|---|---|
| Primary Use | Static document summarization | Dynamic data management |
| Data Handling | Batch-oriented processing | Continuous, incremental updates |
| Knowledge Structure | Entity clusters & community summaries | Episodic data, semantic entities, communities |
| Retrieval Method | Sequential LLM summarization | Hybrid semantic, keyword, and graph-based search |
| Adaptability | Low | High |
| Temporal Handling | Basic timestamp tracking | Explicit bi-temporal tracking |
| Contradiction Handling | LLM-driven summarization judgments | Temporal edge invalidation |
| Query Latency | Seconds to tens of seconds | Typically sub-second latency |
| Custom Entity Types | No | Yes, customizable |
| Scalability | Moderate | High, optimized for large datasets |
Graphiti is specifically designed to address the challenges of dynamic and frequently updated datasets, making it particularly suitable for applications requiring real-time interaction and precise historical queries.
Requirements:
- Python 3.10 or higher
- Neo4j 5.26 or higher (serves as the embeddings storage backend)
- OpenAI API key (for LLM inference and embedding)
Optional:
- Anthropic or Groq API key (for alternative LLM providers)
> [!TIP]
> The simplest way to install Neo4j is via Neo4j Desktop. It provides a user-friendly interface to manage Neo4j instances and databases.
```bash
pip install graphiti-core
```

or

```bash
poetry add graphiti-core
```
> [!IMPORTANT]
> Graphiti uses OpenAI for LLM inference and embedding. Ensure that an `OPENAI_API_KEY` is set in your environment. Support for Anthropic and Groq LLM inference is available, too. Other LLM providers may be supported via OpenAI-compatible APIs.
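For example, in a POSIX shell (the key shown is a placeholder):

```bash
export OPENAI_API_KEY="sk-..."  # placeholder: substitute your actual key
```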
```python
from datetime import datetime, timezone

from graphiti_core import Graphiti
from graphiti_core.nodes import EpisodeType

# Initialize Graphiti as your memory layer
graphiti = Graphiti("bolt://localhost:7687", "neo4j", "password")

# Initialize the graph database with Graphiti's indices. This only needs to be done once.
await graphiti.build_indices_and_constraints()

# Add episodes
episodes = [
    "Kamala Harris is the Attorney General of California. She was previously "
    "the district attorney for San Francisco.",
    "As AG, Harris was in office from January 3, 2011 – January 3, 2017",
]
for i, episode in enumerate(episodes):
    await graphiti.add_episode(
        name=f"Freakonomics Radio {i}",
        episode_body=episode,
        source=EpisodeType.text,
        source_description="podcast",
        reference_time=datetime.now(timezone.utc),
    )

# Search the graph for semantic memory retrieval.
# Executes a hybrid search combining semantic similarity and BM25 retrieval;
# results are combined and reranked using Reciprocal Rank Fusion.
results = await graphiti.search('Who was the California Attorney General?')
```
```
[
    EntityEdge(
│   uuid = '3133258f738e487383f07b04e15d4ac0',
│   source_node_uuid = '2a85789b318d4e418050506879906e62',
│   target_node_uuid = 'baf7781f445945989d6e4f927f881556',
│   created_at = datetime.datetime(2024, 8, 26, 13, 13, 24, 861097),
│   name = 'HELD_POSITION',
    # The fact reflects the updated state that Harris is
    # no longer the AG of California
│   fact = 'Kamala Harris was the Attorney General of California',
│   fact_embedding = [
│   │   -0.009955154731869698,
│       ...
│   │   0.00784289836883545
│   ],
│   episodes = ['b43e98ad0a904088a76c67985caecc22'],
│   expired_at = datetime.datetime(2024, 8, 26, 20, 18, 1, 53812),
    # These dates represent the period during which this edge was true.
│   valid_at = datetime.datetime(2011, 1, 3, 0, 0, tzinfo=<UTC>),
│   invalid_at = datetime.datetime(2017, 1, 3, 0, 0, tzinfo=<UTC>)
    )
]
```
```python
# Rerank search results based on graph distance.
# Provide a node UUID to prioritize results closer to that node in the graph.
# Results are weighted by their proximity, with distant edges receiving lower scores.
# center_node_uuid is the UUID of an existing node, e.g. results[0].source_node_uuid.
await graphiti.search('Who was the California Attorney General?', center_node_uuid)

# Close the connection when chat state management is complete
await graphiti.close()
```
The `server` directory contains a REST API service for interacting with Graphiti. It is built using FastAPI.
Please see the server README for more information.
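Purely as a hypothetical illustration of calling such a service (the route, port, and payload here are placeholders; consult the server README for the real endpoints):

```bash
# Hypothetical request; the actual route, port, and schema may differ.
curl -X POST http://localhost:8000/search \
  -H "Content-Type: application/json" \
  -d '{"query": "Who was the California Attorney General?"}'
```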
The `mcp_server` directory contains a Model Context Protocol (MCP) server implementation for Graphiti. This server allows AI assistants to interact with Graphiti's knowledge graph capabilities through MCP.
Key features of the MCP server include:
- Episode management (add, retrieve, delete)
- Entity management and relationship handling
- Semantic and hybrid search capabilities
- Group management for organizing related data
- Graph maintenance operations
The MCP server can be deployed using Docker with Neo4j, making it easy to integrate Graphiti into your AI assistant workflows.
For detailed setup instructions and usage examples, see the MCP server README.
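As a hypothetical starting point — this assumes the `mcp_server` directory ships a Docker Compose file that bundles Neo4j; its README is authoritative:

```bash
# Hypothetical invocation; exact file names and variables may differ.
cd mcp_server
export OPENAI_API_KEY="sk-..."  # placeholder key for the LLM calls
docker compose up -d
```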
In addition to the Neo4j and OpenAI-compatible credentials, Graphiti also has a few optional environment variables. If you are using one of our supported models, such as Anthropic or Voyage models, the necessary environment variables must be set.

`USE_PARALLEL_RUNTIME` is an optional boolean variable that can be set to `true` if you wish to enable Neo4j's parallel runtime feature for several of Graphiti's search queries. Note that this feature is not supported for Neo4j Community Edition or for smaller AuraDB instances, so it is off by default.
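For example:

```bash
# Enable Neo4j's parallel runtime for supported editions (off by default)
export USE_PARALLEL_RUNTIME=true
```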
Graphiti supports Azure OpenAI for both LLM inference and embeddings. To use Azure OpenAI, you'll need to configure both the LLM client and embedder with your Azure OpenAI credentials.
```python
from openai import AsyncAzureOpenAI

from graphiti_core import Graphiti
from graphiti_core.llm_client import OpenAIClient
from graphiti_core.embedder.openai import OpenAIEmbedder, OpenAIEmbedderConfig
from graphiti_core.cross_encoder.openai_reranker_client import OpenAIRerankerClient

# Azure OpenAI configuration
api_key = "<your-api-key>"
api_version = "<your-api-version>"
azure_endpoint = "<your-azure-endpoint>"

# Create Azure OpenAI client for LLM
azure_openai_client = AsyncAzureOpenAI(
    api_key=api_key,
    api_version=api_version,
    azure_endpoint=azure_endpoint
)

# Initialize Graphiti with Azure OpenAI clients
graphiti = Graphiti(
    "bolt://localhost:7687",
    "neo4j",
    "password",
    llm_client=OpenAIClient(
        client=azure_openai_client
    ),
    embedder=OpenAIEmbedder(
        config=OpenAIEmbedderConfig(
            embedding_model="text-embedding-3-small"  # Use your Azure deployed embedding model name
        ),
        client=azure_openai_client
    ),
    # Optional: Configure the OpenAI cross encoder with Azure OpenAI
    cross_encoder=OpenAIRerankerClient(
        client=azure_openai_client
    )
)

# Now you can use Graphiti with Azure OpenAI
```
Make sure to replace the placeholder values with your actual Azure OpenAI credentials and specify the correct embedding model name that's deployed in your Azure OpenAI service.
Graphiti is under active development. We aim to maintain API stability while working on:
- [x] Supporting custom graph schemas:
- Allow developers to provide their own defined node and edge classes when ingesting episodes
- Enable more flexible knowledge representation tailored to specific use cases
- [x] Enhancing retrieval capabilities with more robust and configurable options
- [x] Graphiti MCP Server
- [ ] Expanding test coverage to ensure reliability and catch edge cases
We encourage and appreciate all forms of contributions, whether it's code, documentation, addressing GitHub Issues, or answering questions in the Graphiti Discord channel. For detailed guidelines on code contributions, please refer to CONTRIBUTING.
Join the Zep Discord server and make your way to the #Graphiti channel!
Similar Open Source Tools

nous
Nous is an open-source TypeScript platform for autonomous AI agents and LLM based workflows. It aims to automate processes, support requests, review code, assist with refactorings, and more. The platform supports various integrations, multiple LLMs/services, CLI and web interface, human-in-the-loop interactions, flexible deployment options, observability with OpenTelemetry tracing, and specific agents for code editing, software engineering, and code review. It offers advanced features like reasoning/planning, memory and function call history, hierarchical task decomposition, and control-loop function calling options. Nous is designed to be a flexible platform for the TypeScript community to expand and support different use cases and integrations.

Upsonic
Upsonic offers a cutting-edge enterprise-ready framework for orchestrating LLM calls, agents, and computer use to complete tasks cost-effectively. It provides reliable systems, scalability, and a task-oriented structure for real-world cases. Key features include production-ready scalability, task-centric design, MCP server support, tool-calling server, computer use integration, and easy addition of custom tools. The framework supports client-server architecture and allows seamless deployment on AWS, GCP, or locally using Docker.

beeai-framework
BeeAI Framework is a versatile tool for building production-ready multi-agent systems. It offers flexibility in orchestrating agents, seamless integration with various models and tools, and production-grade controls for scaling. The framework supports Python and TypeScript libraries, enabling users to implement simple to complex multi-agent patterns, connect with AI services, and optimize token usage and resource management.

Reflection_Tuning
Reflection-Tuning is a project focused on improving the quality of instruction-tuning data through a reflection-based method. It introduces Selective Reflection-Tuning, where the student model can decide whether to accept the improvements made by the teacher model. The project aims to generate high-quality instruction-response pairs by defining specific criteria for the oracle model to follow and respond to. It also evaluates the efficacy and relevance of instruction-response pairs using the r-IFD metric. The project provides code for reflection and selection processes, along with data and model weights for both V1 and V2 methods.

bee-agent-framework
The Bee Agent Framework is an open-source tool for building, deploying, and serving powerful agentic workflows at scale. It provides AI agents, tools for creating workflows in Javascript/Python, a code interpreter, memory optimization strategies, serialization for pausing/resuming workflows, traceability features, production-level control, and upcoming features like model-agnostic support and a chat UI. The framework offers various modules for agents, llms, memory, tools, caching, errors, adapters, logging, serialization, and more, with a roadmap including MLFlow integration, JSON support, structured outputs, chat client, base agent improvements, guardrails, and evaluation.

superlinked
Superlinked is a compute framework for information retrieval and feature engineering systems, focusing on converting complex data into vector embeddings for RAG, Search, RecSys, and Analytics stack integration. It enables custom model performance in machine learning with pre-trained model convenience. The tool allows users to build multimodal vectors, define weights at query time, and avoid postprocessing & rerank requirements. Users can explore the computational model through simple scripts and python notebooks, with a future release planned for production usage with built-in data infra and vector database integrations.

pgai
pgai simplifies the process of building search and Retrieval Augmented Generation (RAG) AI applications with PostgreSQL. It brings embedding and generation AI models closer to the database, allowing users to create embeddings, retrieve LLM chat completions, reason over data for classification, summarization, and data enrichment directly from within PostgreSQL in a SQL query. The tool requires an OpenAI API key and a PostgreSQL client to enable AI functionality in the database. Users can install pgai from source, run it in a pre-built Docker container, or enable it in a Timescale Cloud service. The tool provides functions to handle API keys using psql or Python, and offers various AI functionalities like tokenizing, detokenizing, embedding, chat completion, and content moderation.

humanlayer
HumanLayer is a Python toolkit designed to enable AI agents to interact with humans in tool-based and asynchronous workflows. By incorporating humans-in-the-loop, agentic tools can access more powerful and meaningful tasks. The toolkit provides features like requiring human approval for function calls, human as a tool for contacting humans, omni-channel contact capabilities, granular routing, and support for various LLMs and orchestration frameworks. HumanLayer aims to ensure human oversight of high-stakes function calls, making AI agents more reliable and safe in executing impactful tasks.

raga-llm-hub
Raga LLM Hub is a comprehensive evaluation toolkit for Language and Learning Models (LLMs) with over 100 meticulously designed metrics. It allows developers and organizations to evaluate and compare LLMs effectively, establishing guardrails for LLMs and Retrieval Augmented Generation (RAG) applications. The platform assesses aspects like Relevance & Understanding, Content Quality, Hallucination, Safety & Bias, Context Relevance, Guardrails, and Vulnerability scanning, along with Metric-Based Tests for quantitative analysis. It helps teams identify and fix issues throughout the LLM lifecycle, revolutionizing reliability and trustworthiness.

lance
Lance is a modern columnar data format optimized for ML workflows and datasets. It offers high-performance random access, vector search, zero-copy automatic versioning, and ecosystem integrations with Apache Arrow, Pandas, Polars, and DuckDB. Lance is designed to address the challenges of the ML development cycle, providing a unified data format for collection, exploration, analytics, feature engineering, training, evaluation, deployment, and monitoring. It aims to reduce data silos and streamline the ML development process.

AIL-framework
AIL framework is a modular framework to analyze potential information leaks from unstructured data sources like pastes from Pastebin or similar services or unstructured data streams. AIL framework is flexible and can be extended to support other functionalities to mine or process sensitive information (e.g. data leak prevention).

arbigent
Arbigent (Arbiter-Agent) is an AI agent testing framework designed to make AI agent testing practical for modern applications. It addresses challenges faced by traditional UI testing frameworks and AI agents by breaking down complex tasks into smaller, dependent scenarios. The framework is customizable for various AI providers, operating systems, and form factors, empowering users with extensive customization capabilities. Arbigent offers an intuitive UI for scenario creation and a powerful code interface for seamless test execution. It supports multiple form factors, optimizes UI for AI interaction, and is cost-effective by utilizing models like GPT-4o mini. With a flexible code interface and open-source nature, Arbigent aims to revolutionize AI agent testing in modern applications.

vertex-ai-mlops
Vertex AI is a platform for end-to-end model development. It consists of core components that enable MLOps processes across design patterns of all types.

repromodel
ReproModel is an open-source toolbox designed to boost AI research efficiency by enabling researchers to reproduce, compare, train, and test AI models faster. It provides standardized models, dataloaders, and processing procedures, allowing researchers to focus on new datasets and model development. With a no-code solution, users can access benchmark and SOTA models and datasets, utilize training visualizations, extract code for publication, and leverage an LLM-powered automated methodology description writer. The toolbox helps researchers modularize development, compare pipeline performance reproducibly, and reduce time for model development, computation, and writing. Future versions aim to facilitate building upon state-of-the-art research by loading previously published study IDs with verified code, experiments, and results stored in the system.