core

Your unified, shareable memory layer for AI apps. Compatible with Cursor, Claude Desktop, Claude Code, Gemini CLI, Windsurf, AWS's Kiro, VS Code, and Cline.

Stars: 608


CORE is an open-source unified, persistent memory layer for all AI tools, allowing developers to maintain context across different tools like Cursor, ChatGPT, and Claude. It aims to solve the issue of context switching and information loss between sessions by creating a knowledge graph that remembers conversations, decisions, and insights. With features like unified memory, temporal knowledge graph, browser extension, chat with memory, auto-sync from apps, and MCP integration hub, CORE provides a seamless experience for managing and recalling context. The tool's ingestion pipeline captures evolving context through normalization, extraction, resolution, and graph integration, resulting in a dynamic memory that grows and changes with the user. When recalling from memory, CORE utilizes search, re-ranking, filtering, and output to provide relevant and contextual answers. Security measures include data encryption, authentication, access control, and vulnerability reporting.

README:


CORE: Unified Memory Layer for Claude, Cursor, ChatGPT & All AI Tools


Documentation · Discord

🔥 Research Highlights

CORE memory achieves 88.24% average accuracy on the LoCoMo benchmark across all reasoning tasks, significantly outperforming other memory providers. Check out this blog for more info.

[Figure: LoCoMo benchmark results across question types]

The benchmark covers four question types:

  1. Single-hop questions require answers based on a single session.
  2. Multi-hop questions require synthesizing information from multiple different sessions.
  3. Open-domain knowledge questions can be answered by integrating a speaker's provided information with external knowledge such as commonsense or world facts.
  4. Temporal reasoning questions can be answered through temporal reasoning and capturing time-related data cues within the conversation.

Overview

Problem

Developers waste time re-explaining context to AI tools. Hit token limits in Claude? Start fresh and lose everything. Switch from ChatGPT/Claude to Cursor? Explain your context again. Your conversations, decisions, and insights vanish between sessions. With every new AI tool, the cost of context switching grows.

Solution - CORE (Contextual Observation & Recall Engine)

CORE is an open-source unified, persistent memory layer for all your AI tools. Your context follows you from Cursor to Claude to ChatGPT to Claude Code. One knowledge graph remembers who said what, when, and why. Connect once, remember everywhere. Stop managing context and start building.

🚀 Get Started

Build your unified memory graph in 5 minutes:

  1. Sign Up at core.heysol.ai and create your account

  2. Add your first memory - share context about yourself

  3. Visualize your memory graph and see how CORE automatically forms connections between facts

  4. Test it out - ask "What do you know about me?" in the conversation section

  5. Connect to your tools via MCP (see the example config under Key Features below)

🧩 Key Features

🧠 Unified, Portable Memory:

Add and recall your memory across Cursor, Windsurf, Claude Desktop, Claude Code, Gemini CLI, AWS's Kiro, VS Code, and Roo Code via MCP
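
Connecting a client is a one-time configuration step. As a minimal sketch for Claude Desktop, assuming CORE exposes a remote MCP endpoint (the URL below is a placeholder; check the documentation for the real one), the `mcp-remote` bridge can be added to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "core-memory": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://<your-core-mcp-endpoint>"]
    }
  }
}
```

Most other MCP clients (Cursor, Windsurf, and the rest) accept an equivalent server entry in their own config files.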


🕸️ Temporal + Reified Knowledge Graph:

Remember the story behind every fact—track who said what, when, and why with rich relationships and full provenance, not just flat storage


🌐 Browser Extension:

Save conversations and content from ChatGPT, Grok, Gemini, Twitter, YouTube, blog posts, and any webpage directly into your CORE memory.

How to Use Extension

  1. Download the Extension from the Chrome Web Store.
  2. Log in to the CORE dashboard
    • Navigate to Settings (bottom left)
    • Go to API Key → Generate new key → Name it “extension.”
  3. Open the extension, paste your API key, and save.

https://github.com/user-attachments/assets/6e629834-1b9d-4fe6-ae58-a9068986036a

💬 Chat with Memory:

Ask questions like "What are my writing preferences?" with instant insights from your connected knowledge


Auto-Sync from Apps:

Automatically capture relevant context from Linear, Slack, Notion, GitHub and other connected apps into your CORE memory


🔗 MCP Integration Hub:

Connect Linear, Slack, GitHub, Notion once to CORE—then use all their tools in Claude, Cursor, or any MCP client with a single URL


How CORE creates memory


CORE’s ingestion pipeline has four phases designed to capture evolving context:

  1. Normalization: Links new information to recent context, breaks long documents into coherent chunks while keeping cross-references, and standardizes terms so by the time CORE extracts knowledge, it’s working with clean, contextualized input instead of messy text.
  2. Extraction: Pulls meaning from normalized text by identifying entities (people, tools, projects, concepts), turning them into statements with context, source, and time, and mapping relationships. For example, “We wrote CORE in Next.js” becomes: Entities (Core, Next.js), Statement (CORE was developed using Next.js), and Relationship (was developed using).
  3. Resolution: Detects contradictions, tracks how preferences evolve, and preserves multiple perspectives with provenance instead of overwriting them so memory reflects your full journey, not just the latest snapshot.
  4. Graph Integration: Connects entities, statements, and episodes into a temporal knowledge graph that links facts to their context and history, turning isolated data into a living web of knowledge agents can actually use.

The Result: Instead of a flat database, CORE gives you a memory that grows and changes with you - preserving context, evolution, and ownership so agents can actually use it.
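
To make these phases concrete, here is a minimal sketch of the data shapes that Extraction and Resolution imply, using the Next.js example above. All type and function names are illustrative assumptions, not CORE's actual internals:

```typescript
// Illustrative sketch only - these types are assumptions, not CORE's internals.
interface Entity {
  id: string;
  name: string;
  kind: "person" | "tool" | "project" | "concept";
}

interface Statement {
  subject: string;        // entity id
  predicate: string;      // the relationship, e.g. "was developed using"
  object: string;         // entity id
  source: string;         // which conversation or app this came from
  observedAt: Date;       // when it was said (temporal provenance)
  invalidatedAt?: Date;   // set when a later statement supersedes this one
}

// Extraction: "We wrote CORE in Next.js" becomes entities plus a statement.
const core: Entity = { id: "e1", name: "CORE", kind: "project" };
const nextjs: Entity = { id: "e2", name: "Next.js", kind: "tool" };
const stmt: Statement = {
  subject: core.id,
  predicate: "was developed using",
  object: nextjs.id,
  source: "example-conversation",
  observedAt: new Date(),
};

// Resolution: a contradicting statement does not overwrite the old one;
// the old fact is marked invalidated but kept, preserving the full journey.
function resolve(existing: Statement, incoming: Statement): Statement[] {
  const contradicts =
    existing.subject === incoming.subject &&
    existing.predicate === incoming.predicate &&
    existing.object !== incoming.object &&
    existing.invalidatedAt === undefined;
  return contradicts
    ? [{ ...existing, invalidatedAt: incoming.observedAt }, incoming]
    : [existing, incoming];
}
```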


How CORE recalls from memory


When you ask CORE a question, it doesn’t just look up text - it digs into your whole knowledge graph to find the most useful answers.

  1. Search: CORE looks through memory from multiple angles at once - keyword search for exact matches, semantic search for related ideas even if phrased differently, and graph traversal to follow links between connected concepts.
  2. Re-Rank: The retrieved results are reordered to highlight the most relevant and diverse ones, ensuring you don’t just see obvious matches but also deeper connections.
  3. Filtering: CORE applies smart filters based on time, reliability, and relationship strength, so only the most meaningful knowledge surfaces.
  4. Output: You get back both facts (clear statements) and episodes (the original context they came from), so recall is always grounded in context, time, and story.

The result: CORE doesn’t just recall facts - it recalls them in the right context, time, and story, so agents can respond the way you would remember.
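
As a rough, self-contained sketch of the search, re-rank, filter, and output flow (simple keyword matching stands in for all three search angles here; the names and shapes are assumptions for illustration, not CORE's code):

```typescript
// Toy sketch of recall - illustrative only, not CORE's implementation.
interface Hit {
  id: string;
  text: string;         // the fact (a clear statement)
  episode: string;      // the original context it came from
  observedAt: Date;
  score: number;
}

// A tiny in-memory stand-in for the knowledge graph.
const memory = [
  { id: "s1", text: "User prefers concise writing", episode: "chat with ChatGPT", observedAt: new Date("2024-06-10") },
  { id: "s2", text: "CORE was developed using Next.js", episode: "team discussion", observedAt: new Date("2024-05-01") },
];

// 1. Search: keyword matching here; the real pipeline also runs semantic
//    search and graph traversal in parallel and merges the results.
function search(query: string): Hit[] {
  const terms = query.toLowerCase().split(/\s+/);
  return memory
    .map((m) => ({
      ...m,
      score: terms.filter((t) => m.text.toLowerCase().includes(t)).length,
    }))
    .filter((h) => h.score > 0);
}

// 2. Re-rank (highest score first) and 3. filter (drop stale hits),
// then 4. return both the fact and its originating episode.
function recall(query: string, notBefore: Date): Hit[] {
  return search(query)
    .sort((a, b) => b.score - a.score)
    .filter((h) => h.observedAt >= notBefore);
}

console.log(recall("writing preferences", new Date("2024-01-01")));
```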

Documentation

Explore our documentation to get the most out of CORE

🔒 Security

CORE takes security seriously. We implement industry-standard security practices to protect your data:

  • Data Encryption: All data in transit (TLS 1.3) and at rest (AES-256)
  • Authentication: OAuth 2.0 and magic link authentication
  • Access Control: Workspace-based isolation and role-based permissions
  • Vulnerability Reporting: Please report security issues to [email protected]

For detailed security information, see our Security Policy.

🧑‍💻 Support

Have questions or feedback? We're here to help: join the Discord or explore the Documentation.

Usage Guidelines

Store:

  • Conversation history
  • User preferences
  • Task context
  • Reference materials

Don't Store:

  • Sensitive data (PII)
  • Credentials
  • System logs
  • Temporary data
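
One way a client could enforce these guidelines is to screen text before it is sent to memory. The guard below is hypothetical, sketched purely for illustration; it is not part of CORE:

```typescript
// Hypothetical pre-ingest guard for the guidelines above - a sketch, not CORE code.
const DENYLIST: RegExp[] = [
  /api[_-]?key|secret|token|password/i,   // credentials
  /\b\d{3}-\d{2}-\d{4}\b/,                // US-SSN-shaped strings (PII)
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/,   // key material
];

function safeToStore(text: string): boolean {
  return !DENYLIST.some((pattern) => pattern.test(text));
}

console.log(safeToStore("Prefers tabs over spaces"));  // true
console.log(safeToStore("my password is hunter2"));    // false
```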

👥 Contributors
