Athena-Public
The Linux OS for AI Agents: a local OS that gives any LLM persistent memory, autonomy, and time-awareness. Own the state. Rent the intelligence.
Stars: 288
Project Athena is a Linux OS designed for AI Agents, providing memory, persistence, scheduling, and governance for AI models. It offers a comprehensive memory layer that survives across sessions, models, and IDEs, allowing users to own their data and port it anywhere. The system is built bottom-up through 1,079+ sessions, focusing on depth and compounding knowledge. Athena features a trilateral feedback loop for cross-model validation, a Model Context Protocol server with 9 tools, and a robust security model with data residency options. The repository structure includes an SDK package, examples for quickstart, scripts, protocols, workflows, and deep documentation. Key concepts cover architecture, knowledge graph, semantic memory, and adaptive latency. Workflows include booting, reasoning modes, planning, research, and iteration. The project has seen significant content expansion, viral validation, and metrics improvements.
README:
Athena is not an AI Agent. It is the Linux OS they run on. Open Source · Sovereign · Model-Agnostic
Athena gives AI agents something they don't have: persistent memory, structure, and governance.
Most AI agents reset every session: brilliant but amnesiac. Athena provides the state layer that any agent (Claude, Gemini, GPT, Llama) reads on boot and writes to on shutdown. Think of it as a memory card that works in any game console.
| OS Concept | Linux | Athena |
|---|---|---|
| Kernel | Hardware abstraction | Memory persistence + retrieval (VectorRAG, Supabase) |
| File System | ext4, NTFS | Markdown files, session logs, tag index |
| Scheduler | cron, systemd | Heartbeat daemon, daily briefing, auto-indexing |
| Shell | bash, zsh | MCP Tool Server, /start, /end, /think |
| Permissions | chmod, users/groups | 4-level capability tokens + Secret Mode |
| Package Manager | apt, yum | Protocols, skills, workflows |
You own the data (Markdown files on your machine, git-versioned). You only rent the intelligence (LLM API calls). Switch models tomorrow and your memory stays exactly where it is.
"But I have ChatGPT Memory / Claude Projects"
You're confusing RAM with a Hard Drive.
ChatGPT Memory and Claude Projects are context window tricks. They are RAM: fast, useful, but fragile. They get wiped, compressed, or hallucinated away.
Athena is a Hard Drive.
| | SaaS Memory (ChatGPT/Claude) | Athena |
|---|---|---|
| Ownership | Rented (Vendor Lock-in) | Owned (Local Files) |
| Lifespan | Until session/project deleted | Forever (Git Versioned) |
| Structure | Opaque Blob | Structured Knowledge Graph |
| Search | Basic keyword | Hybrid RAG (5 sources + RRF fusion) |
| Agency | Zero (waits for you) | Bounded Autonomy (Heartbeat, Cron) |
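The "Hybrid RAG (5 sources + RRF fusion)" row refers to Reciprocal Rank Fusion, a standard technique for merging ranked lists from multiple retrievers. Below is a minimal, self-contained sketch of the idea; the function name and example documents are illustrative, not Athena's actual API.

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).

    Documents near the top of several lists outrank documents that top
    only one list. k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Two sources disagree; RRF rewards consensus across them.
vector_hits = ["notes.md", "plan.md", "log.md"]
tag_hits = ["plan.md", "todo.md"]
fused = rrf_fuse([vector_hits, tag_hits])  # "plan.md" ranks first
```

Because "plan.md" appears in both lists, its summed reciprocal-rank score beats "notes.md", which only tops a single source.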
| Tool | What It Does | How Athena Differs |
|---|---|---|
| ChatGPT Memory | Stores flat facts ("User likes Python") | Athena stores structured state: project context, decision logs, evolving architecture |
| Claude Projects | Single context file per project | Athena is a multi-file system with semantic search across 1,000+ documents |
| Mem0 | SaaS memory layer | Athena is local-first and MIT-licensed; your data never leaves your machine |
| Obsidian + MCP | Note-taking with AI plugins | Complementary; Athena adds semantic search, auto-indexing, and session lifecycle |
| Custom RAG | DIY vector search | Athena includes RAG but adds governance, permissioning, scheduling, and sessions |
Jargon Decoder
| Athena Term | What It Actually Is | Do You Need It? |
|---|---|---|
| "Memory" | RAG: storing text in a database and retrieving it later | Yes. This is the core |
| "Protocols" | System prompts / reusable instructions | Yes. Saved templates for AI behavior |
| "Cold Storage" | Markdown files on your disk | Yes. Plain text you can read/edit anywhere |
| "Hot Storage" | Vector database (Supabase + pgvector) | Optional. Enables semantic search |
| "Heartbeat" | A background daemon that auto-indexes your files | Optional. Passive awareness without manual /start |
| "Adaptive Latency" | The AI uses more compute for hard tasks, less for easy ones | Automatic. You don't configure this |
Full glossary: docs/GLOSSARY.md
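The Hot Storage row says a vector database "enables semantic search" over the cold-storage Markdown. The core operation is ranking documents by embedding similarity, which pgvector performs server-side. Here is a stdlib-only toy: `toy_embed` is a stand-in for a real embedding model (Athena's table lists text-embedding-004), and the vocabulary and documents are invented for illustration.

```python
import math

def toy_embed(text):
    """Stand-in for a real embedding model: a bag-of-words count vector
    over a tiny fixed vocabulary. Illustration only."""
    vocab = ["memory", "session", "boot", "search", "protocol"]
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def semantic_search(query, docs):
    """Rank documents by cosine similarity to the query embedding --
    the same shape of operation a pgvector query performs."""
    q = toy_embed(query)
    return sorted(docs, key=lambda d: cosine(q, toy_embed(docs[d])), reverse=True)

docs = {
    "sessions.md": "session boot session",
    "search.md": "search protocol search",
}
ranked = semantic_search("how does session boot work", docs)
```

With a real model the vectors are dense (768 dimensions here) rather than word counts, but the ranking logic is the same.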
| Step | Action |
|---|---|
| 1. Get an IDE | Antigravity · Cursor · VS Code + Copilot · GitHub Codespaces |
| 2. Clone this repo | git clone https://github.com/winstonkoh87/Athena-Public.git && cd Athena-Public |
| 3. Open folder → Type /start | The AI reads the repo structure and takes it from there |
| 4. Type /brief interview | Athena asks about YOU (goals, style, domain) and builds your personal profile |
That's it. No config files. No API keys. No database setup. The folder is the product.
[!TIP] When you're done, type /end to save. Next time you /start, the agent picks up exactly where you left off. Your first session takes ~30 minutes (mostly the interview). Every session after that boots in seconds.
CLI Reference
pip install -e . # Install SDK
athena # Boot session
athena init . # Initialize workspace in current directory
athena init --ide cursor # Init with IDE-specific config
athena check # System health check
athena save "summary" # Quicksave checkpoint
athena --end # Close session and save
athena --version # Show version (v9.0.0)
Import Existing Data (ChatGPT, Gemini, Claude)
Athena's memory is just Markdown files. Any text you can export becomes part of your memory:
| Source | How to Import |
|---|---|
| ChatGPT | Settings → Data Controls → Export → Copy .md files into .context/memories/imports/ |
| Gemini | Google Takeout → Select "Gemini Apps" → Extract into .context/memories/imports/ |
| Claude | Settings → Export Data → Copy transcripts into .context/memories/imports/ |
| Any Markdown | Drop .md files into .context/memories/ → indexed on next /start |
After importing, run athena check to verify files are detected.
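Since Athena's memory is just Markdown, importing mostly means converting an export into .md files. The sketch below assumes a simplified, hypothetical schema (a list of `{"title", "messages"}` dicts); real ChatGPT/Gemini exports ship vendor-specific JSON that needs its own parsing first, so treat this as a template, not a working importer.

```python
import pathlib

def transcript_to_markdown(conversation):
    """Render one exported conversation as a Markdown memory file.
    Assumed schema: {"title": str, "messages": [{"role": str, "text": str}]}."""
    lines = [f"# {conversation['title']}", ""]
    for msg in conversation["messages"]:
        lines.append(f"**{msg['role']}**: {msg['text']}")
        lines.append("")
    return "\n".join(lines)

def import_all(conversations, dest=".context/memories/imports"):
    """Write each conversation to its own numbered .md file."""
    out = pathlib.Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    for i, conv in enumerate(conversations):
        (out / f"import-{i:04d}.md").write_text(
            transcript_to_markdown(conv), encoding="utf-8"
        )
```

Once the files land in the imports directory, the next /start picks them up like any other Markdown memory.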
Athena is a skeleton: protocols, search, and infrastructure. You provide the soul.
On your first /start, run /brief interview. Athena will ask you about everything: your name, profession, goals, decision style, strengths, blind spots, even your life context. This isn't small talk. It's the foundation that makes every future session compound.
| What Athena Asks | Why |
|---|---|
| Identity: name, age, nationality | Communication style, cultural context |
| Professional: role, industry, salary range | Domain expertise, decision-making context |
| Goals: 3-month, 1-year, 5-year | Aligns every response to your actual trajectory |
| Decision Style: risk appetite, speed vs quality | Calibrates how options and tradeoffs are framed |
| Blind Spots: recurring mistakes, weak areas | Athena proactively flags these before they bite |
| Communication: tone, verbosity, directness | Sets the default voice across all interactions |
[!IMPORTANT] Everything stays local. Your profile is stored as a Markdown file on your machine (user_profile.md). No cloud. No tracking. Be as candid as you want; this is your private operating system.
Pattern: The more you share → the faster Athena stops being generic → the sooner it starts thinking like you.
Full First Session Guide: detailed walkthrough with examples
After you install Athena, you repeat one cycle: /start → Work → /end. Every cycle deposits training data into memory. Over hundreds of cycles, Athena stops being generic and starts thinking like you.
flowchart TD
START["/start"] -->|"Load identity + recall"| WORK["Work Session"]
WORK -->|"Auto-save every exchange"| WORK
WORK -->|"Done for the day"| END["/end"]
END -->|"Finalize, sync, commit"| MEMORY["Memory"]
style START fill:#22c55e,color:#fff,stroke:#333
style END fill:#ef4444,color:#fff,stroke:#333
style MEMORY fill:#8b5cf6,color:#fff,stroke:#333
style WORK fill:#3b82f6,color:#fff,stroke:#333

| Sessions | What Happens |
|---|---|
| 1–50 | Basic recall. Remembers your name and project. |
| 50–200 | Pattern recognition. Anticipates your preferences. |
| 200–500 | Deep sync. Knows your frameworks, blind spots, style. |
| 500–1,000+ | Full context. Anticipates patterns before you state them. |
Your data lives in Markdown files you own, on your machine, git-versioned. Optionally backed up to Supabase (cloud insurance).
- Local-first: No vendor lock-in. Switch models anytime.
- Hybrid search: Canonical + GraphRAG + Tags + Vectors + Filenames, fused via RRF
- Auto-quicksave: Every exchange is logged without manual action
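Auto-quicksave boils down to appending a timestamped entry to a plain-text session log. A minimal sketch of that mechanic follows; the directory layout, file naming, and entry format here are assumptions, not Athena's documented format.

```python
import datetime
import pathlib

def format_entry(summary, now=None):
    """Render one checkpoint entry. The timestamp format is illustrative."""
    now = now or datetime.datetime.now()
    return f"\n## {now:%H:%M:%S}\n{summary}\n"

def quicksave(summary, log_dir="memories/session_logs"):
    """Append a timestamped checkpoint to today's session log file.
    Because it only appends Markdown, the log stays human-readable
    and diffs cleanly under git."""
    path = pathlib.Path(log_dir)
    path.mkdir(parents=True, exist_ok=True)
    log = path / f"{datetime.date.today():%Y-%m-%d}.md"
    with log.open("a", encoding="utf-8") as f:
        f.write(format_entry(summary))
    return log
```

The append-only design is what makes "every exchange is logged without manual action" cheap: no database write, just one file append per exchange.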
Starter decision frameworks across 13 categories: architecture, engineering, decision-making, reasoning, and more. These are templates, not prescriptions. The real value comes when you write your own protocols from your own friction and failures.
Expose Athena's capabilities to any MCP-compatible client:
| Tool | Description |
|---|---|
| smart_search | Hybrid RAG search with RRF fusion |
| agentic_search | Multi-step query decomposition with parallel search |
| quicksave | Save checkpoint to session log |
| health_check | System health audit |
| recall_session | Read session log content |
| governance_status | Triple-Lock compliance state |
| list_memory_paths | Memory directory inventory |
| set_secret_mode | Toggle demo mode (blocks internal tools) |
| permission_status | Show access state & tool manifest |
| Command | Purpose |
|---|---|
| /start | Boot system, load identity and context |
| /end | Close session, commit to memory |
| /think | Deep reasoning mode |
| /ultrathink | Maximum depth analysis |
| /research | Multi-source web research |
| /save | Mid-session checkpoint |
| /refactor | Workspace optimization |
| /plan | Structured planning with pre-mortem |
| /vibe | Ship at 70%, iterate fast |
Full Workflow Documentation: all 20 workflows in .agent/workflows/
- 4 capability levels: read → write → admin → system
- 3 sensitivity tiers: public → internal → restricted
- Secret Mode: Toggle for demos; only public tools remain accessible
- Evaluator Gate: 50-query regression suite (MRR@5 = 0.44) to prevent search degradation
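The Evaluator Gate's MRR@5 figure is Mean Reciprocal Rank at cutoff 5, a standard retrieval metric: for each query, score 1/rank of the first relevant result in the top 5 (0 if none), then average over queries. A minimal sketch, with invented example data:

```python
def mrr_at_k(results, relevant, k=5):
    """Mean Reciprocal Rank@k: for each query, 1/rank of the first
    relevant hit within the top k, else 0; averaged over all queries."""
    total = 0.0
    for query, ranked in results.items():
        for rank, doc in enumerate(ranked[:k], start=1):
            if doc in relevant[query]:
                total += 1.0 / rank
                break
    return total / len(results)

results = {
    "q1": ["a", "b", "c"],  # relevant doc "a" at rank 1 -> 1.0
    "q2": ["x", "b", "y"],  # relevant doc "b" at rank 2 -> 0.5
    "q3": ["x", "y", "z"],  # no relevant doc in top 5    -> 0.0
}
relevant = {"q1": {"a"}, "q2": {"b"}, "q3": {"m"}}
score = mrr_at_k(results, relevant)  # (1.0 + 0.5 + 0.0) / 3 = 0.5
```

Running a fixed query suite through this metric before each release is what lets the gate catch search-quality regressions.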
Athena-Public/
├── src/athena/          # SDK package (pip install -e .)
│   ├── core/            # Config, governance, permissions, security
│   ├── tools/           # Search, agentic search, reranker, heartbeat
│   ├── memory/          # Vector DB, delta sync, schema
│   ├── boot/            # Orchestrator, loaders, shutdown
│   ├── cli/             # init, save commands
│   └── mcp_server.py    # MCP Tool Server (9 tools, 2 resources)
├── examples/
│   ├── protocols/       # 120+ starter frameworks (13 categories)
│   ├── scripts/         # 17 core OS scripts
│   ├── workflows/       # 20 slash commands
│   ├── templates/       # 36 starter templates
│   └── quickstart/      # Runnable demos
├── docs/                # Architecture, benchmarks, security, guides
├── community/           # Contributing, roadmap
├── pyproject.toml       # Modern packaging (v9.0.0)
└── .env.example         # Environment template
"I gave Gemini a brain..." went viral on r/GeminiAI and r/ChatGPT (700K+ views)
Architecture & Design
| Document | Description |
|---|---|
| ARCHITECTURE.md | System design, data flow, hub architecture |
| SPEC_SHEET.md | Technical specification, data schema, API surface |
| GRAPHRAG.md | Knowledge graph layer |
| VECTORRAG.md | Semantic memory implementation |
| SEMANTIC_SEARCH.md | Hybrid RAG implementation |
| MCP_SERVER.md | MCP architecture & IDE configuration |
| KNOWLEDGE_GRAPH.md | Compressed concept index |
Getting Started & Guides
| Document | Description |
|---|---|
| GETTING_STARTED.md | Step-by-step setup guide |
| YOUR_FIRST_AGENT.md | 5-minute quickstart |
| WHAT_IS_AN_AI_AGENT.md | Beginner primer |
| DEMO.md | Live walkthrough |
| WORKFLOWS.md | All 48 slash commands |
| FAQ.md | Common questions |
| GLOSSARY.md | Key terms and definitions |
Performance & Case Studies
| Document | Description |
|---|---|
| BENCHMARKS.md | Boot time, search latency, token economics |
| CS-001: Boot Optimization | 85% boot time reduction |
| CS-002: Search Quality | RRF fusion results |
| CS-003: Protocol Enforcement | Governance engine |
| TOP_10_PROTOCOLS.md | MCDA-ranked essential protocols |
| ENGINEERING_DEPTH.md | Technical depth overview |
Security & Data
| Mode | Where Data Lives | Best For |
|---|---|---|
| Local | Your machine only | Sensitive data, air-gapped environments |
| Cloud | Supabase (your project) | Cross-device access |
| Hybrid | Local files + cloud embeddings | Best of both |
- Use SUPABASE_ANON_KEY for client-side. Never expose SUPABASE_SERVICE_ROLE_KEY.
- Enable Row-Level Security on Supabase tables.
- Restrict agent working directories; never grant access to ~/.ssh or .env.
SECURITY.md · LOCAL_MODE.md
Philosophy & Deep Dives
| Document | Description |
|---|---|
| MANIFESTO.md | The Bionic Unit philosophy |
| USER_DRIVEN_RSI.md | How you and AI improve together |
| TRILATERAL_FEEDBACK.md | Cross-model validation ("Watchmen") |
| Quadrant IV | How Athena surfaces unknown unknowns |
| Adaptive Latency | Macro-Robust, Micro-Efficient (RETO) |
| REFERENCES.md | APA-formatted academic citations |
| ABOUT_ME.md | About the author |
Prerequisites (Optional)
The quickstart needs zero configuration. These are only needed for advanced features:
# Required for cloud features
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_ANON_KEY=your-anon-key
ANTHROPIC_API_KEY=your-anthropic-key
# Optional (for multi-model validation)
GOOGLE_API_KEY=your-google-api-key
OPENAI_API_KEY=your-openai-key

cp .env.example .env
# Add your keys

Tech Stack
| Layer | Technology | Purpose |
|---|---|---|
| SDK | athena Python package (v9.0.0) | Core search, reranking, memory |
| Reasoning | Claude Opus (primary) | Main reasoning engine |
| Optimization | DSPy | Prompt optimization & self-correction |
| Reranking | FlashRank | Lightweight cross-encoder reranker |
| IDE | Antigravity, Cursor, VS Code | Agentic development environment |
| Embeddings | text-embedding-004 (768-dim) | Google embedding model |
| GraphRAG | NetworkX + Leiden + ChromaDB | Knowledge graph |
| Memory | Supabase + pgvector / local ChromaDB | Vector database |
| Routing | CognitiveRouter | Adaptive latency by query complexity |
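The CognitiveRouter row describes routing by query complexity: cheap, fast handling for simple lookups and deeper, slower reasoning for open-ended questions. The real router's signals are not public, so the following is only a toy heuristic illustrating the adaptive-latency idea; the marker words and tier names are invented.

```python
def route(query):
    """Toy complexity router: cheap heuristics pick a latency tier.
    Illustration of adaptive latency, not Athena's actual logic."""
    words = [w.lower() for w in query.split()]
    hard_markers = {"why", "design", "tradeoff", "architecture", "compare"}
    if len(words) <= 6 and not hard_markers.intersection(words):
        return "fast"      # short factual lookup -> low-latency path
    if hard_markers.intersection(words):
        return "deep"      # open-ended reasoning -> maximum depth
    return "standard"

tier = route("compare the tradeoff between local and cloud storage")
```

A production router would typically use a classifier or the model's own confidence rather than keyword lists, but the shape is the same: classify first, then spend compute accordingly.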
Changelog
- v9.0.0 (Feb 16 2026): First-Principles Workspace Refactor. Root directory cleaned (28 → 14 files), 114 stub session logs archived, build artifacts purged, dead .framework/v7.0 archived, .gitignore hardened. Zero regressions (17/17 tests pass).
- v8.6.0 (Feb 15 2026): Massive Content Expansion. 200 protocols (was 75), 535 scripts (was 16), 48 workflows (was 14), 36 templates. 23 protocol categories. Repo audit for recruiter readiness. Content sanitization pass.
- v8.5.0 (Feb 12 2026): Phase 1 Complete. MCP Tool Server (9 tools, 2 resources), Permissioning Layer (4 levels + secret mode), Search MRR +105% (0.21 → 0.44), Evaluator Gate (50 queries). SDK v2.0.0.
- v8.3.1 (Feb 11 2026): Viral Validation Release. 654K+ Reddit views, 1,660+ upvotes, 5,300+ shares. #1 All-Time r/ChatGPT, #2 All-Time r/GeminiAI.
- v8.2.1 (Feb 9 2026): Metrics Sync. Fixed batch_audit.py automation, linked orphan files, reconciled tech debt, 8,079 tags indexed.
Full Changelog: complete version history from v1.0.0 (Dec 2025)
MIT License: see LICENSE
This is a starter kit. Your instance will reflect your domain, your decisions, your voice. Clone it, make it yours.