
Sentience
Build fully sentient, unruggable AI agents
Stars: 53

Sentience is a tool that allows developers to create autonomous AI agents on-chain with verifiable proofs. It leverages a Trusted Execution Environment (TEE) architecture to ensure secure execution of AI calls and provides transparency through cryptographic attestations posted on Solana's blockchain. The tool enhances market potential by transforming agents into cryptographically verifiable entities, addressing the need for trust in AI development. Sentience offers features like OpenAI compatibility, on-chain verifiability, an explorer for agent history, and an easy-to-use developer experience. The repository includes SDKs for Python and JavaScript, along with components for verified inference and instructions for verifying the TEE architecture.
README:
Sentience enables developers to build autonomous, fully on-chain verifiable AI agents with an OpenAI-compatible Proof of Sentience SDK.
Quickstart | How it works | Features | Roadmap | Help | Docs
AI agents have reached a $10B+ market cap, but most of them are still controlled by humans. This is a serious problem: it exposes investors and the community to risk, because developers can simply rug-pull or manipulate the agents.
We're already seeing activity logs for zerebro and aixbt, but Sentience goes further: it transforms agents into cryptographically verifiably autonomous entities, unlocking true sentience and addressing a critical need for trust. This significantly enhances agents' market potential and is the first step toward ensuring that agents are self-governing.
Get started with the Proof of Sentience SDK. This Python example makes your agent's thoughts and actions (LLM inferences) verifiable on-chain.
Get a free API key
- Create an account here
- Create an API key on the dashboard
Install the Python SDK
pip install sentience
First, make an LLM inference request to OpenAI through the verified endpoint (replace GALADRIEL_API_KEY in the snippet with the API key you created on the dashboard). Then verify the integrity of the LLM inference.
import sentience
from openai import OpenAI

client = OpenAI(
    base_url="https://api.galadriel.com/v1/verified",
    api_key="Bearer GALADRIEL_API_KEY",
)
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print("completion:", completion)

is_valid = sentience.verify_signature(completion)
print("is_valid:", is_valid)
Learn how to display your agent's previous requests, how to verify the proofs, and how to use the JS SDK version in our docs.
Sentience leverages a Trusted Execution Environment (TEE) architecture to securely execute LLM API calls and ensure verifiability through cryptographic attestations; each attestation is posted on-chain on Solana for transparency and integrity. The following diagram illustrates the architecture and workflow of Sentience:
The flow:
- The agent sends a request containing a message with the desired LLM model to the TEE.
- The TEE securely processes the request by calling the LLM API.
- The TEE sends back the {Message, Proof} to the agent.
- The TEE submits the attestation with {Message, Proof} to Solana.
- The Proof of Sentience SDK is used to read the attestation from Solana and verify it with {Message, Proof} (a conceptual sketch of this check follows the list). The proof log can be added to the agent website/app.
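Conceptually, verifying a {Message, Proof} pair comes down to checking a signature produced inside the TEE over the message. The snippet below is an illustrative sketch only, not the SDK's actual implementation: it assumes the proof is an Ed25519 signature over a SHA-256 digest of the message, that the TEE's signing public key has been read from the on-chain attestation, and that the PyNaCl library is available. In practice, sentience.verify_signature performs this check for you.

# Illustrative only -- assumes an Ed25519 signature over sha256(message);
# the real proof format is defined by the Proof of Sentience SDK.
import hashlib

from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey


def verify_proof(message: bytes, proof: bytes, tee_public_key: bytes) -> bool:
    """Return True if `proof` is a valid TEE signature over sha256(message)."""
    digest = hashlib.sha256(message).digest()
    try:
        VerifyKey(tee_public_key).verify(digest, proof)
        return True
    except BadSignatureError:
        return False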
The architecture has the following benefits:
- Integrity of the execution: The LLM API call was executed within a TEE, ensuring the operation was secure, untampered, and isolated from external interference.
- Authenticity of the output: The response generated by the LLM API was not altered, which guarantees that the output genuinely originated from the specified model and API.
- Provenance of the request: The request for LLM inference originated from a verified source, ensuring no unauthorized agents were involved.
- Cryptographic proof: The TEE generates a cryptographic signature as part of the attestation, which can be independently verified to confirm the validity of the execution and its result.
- Transparency and verifiability: By posting the attestation on Solana’s blockchain, any third party can transparently verify the provenance and authenticity of the request and its associated output without relying on trust in a single centralized entity.
To verify the code running inside the TEE, follow the instructions here.
Sentience is already securing and verifying $15M+ worth of agents today.
For example, you can see the full implementation in action with Daige, a sentient, cyberpunk AI dog.
OpenAI-compatible Python & JS Proof of Sentience SDK.
- Makes verifiable LLM inferences within your agent.
- Supports OpenAI and Claude LLMs, as well as fine-tuned OpenAI models. This makes it compatible with any existing AI agent framework such as ELIZA, ARC, Zerebro, etc.
- Logging functionality to retrieve and display verified inferences. This makes it easy to implement a proof terminal like this.
- Verification logic to validate in code whether a proof is correct (see the usage sketch after this list).
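As a usage sketch of that verification logic, the helper below builds only on the API already shown in the quickstart (client.chat.completions.create and sentience.verify_signature); the wrapper function and its failure handling are our own illustration, not part of the SDK.

import sentience
from openai import OpenAI

# Placeholder key: use the API key created on the dashboard.
client = OpenAI(
    base_url="https://api.galadriel.com/v1/verified",
    api_key="Bearer GALADRIEL_API_KEY",
)


def verified_completion(messages, model="gpt-4o"):
    """Run an inference and only return it if its Proof of Sentience verifies."""
    completion = client.chat.completions.create(model=model, messages=messages)
    if not sentience.verify_signature(completion):
        raise RuntimeError("proof verification failed; discarding inference")
    return completion


reply = verified_completion([{"role": "user", "content": "Hello!"}])
print(reply.choices[0].message.content)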
Open-sourced TEE architecture.
- Including instructions to verify the code running inside TEEs.
- LLM inference is executed inside Amazon Nitro Enclaves.
- The enclave can't be accessed from the outside, ensuring the agent's security.
On-chain verifiability with Solana.
Sentience Explorer.
- Enables discovery of the full inference history of all verified agents. See here.
Easy-to-use developer experience.
- No need to know the underlying cryptographic primitives of TEEs.
Proof of Sentience SDKs.
Underlying TEE architecture that powers Proof of Sentience.
- enclave - this is where the enclave is built and run
- host - proxies HTTP requests to the API running in the enclave
- solana-attestation-contract - posts proofs of inference responses to Solana
- verify - instructions and code for verifying the TEE
If you have any questions about Galadriel, feel free to:
- Join our Discord and ask for help.
Roadmap:
- Proof of Sentience SDK
- Python framework and CLI to build sentient AI agents
- Explorer to discover verified AI agents
- Can run and deploy all of the agent core logic fully inside a TEE
- GPU TEE nodes with OSS LLMs
- L1 for Sentient AI agents
Alternative AI tools for Sentience
Similar Open Source Tools


stride-gpt
STRIDE GPT is an AI-powered threat modelling tool that leverages Large Language Models (LLMs) to generate threat models and attack trees for a given application based on the STRIDE methodology. Users provide application details, such as the application type, authentication methods, and whether the application is internet-facing or processes sensitive data. The model then generates its output based on the provided information. It features a simple and user-friendly interface, supports multi-modal threat modelling, generates attack trees, suggests possible mitigations for identified threats, and does not store application details. STRIDE GPT can be accessed via OpenAI API, Azure OpenAI Service, Google AI API, or Mistral API. It is available as a Docker container image for easy deployment.

OpenDAN-Personal-AI-OS
OpenDAN is an open source Personal AI OS that consolidates various AI modules for personal use. It empowers users to create powerful AI agents like assistants, tutors, and companions. The OS allows agents to collaborate, integrate with services, and control smart devices. OpenDAN offers features like rapid installation, AI agent customization, connectivity via Telegram/Email, building a local knowledge base, distributed AI computing, and more. It aims to simplify life by putting AI in users' hands. The project is in early stages with ongoing development and future plans for user and kernel mode separation, home IoT device control, and an official OpenDAN SDK release.

persian-license-plate-recognition
The Persian License Plate Recognition (PLPR) system is a state-of-the-art solution designed for detecting and recognizing Persian license plates in images and video streams. Leveraging advanced deep learning models and a user-friendly interface, it ensures reliable performance across different scenarios. The system offers advanced detection using YOLOv5 models, precise recognition of Persian characters, real-time processing capabilities, and a user-friendly GUI. It is well-suited for applications in traffic monitoring, automated vehicle identification, and similar fields. The system's architecture includes modules for resident management, entrance management, and a detailed flowchart explaining the process from system initialization to displaying results in the GUI. Hardware requirements include an Intel Core i5 processor, 8 GB RAM, a dedicated GPU with at least 4 GB VRAM, and an SSD with 20 GB of free space. The system can be installed by cloning the repository and installing required Python packages. Users can customize the video source for processing and run the application to upload and process images or video streams. The system's GUI allows for parameter adjustments to optimize performance, and the Wiki provides in-depth information on the system's architecture and model training.

AutoGroq
AutoGroq is a revolutionary tool that dynamically generates tailored teams of AI agents based on project requirements, eliminating manual configuration. It enables users to effortlessly tackle questions, problems, and projects by creating expert agents, workflows, and skillsets with ease and efficiency. With features like natural conversation flow, code snippet extraction, and support for multiple language models, AutoGroq offers a seamless and intuitive AI assistant experience for developers and users.

graphrag-local-ollama
GraphRAG Local Ollama is a repository that offers an adaptation of Microsoft's GraphRAG, customized to support local models downloaded using Ollama. It enables users to leverage local models with Ollama for large language models (LLMs) and embeddings, eliminating the need for costly OpenAPI models. The repository provides a simple setup process and allows users to perform question answering over private text corpora by building a graph-based text index and generating community summaries for closely-related entities. GraphRAG Local Ollama aims to improve the comprehensiveness and diversity of generated answers for global sensemaking questions over datasets.

LLM-Minutes-of-Meeting
LLM-Minutes-of-Meeting is a project showcasing the capability of NLP and LLMs to summarize long meetings and automate the task of delegating Minutes of Meeting (MoM) emails. It converts audio/video files to text, generates editable MoM, and aims to develop a real-time Python web application for meeting automation. The tool features keyword highlighting, topic tagging, export in various formats, a user-friendly interface, and uses Celery for asynchronous processing. It is designed for corporate meetings, educational institutions, legal and medical fields, accessibility, and event coverage.

AntSK
AntSK is an AI knowledge base/agent built with .Net8+Blazor+SemanticKernel. It features a semantic kernel for accurate natural language processing, a memory kernel for continuous learning and knowledge storage, a knowledge base for importing and querying knowledge from various document formats, a text-to-image generator integrated with StableDiffusion, GPTs generation for creating personalized GPT models, API interfaces for integrating AntSK into other applications, an open API plugin system for extending functionality, a .Net plugin system for integrating business functions, real-time information retrieval from the internet, model management for adapting and managing different models from different vendors, support for domestic models and databases for operation in a trusted environment, and planned model fine-tuning based on llamafactory.

ShortGPT
ShortGPT is a powerful framework for automating content creation, simplifying video creation, footage sourcing, voiceover synthesis, and editing tasks. It offers features like automated editing framework, scripts and prompts, voiceover support in multiple languages, caption generation, asset sourcing, and persistency of editing variables. The tool is designed for youtube automation, Tiktok creativity program automation, and offers customization options for efficient and creative content creation.

burpference
Burpference is an open-source extension designed to capture in-scope HTTP requests and responses from Burp's proxy history and send them to a remote LLM API in JSON format. It automates response capture, integrates with APIs, optimizes resource usage, provides color-coded findings visualization, offers comprehensive logging, supports native Burp reporting, and allows flexible configuration. Users can customize system prompts, API keys, and remote hosts, and host models locally to prevent high inference costs. The tool is ideal for offensive web application engagements to surface findings and vulnerabilities.

CodeProject.AI-Server
CodeProject.AI Server is a standalone, self-hosted, fast, free, and open-source Artificial Intelligence microserver designed for any platform and language. It can be installed locally without the need for off-device or out-of-network data transfer, providing an easy-to-use solution for developers interested in AI programming. The server includes a HTTP REST API server, backend analysis services, and the source code, enabling users to perform various AI tasks locally without relying on external services or cloud computing. Current capabilities include object detection, face detection, scene recognition, sentiment analysis, and more, with ongoing feature expansions planned. The project aims to promote AI development, simplify AI implementation, focus on core use-cases, and leverage the expertise of the developer community.

feedgen
FeedGen is an open-source tool that uses Google Cloud's state-of-the-art Large Language Models (LLMs) to improve product titles, generate more comprehensive descriptions, and fill missing attributes in product feeds. It helps merchants and advertisers surface and fix quality issues in their feeds using Generative AI in a simple and configurable way. The tool relies on GCP's Vertex AI API to provide both zero-shot and few-shot inference capabilities on GCP's foundational LLMs. With few-shot prompting, users can customize the model's responses towards their own data, achieving higher quality and more consistent output. FeedGen is an Apps Script based application that runs as an HTML sidebar in Google Sheets, allowing users to optimize their feeds with ease.

nextpy
Nextpy is a cutting-edge software development framework optimized for AI-based code generation. It provides guardrails for defining AI system boundaries, structured outputs for prompt engineering, a powerful prompt engine for efficient processing, better AI generations with precise output control, modularity for multiplatform and extensible usage, developer-first approach for transferable knowledge, and containerized & scalable deployment options. It offers 4-10x faster performance compared to Streamlit apps, with a focus on cooperation within the open-source community and integration of key components from various projects.

parlant
Parlant is a structured approach to building and guiding customer-facing AI agents. It allows developers to create and manage robust AI agents, providing specific feedback on agent behavior and helping understand user intentions better. With features like guidelines, glossary, coherence checks, dynamic context, and guided tool use, Parlant offers control over agent responses and behavior. Developer-friendly aspects include instant changes, Git integration, clean architecture, and type safety. It enables confident deployment with scalability, effective debugging, and validation before deployment. Parlant works with major LLM providers and offers client SDKs for Python and TypeScript. The tool facilitates natural customer interactions through asynchronous communication and provides a chat UI for testing new behaviors before deployment.

aide
Aide is an Open Source AI-native code editor that combines the powerful features of VS Code with advanced AI capabilities. It provides a combined chat + edit flow, proactive agents for fixing errors, inline editing widget, intelligent code completion, and AST navigation. Aide is designed to be an intelligent coding companion, helping users write better code faster while maintaining control over the development process.

ansible-power-aix
The IBM Power Systems AIX Collection provides modules to manage configurations and deployments of Power AIX systems, enabling workloads on Power platforms as part of an enterprise automation strategy through the Ansible ecosystem. It includes example best practices, requirements for AIX versions, Ansible, and Python, along with resources for documentation and contribution.
For similar tasks

awesome-ml-gen-ai-elixir
A curated list of Machine Learning (ML) and Generative AI (GenAI) packages and resources for the Elixir programming language. It includes core tools for data exploration, traditional machine learning algorithms, deep learning models, computer vision libraries, generative AI tools, livebooks for interactive notebooks, and various resources such as books, videos, and articles. The repository aims to provide a comprehensive overview for experienced Elixir developers and ML/AI practitioners exploring different ecosystems.

For similar jobs

promptflow
Prompt flow is a suite of development tools designed to streamline the end-to-end development cycle of LLM-based AI applications, from ideation, prototyping, testing, and evaluation to production deployment and monitoring. It makes prompt engineering much easier and enables you to build LLM apps with production quality.

deepeval
DeepEval is a simple-to-use, open-source LLM evaluation framework specialized for unit testing LLM outputs. It incorporates various metrics such as G-Eval, hallucination, answer relevancy, RAGAS, etc., and runs locally on your machine for evaluation. It provides a wide range of ready-to-use evaluation metrics, allows for creating custom metrics, integrates with any CI/CD environment, and enables benchmarking LLMs on popular benchmarks. DeepEval is designed for evaluating RAG and fine-tuning applications, helping users optimize hyperparameters, prevent prompt drifting, and transition from OpenAI to hosting their own Llama2 with confidence.

MegaDetector
MegaDetector is an AI model that identifies animals, people, and vehicles in camera trap images (which also makes it useful for eliminating blank images). This model is trained on several million images from a variety of ecosystems. MegaDetector is just one of many tools that aim to make conservation biologists more efficient with AI. If you want to learn about other ways to use AI to accelerate camera trap workflows, check out our overview of the field, affectionately titled "Everything I know about machine learning and camera traps".

leapfrogai
LeapfrogAI is a self-hosted AI platform designed to be deployed in air-gapped resource-constrained environments. It brings sophisticated AI solutions to these environments by hosting all the necessary components of an AI stack, including vector databases, model backends, API, and UI. LeapfrogAI's API closely matches that of OpenAI, allowing tools built for OpenAI/ChatGPT to function seamlessly with a LeapfrogAI backend. It provides several backends for various use cases, including llama-cpp-python, whisper, text-embeddings, and vllm. LeapfrogAI leverages Chainguard's apko to harden base python images, ensuring the latest supported Python versions are used by the other components of the stack. The LeapfrogAI SDK provides a standard set of protobuffs and python utilities for implementing backends and gRPC. LeapfrogAI offers UI options for common use-cases like chat, summarization, and transcription. It can be deployed and run locally via UDS and Kubernetes, built out using Zarf packages. LeapfrogAI is supported by a community of users and contributors, including Defense Unicorns, Beast Code, Chainguard, Exovera, Hypergiant, Pulze, SOSi, United States Navy, United States Air Force, and United States Space Force.

llava-docker
This Docker image for LLaVA (Large Language and Vision Assistant) provides a convenient way to run LLaVA locally or on RunPod. LLaVA is a powerful AI tool that combines natural language processing and computer vision capabilities. With this Docker image, you can easily access LLaVA's functionalities for various tasks, including image captioning, visual question answering, text summarization, and more. The image comes pre-installed with LLaVA v1.2.0, Torch 2.1.2, xformers 0.0.23.post1, and other necessary dependencies. You can customize the model used by setting the MODEL environment variable. The image also includes a Jupyter Lab environment for interactive development and exploration. Overall, this Docker image offers a comprehensive and user-friendly platform for leveraging LLaVA's capabilities.

carrot
The 'carrot' repository on GitHub provides a list of free and user-friendly ChatGPT mirror sites for easy access. The repository includes sponsored sites offering various GPT models and services. Users can find and share sites, report errors, and access stable and recommended sites for ChatGPT usage. The repository also includes a detailed list of ChatGPT sites, their features, and accessibility options, making it a valuable resource for ChatGPT users seeking free and unlimited GPT services.

TrustLLM
TrustLLM is a comprehensive study of trustworthiness in LLMs, including principles for different dimensions of trustworthiness, an established benchmark, evaluation and analysis of trustworthiness for mainstream LLMs, and discussion of open challenges and future directions. Specifically, we first propose a set of principles for trustworthy LLMs that span eight different dimensions. Based on these principles, we further establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics. We then present a study evaluating 16 mainstream LLMs in TrustLLM, consisting of over 30 datasets. The document explains how to use the trustllm Python package to help you assess the trustworthiness of your LLM more quickly. For more details about TrustLLM, please refer to the project website.

AI-YinMei
AI-YinMei is an AI virtual anchor (VTuber) development tool (NVIDIA GPU version). It supports fastgpt knowledge-base chat and a complete LLM stack ([fastgpt] + [one-api] + [Xinference]); Bilibili live-stream integration, replying to chat barrages and greeting viewers entering the stream; speech synthesis via Microsoft edge-tts, Bert-VITS2, and GPT-SoVITS; expression control through Vtuber Studio; image generation with stable-diffusion-webui output to an OBS live room; NSFW image filtering (public-NSFW-y-distinguish); search and image search via DuckDuckGo (requires a VPN) and Baidu image search (no VPN required); an AI reply chat box (HTML plug-in); AI singing via Auto-Convert-Music; playlists (HTML plug-in); dancing, expression video playback, head-patting and gift-smashing actions; automatic dancing when singing starts, with idle swaying during chat and singing; multi-scene switching, background music switching, and automatic day/night scene changes; and open-ended singing and painting, letting the AI decide the content automatically.