
daydreams
Daydreams is a generative agent framework for executing anything onchain
Stars: 163

Daydreams is a generative agent library designed for playing onchain games by injecting context. It is chain agnostic and allows users to perform onchain tasks, including playing any onchain game. The tool is lightweight and powerful, enabling users to define game context, register actions, set goals, monitor progress, and integrate with external agents. Daydreams aims to be 'lite' and 'composable', dynamically generating code needed to play games. It is currently in pre-alpha stage, seeking feedback and collaboration for further development.
README:
Daydreams is a generative agent library for executing anything onchain. It is chain agnostic and can be used to perform onchain tasks - including playing any onchain game - simply by injecting context. It works with Base, Solana, Ethereum, Starknet, and other chains.
It is designed to be as lite as possible while remaining powerful and flexible.
Prerequisites:
- Node.js 16+
- pnpm
- Bun
- Docker Desktop
# Install dependencies
pnpm install
# Copy environment variables
cp .env.example .env
# Start Docker services (ChromaDB)
sh ./docker.sh
The project includes several example implementations demonstrating different use cases:
A simple CLI agent that can execute tasks using Chain of Thought:
# Run basic example
bun task
Demonstrates hierarchical goal planning and execution:
# Run goal-based example
bun goal
A Twitter bot that can monitor mentions and generate autonomous thoughts:
# Run Twitter bot example
bun twitter
Shows how to integrate with external APIs:
# Run API example
bun api
Daydreams is built around the following concepts:
- Orchestrator
- Handlers
- Goals
- Memory
- Chain of Thought
The Orchestrator is the central component that manages the flow of data through the system. It is responsible for:
- Registering handlers
- Routing data through the system
- Scheduling recurring tasks
- Maintaining the autonomous flow
- Calling the Chain of Thought
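As a rough illustration of these responsibilities, the sketch below shows the general orchestrator pattern in TypeScript. It is a simplified, hypothetical model for intuition only; the MiniOrchestrator class, the Handler shape, and the dispatch method are not part of the Daydreams API.

// Hypothetical sketch of the orchestrator pattern -- not the actual Daydreams API.
type Role = "input" | "action" | "output";

interface Handler {
  name: string;
  role: Role;
  handler: (payload: unknown) => Promise<unknown>;
}

class MiniOrchestrator {
  private handlers = new Map<string, Handler>();

  // Registering handlers: keep a registry keyed by handler name.
  register(h: Handler) {
    this.handlers.set(h.name, h);
  }

  // Routing data: run the named input handler, then fan the result out
  // to every registered output handler.
  async dispatch(inputName: string, payload: unknown) {
    const input = this.handlers.get(inputName);
    if (!input || input.role !== "input") {
      throw new Error(`No input handler named ${inputName}`);
    }
    const processed = await input.handler(payload);
    for (const h of this.handlers.values()) {
      if (h.role === "output") await h.handler(processed);
    }
    return processed;
  }
}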
Handlers are the building blocks of the system. They are responsible for processing data and producing outputs. They are registered with the Orchestrator and are chained together in an autonomous flow.
Register handlers for inputs, outputs, and actions using registerIOHandler. Each handler has a role, description, schema, and handler function:
- Input Handlers: Process incoming data (e.g., user messages, API webhooks)
- Action Handlers: Execute operations and return results (e.g., API calls, database queries)
- Output Handlers: Produce side effects (e.g., sending messages, updating UI)
import { z } from "zod"; // the schemas below are defined with zod
// `orchestrator` is assumed to be an existing Orchestrator instance.

// Register an action handler
orchestrator.registerIOHandler({
  name: "universalApiCall",
  role: "action",
  schema: z.object({
    method: z.enum(["GET", "POST", "PUT", "PATCH", "DELETE"]),
    url: z.string().url(),
    headers: z.record(z.string()).optional(),
    body: z.union([z.string(), z.record(z.any())]).optional(),
  }),
  handler: async (payload) => {
    // Handler implementation
    const response = await fetch(/* ... */);
    return response;
  },
});

// Register an input handler
orchestrator.registerIOHandler({
  name: "user_chat",
  role: "input",
  schema: z.object({
    content: z.string(),
    userId: z.string().optional(),
  }),
  handler: async (payload) => {
    return payload;
  },
});

// Register an output handler
orchestrator.registerIOHandler({
  name: "ui_chat_reply",
  role: "output",
  schema: z.object({
    userId: z.string().optional(),
    message: z.string(),
  }),
  handler: async (payload) => {
    console.log(`Reply to user ${payload.userId}: ${payload.message}`);
  },
});
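Since each handler declares a zod schema, incoming payloads can be validated before the handler function runs. The snippet below is only a usage sketch of that idea; the schema mirrors the user_chat handler above, and safeParse is standard zod.

import { z } from "zod";

const userChatSchema = z.object({
  content: z.string(),
  userId: z.string().optional(),
});

// Validate a payload against the handler's schema before processing it.
const result = userChatSchema.safeParse({ content: "gm", userId: "user-123" });

if (result.success) {
  // result.data is typed as { content: string; userId?: string }
  console.log("valid payload:", result.data);
} else {
  console.error("rejected payload:", result.error.issues);
}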
The agent uses Chain of Thought processing to:
- Plan strategies for achieving goals
- Break down complex goals into subgoals
- Execute actions to accomplish goals
- Learn from experiences and store knowledge
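As a mental model only (not the actual Chain of Thought kernel), that cycle can be pictured as a plan / act / learn loop; every name in the sketch below is hypothetical.

// Hypothetical plan -> act -> learn loop, for intuition only.
interface Step {
  action: string;
  payload: unknown;
}

async function pursueGoal(
  goal: string,
  plan: (goal: string) => Promise<Step[]>, // break the goal into concrete steps
  execute: (step: Step) => Promise<unknown>, // run one action (e.g. via an action handler)
  remember: (step: Step, result: unknown) => Promise<void>, // store the experience
): Promise<void> {
  const steps = await plan(goal);
  for (const step of steps) {
    const result = await execute(step);
    await remember(step, result); // learning: persist the outcome for later retrieval
  }
}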
Subscribe to events to track the agent's thinking and actions:
dreams.on("think:start", ({ query }) => {
  console.log("🧠 Starting to think about:", query);
});

dreams.on("action:complete", ({ action, result }) => {
  console.log("✅ Action complete:", {
    type: action.type,
    result,
  });
});
The system consists of several key components:
- Context Layers
  - Game/Application State
  - Historical Data
  - Execution Context
- Chain of Thought Kernel
  - Reasoning Engine
  - Memory Integration
  - Action Planning
- Vector Database (see the sketch after this list)
  - Experience Storage
  - Knowledge Retrieval
  - Similarity Search
- Swarm Rooms
  - Multi-Agent Collaboration
  - Knowledge Sharing
  - Federated Learning
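The docker.sh step earlier starts ChromaDB, so the Vector Database layer could plausibly look something like the sketch below, written against the chromadb JS client with precomputed embeddings. Daydreams' actual memory implementation may differ; the collection name and example data are made up.

import { ChromaClient } from "chromadb";

async function memoryDemo() {
  // Connects to the local ChromaDB instance started by docker.sh (default localhost:8000).
  const client = new ChromaClient();
  const experiences = await client.getOrCreateCollection({ name: "experiences" });

  // Experience Storage: persist an action outcome with an embedding and metadata.
  await experiences.add({
    ids: ["exp-1"],
    embeddings: [[0.12, 0.08, 0.33]], // a real agent would compute these with an embedding model
    documents: ["Swapped tokens on Base; transaction confirmed"],
    metadatas: [{ kind: "action_result" }],
  });

  // Knowledge Retrieval / Similarity Search: fetch the most similar past experiences.
  const similar = await experiences.query({
    queryEmbeddings: [[0.11, 0.09, 0.3]],
    nResults: 3,
  });
  console.log(similar.documents);
}

memoryDemo();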
graph TD
    subgraph Orchestrator
        subgraph Handlers
            I[Input Handlers]
            A[Action Handlers]
            O[Output Handlers]
        end
        I --> P[Processor]
        P --> A
        P --> O
        A --> CoT[Chain of Thought]
        CoT --> A
        A --> O
        O --> I
        A --> I
    end
    subgraph Memory System
        VM[Vector Memory] <--> CoT
        RM[Room Manager] <--> VM
    end
    subgraph Goal System
        GM[Goal Manager] --> CoT
        CoT --> GM
    end
    subgraph External Systems
        API[APIs] <--> A
        UI[User Interface] --> I
        UI <--> O
    end

    style Orchestrator fill:#abf,stroke:#333,stroke-width:4px
    style Memory System fill:#abf,stroke:#333,stroke-width:2px
The system works through several coordinated components:
- Orchestrator: The central coordinator that:
  - Manages input/output/action handlers
  - Routes data through the system
  - Schedules recurring tasks
  - Maintains the autonomous flow
- Chain of Thought (CoT): The reasoning engine that:
  - Processes complex queries; it can be called directly (as in the examples) or through the Orchestrator
  - Makes decisions based on goals
  - Determines required actions
  - Learns from outcomes
- Memory System:
  - Vector Memory stores experiences and knowledge
  - Room Manager organizes conversations and contexts
  - Enables retrieval of relevant past experiences
- Goal System (see the sketch after this list):
  - Breaks down complex objectives
  - Manages dependencies between goals
  - Tracks progress and completion
  - Adapts strategies based on outcomes
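To illustrate dependency tracking between goals, here is a hypothetical sketch; the Goal shape and the readyGoals helper are illustrative only and not the actual Goal Manager.

// Hypothetical goal structure with dependency tracking -- not the Daydreams Goal Manager.
type GoalStatus = "pending" | "in_progress" | "completed" | "failed";

interface Goal {
  id: string;
  description: string;
  status: GoalStatus;
  dependencies: string[]; // ids of goals that must complete first
}

// A goal is ready to execute when it is pending and all of its dependencies are completed.
function readyGoals(goals: Goal[]): Goal[] {
  const done = new Set(goals.filter((g) => g.status === "completed").map((g) => g.id));
  return goals.filter(
    (g) => g.status === "pending" && g.dependencies.every((dep) => done.has(dep)),
  );
}

// Example: a high-level objective broken into ordered subgoals.
const goals: Goal[] = [
  { id: "scout", description: "Read current game state", status: "completed", dependencies: [] },
  { id: "plan", description: "Choose the next move", status: "pending", dependencies: ["scout"] },
  { id: "act", description: "Submit the move onchain", status: "pending", dependencies: ["plan"] },
];

console.log(readyGoals(goals).map((g) => g.id)); // ["plan"]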
This architecture allows for:
- Flexible composition of handlers
- Autonomous decision-making
- Contextual memory and learning
- Goal-oriented behavior
Each component can be used independently or composed together for more complex behaviors. The system is designed to be extensible, allowing new handlers and components to be added as needed.
Design principles:
- Lightweight: Keep the codebase small and focused
- Composable: Easy to combine functions and tools
- Extensible: Simple to add new capabilities
Roadmap:
- [x] Chain of Thought
- [ ] Context Layers
- [ ] Graph memory system
- [ ] Swarm Rooms
- [ ] Create 'sleeves' abstract for dynamic context generation
⚠️ Warning: Daydreams is currently in pre-alpha stage; we are looking for feedback and collaboration.