sre
The Operating System for Agents
Stars: 915
SmythOS is an operating system designed for building, deploying, and managing intelligent AI agents at scale. It provides a unified SDK and a resource abstraction layer over various AI services, making it both scalable and flexible. With an agent-first design, a developer-friendly SDK, a modular architecture, and enterprise security features, SmythOS offers a robust foundation for AI workloads. The system is built with a philosophy inspired by traditional operating system kernels, ensuring autonomy, control, and security for AI agents. SmythOS aims to make shipping production-ready AI agents accessible and open to everyone in the coming Internet of Agents era.
README:
Everything you need to build, deploy, and manage intelligent AI agents at scale. SmythOS is designed with a philosophy inspired by operating system kernels, ensuring a robust and scalable foundation for AI agents.
SDK Documentation | SRE Core Documentation | Code Examples
- Shipping production-ready AI agents shouldn’t feel like rocket science.
- Autonomy and control can, and must, coexist.
- Security isn’t an add-on; it’s built-in.
- The coming Internet of Agents must stay open and accessible to everyone.
SmythOS provides a complete Operating System for Agentic AI. Just as traditional operating systems manage resources and provide APIs for applications, SmythOS manages AI resources and provides a unified SDK that works from development to production.
SmythOS provides a unified interface for all resources, ensuring consistency and simplicity across your entire AI platform. Whether you're storing a file locally, on S3, or any other storage provider, you don't need to worry about the underlying implementation details. SmythOS offers a powerful abstraction layer where all providers expose the same functions and APIs.
This principle applies to all services - not just storage. Whether you're working with VectorDBs, cache (Redis, RAM), LLMs (OpenAI, Anthropic), or any other resource, the interface remains consistent across providers.
This approach makes your AI platform easy to scale and incredibly flexible. You can seamlessly swap between different providers to test performance, optimize costs, or meet specific requirements without changing a single line of your business logic.
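The uniform-interface idea above can be sketched in a few lines. This is an illustrative model, not the actual SmythOS API: `StorageProvider`, `LocalStorageProvider`, and `S3StorageProvider` are hypothetical names, and the S3 class stands in for real network calls with an in-memory map.

```typescript
// Illustrative sketch of the uniform-resource principle (hypothetical names,
// not the real SmythOS API): every storage provider exposes the same
// interface, so business logic never depends on a concrete backend.
interface StorageProvider {
    write(key: string, data: string): Promise<string>; // returns a URI
    read(key: string): Promise<string | undefined>;
}

class LocalStorageProvider implements StorageProvider {
    private files = new Map<string, string>();
    async write(key: string, data: string): Promise<string> {
        this.files.set(key, data);
        return `local://${key}`;
    }
    async read(key: string): Promise<string | undefined> {
        return this.files.get(key);
    }
}

class S3StorageProvider implements StorageProvider {
    private objects = new Map<string, string>(); // stand-in for real S3 calls
    private bucket: string;
    constructor(bucket: string) {
        this.bucket = bucket;
    }
    async write(key: string, data: string): Promise<string> {
        this.objects.set(key, data);
        return `s3://${this.bucket}/${key}`;
    }
    async read(key: string): Promise<string | undefined> {
        return this.objects.get(key);
    }
}

// The business logic only sees StorageProvider; the backend is chosen
// once, at construction time.
async function saveReport(storage: StorageProvider, text: string): Promise<string> {
    return storage.write('report.txt', text);
}
```

Because `saveReport` depends only on the interface, swapping local storage for S3 is a one-line change at construction time; that is the property the SDK's abstraction layer aims to provide across storage, cache, LLM, and VectorDB providers.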
Key Benefits:
- Agent-First Design: Built specifically for AI agent workloads
- Developer-Friendly: Simple SDK that scales from development to production
- Modular Architecture: Extensible connector system for any infrastructure
- Production-Ready: Scalable, observable, and battle-tested
- Enterprise Security: Built-in access control and secure credential management
Install the CLI globally and create a new project:
```shell
npm i -g @smythos/cli
sre create
```

The CLI will guide you step-by-step to create your SDK project with the right configuration for your needs.
Add the SDK directly to your existing project:
```shell
npm install @smythos/sdk
```

Check the Examples, documentation, and Code Templates to get started.
Note: If you run into an issue with the CLI or with your code, set the environment variable LOG_LEVEL="debug" and run your code again, then share the logs with us to help diagnose the problem.
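For reference, a typical debug invocation might look like this; `index.js` is a hypothetical entry point standing in for your own script:

```shell
# Hypothetical entry point: replace index.js with your own script.
LOG_LEVEL="debug" node index.js
```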
This monorepo contains three main packages:
The SRE is the core runtime environment that powers SmythOS. Think of it as the kernel of the AI agent operating system.
Features:
- Modular Architecture: Pluggable connectors for every service (Storage, LLM, VectorDB, Cache, etc.)
- Security-First: Built-in Candidate/ACL system for secure resource access
- Resource Management: Intelligent memory, storage, and compute management
- Agent Orchestration: Complete agent lifecycle management
- 40+ Components: Production-ready components for AI, data processing, and integrations
Supported Connectors:
- Storage: Local, S3, Google Cloud, Azure
- LLM: OpenAI, Anthropic, Google AI, AWS Bedrock, Groq, Perplexity
- VectorDB: Pinecone, Milvus, RAMVec
- Cache: RAM, Redis
- Vault: JSON File, AWS Secrets Manager, HashiCorp
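As a rough mental model of the pluggable connector system (not the actual SRE internals), connectors can be thought of as entries in a registry keyed by service role and connector name; `ConnectorRegistry` and its method names here are invented for illustration.

```typescript
// Illustrative connector registry (hypothetical, not the real SRE code):
// services are looked up by role + name, so infrastructure is chosen by
// configuration rather than hard-coded into agent logic.
type ServiceRole = 'Storage' | 'Cache' | 'VectorDB';

interface Connector {
    readonly name: string;
}

class ConnectorRegistry {
    private connectors = new Map<string, Connector>();

    register(role: ServiceRole, connector: Connector): void {
        this.connectors.set(`${role}:${connector.name}`, connector);
    }

    resolve(role: ServiceRole, name: string): Connector {
        const found = this.connectors.get(`${role}:${name}`);
        if (!found) throw new Error(`No ${role} connector named "${name}"`);
        return found;
    }
}
```

In this model, switching a deployment from the RAM cache to Redis only changes which name the configuration asks the registry to resolve; the calling code is unchanged.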
The SDK provides a clean, developer-friendly abstraction layer over the SRE runtime. It's designed for simplicity without sacrificing power.
Why Use the SDK:
- Simple API: Clean, intuitive interface that's easy to learn
- Type-Safe: Full TypeScript support with IntelliSense
- Production-Ready: Same code works in development and production
- Configuration-Independent: Business logic stays unchanged as infrastructure scales
The SRE CLI helps you get started quickly with scaffolding and project management.
The SDK allows you to build agents with code, or to load and run a `.smyth` file. `.smyth` is the file extension for agents built with the SmythOS builder.
```typescript
import path from 'path';
import { Agent, Model } from '@smythos/sdk';

async function main() {
    const agentPath = path.resolve(__dirname, 'my-agent.smyth');

    // Import the agent workflow
    const agent = Agent.import(agentPath, {
        model: Model.OpenAI('gpt-4o'),
    });

    // Query the agent and get the full response
    const result = await agent.prompt('Hello, how are you?');
    console.log(result);
}

main().catch(console.error);
```

Want stream mode? Easy:
Stream Mode Example: real-time response streaming with events
```typescript
const events = await agent.prompt('Hello, how are you?').stream();

events.on('content', (text) => {
    console.log(text);
});
events.on('end', () => { /* ... handle end ... */ });
events.on('usage', () => { /* ... collect agent usage data ... */ });
events.on('toolCall', () => { /* ... */ });
events.on('toolResult', () => { /* ... */ });
```

Want chat mode? Easy:
Chat Mode Example: conversational agent with memory
```typescript
const chat = agent.chat();

// From here you can use prompt() or prompt().stream() to handle it
let result = await chat.prompt("Hello, I'm Smyth");
console.log(result);

result = await chat.prompt('Do you remember my name?');
console.log(result);

// The difference between agent.prompt() and chat.prompt() is that
// the latter remembers the conversation
```

In the next example, we code the agent logic with the help of the SDK elements.
Complete Article Writer Agent: full example using LLM + VectorDB + Storage
```typescript
import { Agent, Model } from '@smythos/sdk';

async function main() {
    // Create an intelligent agent
    const agent = new Agent({
        name: 'Article Writer',
        model: 'gpt-4o',
        behavior: 'You are a copywriting assistant. The user will provide a topic and you have to write an article about it and store it.',
    });

    // Add a custom skill that combines multiple AI capabilities
    agent.addSkill({
        id: 'AgentWriter_001',
        name: 'WriteAndStoreArticle',
        description: 'Writes an article about a given topic and stores it',
        process: async ({ topic }) => {
            // VectorDB - search for relevant context
            const vec = agent.vectordb.Pinecone({
                namespace: 'myNameSpace',
                indexName: 'demo-vec',
                pineconeApiKey: process.env.PINECONE_API_KEY,
                embeddings: Model.OpenAI('text-embedding-3-large'),
            });

            const searchResult = await vec.search(topic, {
                topK: 10,
                includeMetadata: true,
            });
            const context = searchResult.map((e) => e?.metadata?.text).join('\n');

            // LLM - generate the article
            const llm = agent.llm.OpenAI('gpt-4o-mini');
            const result = await llm.prompt(`Write an article about ${topic} using the following context: ${context}`);

            // Storage - save the article
            const storage = agent.storage.S3({
                /* ... S3 config ... */
            });
            const uri = await storage.write('article.txt', result);

            return `The article has been generated and stored. Internal URI: ${uri}`;
        },
    });

    // Use the agent
    const result = await agent.prompt('Write an article about Sakura trees');
    console.log(result);
}

main().catch(console.error);
```

Security is a core tenet of SRE. Every operation requires proper authorization through the Candidate/ACL system, ensuring that agents only access the resources they are permitted to use.
```typescript
const candidate = AccessCandidate.agent(agentId);
const storage = ConnectorService.getStorageConnector().user(candidate);
await storage.write('data.json', content);
```

Your business logic stays identical while the infrastructure scales. When you use the SDK, the SmythOS Runtime Environment is implicitly initialized with general-purpose connectors that cover standard agent use cases.
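To make the Candidate/ACL idea concrete, here is a minimal sketch of how such a gate might work. This is an illustrative model with invented names (`ACL`, `grant`, `check`), not the actual SRE implementation.

```typescript
// Illustrative Candidate/ACL gate (hypothetical, not the real SRE code):
// every resource operation is checked against the permissions granted to
// the requesting candidate before it is allowed through.
type Permission = 'read' | 'write';

interface Candidate {
    readonly id: string;
}

class ACL {
    // resource -> candidateId -> granted permissions
    private grants = new Map<string, Map<string, Set<Permission>>>();

    grant(resource: string, candidate: Candidate, perm: Permission): void {
        const byCandidate =
            this.grants.get(resource) ?? new Map<string, Set<Permission>>();
        const perms = byCandidate.get(candidate.id) ?? new Set<Permission>();
        perms.add(perm);
        byCandidate.set(candidate.id, perms);
        this.grants.set(resource, byCandidate);
    }

    check(resource: string, candidate: Candidate, perm: Permission): boolean {
        return this.grants.get(resource)?.get(candidate.id)?.has(perm) ?? false;
    }
}
```

The key property is deny-by-default: a candidate with no explicit grant for a resource gets no access, which matches the data-isolation guarantees the section describes.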
Basic SRE Setup: default development configuration
```typescript
// You don't need to explicitly initialize SRE;
// we are just showing how it is initialized internally:
// const sre = SRE.init({
//     Cache: { Connector: 'RAM' },
//     Storage: { Connector: 'Local' },
//     Log: { Connector: 'ConsoleLog' },
// });

async function main() {
    // your agent logic goes here
}

main();
```

But you can explicitly initialize SRE with other built-in connectors, or write your own. Use cases:
- You want to use a custom agents store
- You want to store your API keys and other credentials in a more secure vault
- You need enterprise grade security and data isolation
- ...
Production SRE Setup: enterprise-grade configuration with custom connectors
```typescript
const sre = SRE.init({
    Account: { Connector: 'EnterpriseAccountConnector', Settings: { ... } },
    Vault: { Connector: 'Hashicorp', Settings: { url: 'https://vault.company.com' } },
    Cache: { Connector: 'Redis', Settings: { url: 'redis://prod-cluster' } },
    Storage: { Connector: 'S3', Settings: { bucket: 'company-ai-agents' } },
    VectorDB: { Connector: 'Pinecone', Settings: { indexName: 'company-ai-agents' } },
    Log: { Connector: 'CustomLogStore' },
});

async function main() {
    // your agent logic goes here
}

main();
```

SRE ships with 40+ production-ready components for every AI use case. These components can be invoked programmatically or through the symbolic representation of the agent workflow (the `.smyth` file).
- AI/LLM: `GenAILLM`, `ImageGen`, `LLMAssistant`
- External: `APICall`, `WebSearch`, `WebScrape`, `HuggingFace`
- Data: `DataSourceIndexer`, `DataSourceLookup`, `JSONFilter`
- Logic: `LogicAND`, `LogicOR`, `Classifier`, `ForEach`
- Storage: `LocalStorage`, `S3`
- Code: `ECMAScript`, `ServerlessCode`
| Feature | Description |
|---|---|
| Agent-Centric | Built specifically for AI agent workloads and patterns |
| Secure by Default | Enterprise-grade security with data isolation |
| High Performance | Optimized for high-throughput AI operations |
| Modular | Swap any component without breaking your system |
| Observable | Built-in monitoring, logging, and debugging tools |
| Cloud-Native | Runs anywhere - local, cloud, edge, or hybrid |
| Scalable | From development to enterprise production |
We welcome contributions! Please see our Contributing Guide and Code of Conduct.
This project is licensed under the MIT License.
- We will release an open source visual agent IDE later this year.
- Support us at SmythOS
- Join our community to stay updated on new features, connectors, and capabilities.
/smɪθ oʊ ɛs/
Ride the llama. Skip the drama.