
magma
An AI agent framework built to get your agents into an application as fast as possible. Deploy on MagmaDeploy.com or self-host
Stars: 64

README:
Magma is a framework that lets you create AI agents without the headache. No complex chains, no confusing abstractions - just write the logic you want your agent to have.
Want to try it out? Chat with Dialog, our user research agent built with Magma!
- Install Magma:

```sh
npm i @pompeii-labs/magma
```

- Create your first agent:

```ts
import { MagmaAgent } from "@pompeii-labs/magma";

// Magma Agents are class based, so you can extend them with your own methods
class MyAgent extends MagmaAgent {
    // Want to give it some personality? Add system prompts:
    getSystemPrompts() {
        return [{
            role: "system",
            content: "You are a friendly assistant who loves dad jokes"
        }];
    }
}

// That's it! You've got a working agent
const myAgent = new MyAgent();

// Run it:
const reply = await myAgent.main();
console.log(reply.content);
```
- Simple: Build agents in minutes with minimal code
- Flexible: Use any AI provider (OpenAI, Anthropic, Groq)
- Hosted: Deploy your agents in seconds with the MagmaDeploy platform
- Powerful: Add tools and middleware when you need them
- Observable: See exactly what your agent is doing
Tools give your agent the ability to perform actions. Any method decorated with @tool and @toolparam will be available for the agent to use.
Important Notes:
- Every tool method must return a string
- Every tool has `call` as a required parameter, which is the MagmaToolCall object
- Tools are executed in sequence
```ts
import { MagmaAgent, MagmaToolCall } from "@pompeii-labs/magma";
import { tool, toolparam } from "@pompeii-labs/magma/decorators";

/**
 * Decorate any agent class method with @tool and @toolparam.
 * @tool is used to define the tool itself
 * @toolparam is used to define the parameters of the tool (key, type, description, required)
 */
class MyAgent extends MagmaAgent {
    @tool({ name: "search_database", description: "Search the database for records" })
    @toolparam({
        key: "query",
        type: "string",
        description: "Search query",
        required: true
    })
    @toolparam({
        key: "filters",
        type: "object",
        properties: [
            { key: "date", type: "string" },
            { key: "category", type: "string", enum: ["A", "B", "C"] }
        ]
    })
    async searchDatabase(call: MagmaToolCall) {
        const { query, filters } = call.fn_args;
        // Delegate to your own data-access helper. (Calling this.searchDatabase
        // here would recurse into the tool method itself.)
        const results = await this.queryRecords(query, filters);
        return "Here are the results of your search: " + JSON.stringify(results);
    }
}
```
Middleware is a concept novel to Magma. It allows you to add custom logic around key points in the agent lifecycle: before and after LLM completions, and before and after tool execution.
This is a great way to add custom logging, validation, data sanitization, etc.
Types:
- "preCompletion": Runs before the LLM call is made, takes in a MagmaUserMessage
- "onCompletion": Runs after the agent generates a text response, takes in a MagmaAssistantMessage
- "preToolExecution": Runs before a tool is executed, takes in a MagmaToolCall
- "onToolExecution": Runs after a tool is executed, takes in a MagmaToolResult
Important Notes:
- You can have unlimited middleware methods
- Middleware methods can manipulate the message they take in
- Middleware methods can throw errors to adjust the flow of the agent
Error Handling:
- If preCompletion middleware throws an error, the error message is supplied as if it were the assistant message. The user and assistant messages are also removed from the conversation history
- If onCompletion middleware throws an error, the error message is supplied to the LLM, and it tries to regenerate a response. The assistant message is not added to the conversation history
- If preToolExecution middleware throws an error, the error message is supplied as if it were the response from the tool
- If onToolExecution middleware throws an error, the error message is supplied as if it were the response from the tool
```ts
import { MagmaAgent } from "@pompeii-labs/magma";
import { middleware } from "@pompeii-labs/magma/decorators";

/**
 * Decorate any agent class method with @middleware to add custom logging, validation, etc.
 * Types: "preCompletion", "onCompletion", "preToolExecution", "onToolExecution"
 */
class MyAgent extends MagmaAgent {
    @middleware("onCompletion")
    async validateCompletion(message) {
        if (message.content.includes("bad word")) {
            throw new Error("You just used a bad word, please try again.");
        }
    }
}
```
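As a concrete illustration of the sanitization use case, here is a small standalone helper (not part of Magma; the redaction patterns are illustrative, not exhaustive) that you could call from a "preCompletion" middleware to scrub obvious secrets out of a user message before it reaches the LLM:

```ts
// Redact likely secrets from text before it is sent to the LLM.
// The patterns below are examples only; extend them for your use case.
const SECRET_PATTERNS: [RegExp, string][] = [
    [/sk-[A-Za-z0-9]{20,}/g, "[REDACTED_API_KEY]"],   // OpenAI-style keys
    [/\b\d{13,16}\b/g, "[REDACTED_CARD_NUMBER]"],     // long digit runs
    [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[REDACTED_EMAIL]"], // email addresses
];

function sanitize(text: string): string {
    return SECRET_PATTERNS.reduce(
        (acc, [pattern, replacement]) => acc.replace(pattern, replacement),
        text
    );
}
```

Inside a `@middleware("preCompletion")` method you would assign `message.content = sanitize(message.content)` before the message reaches the model.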
Jobs allow you to schedule functions within your agent. Jobs conform to the standard UNIX cron syntax (https://crontab.guru/).
Important Notes:
- Jobs should be static methods, so they can run without instantiating the agent.
- Jobs do not take in any parameters, and they do not return anything.
```ts
import { MagmaAgent } from "@pompeii-labs/magma";
import { job } from "@pompeii-labs/magma/decorators";

class MyAgent extends MagmaAgent {
    // Run every day at midnight
    @job("0 0 * * *")
    static async dailyCleanup() {
        await this.cleanDatabase();
    }

    // Run every hour with timezone
    @job("0 * * * *", { timezone: "America/New_York" })
    static async hourlySync() {
        await this.syncData();
    }
}
```
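Cron strings are easy to mistype. The following standalone helper (independent of Magma) does a rough structural check on a five-field UNIX cron expression before you commit it to a @job decorator; it accepts wildcards, plain numbers, lists, ranges, and steps, but is a sketch rather than a full cron parser:

```ts
// Rough structural check for a 5-field UNIX cron expression:
// minute, hour, day-of-month, month, day-of-week.
const FIELD_RANGES: [number, number][] = [
    [0, 59], // minute
    [0, 23], // hour
    [1, 31], // day of month
    [1, 12], // month
    [0, 7],  // day of week (0 and 7 both mean Sunday)
];

function isValidCron(expr: string): boolean {
    const fields = expr.trim().split(/\s+/);
    if (fields.length !== 5) return false;
    return fields.every((field, i) => {
        if (field === "*") return true;
        // Accept lists, ranges, and steps of plain numbers (e.g. "1,15", "0-30", "*/5")
        return field.split(",").every(part => {
            const m = part.match(/^(\*|\d+)(-(\d+))?(\/\d+)?$/);
            if (!m) return false;
            const [lo, hi] = FIELD_RANGES[i];
            const inRange = (n: string) => +n >= lo && +n <= hi;
            if (m[1] !== "*" && !inRange(m[1])) return false;
            if (m[3] !== undefined && !inRange(m[3])) return false;
            return true;
        });
    });
}
```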
Hooks allow you to expose your agent as an API. Any method decorated with @hook will be exposed as an endpoint.
Important Notes:
- Hooks are static methods, so they can run without instantiating the agent.
- Hooks are exposed at `/hooks/{hook_name}` in the Magma API
- The only parameter to hook functions is the request object, which is an instance of express.Request
```ts
import { MagmaAgent } from "@pompeii-labs/magma";
import { hook } from "@pompeii-labs/magma/decorators";
import { Request } from "express";

class MyAgent extends MagmaAgent {
    @hook('notification')
    static async handleNotification(req: Request) {
        await this.processNotification(req.body);
    }
}
```
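To invoke a hook from another service, POST to its endpoint. A minimal client sketch follows; the base URL and port are placeholders for wherever your agent is actually hosted, and the payload shape is whatever your hook expects:

```ts
// Build the endpoint path Magma exposes for a named hook.
function hookUrl(baseUrl: string, hookName: string): string {
    return `${baseUrl.replace(/\/$/, "")}/hooks/${hookName}`;
}

// Example: POST a payload to a hook registered as "notification".
// "http://localhost:3000" is a placeholder base URL, not a Magma default.
async function sendNotification(payload: unknown): Promise<void> {
    await fetch(hookUrl("http://localhost:3000", "notification"), {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
    });
}
```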
You can use any supported provider by setting the providerConfig.
Important Notes:
- You can set the providerConfig in the constructor, or by calling setProviderConfig
- You do not need to adjust any of your tools, middleware, jobs, or hooks to use a different provider. Magma will handle the rest.
```ts
class Agent extends MagmaAgent {
    constructor() {
        // Use OpenAI (default)
        super({
            providerConfig: {
                provider: "openai",
                model: "gpt-4o"
            }
        });

        // Use Anthropic
        this.setProviderConfig({
            provider: "anthropic",
            model: "claude-3-5-sonnet-20240620"
        });

        // Use Groq
        this.setProviderConfig({
            provider: "groq",
            model: "llama-3.1-70b-versatile"
        });
    }
}
```
Every agent has a state object that you can use to store data. You can store any data type, and it will be persisted between calls. You can also choose to use fields on the agent class to store data.
State does not get passed into LLM calls, so it's a good place to keep data you want to persist between calls, as well as sensitive data such as credentials.
```ts
class MyAgent extends MagmaAgent {
    // Using a field to store data
    myQuery = "Hello, world!";

    async setup() {
        // Initialize state
        this.state.set("counter", 0);
        this.state.set("access_token", "1234567890");
    }

    @tool({ name: "increment" })
    async increment() {
        const counter = this.state.get("counter") || 0;
        this.state.set("counter", counter + 1);
        return `Counter is now ${counter + 1}`;
    }

    @tool({ name: "api_call" })
    async apiCall() {
        const access_token = this.state.get("access_token");
        const response = await fetch("https://myapi.com/data", {
            method: "POST", // sending a body requires a non-GET method
            headers: {
                "Authorization": `Bearer ${access_token}`,
                "Content-Type": "application/json"
            },
            body: JSON.stringify({
                query: this.myQuery
            })
        });
        // response.json() is async, so await it before serializing
        return JSON.stringify(await response.json());
    }
}
```
```ts
import { MagmaAgent } from "@pompeii-labs/magma";

class MyAgent extends MagmaAgent {
    // Initialize your agent
    async setup() {
        // Load resources, connect to databases, etc.
        await this.loadDatabase();
        return "I'm ready to help!";
    }

    // Handle incoming messages
    async receive(message: any) {
        // Process user input before main() is called
        if (message.type === 'image') {
            await this.processImage(message.content);
        }
    }
}

const agent = new MyAgent();

// Clean up resources
await agent.cleanup();

// Manually trigger a specific tool
await agent.trigger({ name: "get_weather" });

// Stop the current execution
agent.kill();
```
Event handlers are optional methods that allow you to tack on custom logic to various events in the agent lifecycle.
```ts
// MagmaUsage and MagmaStreamChunk are assumed to be exported from the package
import { MagmaAgent, MagmaUsage, MagmaStreamChunk } from "@pompeii-labs/magma";

class MyAgent extends MagmaAgent {
    // Handle agent shutdown
    async onCleanup() {
        console.log("Agent shutting down...");
    }

    // Handle errors
    async onError(error: Error) {
        console.error("Something went wrong:", error);
        await this.notifyAdmin(error);
    }

    // Track token usage
    async onUsageUpdate(usage: MagmaUsage) {
        await this.saveUsageMetrics(usage);
    }

    // Process streaming responses
    async onStreamChunk(chunk: MagmaStreamChunk) {
        console.log("Received chunk:", chunk.content);
    }
}
```
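When streaming, you typically accumulate chunks into the full reply as they arrive. A self-contained sketch follows; the chunk shape is an assumption based only on the `chunk.content` field used in the onStreamChunk handler above:

```ts
// Minimal chunk shape, assumed from the onStreamChunk example above.
interface StreamChunk {
    content: string;
}

// Accumulates streamed chunks into the final assistant reply.
class StreamAccumulator {
    private parts: string[] = [];

    push(chunk: StreamChunk): void {
        this.parts.push(chunk.content);
    }

    get text(): string {
        return this.parts.join("");
    }
}
```

Inside onStreamChunk you would call `accumulator.push(chunk)`, then read `accumulator.text` once the stream completes.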
- Join our Slack Community
- Star us on GitHub
Magma is Apache 2.0 licensed.