
magma
An AI agent framework built to get your agents into an application as fast as possible. Deploy on MagmaDeploy.com or self-host
Stars: 69

Magma is a TypeScript framework for building AI agents with minimal boilerplate. Agents are plain classes that you extend with decorators for tools, middleware, scheduled jobs, and webhook endpoints, and they can run against any supported LLM provider. Agents can be self-hosted or deployed in seconds on the MagmaDeploy platform.
README:
Magma is a framework that lets you create AI agents without the headache. No complex chains, no confusing abstractions - just write the logic you want your agent to have.
Want to try it out? Chat with Dialog, our user research agent built with Magma!
- Install Magma:

npm i @pompeii-labs/magma

- Create your first agent:

import { MagmaAgent } from "@pompeii-labs/magma";

// Magma Agents are class based, so you can extend them with your own methods
class MyAgent extends MagmaAgent {

    // Want to give it some personality? Add system prompts:
    getSystemPrompts() {
        return [{
            role: "system",
            content: "You are a friendly assistant who loves dad jokes"
        }];
    }
}

// That's it! You've got a working agent
const myAgent = new MyAgent();

// Run it:
const reply = await myAgent.main();
console.log(reply.content);
- Simple: Build agents in minutes with minimal code
- Flexible: Use any AI provider (OpenAI, Anthropic, Groq)
- Hosted: Deploy your agents in seconds with the MagmaDeploy platform
- Powerful: Add tools and middleware when you need them
- Observable: See exactly what your agent is doing
Tools give your agent the ability to perform actions. Any method decorated with @tool and @toolparam will be available for the agent to use.
Important Notes:
- Every tool method must return a string
- Every tool has call as a required parameter, which is the MagmaToolCall object
- Tools are executed in sequence
import { MagmaAgent } from "@pompeii-labs/magma";
import { tool, toolparam } from "@pompeii-labs/magma/decorators";
import { MagmaToolCall } from "@pompeii-labs/magma/types"; // type import path may vary by version

/** Decorate any agent class method with @tool and @toolparam.
 * @tool defines the tool itself (name, description)
 * @toolparam defines one parameter of the tool (key, type, description, required)
 */
class MyAgent extends MagmaAgent {

    @tool({ name: "search_database", description: "Search the database for records" })
    @toolparam({
        key: "query",
        type: "string",
        description: "Search query",
        required: true
    })
    @toolparam({
        key: "filters",
        type: "object",
        properties: [
            { key: "date", type: "string" },
            { key: "category", type: "string", enum: ["A", "B", "C"] }
        ]
    })
    async searchDatabase(call: MagmaToolCall) {
        const { query, filters } = call.fn_args;
        // runQuery is a placeholder for your own data-access helper;
        // calling this.searchDatabase here would recurse into the tool itself
        const results = await this.runQuery(query, filters);
        return "Here are the results of your search: " + JSON.stringify(results);
    }
}
Middleware is a core Magma concept: it lets you attach custom logic to your agent before or after completions and tool executions.
This is a great way to add custom logging, validation, data sanitization, and more.
Types:
- "preCompletion": Runs before the LLM call is made, takes in a MagmaUserMessage
- "onCompletion": Runs after the agent generates a text response, takes in a MagmaAssistantMessage
- "preToolExecution": Runs before a tool is executed, takes in a MagmaToolCall
- "onToolExecution": Runs after a tool is executed, takes in a MagmaToolResult
Important Notes:
- You can have unlimited middleware methods
- Middleware methods can manipulate the message they take in
- Middleware methods can throw errors to adjust the flow of the agent
Error Handling:
- If preCompletion middleware throws an error, the error message is supplied as if it were the assistant message. The user and assistant messages are also removed from the conversation history
- If onCompletion middleware throws an error, the error message is supplied to the LLM, and it tries to regenerate a response. The assistant message is not added to the conversation history
- If preToolExecution middleware throws an error, the error message is supplied as if it were the response from the tool
- If onToolExecution middleware throws an error, the error message is supplied as if it were the response from the tool
import { MagmaAgent } from "@pompeii-labs/magma";
import { middleware } from "@pompeii-labs/magma/decorators";
import { MagmaAssistantMessage } from "@pompeii-labs/magma/types"; // type import path may vary by version

/**
 * Decorate any agent class method with @middleware to add custom logging, validation, etc.
 * Types: "preCompletion", "onCompletion", "preToolExecution", "onToolExecution"
 */
class MyAgent extends MagmaAgent {

    // Runs after the agent generates a text response
    @middleware("onCompletion")
    async checkCompletion(message: MagmaAssistantMessage) {
        if (message.content.includes("bad word")) {
            // Throwing feeds the error back to the LLM, which tries to regenerate
            throw new Error("You just used a bad word, please try again.");
        }
    }
}
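Middleware can also manipulate the message it takes in, as noted above. The sketch below assumes preToolExecution middleware receives the same MagmaToolCall object shown in the tools example (fn_args included); treat it as an illustration rather than documented API:

class GuardedAgent extends MagmaAgent {

    // Runs before any tool executes; mutating the call changes what the tool sees
    @middleware("preToolExecution")
    async sanitizeToolArgs(call: MagmaToolCall) {
        for (const [key, value] of Object.entries(call.fn_args)) {
            if (typeof value === "string") {
                // Hypothetical sanitation step: trim whitespace from string arguments
                call.fn_args[key] = value.trim();
            }
        }
        // Throwing here instead would be supplied to the LLM as if it were the tool's response
    }
}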
Jobs allow you to schedule functions within your agent. Jobs conform to the standard UNIX cron syntax (https://crontab.guru/).
Important Notes:
- Jobs should be static methods, so they can run without instantiating the agent.
- Jobs do not take in any parameters, and they do not return anything.
import { MagmaAgent } from "@pompeii-labs/magma";
import { job } from "@pompeii-labs/magma/decorators";

class MyAgent extends MagmaAgent {

    // Run every day at midnight
    @job("0 0 * * *")
    static async dailyCleanup() {
        // In a static method, `this` is the class itself, so these
        // helpers must also be static (placeholders here)
        await this.cleanDatabase();
    }

    // Run every hour, pinned to a timezone
    @job("0 * * * *", { timezone: "America/New_York" })
    static async hourlySync() {
        await this.syncData();
    }
}
Hooks allow you to expose your agent as an API. Any method decorated with @hook will be exposed as an endpoint.
Important Notes:
- Hooks are static methods, so they can run without instantiating the agent
- Hooks are exposed at /hooks/{hook_name} in the Magma API
- The only parameter to hook functions is the request object, an instance of express.Request
import { MagmaAgent } from "@pompeii-labs/magma";
import { hook } from "@pompeii-labs/magma/decorators";
import { Request } from "express";

class MyAgent extends MagmaAgent {

    // Exposed at /hooks/notification
    @hook('notification')
    static async handleNotification(req: Request) {
        await this.processNotification(req.body); // placeholder static helper
    }
}
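To call the hook from another service, POST to the endpoint it exposes. This is a sketch, not documented behavior: the host and port are assumptions, and only the /hooks/{hook_name} path convention comes from the notes above.

// Hypothetical client-side call to the hook endpoint
const res = await fetch("http://localhost:3000/hooks/notification", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ event: "user.signup", userId: "123" })
});
console.log("Hook responded with status", res.status);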
You can use any supported provider by setting the providerConfig.
Important Notes:
- You can set the providerConfig in the constructor, or by calling setProviderConfig
- You do not need to adjust any of your tools, middleware, jobs, or hooks to use a different provider. Magma handles the rest.
class Agent extends MagmaAgent {
    constructor() {
        // Use OpenAI (default)
        super({
            providerConfig: {
                provider: "openai",
                model: "gpt-4o"
            }
        });

        // Switch providers at any time; each call replaces the previous config.
        // Use Anthropic
        this.setProviderConfig({
            provider: "anthropic",
            model: "claude-3-5-sonnet-20240620"
        });

        // Use Groq
        this.setProviderConfig({
            provider: "groq",
            model: "llama-3.1-70b-versatile"
        });
    }
}
Every agent has a state object that you can use to store data. You can store any data type, and it will be persisted between calls. You can also choose to use fields on the agent class to store data.
State does not get passed into LLM calls, so it's a good place to keep data that should persist between calls, as well as sensitive data like access tokens.
class MyAgent extends MagmaAgent {

    // Using a field to store data
    myQuery = "Hello, world!";

    async setup() {
        // Initialize state
        this.state.set("counter", 0);
        this.state.set("access_token", "1234567890");
    }

    @tool({ name: "increment" })
    async increment() {
        const counter = this.state.get("counter") || 0;
        this.state.set("counter", counter + 1);
        return `Counter is now ${counter + 1}`;
    }

    @tool({ name: "api_call" })
    async apiCall() {
        const access_token = this.state.get("access_token");
        // A request with a body needs an explicit method
        const response = await fetch("https://myapi.com/data", {
            method: "POST",
            headers: {
                "Authorization": `Bearer ${access_token}`
            },
            body: JSON.stringify({
                query: this.myQuery
            })
        });
        // response.json() is async, so await it before stringifying
        const data = await response.json();
        return JSON.stringify(data);
    }
}
import { MagmaAgent } from "@pompeii-labs/magma";

class MyAgent extends MagmaAgent {

    // Initialize your agent
    async setup() {
        // Load resources, connect to databases, etc. (placeholder helper)
        await this.loadDatabase();
        return "I'm ready to help!";
    }

    // Handle incoming messages
    async receive(message: any) {
        // Process user input before main() is called
        if (message.type === 'image') {
            await this.processImage(message.content); // placeholder helper
        }
    }

    // Clean up resources on shutdown
    async cleanup() {
        // Close connections, flush buffers, etc.
    }
}

// Base-class methods you can call on an agent instance:
// myAgent.trigger({ name: "get_weather" })  manually runs a specific tool
// myAgent.kill()                            stops the current execution
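Putting the lifecycle together, a minimal driving loop could look like the sketch below. The message shape passed to receive() and the explicit setup()/cleanup() calls are assumptions for illustration; only main(), receive(), setup(), and cleanup() themselves appear in the examples above.

const agent = new MyAgent();
await agent.setup();

// Hand the agent a user message, then ask it to respond
await agent.receive({ type: "text", content: "Tell me a dad joke" });
const reply = await agent.main();
console.log(reply.content);

// Release resources when done
await agent.cleanup();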
Event handlers are optional methods that allow you to tack on custom logic to various events in the agent lifecycle.
import { MagmaAgent } from "@pompeii-labs/magma";
import { MagmaUsage, MagmaStreamChunk } from "@pompeii-labs/magma/types"; // type import path may vary by version

class MyAgent extends MagmaAgent {

    // Handle agent shutdown
    async onCleanup() {
        console.log("Agent shutting down...");
    }

    // Handle errors
    async onError(error: Error) {
        console.error("Something went wrong:", error);
        await this.notifyAdmin(error); // placeholder helper
    }

    // Track token usage
    async onUsageUpdate(usage: MagmaUsage) {
        await this.saveUsageMetrics(usage); // placeholder helper
    }

    // Process streaming responses
    async onStreamChunk(chunk: MagmaStreamChunk) {
        console.log("Received chunk:", chunk.content);
    }
}
- Join our Slack Community
- Star us on GitHub
Magma is Apache 2.0 licensed.
Alternative AI tools for magma
Similar Open Source Tools

agent-toolkit
The Stripe Agent Toolkit enables popular agent frameworks to integrate with Stripe APIs through function calling. It includes support for Python and TypeScript, built on top of Stripe Python and Node SDKs. The toolkit provides tools for LangChain, CrewAI, and Vercel's AI SDK, allowing users to configure actions like creating payment links, invoices, refunds, and more. Users can pass the toolkit as a list of tools to agents for integration with Stripe. Context values can be provided for making requests, such as specifying connected accounts for API calls. The toolkit also supports metered billing for Vercel's AI SDK, enabling billing events submission based on customer ID and input/output meters.

parea-sdk-py
Parea AI provides a SDK to evaluate & monitor AI applications. It allows users to test, evaluate, and monitor their AI models by defining and running experiments. The SDK also enables logging and observability for AI applications, as well as deploying prompts to facilitate collaboration between engineers and subject-matter experts. Users can automatically log calls to OpenAI and Anthropic, create hierarchical traces of their applications, and deploy prompts for integration into their applications.

swarmgo
SwarmGo is a Go package designed to create AI agents capable of interacting, coordinating, and executing tasks. It focuses on lightweight agent coordination and execution, offering powerful primitives like Agents and handoffs. SwarmGo enables building scalable solutions with rich dynamics between tools and networks of agents, all while keeping the learning curve low. It supports features like memory management, streaming support, concurrent agent execution, LLM interface, and structured workflows for organizing and coordinating multiple agents.

UniChat
UniChat is a pipeline tool for creating online and offline chat-bots in Unity. It leverages Unity.Sentis and text vector embedding technology to enable offline text content search based on vector databases. The tool includes a chain toolkit for embedding LLMs and Agents in games, along with middleware components for Text to Speech, Speech to Text, and sub-classifier functionality. UniChat also offers tool invocation based on a ReActAgent workflow, allowing users to create personalized chat scenarios and character cards. It provides a comprehensive solution for designing flexible in-game conversations while preserving the developer's creative intent.

modelfusion
ModelFusion is an abstraction layer for integrating AI models into JavaScript and TypeScript applications, unifying the API for common operations such as text streaming, object generation, and tool usage. It provides features to support production environments, including observability hooks, logging, and automatic retries. You can use ModelFusion to build AI applications, chatbots, and agents. ModelFusion is a non-commercial open source project that is community-driven. You can use it with any supported provider. ModelFusion supports a wide range of models including text generation, image generation, vision, text-to-speech, speech-to-text, and embedding models. ModelFusion infers TypeScript types wherever possible and validates model responses. ModelFusion provides an observer framework and logging support. ModelFusion ensures seamless operation through automatic retries, throttling, and error handling mechanisms. ModelFusion is fully tree-shakeable, can be used in serverless environments, and only uses a minimal set of dependencies.

lmstudio.js
lmstudio.js is a pre-release alpha client SDK for LM Studio, allowing users to use local LLMs in JS/TS/Node. It is currently undergoing rapid development with breaking changes expected. Users can follow LM Studio's announcements on Twitter and Discord. The SDK provides API usage for loading models, predicting text, setting up the local LLM server, and more. It supports features like custom loading progress tracking, model unloading, structured output prediction, and cancellation of predictions. Users can interact with LM Studio through the CLI tool 'lms' and perform tasks like text completion, conversation, and getting prediction statistics.

hydraai
Generate React components on-the-fly at runtime using AI. Register your components, and let Hydra choose when to show them in your App. Hydra development is still early, and patterns for different types of components and apps are still being developed. Join the discord to chat with the developers. Expects to be used in a NextJS project. Components that have function props do not work.

FlashLearn
FlashLearn is a tool that provides a simple interface and orchestration for incorporating Agent LLMs into workflows and ETL pipelines. It allows data transformations, classifications, summarizations, rewriting, and custom multi-step tasks using LLMs. Each step and task has a compact JSON definition, making pipelines easy to understand and maintain. FlashLearn supports LiteLLM, Ollama, OpenAI, DeepSeek, and other OpenAI-compatible clients.

genaiscript
GenAIScript is a scripting environment designed to facilitate file ingestion, prompt development, and structured data extraction. Users can define metadata and model configurations, specify data sources, and define tasks to extract specific information. The tool provides a convenient way to analyze files and extract desired content in a structured format. It offers a user-friendly interface for working with data and automating data extraction processes, making it suitable for various data processing tasks.

sparkle
Sparkle is a tool that streamlines the process of building AI-driven features in applications using Large Language Models (LLMs). It guides users through creating and managing agents, defining tools, and interacting with LLM providers like OpenAI. Sparkle allows customization of LLM provider settings, model configurations, and provides a seamless integration with Sparkle Server for exposing agents via an OpenAI-compatible chat API endpoint.

deepgram-js-sdk
Deepgram JavaScript SDK. Power your apps with world-class speech and Language AI models.

OpenAI
OpenAI is a Swift community-maintained implementation of the OpenAI public API. OpenAI itself is an artificial intelligence research organization founded in San Francisco, California in 2015, with a mission to ensure safe and responsible use of AI for civic good, economic growth, and other public benefits. The repository provides functionality for text completions, chats, image generation, audio processing, edits, embeddings, models, moderations, utilities, and Combine extensions.

memobase
Memobase is a user profile-based memory system designed to enhance Generative AI applications by enabling them to remember, understand, and evolve with users. It provides structured user profiles, scalable profiling, easy integration with existing LLM stacks, batch processing for speed, and is production-ready. Users can manage users, insert data, get memory profiles, and track user preferences and behaviors. Memobase is ideal for applications that require user analysis, tracking, and personalized interactions.

OpenAI-DotNet
OpenAI-DotNet is a simple C# .NET client library for OpenAI to use through their RESTful API. It is independently developed and not an official library affiliated with OpenAI. Users need an OpenAI API account to utilize this library. The library targets .NET 6.0 and above, working across various platforms like console apps, winforms, wpf, asp.net, etc., and on Windows, Linux, and Mac. It provides functionalities for authentication, interacting with models, assistants, threads, chat, audio, images, files, fine-tuning, embeddings, and moderations.

LLM.swift
LLM.swift is a simple and readable library that allows you to interact with large language models locally with ease for macOS, iOS, watchOS, tvOS, and visionOS. It's a lightweight abstraction layer over `llama.cpp` package, so that it stays as performant as possible while is always up to date. Theoretically, any model that works on `llama.cpp` should work with this library as well. It's only a single file library, so you can copy, study and modify the code however you want.
For similar tasks

indie-hacker-tools-plus
Indie Hacker Tools Plus is a curated repository of essential tools and technology stacks for independent developers. The repository aims to help developers enhance efficiency, save costs, and mitigate risks by using popular and validated tools. It provides a collection of tools recognized by the industry to empower developers with the most refined technical support. Developers can contribute by submitting articles, software, or resources through issues or pull requests.

aws-genai-llm-chatbot
This repository provides code to deploy a chatbot powered by Multi-Model and Multi-RAG using AWS CDK on AWS. Users can experiment with various Large Language Models and Multimodal Language Models from different providers. The solution supports Amazon Bedrock, Amazon SageMaker self-hosted models, and third-party providers via API. It also offers additional resources like AWS Generative AI CDK Constructs and Project Lakechain for building generative AI solutions and document processing. The roadmap and authors are listed, along with contributors. The library is licensed under the MIT-0 License with information on changelog, code of conduct, and contributing guidelines. A legal disclaimer advises users to conduct their own assessment before using the content for production purposes.

gemini-pro-vision-playground
Gemini Pro Vision Playground is a simple project aimed at assisting developers in utilizing the Gemini Pro Vision and Gemini Pro AI models for building applications. It provides a playground environment for experimenting with these models and integrating them into apps. The project includes instructions for setting up the Google AI API key and running the development server to visualize the results. Developers can learn more about the Gemini API documentation and Next.js framework through the provided resources. The project encourages contributions and feedback from the community.

uvadlc_notebooks
The UvA Deep Learning Tutorials repository contains a series of Jupyter notebooks designed to help understand theoretical concepts from lectures by providing corresponding implementations. The notebooks cover topics such as optimization techniques, transformers, graph neural networks, and more. They aim to teach details of the PyTorch framework, including PyTorch Lightning, with alternative translations to JAX+Flax. The tutorials are integrated as official tutorials of PyTorch Lightning and are relevant for graded assignments and exams.

simpleAI
SimpleAI is a self-hosted alternative to the not-so-open AI API, focused on replicating main endpoints for LLM such as text completion, chat, edits, and embeddings. It allows quick experimentation with different models, creating benchmarks, and handling specific use cases without relying on external services. Users can integrate and declare models through gRPC, query endpoints using Swagger UI or API, and resolve common issues like CORS with FastAPI middleware. The project is open for contributions and welcomes PRs, issues, documentation, and more.

react-native-executorch
React Native ExecuTorch is a framework that allows developers to run AI models on mobile devices using React Native. It bridges the gap between React Native and native platform capabilities, providing high-performance AI model execution without requiring deep knowledge of native code or machine learning internals. The tool supports ready-made models in `.pte` format and offers a Python API for custom models. It is designed to simplify the integration of AI features into React Native apps.

shell-ai
Shell-AI (`shai`) is a CLI utility that enables users to input commands in natural language and receive single-line command suggestions. It leverages natural language understanding and interactive CLI tools to enhance command line interactions. Users can describe tasks in plain English and receive corresponding command suggestions, making it easier to execute commands efficiently. Shell-AI supports cross-platform usage and is compatible with Azure OpenAI deployments, offering a user-friendly and efficient way to interact with the command line.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.