inngest
The leading workflow orchestration platform. Run stateful step functions and AI workflows on serverless, servers, or the edge.
Stars: 2398
![screenshot](/screenshots_githubs/inngest-inngest.jpg)
Inngest is a platform that offers durable functions to replace queues, state management, and scheduling for developers. It allows writing reliable step functions faster without dealing with infrastructure. Developers can create durable functions using various language SDKs, run a local development server, deploy functions to their infrastructure, sync functions with the Inngest Platform, and securely trigger functions via HTTPS. Inngest Functions support retrying, scheduling, and coordinating operations through triggers, flow control, and steps, enabling developers to build reliable workflows with robust support for various operations.
README:
Inngest's durable functions replace queues, state management, and scheduling to enable any developer to write reliable step functions faster without touching infrastructure.
- Write durable functions using any of our language SDKs
- Run the Inngest Dev Server for a complete local development experience, with production parity.
- Deploy your functions to your own infrastructure
- Sync your application's functions with the Inngest Platform or a self-hosted Inngest server.
- Inngest invokes your functions securely via HTTPS whenever triggering events are received.
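As a concrete illustration, here is roughly what exposing functions over HTTPS looks like with the TypeScript SDK in a Next.js app (a minimal sketch: the import path depends on your framework, and the function module shown is hypothetical):

import { serve } from "inngest/next";
import { inngest } from "@/inngest/client"; // your Inngest client, e.g. new Inngest({ id: "my-app" })
import { importProductImages } from "@/inngest/functions"; // hypothetical function module

// A single HTTP endpoint through which Inngest securely invokes your functions
export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [importProductImages],
});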
Inngest Functions enable developers to run reliable background logic, from background jobs to complex workflows. An Inngest Function is composed of three key parts that provide robust support for retrying, scheduling, and coordinating complex sequences of operations:
- Triggers - Events, cron schedules, or webhook events that trigger the function.
- Flow Control - Configure how function runs are enqueued and executed, including concurrency, throttling, debouncing, rate limiting, and prioritization (see the sketch after the example below).
- Steps - The fundamental building blocks of Inngest, turning your Inngest Functions into reliable workflows that can run for months and recover from failures.
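For instance, a trigger can be a cron schedule instead of an event (a minimal sketch; the function id and schedule are illustrative):

export default inngest.createFunction(
  { id: "weekly-digest" }, // hypothetical function id
  { cron: "0 9 * * 1" },   // every Monday at 09:00 UTC
  async ({ step }) => {
    // ... business logic wrapped in steps
  }
);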
Here is an example function that limits concurrency for each unique user id and performs two steps that will be retried on error:
export default inngest.createFunction(
  {
    id: "import-product-images",
    concurrency: {
      key: "event.data.userId",
      limit: 10
    }
  },
  { event: "shop/product.imported" },
  async ({ event, step }) => {
    // Here goes the business logic
    // By wrapping code in steps, each will be retried automatically on failure
    const s3Urls = await step.run("copy-images-to-s3", async () => {
      return copyAllImagesToS3(event.data.imageURLs);
    });

    // You can include numerous steps in your function
    await step.run("resize-images", async () => {
      await resizer.bulk({ urls: s3Urls, quality: 0.9, maxWidth: 1024 });
    });
  }
);
// Elsewhere in your code (e.g. in your API endpoint):
await inngest.send({
  name: "shop/product.imported",
  data: {
    userId: "01J8G44701QYGE0DH65PZM8DPM",
    imageURLs: [
      "https://useruploads.acme.com/q2345678/1094.jpg",
      "https://useruploads.acme.com/q2345678/1095.jpg"
    ],
  },
});
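Concurrency is only one of the flow-control options; the same configuration object also accepts throttling, debouncing, rate limiting, and priority. A sketch of what those look like in the TypeScript SDK (field names as documented at the time of writing; the function id, event, and expression are illustrative — check the reference for your SDK version):

export default inngest.createFunction(
  {
    id: "sync-user-profile", // hypothetical function
    throttle: { limit: 10, period: "1m" },                    // at most 10 runs per minute
    debounce: { period: "5m", key: "event.data.userId" },     // collapse bursts of events per user
    priority: { run: "event.data.plan == 'paid' ? 100 : 0" }, // run paid users first
  },
  { event: "app/user.updated" }, // hypothetical event
  async ({ event, step }) => {
    // ... business logic wrapped in steps
  }
);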
Run the Inngest Dev Server using our CLI:
npx inngest-cli@latest dev
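If the Dev Server does not discover your app automatically, you can point it at your app's serve endpoint explicitly (assuming your app listens on port 3000 and serves Inngest at /api/inngest):

npx inngest-cli@latest dev -u http://localhost:3000/api/inngest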
Open the Inngest Dev Server dashboard at http://localhost:8288.
Follow our Next.js, Node.js or Python quick start guides.
- TypeScript / JavaScript (inngest-js) - Reference
- Python (inngest-py) - Reference
- Go (inngestgo) - Reference
- Kotlin / Java (inngest-kt)
To understand how self-hosting works, it's valuable to understand the architecture and system components at a high level. We'll take a look at a simplified architecture diagram and walk through the system.
- Event API - Receives events from SDKs via HTTP requests. Authenticates client requests via Event Keys. The Event API publishes event payloads to an internal event stream.
- Event stream - Acts as a buffer between the Event API and the Runner.
- Runner - Consumes incoming events and performs several actions:
  - Schedules new “function runs” (aka jobs) for the event type, creating initial run state in the State store database. Runs are added to queues according to the function's flow control configuration.
  - Resumes functions paused via waitForEvent with matching expressions (see the sketch after this list).
  - Cancels running functions with matching cancelOn expressions.
  - Writes ingested events to a database for historical record and future replay.
- Queue - A multi-tenant aware, multi-tier queue designed for fairness and various flow control methods (concurrency, throttling, prioritization, debouncing, rate limiting) and batching.
- Executor - Responsible for executing functions: initial execution, step execution, writing incremental function run state to the State store, and retrying after failures.
- State store (database) - Persists data for pending and ongoing function runs. Data includes initial triggering event(s), step output and step errors.
- Database - Persists system data and history including Apps, Functions, Events, Function run results.
- API - GraphQL and REST APIs for programmatic access and management of system resources.
- Dashboard UI - The UI to manage apps, functions and view function run history.
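To make the Runner's responsibilities concrete, here is roughly how waitForEvent and cancelOn appear in a function using the TypeScript SDK (a minimal sketch; the event names, fields, matching expressions, and helper are illustrative):

export default inngest.createFunction(
  {
    id: "handle-signup",
    // The Runner cancels this run if a matching event arrives while it is in flight
    cancelOn: [{ event: "app/user.deleted", if: "event.data.userId == async.data.userId" }],
  },
  { event: "app/user.signup" },
  async ({ event, step }) => {
    // The Runner pauses the run here, resuming when a matching event arrives
    // or when the timeout elapses (in which case the result is null)
    const payment = await step.waitForEvent("wait-for-payment", {
      event: "app/payment.completed",
      timeout: "24h",
      if: "event.data.userId == async.data.userId",
    });
    if (!payment) {
      await step.run("send-reminder", () => sendReminderEmail(event.data.userId)); // hypothetical helper
    }
  }
);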
- Join our Discord community for support, to give us feedback, or chat with us.
- Post a question or idea to our GitHub discussion board
- Read the documentation
- Explore our public roadmap
- Follow us on Twitter
- Join our mailing list for release notes and project updates
We embrace contributions in many forms, including documentation, typo fixes, bug reports, and bug fixes. Check out our contributing guide to get started. Each of our open source SDKs is open to contributions as well.
Additionally, Inngest's website documentation is available for contribution in the inngest/website repo.
Self-hosting the Inngest server is possible and easy to get started with. Learn more about self-hosting Inngest in our docs guide.
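As a starting point, the self-hosting guide describes running the server from the official Docker image (a sketch; see the docs for persistence and configuration options):

docker run -p 8288:8288 inngest/inngest inngest start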
The Inngest server and CLI are available under the Server Side Public License and delayed open source publication (DOSP) under Apache 2.0. View the license here.
All Inngest SDKs are available under the Apache 2.0 license.
Similar Open Source Tools
![TaskingAI Screenshot](/screenshots_githubs/TaskingAI-TaskingAI.jpg)
TaskingAI
TaskingAI brings Firebase's simplicity to **AI-native app development**. The platform enables the creation of GPTs-like multi-tenant applications using a wide range of LLMs from various providers. It features distinct, modular functions such as Inference, Retrieval, Assistant, and Tool, seamlessly integrated to enhance the development process. TaskingAI’s cohesive design ensures an efficient, intelligent, and user-friendly experience in AI application development.
![petals Screenshot](/screenshots_githubs/bigscience-workshop-petals.jpg)
petals
Petals is a tool that allows users to run large language models at home in a BitTorrent-style manner. It enables fine-tuning and inference up to 10x faster than offloading. Users can generate text with distributed models like Llama 2, Falcon, and BLOOM, and fine-tune them for specific tasks directly from their desktop computer or Google Colab. Petals is a community-run system that relies on people sharing their GPUs to increase its capacity and offer a distributed network for hosting model layers.
![deepflow Screenshot](/screenshots_githubs/deepflowio-deepflow.jpg)
deepflow
DeepFlow is an open-source project that provides deep observability for complex cloud-native and AI applications. It offers Zero Code data collection with eBPF for metrics, distributed tracing, request logs, and function profiling. DeepFlow is integrated with SmartEncoding to achieve Full Stack correlation and efficient access to all observability data. With DeepFlow, cloud-native and AI applications automatically gain deep observability, removing the burden of developers continually instrumenting code and providing monitoring and diagnostic capabilities covering everything from code to infrastructure for DevOps/SRE teams.
![neptune-client Screenshot](/screenshots_githubs/neptune-ai-neptune-client.jpg)
neptune-client
Neptune is a scalable experiment tracker for teams training foundation models. Log millions of runs, effortlessly monitor and visualize model training, and deploy on your infrastructure. Track 100% of metadata to accelerate AI breakthroughs. Log and display any framework and metadata type from any ML pipeline. Organize experiments with nested structures and custom dashboards. Compare results, visualize training, and optimize models quicker. Version models, review stages, and access production-ready models. Share results, manage users, and projects. Integrate with 25+ frameworks. Trusted by great companies to improve workflow.
![chatnio Screenshot](/screenshots_githubs/zmh-program-chatnio.jpg)
chatnio
Chat Nio is a next-generation AIGC one-stop business solution that combines the advantages of frontend-oriented lightweight deployment projects with powerful API distribution systems. It offers rich model support, beautiful UI design, complete Markdown support, multi-theme support, internationalization support, text-to-image support, powerful conversation sync, model market & preset system, rich file parsing, full model internet search, Progressive Web App (PWA) support, comprehensive backend management, multiple billing methods, innovative model caching, and additional features. The project aims to address limitations in conversation synchronization, billing, file parsing, conversation URL sharing, channel management, and API call support found in existing AIGC commercial sites, while also providing a user-friendly interface design and C-end features.
![llm-answer-engine Screenshot](/screenshots_githubs/developersdigest-llm-answer-engine.jpg)
llm-answer-engine
This repository contains the code and instructions needed to build a sophisticated answer engine that leverages the capabilities of Groq, Mistral AI's Mixtral, Langchain.JS, Brave Search, Serper API, and OpenAI. Designed to efficiently return sources, answers, images, videos, and follow-up questions based on user queries, this project is an ideal starting point for developers interested in natural language processing and search technologies.
![unify Screenshot](/screenshots_githubs/unifyai-unify.jpg)
unify
The Unify Python Package provides access to the Unify REST API, allowing users to query Large Language Models (LLMs) from any Python 3.7.1+ application. It includes Synchronous and Asynchronous clients with Streaming responses support. Users can easily use any endpoint with a single key, route to the best endpoint for optimal throughput, cost, or latency, and customize prompts to interact with the models. The package also supports dynamic routing to automatically direct requests to the top-performing provider. Additionally, users can enable streaming responses and interact with the models asynchronously for handling multiple user requests simultaneously.
![agent-zero Screenshot](/screenshots_githubs/frdel-agent-zero.jpg)
agent-zero
Agent Zero is a personal and organic AI framework designed to be dynamic, organically growing, and learning as you use it. It is fully transparent, readable, comprehensible, customizable, and interactive. The framework uses the computer as a tool to accomplish tasks, with no single-purpose tools pre-programmed. It emphasizes multi-agent cooperation, complete customization, and extensibility. Communication is key in this framework, allowing users to give proper system prompts and instructions to achieve desired outcomes. Agent Zero is capable of dangerous actions and should be run in an isolated environment. The framework is prompt-based, highly customizable, and requires a specific environment to run effectively.
![postgresml Screenshot](/screenshots_githubs/postgresml-postgresml.jpg)
postgresml
PostgresML is a powerful Postgres extension that seamlessly combines data storage and machine learning inference within your database. It enables running machine learning and AI operations directly within PostgreSQL, leveraging GPU acceleration for faster computations, integrating state-of-the-art large language models, providing built-in functions for text processing, enabling efficient similarity search, offering diverse ML algorithms, ensuring high performance, scalability, and security, supporting a wide range of NLP tasks, and seamlessly integrating with existing PostgreSQL tools and client libraries.
![cleanlab Screenshot](/screenshots_githubs/cleanlab-cleanlab.jpg)
cleanlab
Cleanlab helps you **clean** data and **lab**els by automatically detecting issues in an ML dataset. To facilitate **machine learning with messy, real-world data**, this data-centric AI package uses your _existing_ models to estimate dataset problems that can be fixed to train even _better_ models.
![clearml-server Screenshot](/screenshots_githubs/allegroai-clearml-server.jpg)
clearml-server
ClearML Server is a backend service infrastructure for ClearML, facilitating collaboration and experiment management. It includes a web app, RESTful API, and file server for storing images and models. Users can deploy ClearML Server using Docker, AWS EC2 AMI, or Kubernetes. The system design supports single IP or sub-domain configurations with specific open ports. ClearML-Agent Services container allows launching long-lasting jobs and various use cases like auto-scaler service, controllers, optimizer, and applications. Advanced functionality includes web login authentication and non-responsive experiments watchdog. Upgrading ClearML Server involves stopping containers, backing up data, downloading the latest docker-compose.yml file, configuring ClearML-Agent Services, and spinning up docker containers. Community support is available through ClearML FAQ, Stack Overflow, GitHub issues, and email contact.
![Director Screenshot](/screenshots_githubs/video-db-Director.jpg)
Director
Director is a framework to build video agents that can reason through complex video tasks like search, editing, compilation, generation, etc. It enables users to summarize videos, search for specific moments, create clips instantly, integrate GenAI projects and APIs, add overlays, generate thumbnails, and more. Built on VideoDB's 'video-as-data' infrastructure, Director is perfect for developers, creators, and teams looking to simplify media workflows and unlock new possibilities.
![gemini-android Screenshot](/screenshots_githubs/skydoves-gemini-android.jpg)
gemini-android
Gemini Android is a repository showcasing Google's Generative AI on Android using Stream Chat SDK for Compose. It demonstrates the Gemini API for Android, implements UI elements with Jetpack Compose, utilizes Android architecture components like Hilt and AppStartup, performs background tasks with Kotlin Coroutines, and integrates chat systems with Stream Chat Compose SDK for real-time event handling. The project also provides technical content, instructions on building the project, tech stack details, architecture overview, modularization strategies, and a contribution guideline. It follows Google's official architecture guidance and offers a real-world example of app architecture implementation.
![superduper Screenshot](/screenshots_githubs/superduper-io-superduper.jpg)
superduper
superduper.io is a Python framework that integrates AI models, APIs, and vector search engines directly with existing databases. It allows hosting of models, streaming inference, and scalable model training/fine-tuning. Key features include integration of AI with data infrastructure, inference via change-data-capture, scalable model training, model chaining, simple Python interface, Python-first approach, working with difficult data types, feature storing, and vector search capabilities. The tool enables users to turn their existing databases into centralized repositories for managing AI model inputs and outputs, as well as conducting vector searches without the need for specialized databases.
![extractous Screenshot](/screenshots_githubs/yobix-ai-extractous.jpg)
extractous
Extractous offers a fast and efficient solution for extracting content and metadata from various document types such as PDF, Word, HTML, and many other formats. It is built with Rust, providing high performance, memory safety, and multi-threading capabilities. The tool eliminates the need for external services or APIs, making data processing pipelines faster and more efficient. It supports multiple file formats, including Microsoft Office, OpenOffice, PDF, spreadsheets, web documents, e-books, text files, images, and email formats. Extractous provides a clear and simple API for extracting text and metadata content, with upcoming support for JavaScript/TypeScript. It is free for commercial use under the Apache 2.0 License.
For similar tasks
![celery-aio-pool Screenshot](/screenshots_githubs/the-wondersmith-celery-aio-pool.jpg)
celery-aio-pool
Celery AsyncIO Pool is a free software tool licensed under GNU Affero General Public License v3+. It provides an AsyncIO worker pool for Celery, enabling users to leverage the power of AsyncIO in their Celery applications. The tool allows for easy installation using Poetry, pip, or directly from GitHub. Users can configure Celery to use the AsyncIO pool provided by celery-aio-pool, or they can wait for the upcoming support for out-of-tree worker pools in Celery 5.3. The tool is actively maintained and welcomes contributions from the community.
![ai-controller-jobs Screenshot](/screenshots_githubs/aimeos-ai-controller-jobs.jpg)
ai-controller-jobs
Aimeos job controllers is a repository containing controllers for scheduled tasks in e-commerce projects. It provides a set of tools to manage and execute various jobs related to e-commerce operations. The controllers are designed to streamline the process of handling scheduled tasks within e-commerce platforms, ensuring efficient and reliable task execution.
For similar jobs
![resonance Screenshot](/screenshots_githubs/distantmagic-resonance.jpg)
resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can:
- Chat with Open-Source LLMs: Create prompt controllers to directly answer user's prompts. LLM takes care of determining user's intention, so you can focus on taking appropriate action.
- Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes. No elaborate configuration is needed.
- Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in the synchronous code. Controllers have new exciting features that take advantage of the asynchronous environment.
- Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service dependencies registries. Every relation between code modules is local to those modules.
- Promises in PHP: Resonance provides a partial implementation of Promise/A+ spec to handle various asynchronous tasks.
- GraphQL Out of the Box: You can build elaborate GraphQL schemas by using just the PHP attributes. Resonance takes care of reusing SQL queries and optimizing the resources' usage. All fields can be resolved asynchronously.
![aiogram_bot_template Screenshot](/screenshots_githubs/wakaree-aiogram_bot_template.jpg)
aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.
![pluto Screenshot](/screenshots_githubs/pluto-lang-pluto.jpg)
pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently**, resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript**, **directly defining and utilizing the cloud resources necessary for the application within their code base**, such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resource creation and application deployment process**.
![pinecone-ts-client Screenshot](/screenshots_githubs/pinecone-io-pinecone-ts-client.jpg)
pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.
![aiohttp-pydantic Screenshot](/screenshots_githubs/Maillol-aiohttp-pydantic.jpg)
aiohttp-pydantic
Aiohttp pydantic is an aiohttp view to easily parse and validate requests. You define using function annotations what your methods for handling HTTP verbs expect, and Aiohttp pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides features like query string, request body, URL path, and HTTP headers validation, as well as Open API Specification generation.
![gcloud-aio Screenshot](/screenshots_githubs/talkiq-gcloud-aio.jpg)
gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.
![aioconsole Screenshot](/screenshots_githubs/vxgmichel-aioconsole.jpg)
aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.
![aiosqlite Screenshot](/screenshots_githubs/omnilib-aiosqlite.jpg)
aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.