
air
The new Python web framework by the authors of Two Scoops of Django
Stars: 338

Air is a new web framework for Python web development, built with FastAPI, Starlette, and Pydantic. It provides intuitive shortcuts and optimizations that expedite coding HTML with FastAPI, easy HTML content generation using Python classes, and seamless integration with Jinja templates. Air also offers utilities for working with HTMX, HTML form validation powered by Pydantic, and thorough documentation. It aims to combine sophisticated HTML pages and a REST API in one app, making it easy to use FastAPI and Air together.
README:
Air 💨: The new web framework that breathes fresh air into Python web development. Built with FastAPI, Starlette, and Pydantic.
[!CAUTION] Air is currently in an alpha state. While breaking changes are becoming less common, anything and everything could still change.
[!IMPORTANT] If you have an idea for a new feature, discuss it with us by opening an issue before writing any code. Do understand that we are working to remove features from core, and for new features you will almost always create your own package that extends or uses Air instead of adding to this package. This is by design, as our vision is for the Air package ecosystem to be as much a "core" part of Air as the code in this minimalist base package.
- Powered by FastAPI - Designed to work with FastAPI so you can serve your API and web pages from one app
- Fast to code - Tons of intuitive shortcuts and optimizations designed to expedite coding HTML with FastAPI
- Air Tags - Easy to write and performant HTML content generation using Python classes to render HTML
- Jinja Friendly - No need to write response_class=HTMLResponse and templates.TemplateResponse for every HTML view
- Mix Jinja and Air Tags - Jinja and Air Tags both are first class citizens. Use either or both in the same view!
- HTMX friendly - We love HTMX and provide utilities to use it with Air
- HTML form validation powered by pydantic - We love using pydantic to validate incoming data. Air Forms provide two ways to use pydantic with HTML forms (dependency injection or from within views); see the sketch after this list
- Easy to learn yet well documented - Hopefully Air is so intuitive and well-typed you'll barely need to use the documentation. In case you do need to look something up we're taking our experience writing technical books and using it to make documentation worth boasting about
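To give a flavor of the second approach (validating from within a view), here is a minimal sketch that uses plain pydantic against a hypothetical /contact endpoint. It is not the Air Forms API itself (see the documentation for that); the ContactModel and the route are assumptions for illustration only.

import air
from fastapi import Request
from pydantic import BaseModel, ValidationError

app = air.Air()

class ContactModel(BaseModel):
    name: str
    message: str

@app.post("/contact")
async def contact(request: Request):
    # Form parsing may require python-multipart, which fastapi[standard] installs
    data = await request.form()
    try:
        payload = ContactModel(**dict(data))
    except ValidationError as exc:
        # Re-render a simple error summary with Air Tags
        return air.Html(air.Body(air.P(f"{len(exc.errors())} field(s) failed validation")))
    return air.Html(air.Body(air.H1(f"Thanks, {payload.name}!")))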
Documentation: https://airdocs.fastapicloud.dev
Source Code: https://github.com/feldroy/air
Install using pip install -U air or conda install air -c conda-forge.
For uv users, just create a virtualenv and install the air package, like:
uv venv
source .venv/bin/activate
uv init
uv add air
You can install each optional feature (extras) like this:
- Standard — FastAPI's recommended extras: uv add "air[standard]"
- Pretty — Beautiful HTML Rendering (lxml, rich): uv add "air[pretty]"
- SQL — SQLModel / SQLAlchemy support: uv add "air[sql]"
- Auth — OAuth clients via Authlib: uv add "air[auth]"
- All — everything above in one go: uv add "air[all]"
Tip: you can also combine extras in one command, for example:
uv add "air[pretty,sql]"
Create a main.py with:

import air

app = air.Air()

@app.get("/")
async def index():
    return air.Html(air.H1("Hello, world!", style="color: blue;"))
Run the app with:
fastapi dev
If you have fastapi installed globally, you may see an error:
To use the fastapi command, please install "fastapi[standard]":
pip install "fastapi[standard]"
In that case, run the app with:
uv run fastapi dev
[!NOTE] This example uses Air Tags, which are Python classes that render as HTML. Air Tags are typed and documented, designed to work well with any code completion tool. You can also run this with
uv run uvicorn main:app --reload
if you prefer using Uvicorn directly.
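If you would rather launch the dev server from Python than from the command line, a minimal sketch, assuming uvicorn is installed (fastapi[standard] includes it), is to append a standard entry point to main.py and run python main.py:

import uvicorn

if __name__ == "__main__":
    # Equivalent to `uvicorn main:app --reload`; serves on http://127.0.0.1:8000
    uvicorn.run("main:app", reload=True)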
Then open your browser to http://127.0.0.1:8000 to see the result.
Air is just a layer over FastAPI. So it is trivial to combine sophisticated HTML pages and a REST API into one app.
import air
from fastapi import FastAPI

app = air.Air()
api = FastAPI()

@app.get("/")
def landing_page():
    return air.Html(
        air.Head(air.Title("Awesome SaaS")),
        air.Body(
            air.H1("Awesome SaaS"),
            air.P(air.A("API Docs", target="_blank", href="/api/docs")),
        ),
    )

@api.get("/")
def api_root():
    return {"message": "Awesome SaaS is powered by FastAPI"}

# Combining the Air and FastAPI apps into one
app.mount("/api", api)
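As a quick check that the mount behaves as expected, the sketch below uses FastAPI's TestClient (which needs httpx, included in fastapi[standard]) to exercise both apps in one process; it assumes the combined example above is saved as main.py:

from fastapi.testclient import TestClient

from main import app  # the combined Air + FastAPI app above

client = TestClient(app)

# The Air landing page is served at the root
assert "Awesome SaaS" in client.get("/").text

# The mounted FastAPI app answers under /api
assert client.get("/api/").json() == {"message": "Awesome SaaS is powered by FastAPI"}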
Want to use Jinja2 instead of Air Tags? We've got you covered.
import air
from air.requests import Request
from fastapi import FastAPI

app = air.Air()
api = FastAPI()

# Air's JinjaRenderer is a shortcut for using Jinja templates
jinja = air.JinjaRenderer(directory="templates")

@app.get("/")
def index(request: Request):
    return jinja(request, name="home.html")

@api.get("/")
def api_root():
    return {"message": "Awesome SaaS is powered by FastAPI"}

# Combining the Air and FastAPI apps into one
app.mount("/api", api)
Don't forget the Jinja template!
<!doctype html>
<html>
  <head>
    <title>Awesome SaaS</title>
  </head>
  <body>
    <h1>Awesome SaaS</h1>
    <p>
      <a target="_blank" href="/api/docs">API Docs</a>
    </p>
  </body>
</html>
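Given the directory="templates" and name="home.html" arguments above, the project is expected to be laid out roughly like this:

.
├── main.py
└── templates/
    └── home.html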
[!NOTE] Using Jinja with Air is easier than with FastAPI. That's because as much as we enjoy Air Tags, we also love Jinja!
For guidance on setting up a development environment and how to make a contribution to Air, see Contributing to Air.
Similar Open Source Tools

obsei
Obsei is an open-source, low-code, AI-powered automation tool that consists of an Observer to collect unstructured data from various sources, an Analyzer to analyze the collected data with various AI tasks, and an Informer to send analyzed data to various destinations. The tool is suitable for scheduled jobs or serverless applications as all Observers can store their state in databases. Obsei is still in alpha stage, so caution is advised when using it in production. The tool can be used for social listening, alerting/notification, automatic customer issue creation, extraction of deeper insights from feedback, market research, dataset creation for various AI tasks, and more, depending on your creativity.

HuixiangDou
HuixiangDou is a **group chat** assistant based on LLMs (Large Language Models). Advantages: 1. a two-stage rejection-and-response pipeline designed for the group chat scenario, so it answers user questions without flooding the chat (see arXiv 2401.08772); 2. low cost, requiring only 1.5GB of memory and no training; 3. a complete suite of Web, Android, and pipeline source code, which is industrial-grade and commercially viable. Check out the scenes in which HuixiangDou is running and join the WeChat Group to try the AI assistant. If this helps you, please give it a star ⭐

fastserve-ai
FastServe-AI is a machine learning serving tool focused on GenAI & LLMs with simplicity as the top priority. It allows users to easily serve custom models by implementing the 'handle' method for 'FastServe'. The tool provides a FastAPI server for custom models and can be deployed using Lightning AI Studio. Users can install FastServe-AI via pip and run it to serve their own GPT-like LLM models in minutes.

mcphub.nvim
MCPHub.nvim is a powerful Neovim plugin that integrates MCP (Model Context Protocol) servers into your workflow. It offers a centralized config file for managing servers and tools, with an intuitive UI for testing resources. Ideal for LLM integration, it provides programmatic API access and interactive testing through the `:MCPHub` command.

client-ts
Mistral Typescript Client is an SDK for Mistral AI API, providing Chat Completion and Embeddings APIs. It allows users to create chat completions, upload files, create agent completions, create embedding requests, and more. The SDK supports various JavaScript runtimes and provides detailed documentation on installation, requirements, API key setup, example usage, error handling, server selection, custom HTTP client, authentication, providers support, standalone functions, debugging, and contributions.

openai-edge-tts
This project provides a local, OpenAI-compatible text-to-speech (TTS) API using `edge-tts`. It emulates the OpenAI TTS endpoint (`/v1/audio/speech`), enabling users to generate speech from text with various voice options and playback speeds, just like the OpenAI API. `edge-tts` uses Microsoft Edge's online text-to-speech service, making it completely free. The project supports multiple audio formats, adjustable playback speed, and voice selection options, providing a flexible and customizable TTS solution for users.

ppl.llm.kernel.cuda
Primitive CUDA kernel library for ppl.nn.llm, part of the PPL.LLM system. Tested on Ampere and Hopper; requires Linux on x86_64 or arm64 CPUs, GCC >= 9.4.0, CMake >= 3.18, Git >= 2.7.0, and CUDA Toolkit >= 11.4 (11.6 recommended). Provides CUDA kernel functionalities for deep learning tasks.

crawl4ai
Crawl4AI is a powerful and free web crawling service that extracts valuable data from websites and provides LLM-friendly output formats. It supports crawling multiple URLs simultaneously, replaces media tags with ALT, and is completely free to use and open-source. Users can integrate Crawl4AI into Python projects as a library or run it as a standalone local server. The tool allows users to crawl and extract data from specified URLs using different providers and models, with options to include raw HTML content, force fresh crawls, and extract meaningful text blocks. Configuration settings can be adjusted in the `crawler/config.py` file to customize providers, API keys, chunk processing, and word thresholds. Contributions to Crawl4AI are welcome from the open-source community to enhance its value for AI enthusiasts and developers.

Scrapling
Scrapling is a high-performance, intelligent web scraping library for Python that automatically adapts to website changes while significantly outperforming popular alternatives. For both beginners and experts, Scrapling provides powerful features while maintaining simplicity. It offers features like fast and stealthy HTTP requests, adaptive scraping with smart element tracking and flexible selection, high performance with lightning-fast speed and memory efficiency, and developer-friendly navigation API and rich text processing. It also includes advanced parsing features like smart navigation, content-based selection, handling structural changes, and finding similar elements. Scrapling is designed to handle anti-bot protections and website changes effectively, making it a versatile tool for web scraping tasks.

gpustack
GPUStack is an open-source GPU cluster manager designed for running large language models (LLMs). It supports a wide variety of hardware, scales with GPU inventory, offers lightweight Python package with minimal dependencies, provides OpenAI-compatible APIs, simplifies user and API key management, enables GPU metrics monitoring, and facilitates token usage and rate metrics tracking. The tool is suitable for managing GPU clusters efficiently and effectively.

repomix
Repomix is a powerful tool that packs your entire repository into a single, AI-friendly file. It is designed to format your codebase for easy understanding by AI tools like Large Language Models (LLMs), Claude, ChatGPT, and Gemini. Repomix offers features such as AI optimization, token counting, simplicity in usage, customization options, Git awareness, and security-focused checks using Secretlint. It allows users to pack their entire repository or specific directories/files using glob patterns, and even supports processing remote Git repositories. The tool generates output in plain text, XML, or Markdown formats, with options for including/excluding files, removing comments, and performing security checks. Repomix also provides a global configuration option, custom instructions for AI context, and a security check feature to detect sensitive information in files.

curator
Bespoke Curator is an open-source tool for data curation and structured data extraction. It provides a Python library for generating synthetic data at scale, with features like programmability, performance optimization, caching, and integration with HuggingFace Datasets. The tool includes a Curator Viewer for dataset visualization and offers a rich set of functionalities for creating and refining data generation strategies.

client-python
The Mistral Python Client is a tool inspired by cohere-python that allows users to interact with the Mistral AI API. It provides functionalities to access and utilize the AI capabilities offered by Mistral. Users can easily install the client using pip and manage dependencies using poetry. The client includes examples demonstrating how to use the API for various tasks, such as chat interactions. To get started, users need to obtain a Mistral API Key and set it as an environment variable. Overall, the Mistral Python Client simplifies the integration of Mistral AI services into Python applications.

parseable
Parseable is a full stack observability platform designed to ingest, analyze, and extract insights from various types of telemetry data. It can be run locally, in the cloud, or as a managed service. The platform offers features like high availability, smart cache, alerts, role-based access control, OAuth2 support, and OpenTelemetry integration. Users can easily ingest data, query logs, and access the dashboard to monitor and analyze data. Parseable provides a seamless experience for observability and monitoring tasks.

genius-ai
Genius is a modern Next.js 14 SaaS AI platform that provides a comprehensive folder structure for app development. It offers features like authentication, dashboard management, landing pages, API integration, and more. The platform is built using React JS, Next JS, TypeScript, Tailwind CSS, and integrates with services like Netlify, Prisma, MySQL, and Stripe. Genius enables users to create AI-powered applications with functionalities such as conversation generation, image processing, code generation, and more. It also includes features like Clerk authentication, OpenAI integration, Replicate API usage, Aiven database connectivity, and Stripe API/webhook setup. The platform is fully configurable and provides a seamless development experience for building AI-driven applications.
For similar tasks

sdk
Chai Builder is an Open Source Low Code React + Tailwind CSS Visual Builder that enables users to create web pages & email templates visually by dragging and dropping elements onto the canvas. It is a simple React component that renders a full-fledged visual builder into any React application. Chai Builder aims to simplify the process of building web pages and email templates by providing a visual interface for developers and designers to work collaboratively.
For similar jobs

resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can: * Chat with Open-Source LLMs: Create prompt controllers to directly answer user's prompts. LLM takes care of determining user's intention, so you can focus on taking appropriate action. * Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes. No elaborate configuration is needed. * Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in the synchronous code. Controllers have new exciting features that take advantage of the asynchronous environment. * Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service dependencies registries. Every relation between code modules is local to those modules. * Promises in PHP: Resonance provides a partial implementation of Promise/A+ spec to handle various asynchronous tasks. * GraphQL Out of the Box: You can build elaborate GraphQL schemas by using just the PHP attributes. Resonance takes care of reusing SQL queries and optimizing the resources' usage. All fields can be resolved asynchronously.

aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.

pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently** , resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript** , **directly defining and utilizing the cloud resources necessary for the application within their code base** , such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resources creation and application deployment process**.

pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.

aiohttp-pydantic
Aiohttp pydantic is an aiohttp view to easily parse and validate requests. You define using function annotations what your methods for handling HTTP verbs expect, and Aiohttp pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides features like query string, request body, URL path, and HTTP headers validation, as well as Open API Specification generation.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.

aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.