lihil
A 2× faster ASGI web framework for Python, offering high-level development with low-level performance.
Stars: 199
Lihil is a performant, productive, and professional web framework designed to make Python the mainstream programming language for web development. It is 100% test-covered and strictly typed, offering fast performance, an ergonomic API, and built-in solutions for common problems. Lihil is suitable for enterprise web development, delivering robust and scalable solutions with best practices in microservice architecture and related patterns. It features dependency injection, OpenAPI doc generation, error response generation, data validation, a message system, testability, and strong support for AI features. Lihil is ASGI-compatible and uses Starlette as its ASGI toolkit, ensuring compatibility with Starlette classes and middleware. The framework follows semantic versioning and has a roadmap for future enhancements and features.
README:
Lihil /ˈliːhaɪl/ — a performant, productive, and professional web framework with a vision:
Making Python the mainstream programming language for web development.
lihil is 100% test covered and strictly typed.
📚 Docs: https://lihil.cc
- Performant: Blazing fast across tasks and conditions. Lihil ranks among the fastest Python web frameworks, outperforming comparable web frameworks by 50%–100%; see the reproducible, automated lihil benchmarks and independent benchmarks.
- Designed to be tested: Built with testability in mind, making it easy for users to write unit, integration, and e2e tests. Lihil supports Starlette's TestClient and provides a LocalClient that allows testing at different levels: endpoint, route, middleware, and application.
- Built for large scale applications: Architected to handle enterprise-level applications with robust dependency injection and modular design
- AI Agent Friendly: Designed to work seamlessly with AI coding assistants - see LIHIL_COPILOT.md for comprehensive guidance on using Lihil with AI agents
- Productive: Provides extensive typing information for superior developer experience, complemented by detailed error messages and docstrings for effortless debugging
- Not a microframework: Lihil has an ever-growing, prosperous ecosystem that provides industrial, enterprise-ready features such as throttling, timeouts, auth, and more
- Not a one-man project: Lihil is open-minded and contributions are always welcome. You can safely assume that your PR will be carefully reviewed
- Not experimental: Lihil optimizes based on real-world use cases rather than benchmarks
lihil requires python>=3.10
pip install "lihil[standard]"The standard version comes with uvicorn
from lihil import Lihil, Route, EventStream, SSE
from openai import AsyncOpenAI  # async client, so the stream below supports `async for`
from openai.types.chat import ChatCompletionChunk as Chunk
from openai.types.chat import ChatCompletionUserMessageParam as MessageIn

gpt = Route("/gpt", deps=[AsyncOpenAI])

def chunk_to_str(chunk: Chunk) -> str:
    if not chunk.choices:
        return ""
    return chunk.choices[0].delta.content or ""

@gpt.sub("/messages").post
async def add_new_message(
    client: AsyncOpenAI, question: MessageIn, model: str
) -> EventStream:
    yield SSE(event="open")
    chat_iter = await client.chat.completions.create(
        messages=[question], model=model, stream=True
    )
    async for chunk in chat_iter:
        yield SSE(event="token", data={"text": chunk_to_str(chunk)})
    yield SSE(event="close")

What the frontend would receive:
event: open
event: token
data: {"text":"Hello"}
event: token
data: {"text":" world"}
event: token
data: {"text":"!"}
event: close
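For completeness, a hedged sketch of a Python client consuming that stream with httpx; the host, port, and the exact body/query layout of question and model are assumptions for illustration, not part of lihil:
import asyncio
import httpx

async def read_tokens() -> None:
    async with httpx.AsyncClient(timeout=None) as client:
        async with client.stream(
            "POST",
            "http://localhost:8000/gpt/messages",
            params={"model": "gpt-4o-mini"},
            json={"role": "user", "content": "Hello"},
        ) as resp:
            # each SSE event arrives as "event: ..." / "data: ..." lines
            async for line in resp.aiter_lines():
                if line:  # skip the blank separators between events
                    print(line)

asyncio.run(read_tokens())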
- Param Parsing & Validation
  Lihil provides a high-level abstraction for parsing requests and validating request data against endpoint type hints. Various models are supported, including:
  - msgspec.Struct
  - pydantic.BaseModel
  - dataclasses.dataclass
  - typing.TypedDict
  By default, lihil uses msgspec to serialize/deserialize JSON data, which is extremely fast; we maintain first-class support for pydantic.BaseModel as well, no plugin required (see benchmarks).
  - Param Parsing: Automatically parse parameters from query strings, path parameters, headers, cookies, and request bodies (see the sketch after this feature list)
  - Validation: Parameters are automatically converted to and validated against their annotated types and constraints
  - Custom Decoders: Apply custom decoders for maximum control over how your params are parsed and validated
- Dependency Injection: Inject factories, functions, sync/async, scoped/singleton dependencies based on type hints, blazingly fast
- WebSocket: lihil supports websockets; use WebSocketRoute.ws_handler to register a function that handles websocket connections
- OpenAPI Docs & Error Response Generation: Lihil creates smart, accurate OpenAPI schemas based on your routes/endpoints; union types and oneOf responses are all supported
- Powerful Plugin System: Lihil features a sophisticated plugin architecture that allows seamless integration of external libraries as if they were built-in components. Create custom plugins to extend functionality or integrate third-party services effortlessly.
- Strong Support for AI Features: lihil treats AI as a main use case; AI-related features such as SSE, MCP, and remote handlers will be implemented in the next few patches. There will also be tutorials on how to develop your own AI agent/chatbot using lihil.
- ASGI Compatibility & Vendored Types from Starlette: Lihil is ASGI-compatible and works well with uvicorn and other ASGI servers. ASGI middleware that works for any ASGIApp should also work with lihil, including middleware from Starlette.
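A minimal sketch tying param parsing, validation, and dependency injection together, assuming bodies are inferred from model annotations and dependencies from deps=[...], as in the SSE example above (the models and route are illustrative):
from msgspec import Struct
from lihil import Route

class CreateUser(Struct):
    name: str
    age: int  # converted & validated as an int

class UserRepo:
    """A plain class lihil can construct and inject by type hint."""
    async def save(self, user: CreateUser) -> CreateUser:
        return user

users = Route("/users", deps=[UserRepo])

@users.post
async def create_user(
    data: CreateUser, repo: UserRepo, notify: bool = False
) -> CreateUser:
    # `data` is parsed & validated from the JSON body, `repo` is
    # injected, and `notify` is parsed from the query string (?notify=true)
    return await repo.save(data)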
Lihil's plugin system enables you to integrate external libraries seamlessly into your application as if they were built-in features. Any plugin that implements the IPlugin protocol can access endpoint information and wrap functionality around your endpoints.
When you apply multiple plugins like @app.sub("/api/data").get(plugins=[plugin1.dec, plugin2.dec]), here's how they execute:
Plugin Application (Setup Time - Left to Right)
┌─────────────────────────────────────────────────────────────┐
│ original_func → plugin1(ep_info) → plugin2(ep_info) │
│ │
│ Result: plugin2(plugin1(original_func)) │
└─────────────────────────────────────────────────────────────┘
Request Execution (Runtime - Nested/Onion Pattern)
┌────────────────────────────────────────────────────────────┐
│ │
│ Request │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Plugin2 (Outermost) │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ Plugin1 (Middle) │ │ │
│ │ │ ┌─────────────────────────────────────────────┐ │ │ │
│ │ │ │ Original Function (Core) │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ │ async def get_data(): │ │ │ │
│ │ │ │ return {"data": "value"} │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ └─────────────────────────────────────────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Response │
│ │
└────────────────────────────────────────────────────────────┘
Request → Plugin2 → Plugin1 → get_data() → Plugin1 → Plugin2 → Response
@app.sub("/api").get(plugins=[
plugin.timeout(5), # Applied 1st → Executes Outermost
plugin.retry(max_attempts=3), # Applied 2nd → Executes Middle
plugin.cache(expire_s=60), # Applied 3rd → Executes Innermost
])Flow: Request → timeout → retry → cache → endpoint → cache → retry → timeout → Response
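The composition rule itself is plain Python: wrapping a function with plugin1 and then plugin2 yields plugin2(plugin1(func)), so the last plugin applied runs first at request time. A framework-free sketch of that rule (names are illustrative):
import asyncio

def make_plugin(name: str):
    def plugin(func):
        async def wrapper(*args, **kwargs):
            print(f"enter {name}")
            result = await func(*args, **kwargs)
            print(f"exit {name}")
            return result
        return wrapper
    return plugin

async def endpoint() -> str:
    return "data"

# apply plugin1 first, then plugin2: plugin2's wrapper is outermost
wrapped = make_plugin("plugin2")(make_plugin("plugin1")(endpoint))
asyncio.run(wrapped())
# prints: enter plugin2, enter plugin1, exit plugin1, exit plugin2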
A plugin is anything that implements the IPlugin protocol - either a callable or a class with a decorate method:
from lihil.plugins.interface import IPlugin, IEndpointInfo
from lihil.interface import IAsyncFunc, P, R
from typing import Callable, Awaitable

class MyCustomPlugin:
    """Plugin that integrates external libraries with lihil endpoints"""

    def __init__(self, external_service):
        self.service = external_service

    def decorate(self, ep_info: IEndpointInfo[P, R]) -> Callable[P, Awaitable[R]]:
        """
        Access endpoint info and wrap functionality around it.

        ep_info contains:
        - ep_info.func: The original endpoint function
        - ep_info.sig: Parsed signature with type information
        - ep_info.graph: Dependency injection graph
        """
        original_func = ep_info.func

        async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            # Pre-processing with external library
            await self.service.before_request(ep_info.sig)
            try:
                result = await original_func(*args, **kwargs)
                # Post-processing with external library
                return await self.service.process_result(result)
            except Exception as e:
                # Error handling with external library
                await self.service.handle_error(e)
                raise

        return wrapper

# Usage - integrate any external library
from some_external_lib import ExternalService

plugin = MyCustomPlugin(ExternalService())

@app.sub("/api/data").get(plugins=[plugin.decorate])
async def get_data() -> dict:
    return {"data": "value"}

Interface
class IEndpointInfo(Protocol, Generic[P, R]):
    @property
    def graph(self) -> Graph: ...

    @property
    def func(self) -> IAsyncFunc[P, R]: ...

    @property
    def sig(self) -> EndpointSignature[R]: ...


class EndpointSignature(Base, Generic[R]):
    route_path: str

    query_params: ParamMap[QueryParam[Any]]
    path_params: ParamMap[PathParam[Any]]
    header_params: ParamMap[HeaderParam[Any] | CookieParam[Any]]
    body_param: tuple[str, BodyParam[bytes | FormData, Struct]] | None

    dependencies: ParamMap[DependentNode]
    transitive_params: set[str]
    """
    Transitive params are parameters required by dependencies,
    but not directly required by the endpoint function.
    """
    plugins: ParamMap[PluginParam]

    scoped: bool
    form_meta: FormMeta | None

    return_params: dict[int, EndpointReturn[R]]

    @property
    def default_return(self) -> EndpointReturn[R]: ...

    @property
    def status_code(self) -> int: ...

    @property
    def encoder(self) -> Callable[[Any], bytes]: ...

    @property
    def static(self) -> bool: ...

    @property
    def media_type(self) -> str: ...

This architecture allows you to:
- Integrate any external library as if it were built-in to lihil
- Access full endpoint context - signatures, types, dependency graphs
- Wrap functionality around endpoints with full control
- Compose multiple plugins for complex integrations
- Zero configuration - plugins work automatically based on decorators
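Since the protocol also accepts a plain callable, a stateless plugin can be a single function. A minimal sketch using only the ep_info fields documented above (the timing logic and print target are illustrative):
import time
from typing import Awaitable, Callable

from lihil.interface import P, R
from lihil.plugins.interface import IEndpointInfo

def timing_plugin(ep_info: IEndpointInfo[P, R]) -> Callable[P, Awaitable[R]]:
    """Callable-style plugin: log how long each request takes."""
    func = ep_info.func
    route = ep_info.sig.route_path  # full endpoint context is available

    async def timed(*args: P.args, **kwargs: P.kwargs) -> R:
        start = time.perf_counter()
        try:
            return await func(*args, **kwargs)
        finally:
            print(f"{route} took {time.perf_counter() - start:.4f}s")

    return timed

@app.sub("/api/data").get(plugins=[timing_plugin])
async def get_data() -> dict:
    return {"data": "value"}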
Lihil provides a powerful and flexible error-handling system based on the RFC 9457 Problem Details specification. The HTTPException class extends DetailBase and allows you to create structured, consistent error responses with rich metadata.
By default, Lihil automatically generates problem details from your exception class:
from lihil import HTTPException

class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

# Usage in endpoint
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
    if not user_exists(user_id):
        raise UserNotFound(f"User with ID {user_id} not found")
    return get_user_data(user_id)

This will produce a JSON response like:
{
  "type": "user-not-found",
  "title": "The user you are looking for does not exist",
  "status": 404,
  "detail": "User with ID 123 not found",
  "instance": "/users/123"
}
- Problem Type: Automatically generated from the class name in kebab-case (UserNotFound → user-not-found)
- Problem Title: Taken from the class docstring
- Status Code: Set via the __status__ class attribute (defaults to 422)
You can customize the problem type and title using class attributes:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404
    __problem_type__ = "user-lookup-failed"
    __problem_title__ = "User Lookup Failed"

You can also override problem details at runtime:
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
if not user_exists(user_id):
raise UserNotFound(
detail=f"User with ID {user_id} not found",
problem_type="custom-user-error",
problem_title="Custom User Error",
status=404
)
return get_user_data(user_id)For fine-grained control over how your exception transforms into a ProblemDetail object:
from lihil.interface.problem import ProblemDetail

class ValidationError(HTTPException[dict]):
    """Request validation failed"""
    __status__ = 400

    def __problem_detail__(self, instance: str) -> ProblemDetail[dict]:
        return ProblemDetail(
            type_="validation-error",
            title="Request Validation Failed",
            status=400,
            detail=self.detail,
            instance=f"users/{instance}",
        )

# Usage
@app.sub("/users/{user_id}").post
async def update_user(user_data: UserUpdate):
    validation_errors = validate_user_data(user_data)
    if validation_errors:
        # pass the errors dict as `detail`; the custom
        # __problem_detail__ above supplies the title
        raise ValidationError(validation_errors)
    return create_user_in_db(user_data)

Customize how your exceptions appear in OpenAPI documentation:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

    @classmethod
    def __json_example__(cls) -> ProblemDetail[str]:
        return ProblemDetail(
            type_="user-not-found",
            title="User Not Found",
            status=404,
            detail="User with ID 'user123' was not found in the system",
            instance="/api/v1/users/user123",
        )

This is especially useful for providing realistic examples in your API documentation, including specific detail and instance values that Lihil cannot automatically resolve from class attributes.
from typing import Generic, TypeVar

T = TypeVar("T")

class ResourceNotFound(HTTPException[T], Generic[T]):
    """The requested resource was not found"""
    __status__ = 404

    def __init__(self, detail: T, resource_type: str):
        super().__init__(detail)
        self.resource_type = resource_type

    def __problem_detail__(self, instance: str) -> ProblemDetail[T]:
        return ProblemDetail(
            type_=f"{self.resource_type}-not-found",
            title=f"{self.resource_type.title()} Not Found",
            status=404,
            detail=self.detail,
            instance=instance,
        )

# Usage
@app.sub("/posts/{post_id}").get
async def get_post(post_id: str):
    if not post_exists(post_id):
        raise ResourceNotFound(
            detail=f"Post {post_id} does not exist",
            resource_type="post",
        )
    return get_post_data(post_id)

- Consistency: All error responses follow the RFC 9457 Problem Details specification
- Developer Experience: Rich type information and clear error messages
- Documentation: Automatic OpenAPI schema generation with examples
- Flexibility: Multiple levels of customization from simple to advanced
- Traceability: Built-in problem page links in OpenAPI docs for debugging
The error handling system integrates seamlessly with Lihil's OpenAPI documentation generation, providing developers with comprehensive error schemas and examples in the generated API docs.
Using AI coding assistants with Lihil? Check out LIHIL_COPILOT.md for:
- AI Agent Best Practices - Comprehensive guide for AI assistants working with Lihil
- Common Mistakes & Solutions - Learn from real AI agent errors and how to avoid them
- Complete Templates - Ready-to-use patterns that AI agents can follow
- Lihil vs FastAPI Differences - Critical syntax differences AI agents must know
- How to Use as Prompt - Instructions for Claude Code, Cursor, ChatGPT, and GitHub Copilot
Quick Setup: Copy the entire LIHIL_COPILOT.md content and paste it as system context in your AI tool. This ensures your AI assistant understands Lihil's unique syntax and avoids FastAPI assumptions.
Check our detailed tutorials at https://lihil.cc, covering
- Core concepts: creating endpoints, routes, middlewares, etc.
- Configuring your app via pyproject.toml or via command-line arguments
- Dependency Injection & Plugins
- Testing
- Type-Based Message System: event listeners, atomic event handling, etc.
- Error Handling
- ...and much more
See how lihil works in practice: lihil-fullstack-solopreneur-template is a production-ready fullstack template that uses react and lihil, covering real-world usage and best practices of lihil. A fullstack template for my fellow solopreneurs, it uses shadcn + tailwindcss + react + lihil + sqlalchemy + supabase + vercel + cloudflare to end modern slavery.
lihil follows semantic versioning after v1.0.0, where a version x.y.z represents:
- x: major, breaking change
- y: minor, feature updates
- z: patch, bug fixes, typing updates
We welcome all contributions! Whether you're fixing bugs, adding features, improving documentation, or enhancing tests - every contribution matters.
- Fork & Clone: Fork the repository and clone your fork
- Find the Latest Branch: Use git branch -r | grep "version/" to find the latest development branch (e.g., version/0.2.23)
- Create a Feature Branch: Branch from the latest version branch
- Make Changes: Follow existing code conventions and add tests
- Submit PR: Target your PR to the latest development branch
For detailed contributing guidelines, workflow, and project conventions, see our Contributing Guide.
- [x] v0.1.x: Feature parity (alpha stage)
Implementing core functionalities of lihil, reaching feature parity with FastAPI
- [x] v0.2.x: Official Plugins (current stage)
We will keep adding new features & plugins to lihil without making breaking changes. These might be the last minor versions before v1.0.0.
- [ ] v0.3.x: Performance boost
The plan is to rewrite some components in C, roll out a server written in C, or pursue other performance optimizations in 0.3.x.
If we can do this without affecting the current implementations in 0.2.x at all, 0.3.x may never occur and we would go straight to v1.0.0 from v0.2.x.
Similar Open Source Tools
httpjail
httpjail is a cross-platform tool designed for monitoring and restricting HTTP/HTTPS requests from processes using network isolation and transparent proxy interception. It provides process-level network isolation, HTTP/HTTPS interception with TLS certificate injection, script-based and JavaScript evaluation for custom request logic, request logging, default deny behavior, and zero-configuration setup. The tool operates on Linux and macOS, creating an isolated network environment for target processes and intercepting all HTTP/HTTPS traffic through a transparent proxy enforcing user-defined rules.
cordum
Cordum is a control plane for AI agents designed to close the Trust Gap by providing safety, observability, and control features. It allows teams to deploy autonomous agents with built-in governance mechanisms, including safety policies, workflow orchestration, job routing, observability, and human-in-the-loop approvals. The tool aims to address the challenges of deploying AI agents in production by offering visibility, safety rails, audit trails, and approval mechanisms for sensitive operations.
aichildedu
AICHILDEDU is a microservice-based AI education platform for children that integrates LLMs, image generation, and speech synthesis to provide personalized storybook creation, intelligent conversational learning, and multimedia content generation. It offers features like personalized story generation, educational quiz creation, multimedia integration, age-appropriate content, multi-language support, user management, parental controls, and asynchronous processing. The platform follows a microservice architecture with components like API Gateway, User Service, Content Service, Learning Service, and AI Services. Technologies used include Python, FastAPI, PostgreSQL, MongoDB, Redis, LangChain, OpenAI GPT models, TensorFlow, PyTorch, Transformers, MinIO, Elasticsearch, Docker, Docker Compose, and JWT-based authentication.
VT.ai
VT.ai is a multimodal AI platform that offers dynamic conversation routing with SemanticRouter, multi-modal interactions (text/image/audio), an assistant framework with code interpretation, real-time response streaming, cross-provider model switching, and local model support with Ollama integration. It supports various AI providers such as OpenAI, Anthropic, Google Gemini, Groq, Cohere, and OpenRouter, providing a wide range of core capabilities for AI orchestration.
mcp-ts-template
The MCP TypeScript Server Template is a production-grade framework for building powerful and scalable Model Context Protocol servers with TypeScript. It features built-in observability, declarative tooling, robust error handling, and a modular, DI-driven architecture. The template is designed to be AI-agent-friendly, providing detailed rules and guidance for developers to adhere to best practices. It enforces architectural principles like 'Logic Throws, Handler Catches' pattern, full-stack observability, declarative components, and dependency injection for decoupling. The project structure includes directories for configuration, container setup, server resources, services, storage, utilities, tests, and more. Configuration is done via environment variables, and key scripts are available for development, testing, and publishing to the MCP Registry.
distill
Distill is a reliability layer for LLM context that provides deterministic deduplication to remove redundancy before reaching the model. It aims to reduce redundant data, lower costs, provide faster responses, and offer more efficient and deterministic results. The tool works by deduplicating, compressing, summarizing, and caching context to ensure reliable outputs. It offers various installation methods, including binary download, Go install, Docker usage, and building from source. Distill can be used for tasks like deduplicating chunks, connecting to vector databases, integrating with AI assistants, analyzing files for duplicates, syncing vectors to Pinecone, querying from the command line, and managing configuration files. The tool supports self-hosting via Docker, Docker Compose, building from source, Fly.io deployment, Render deployment, and Railway integration. Distill also provides monitoring capabilities with Prometheus-compatible metrics, Grafana dashboard, and OpenTelemetry tracing.
DeepTutor
DeepTutor is an AI-powered personalized learning assistant that offers a suite of modules for massive document knowledge Q&A, interactive learning visualization, knowledge reinforcement with practice exercise generation, deep research, and idea generation. The tool supports multi-agent collaboration, dynamic topic queues, and structured outputs for various tasks. It provides a unified system entry for activity tracking, knowledge base management, and system status monitoring. DeepTutor is designed to streamline learning and research processes by leveraging AI technologies and interactive features.
sandbox
AIO Sandbox is an all-in-one agent sandbox environment that combines Browser, Shell, File, MCP operations, and VSCode Server in a single Docker container. It provides a unified, secure execution environment for AI agents and developers, with features like unified file system, multiple interfaces, secure execution, zero configuration, and agent-ready MCP-compatible APIs. The tool allows users to run shell commands, perform file operations, automate browser tasks, and integrate with various development tools and services.
mimiclaw
MimiClaw is a pocket AI assistant that runs on a $5 chip, specifically designed for the ESP32-S3 board. It operates without Linux or Node.js, using pure C language. Users can interact with MimiClaw through Telegram, enabling it to handle various tasks and learn from local memory. The tool is energy-efficient, running on USB power 24/7. With MimiClaw, users can have a personal AI assistant on a chip the size of a thumb, making it convenient and accessible for everyday use.
shell_gpt
ShellGPT is a command-line productivity tool powered by AI large language models (LLMs). This command-line tool offers streamlined generation of shell commands, code snippets, documentation, eliminating the need for external resources (like Google search). Supports Linux, macOS, Windows and compatible with all major Shells like PowerShell, CMD, Bash, Zsh, etc.
mcp-prompts
mcp-prompts is a Python library that provides a collection of prompts for generating creative writing ideas. It includes a variety of prompts such as story starters, character development, plot twists, and more. The library is designed to inspire writers and help them overcome writer's block by offering unique and engaging prompts to spark creativity. With mcp-prompts, users can access a wide range of writing prompts to kickstart their imagination and enhance their storytelling skills.
multi-agent-shogun
multi-agent-shogun is a system that runs multiple AI coding CLI instances simultaneously, orchestrating them like a feudal Japanese army. It supports Claude Code, OpenAI Codex, GitHub Copilot, and Kimi Code. The system allows you to command your AI army with zero coordination cost, enabling parallel execution, non-blocking workflow, cross-session memory, event-driven communication, and full transparency. It also features skills discovery, phone notifications, pane border task display, shout mode, and multi-CLI support.
nono
nono is a secure, kernel-enforced capability shell for running AI agents and any POSIX style process. It leverages OS security primitives to create an environment where unauthorized operations are structurally impossible. It provides protections against destructive commands and securely stores API keys, tokens, and secrets. The tool is agent-agnostic, works with any AI agent or process, and blocks dangerous commands by default. It follows a capability-based security model with defense-in-depth, ensuring secure execution of commands and protecting sensitive data.
Shannon
Shannon is a battle-tested infrastructure for AI agents that solves problems at scale, such as runaway costs, non-deterministic failures, and security concerns. It offers features like intelligent caching, deterministic replay of workflows, time-travel debugging, WASI sandboxing, and hot-swapping between LLM providers. Shannon allows users to ship faster with zero configuration multi-agent setup, multiple AI patterns, time-travel debugging, and hot configuration changes. It is production-ready with features like WASI sandbox, token budget control, policy engine (OPA), and multi-tenancy. Shannon helps scale without breaking by reducing costs, being provider agnostic, observable by default, and designed for horizontal scaling with Temporal workflow orchestration.
mcp-debugger
mcp-debugger is a Model Context Protocol (MCP) server that provides debugging tools as structured API calls. It enables AI agents to perform step-through debugging of multiple programming languages using the Debug Adapter Protocol (DAP). The tool supports multi-language debugging with clean adapter patterns, including Python debugging via debugpy, JavaScript (Node.js) debugging via js-debug, and Rust debugging via CodeLLDB. It offers features like mock adapter for testing, STDIO and SSE transport modes, zero-runtime dependencies, Docker and npm packages for deployment, structured JSON responses for easy parsing, path validation to prevent crashes, and AI-aware line context for intelligent breakpoint placement with code context.
For similar tasks
spatz
Spatz is a complete, fullstack template for Svelte that includes features such as Sveltekit for building fast web apps, Pocketbase for User Auth and Database, OpenAI for chatbots, Vercel AI SDK for AI/ML models, TailwindCSS for UI development, DaisyUI for components, and Zod for schema declaration and validation. The template provides a structured project setup with components, stores, routes, and APIs. It also offers theming and styling options with pre-loaded themes from DaisyUI. Contributions are welcomed through feature requests or pull requests.
mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It offers an intuitive interface for UI novices, frictionless developer workflows with hot reload and IDE support, and flexibility to build custom UIs without the need for JavaScript/CSS/HTML. Mesop allows users to write UI in idiomatic Python code and compose UI into components using Python functions. It is used at Google for internal app development and provides a quick way to build delightful web apps in Python.
spatz-2
Spatz-2 is a complete, fullstack template for Svelte, utilizing technologies such as Sveltekit, Pocketbase, OpenAI, Vercel AI SDK, TailwindCSS, svelte-animations, and Zod. It offers features like user authentication, admin dashboard, dark/light mode themes, AI chatbot, guestbook, and forms with client/server validation. The project structure includes components, stores, routes, APIs, and icons. Spatz-2 aims to provide a futuristic web framework for building fast web apps with advanced functionalities and easy customization.
ryoma
Ryoma is an AI Powered Data Agent framework that offers a comprehensive solution for data analysis, engineering, and visualization. It leverages cutting-edge technologies like Langchain, Reflex, Apache Arrow, Jupyter Ai Magics, Amundsen, Ibis, and Feast to provide seamless integration of language models, build interactive web applications, handle in-memory data efficiently, work with AI models, and manage machine learning features in production. Ryoma also supports various data sources like Snowflake, Sqlite, BigQuery, Postgres, MySQL, and different engines like Apache Spark and Apache Flink. The tool enables users to connect to databases, run SQL queries, and interact with data and AI models through a user-friendly UI called Ryoma Lab.
fragments
Fragments is an open-source tool that leverages Anthropic's Claude Artifacts, Vercel v0, and GPT Engineer. It is powered by E2B Sandbox SDK and Code Interpreter SDK, allowing secure execution of AI-generated code. The tool is based on Next.js 14, shadcn/ui, TailwindCSS, and Vercel AI SDK. Users can stream in the UI, install packages from npm and pip, and add custom stacks and LLM providers. Fragments enables users to build web apps with Python interpreter, Next.js, Vue.js, Streamlit, and Gradio, utilizing providers like OpenAI, Anthropic, Google AI, and more.
enferno
Enferno is a modern Flask framework optimized for AI-assisted development workflows. It combines carefully crafted development patterns, smart Cursor Rules, and modern libraries to enable developers to build sophisticated web applications with unprecedented speed. Enferno's intelligent patterns and contextual guides help create production-ready SAAS applications faster than ever. It includes features like modern stack, authentication, OAuth integration, database support, task queue, frontend components, security measures, Docker readiness, and more.
mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It allows users to write UI in Python code, offers reactive UI paradigm, ready-to-use components, hot reload feature, rich IDE support, and the ability to build custom UIs without writing Javascript/CSS/HTML. Mesop is intuitive for UI novices, provides frictionless developer workflows, and is flexible for creating delightful demos. It is used at Google for rapid internal app development.
For similar jobs
sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.
teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.
ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.
classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.
chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.
BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students
uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.
griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.

