
lihil
2x faster ASGI web framework for Python, offering high-level development with low-level performance.
Stars: 197

Lihil is a performant, productive, and professional web framework designed to make Python the mainstream programming language for web development. It is 100% test covered and strictly typed, offering fast performance, ergonomic API, and built-in solutions for common problems. Lihil is suitable for enterprise web development, delivering robust and scalable solutions with best practices in microservice architecture and related patterns. It features dependency injection, OpenAPI docs generation, error response generation, data validation, message system, testability, and strong support for AI features. Lihil is ASGI compatible and uses starlette as its ASGI toolkit, ensuring compatibility with starlette classes and middlewares. The framework follows semantic versioning and has a roadmap for future enhancements and features.
README:
Lihil /ˈliːhaɪl/ — a performant, productive, and professional web framework with a vision:
Making Python the mainstream programming language for web development.
lihil is 100% test covered and strictly typed.
📚 Docs: https://lihil.cc
- Performant: Blazing fast across tasks and conditions. Lihil ranks among the fastest Python web frameworks, outperforming comparable frameworks by 50%–100%; see the reproducible, automated lihil benchmarks and independent benchmarks
- Designed to be tested: Built with testability in mind, making it easy for users to write unit, integration, and e2e tests. Lihil supports Starlette's TestClient and provides LocalClient that allows testing at different levels: endpoint, route, middleware, and application.
- Built for large scale applications: Architected to handle enterprise-level applications with robust dependency injection and modular design
- AI-centric: While usable as a generic web framework, Lihil is optimized for AI applications with specialized features for AI/ML workloads
- AI Agent Friendly: Designed to work seamlessly with AI coding assistants - see LIHIL_COPILOT.md for comprehensive guidance on using Lihil with AI agents
- Productive: Provides extensive typing information for superior developer experience, complemented by detailed error messages and docstrings for effortless debugging
- Not a microframework: Lihil has an ever-growing and prosperous ecosystem that provides industrial, enterprise-ready features such as throttler, timeout, auth, and more
- Not a one-man project: Lihil is open-minded and contributions are always welcome. You can safely assume that your PR will be carefully reviewed
- Not experimental: Lihil optimizes based on real-world use cases rather than benchmarks
lihil requires python>=3.10
pip install "lihil[standard]"
The standard version comes with uvicorn
from lihil import Lihil, Route, Stream
from openai import OpenAI
from openai.types.chat import ChatCompletionChunk as Chunk
from openai.types.chat import ChatCompletionUserMessageParam as MessageIn

gpt = Route("/gpt", deps=[OpenAI])

def message_encoder(chunk: Chunk) -> bytes:
    if not chunk.choices:
        return b""
    return (chunk.choices[0].delta.content or "").encode()

@gpt.sub("/messages").post(encoder=message_encoder)
async def add_new_message(
    client: OpenAI, question: MessageIn, model: str
) -> Stream[Chunk]:
    chat_iter = client.chat.completions.create(messages=[question], model=model, stream=True)
    for chunk in chat_iter:
        yield chunk
- Param Parsing & Validation: Lihil provides a high-level abstraction for parsing requests and validating request data against endpoint type hints. Various models are supported, including msgspec.Struct, pydantic.BaseModel, dataclasses.dataclass, and typing.TypedDict. By default, lihil uses msgspec to serialize/deserialize JSON data, which is extremely fast; first-class support for pydantic.BaseModel is maintained as well, no plugin required. See benchmarks.
  - Param Parsing: Automatically parse parameters from query strings, path parameters, headers, cookies, and request bodies
  - Validation: Parameters are automatically converted to & validated against their annotated types and constraints
  - Custom Decoders: Apply custom decoders for maximum control over how your params are parsed & validated
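The core idea, coercing raw string parameters against endpoint annotations, can be sketched in plain Python. This is a simplified illustration, not lihil's actual msgspec-backed parser, and `coerce_params`/`get_items` are hypothetical names:

```python
import inspect

def coerce_params(func, raw: dict[str, str]) -> dict:
    """Convert raw string params (query/path/header values) to the
    endpoint's annotated types, raising ValueError on mismatch.
    A toy illustration of annotation-driven param validation."""
    out = {}
    for name, param in inspect.signature(func).parameters.items():
        target = param.annotation if param.annotation is not inspect.Parameter.empty else str
        try:
            out[name] = target(raw[name])
        except (KeyError, TypeError, ValueError) as exc:
            raise ValueError(f"invalid or missing param {name!r}") from exc
    return out

def get_items(page: int, size: int, q: str):
    return (page, size, q)
```

A real framework layers constraint checks, nested model decoding, and structured error responses on top of this conversion step, but the annotation-driven contract is the same.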
- Dependency Injection: Inject factories, functions, sync/async, scoped/singleton dependencies based on type hints, blazingly fast
- WebSocket: lihil supports websockets; use WebSocketRoute.ws_handler to register a function that handles them
- OpenAPI Docs & Error Response Generator: Lihil creates smart & accurate OpenAPI schemas based on your routes/endpoints; union types and oneOf responses are all supported
- Powerful Plugin System: Lihil features a sophisticated plugin architecture that allows seamless integration of external libraries as if they were built-in components. Create custom plugins to extend functionality or integrate third-party services effortlessly.
- Strong Support for AI Features: lihil takes AI as a main use case; AI-related features such as SSE, MCP, and remote handlers will be implemented in the next few patches. There will also be tutorials on how to develop your own AI agent/chatbot using lihil.
- ASGI Compatibility & Vendored Types from Starlette
  - Lihil is ASGI compatible and works well with uvicorn and other ASGI servers
  - ASGI middlewares that work for any ASGIApp should also work with lihil, including those from Starlette
Lihil's plugin system enables you to integrate external libraries seamlessly into your application as if they were built-in features. Any plugin that implements the IPlugin protocol can access endpoint information and wrap functionality around your endpoints.
When you apply multiple plugins, e.g. @app.sub("/api/data").get(plugins=[plugin1.dec, plugin2.dec]), here's how they execute:
Plugin Application (Setup Time - Left to Right)
┌─────────────────────────────────────────────────────────────┐
│ original_func → plugin1(ep_info) → plugin2(ep_info) │
│ │
│ Result: plugin2(plugin1(original_func)) │
└─────────────────────────────────────────────────────────────┘
Request Execution (Runtime - Nested/Onion Pattern)
┌────────────────────────────────────────────────────────────┐
│ │
│ Request │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Plugin2 (Outermost) │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ Plugin1 (Middle) │ │ │
│ │ │ ┌─────────────────────────────────────────────┐ │ │ │
│ │ │ │ Original Function (Core) │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ │ async def get_data(): │ │ │ │
│ │ │ │ return {"data": "value"} │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ └─────────────────────────────────────────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Response │
│ │
└────────────────────────────────────────────────────────────┘
Request → Plugin2 → Plugin1 → get_data() → Plugin1 → Plugin2 → Response
@app.sub("/api").get(plugins=[
plugin.timeout(5), # Applied 1st → Executes Outermost
plugin.retry(max_attempts=3), # Applied 2nd → Executes Middle
plugin.cache(expire_s=60), # Applied 3rd → Executes Innermost
])
Flow: Request → timeout → retry → cache → endpoint → cache → retry → timeout → Response
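The onion pattern itself is ordinary decorator composition. In this framework-free sketch (plugin names are hypothetical), each successive wrap becomes the new outermost layer, and the recorded call order shows the nesting:

```python
import asyncio

calls: list[str] = []

def make_plugin(name: str):
    """Return a plugin-style decorator that records entry/exit order."""
    def decorate(func):
        async def wrapper(*args, **kwargs):
            calls.append(f"{name}:before")
            result = await func(*args, **kwargs)
            calls.append(f"{name}:after")
            return result
        return wrapper
    return decorate

async def endpoint():
    calls.append("endpoint")
    return {"data": "value"}

# Wrap in sequence: the last wrap applied ends up outermost at runtime.
handler = endpoint
for plugin in (make_plugin("timeout"), make_plugin("retry"), make_plugin("cache")):
    handler = plugin(handler)

result = asyncio.run(handler())
```

Running it records `cache:before, retry:before, timeout:before, endpoint, timeout:after, retry:after, cache:after`, the same nested entry/exit shape as the request-execution diagram above.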
A plugin is anything that implements the IPlugin protocol: either a callable or a class with a decorate method.
from lihil.plugins.interface import IPlugin, IEndpointInfo
from lihil.interface import IAsyncFunc, P, R
from typing import Callable, Awaitable

class MyCustomPlugin:
    """Plugin that integrates external libraries with lihil endpoints"""

    def __init__(self, external_service):
        self.service = external_service

    def decorate(self, ep_info: IEndpointInfo[P, R]) -> Callable[P, Awaitable[R]]:
        """
        Access endpoint info and wrap functionality around it.

        ep_info contains:
        - ep_info.func: The original endpoint function
        - ep_info.sig: Parsed signature with type information
        - ep_info.graph: Dependency injection graph
        """
        original_func = ep_info.func

        async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            # Pre-processing with external library
            await self.service.before_request(ep_info.sig)
            try:
                result = await original_func(*args, **kwargs)
                # Post-processing with external library
                return await self.service.process_result(result)
            except Exception as e:
                # Error handling with external library
                await self.service.handle_error(e)
                raise

        return wrapper

# Usage - integrate any external library
from some_external_lib import ExternalService

plugin = MyCustomPlugin(ExternalService())

@app.sub("/api/data").get(plugins=[plugin.decorate])
async def get_data() -> dict:
    return {"data": "value"}
This architecture allows you to:
- Integrate any external library as if it were built-in to lihil
- Access full endpoint context - signatures, types, dependency graphs
- Wrap functionality around endpoints with full control
- Compose multiple plugins for complex integrations
- Zero configuration - plugins work automatically based on decorators
Lihil provides a powerful and flexible error handling system based on the RFC 9457 Problem Details specification. The HTTPException class extends DetailBase and allows you to create structured, consistent error responses with rich metadata.
By default, Lihil automatically generates problem details from your exception class:
from lihil import HTTPException

class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

# Usage in endpoint
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
    if not user_exists(user_id):
        raise UserNotFound(f"User with ID {user_id} not found")
    return get_user_data(user_id)
This will produce a JSON response like:
{
"type": "user-not-found",
"title": "The user you are looking for does not exist",
"status": 404,
"detail": "User with ID 123 not found",
"instance": "/users/123"
}
- Problem Type: Automatically generated from the class name in kebab-case (UserNotFound → user-not-found)
- Problem Title: Taken from the class docstring
- Status Code: Set via the __status__ class attribute (defaults to 422)
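These derivation rules are easy to emulate. The following is an illustrative sketch only (lihil's actual implementation may differ, and `problem_fields` is a hypothetical helper):

```python
import re

def problem_fields(exc_type: type, default_status: int = 422) -> dict:
    """Derive problem fields from an exception class: kebab-case type
    from the class name, title from the docstring, status from __status__."""
    kebab = re.sub(r"(?<!^)(?=[A-Z])", "-", exc_type.__name__).lower()
    return {
        "type": getattr(exc_type, "__problem_type__", None) or kebab,
        "title": getattr(exc_type, "__problem_title__", None)
        or (exc_type.__doc__ or "").strip(),
        "status": getattr(exc_type, "__status__", default_status),
    }

class UserNotFound(Exception):
    """The user you are looking for does not exist"""
    __status__ = 404
```

Note how the explicit `__problem_type__`/`__problem_title__` overrides described below slot naturally into the `or` fallbacks.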
You can customize the problem type and title using class attributes:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404
    __problem_type__ = "user-lookup-failed"
    __problem_title__ = "User Lookup Failed"
You can also override problem details at runtime:
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
    if not user_exists(user_id):
        raise UserNotFound(
            detail=f"User with ID {user_id} not found",
            problem_type="custom-user-error",
            problem_title="Custom User Error",
            status=404
        )
    return get_user_data(user_id)
For fine-grained control over how your exception transforms into a ProblemDetail object:
from lihil.interface.problem import ProblemDetail

class ValidationError(HTTPException[dict]):
    """Request validation failed"""
    __status__ = 400

    def __problem_detail__(self, instance: str) -> ProblemDetail[dict]:
        return ProblemDetail(
            type_="validation-error",
            title="Request Validation Failed",
            status=400,
            detail=self.detail,
            instance=f"users/{instance}",
        )

# Usage
@app.sub("/users/{user_id}").post
async def update_user(user_data: UserUpdate):
    validation_errors = validate_user_data(user_data)
    if validation_errors:
        raise ValidationError(title="Updating user failed")
    return create_user_in_db(user_data)
Customize how your exceptions appear in OpenAPI documentation:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

    @classmethod
    def __json_example__(cls) -> ProblemDetail[str]:
        return ProblemDetail(
            type_="user-not-found",
            title="User Not Found",
            status=404,
            detail="User with ID 'user123' was not found in the system",
            instance="/api/v1/users/user123"
        )
This is especially useful for providing realistic examples in your API documentation, including specific detail and instance values that Lihil cannot automatically resolve from class attributes.
from typing import Generic, TypeVar

T = TypeVar('T')

class ResourceNotFound(HTTPException[T], Generic[T]):
    """The requested resource was not found"""
    __status__ = 404

    def __init__(self, detail: T, resource_type: str):
        super().__init__(detail)
        self.resource_type = resource_type

    def __problem_detail__(self, instance: str) -> ProblemDetail[T]:
        return ProblemDetail(
            type_=f"{self.resource_type}-not-found",
            title=f"{self.resource_type.title()} Not Found",
            status=404,
            detail=self.detail,
            instance=instance
        )

# Usage
@app.sub("/posts/{post_id}").get
async def get_post(post_id: str):
    if not post_exists(post_id):
        raise ResourceNotFound(
            detail=f"Post {post_id} does not exist",
            resource_type="post"
        )
    return get_post_data(post_id)
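Stripped of HTTPException and ProblemDetail, the generic pattern reduces to a plain class whose problem fields vary with resource_type (a framework-free stand-in, not lihil's API; `problem_detail` here is a hypothetical method name):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class ResourceNotFound(Exception, Generic[T]):
    """The requested resource was not found"""
    __status__ = 404

    def __init__(self, detail: T, resource_type: str):
        super().__init__(detail)
        self.detail = detail
        self.resource_type = resource_type

    def problem_detail(self, instance: str) -> dict:
        # Same shape as the ProblemDetail built above, as a plain dict.
        return {
            "type": f"{self.resource_type}-not-found",
            "title": f"{self.resource_type.title()} Not Found",
            "status": self.__status__,
            "detail": self.detail,
            "instance": instance,
        }
```

One exception class then covers posts, users, or any other resource, with the problem type and title derived per raise site rather than per class.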
- Consistency: All error responses follow RFC 9457 Problem Details specification
- Developer Experience: Rich type information and clear error messages
- Documentation: Automatic OpenAPI schema generation with examples
- Flexibility: Multiple levels of customization from simple to advanced
- Traceability: Built-in problem page links in OpenAPI docs for debugging
The error handling system integrates seamlessly with Lihil's OpenAPI documentation generation, providing developers with comprehensive error schemas and examples in the generated API docs.
Using AI coding assistants with Lihil? Check out LIHIL_COPILOT.md for:
- AI Agent Best Practices - Comprehensive guide for AI assistants working with Lihil
- Common Mistakes & Solutions - Learn from real AI agent errors and how to avoid them
- Complete Templates - Ready-to-use patterns that AI agents can follow
- Lihil vs FastAPI Differences - Critical syntax differences AI agents must know
- How to Use as Prompt - Instructions for Claude Code, Cursor, ChatGPT, and GitHub Copilot
Quick Setup: Copy the entire LIHIL_COPILOT.md content and paste it as system context in your AI tool. This ensures your AI assistant understands Lihil's unique syntax and avoids FastAPI assumptions.
Check our detailed tutorials at https://lihil.cc, covering:
- Core concepts: creating endpoints, routes, middlewares, etc.
- Configuring your app via pyproject.toml or via command line arguments
- Dependency Injection & Plugins
- Testing
- Type-Based Message System: event listeners, atomic event handling, etc.
- Error Handling
- ...and much more
See how lihil works in lihil-fullstack-solopreneur-template, a production-ready fullstack template that uses react and lihil, covering real-world usage & best practices of lihil. A fullstack template for my fellow solopreneurs, it uses shadcn+tailwindcss+react+lihil+sqlalchemy+supabase+vercel+cloudflare to end modern slavery.
lihil follows semantic versioning after v1.0.0, where a version x.y.z represents:
- x: major, breaking change
- y: minor, feature updates
- z: patch, bug fixes, typing updates
We welcome all contributions! Whether you're fixing bugs, adding features, improving documentation, or enhancing tests - every contribution matters.
- Fork & Clone: Fork the repository and clone your fork
- Find Latest Branch: Use git branch -r | grep "version/" to find the latest development branch (e.g., version/0.2.23)
- Create Feature Branch: Branch from the latest version branch
- Make Changes: Follow existing code conventions and add tests
- Submit PR: Target your PR to the latest development branch
For detailed contributing guidelines, workflow, and project conventions, see our Contributing Guide.
- [x] v0.1.x: Feature parity (alpha stage)
Implementing core functionalities of lihil, feature parity with fastapi
- [x] v0.2.x: Official Plugins (current stage)
We will keep adding new features & plugins to lihil without making breaking changes. These might be the last minor versions before v1.0.0.
- [ ] v0.3.x: Performance boost
The plan is to rewrite some components in C, roll out a server in C, or pursue other performance optimizations in 0.3.x.
If we can do this without affecting the current implementations in 0.2.x at all, 0.3.x may never occur and we will go straight to v1.0.0 from v0.2.x.
Similar Open Source Tools


WebAI-to-API
This project implements a web API that offers a unified interface to Google Gemini and Claude 3. It provides a self-hosted, lightweight, and scalable solution for accessing these AI models through a streaming API. The API supports both Claude and Gemini models, allowing users to interact with them in real-time. The project includes a user-friendly web UI for configuration and documentation, making it easy to get started and explore the capabilities of the API.

aichildedu
AICHILDEDU is a microservice-based AI education platform for children that integrates LLMs, image generation, and speech synthesis to provide personalized storybook creation, intelligent conversational learning, and multimedia content generation. It offers features like personalized story generation, educational quiz creation, multimedia integration, age-appropriate content, multi-language support, user management, parental controls, and asynchronous processing. The platform follows a microservice architecture with components like API Gateway, User Service, Content Service, Learning Service, and AI Services. Technologies used include Python, FastAPI, PostgreSQL, MongoDB, Redis, LangChain, OpenAI GPT models, TensorFlow, PyTorch, Transformers, MinIO, Elasticsearch, Docker, Docker Compose, and JWT-based authentication.

crawl4ai
Crawl4AI is a powerful and free web crawling service that extracts valuable data from websites and provides LLM-friendly output formats. It supports crawling multiple URLs simultaneously, replaces media tags with ALT, and is completely free to use and open-source. Users can integrate Crawl4AI into Python projects as a library or run it as a standalone local server. The tool allows users to crawl and extract data from specified URLs using different providers and models, with options to include raw HTML content, force fresh crawls, and extract meaningful text blocks. Configuration settings can be adjusted in the `crawler/config.py` file to customize providers, API keys, chunk processing, and word thresholds. Contributions to Crawl4AI are welcome from the open-source community to enhance its value for AI enthusiasts and developers.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe supports various features like AI-friendly code extraction, fully local operation without external APIs, fast scanning of large codebases, accurate code structure parsing, re-rankers and NLP methods for better search results, multi-language support, interactive AI chat mode, and flexibility to run as a CLI tool, MCP server, or interactive AI chat.

probe
Probe is an AI-friendly, fully local, semantic code search tool designed to power the next generation of AI coding assistants. It combines the speed of ripgrep with the code-aware parsing of tree-sitter to deliver precise results with complete code blocks, making it perfect for large codebases and AI-driven development workflows. Probe is fully local, keeping code on the user's machine without relying on external APIs. It supports multiple languages, offers various search options, and can be used in CLI mode, MCP server mode, AI chat mode, and web interface. The tool is designed to be flexible, fast, and accurate, providing developers and AI models with full context and relevant code blocks for efficient code exploration and understanding.

ramparts
Ramparts is a fast, lightweight security scanner designed for the Model Context Protocol (MCP) ecosystem. It scans MCP servers to identify vulnerabilities and provides security features such as discovering capabilities, multi-transport support, session management, static analysis, cross-origin analysis, LLM-powered analysis, and risk assessment. The tool is suitable for developers, MCP users, and MCP developers to ensure the security of their connections. It can be used for security audits, development testing, CI/CD integration, and compliance with security requirements for AI agent deployments.

sre
SmythOS is an operating system designed for building, deploying, and managing intelligent AI agents at scale. It provides a unified SDK and resource abstraction layer for various AI services, making it easy to scale and flexible. With an agent-first design, developer-friendly SDK, modular architecture, and enterprise security features, SmythOS offers a robust foundation for AI workloads. The system is built with a philosophy inspired by traditional operating system kernels, ensuring autonomy, control, and security for AI agents. SmythOS aims to make shipping production-ready AI agents accessible and open for everyone in the coming Internet of Agents era.

g4f.dev
G4f.dev is the official documentation hub for GPT4Free, a free and convenient AI tool with endpoints that can be integrated directly into apps, scripts, and web browsers. The documentation provides clear overviews, quick examples, and deeper insights into the major features of GPT4Free, including text and image generation. Users can choose between Python and JavaScript for installation and setup, and can access various API endpoints, providers, models, and client options for different tasks.

polyfire-js
Polyfire is an all-in-one managed backend for AI apps that allows users to build AI apps directly from the frontend, eliminating the need for a separate backend. It simplifies the process by providing most backend services in just a few lines of code. With Polyfire, users can easily create chatbots, transcribe audio files to text, generate simple text, create a long-term memory, and generate images with Dall-E. The tool also offers starter guides and tutorials to help users get started quickly and efficiently.

chat
deco.chat is an open-source foundation for building AI-native software, providing developers, engineers, and AI enthusiasts with robust tools to rapidly prototype, develop, and deploy AI-powered applications. It empowers Vibecoders to prototype ideas and Agentic engineers to deploy scalable, secure, and sustainable production systems. The core capabilities include an open-source runtime for composing tools and workflows, MCP Mesh for secure integration of models and APIs, a unified TypeScript stack for backend logic and custom frontends, global modular infrastructure built on Cloudflare, and a visual workspace for building agents and orchestrating everything in code.

VITA
VITA is an open-source interactive omni multimodal Large Language Model (LLM) capable of processing video, image, text, and audio inputs simultaneously. It stands out with features like Omni Multimodal Understanding, Non-awakening Interaction, and Audio Interrupt Interaction. VITA can respond to user queries without a wake-up word, track and filter external queries in real-time, and handle various query inputs effectively. The model utilizes state tokens and a duplex scheme to enhance the multimodal interactive experience.

dive
Dive is an AI toolkit for Go that enables the creation of specialized teams of AI agents and seamless integration with leading LLMs. It offers a CLI and APIs for easy integration, with features like creating specialized agents, hierarchical agent systems, declarative configuration, multiple LLM support, extended reasoning, model context protocol, advanced model settings, tools for agent capabilities, tool annotations, streaming, CLI functionalities, thread management, confirmation system, deep research, and semantic diff. Dive also provides semantic diff analysis, unified interface for LLM providers, tool system with annotations, custom tool creation, and support for various verified models. The toolkit is designed for developers to build AI-powered applications with rich agent capabilities and tool integrations.

rigging
Rigging is a lightweight LLM framework designed to simplify the usage of language models in production code. It offers structured Pydantic models for text output, supports various models like LiteLLM and transformers, and provides features such as defining prompts as python functions, simple tool use, storing models as connection strings, async batching for large scale generation, and modern Python support with type hints and async capabilities. Rigging is developed by dreadnode and is suitable for tasks like building chat pipelines, running completions, tracking behavior with tracing, playing with generation parameters, and scaling up with iterating and batching.

payload-ai
The Payload AI Plugin is an advanced extension that integrates modern AI capabilities into your Payload CMS, streamlining content creation and management. It offers features like text generation, voice and image generation, field-level prompt customization, prompt editor, document analyzer, fact checking, automated content workflows, internationalization support, editor AI suggestions, and AI chat support. Users can personalize and configure the plugin by setting environment variables. The plugin is actively developed and tested with Payload version v3.2.1, with regular updates expected.

trpc-agent-go
A powerful Go framework for building intelligent agent systems with large language models (LLMs), hierarchical planners, memory, telemetry, and a rich tool ecosystem. tRPC-Agent-Go enables the creation of autonomous or semi-autonomous agents that reason, call tools, collaborate with sub-agents, and maintain long-term state. The framework provides detailed documentation, examples, and tools for accelerating the development of AI applications.
For similar tasks

spatz
Spatz is a complete, fullstack template for Svelte that includes features such as Sveltekit for building fast web apps, Pocketbase for User Auth and Database, OpenAI for chatbots, Vercel AI SDK for AI/ML models, TailwindCSS for UI development, DaisyUI for components, and Zod for schema declaration and validation. The template provides a structured project setup with components, stores, routes, and APIs. It also offers theming and styling options with pre-loaded themes from DaisyUI. Contributions are welcomed through feature requests or pull requests.

mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It offers an intuitive interface for UI novices, frictionless developer workflows with hot reload and IDE support, and flexibility to build custom UIs without the need for JavaScript/CSS/HTML. Mesop allows users to write UI in idiomatic Python code and compose UI into components using Python functions. It is used at Google for internal app development and provides a quick way to build delightful web apps in Python.

spatz-2
Spatz-2 is a complete, fullstack template for Svelte, utilizing technologies such as Sveltekit, Pocketbase, OpenAI, Vercel AI SDK, TailwindCSS, svelte-animations, and Zod. It offers features like user authentication, admin dashboard, dark/light mode themes, AI chatbot, guestbook, and forms with client/server validation. The project structure includes components, stores, routes, APIs, and icons. Spatz-2 aims to provide a futuristic web framework for building fast web apps with advanced functionalities and easy customization.

ryoma
Ryoma is an AI Powered Data Agent framework that offers a comprehensive solution for data analysis, engineering, and visualization. It leverages cutting-edge technologies like Langchain, Reflex, Apache Arrow, Jupyter Ai Magics, Amundsen, Ibis, and Feast to provide seamless integration of language models, build interactive web applications, handle in-memory data efficiently, work with AI models, and manage machine learning features in production. Ryoma also supports various data sources like Snowflake, Sqlite, BigQuery, Postgres, MySQL, and different engines like Apache Spark and Apache Flink. The tool enables users to connect to databases, run SQL queries, and interact with data and AI models through a user-friendly UI called Ryoma Lab.

fragments
Fragments is an open-source tool that leverages Anthropic's Claude Artifacts, Vercel v0, and GPT Engineer. It is powered by E2B Sandbox SDK and Code Interpreter SDK, allowing secure execution of AI-generated code. The tool is based on Next.js 14, shadcn/ui, TailwindCSS, and Vercel AI SDK. Users can stream in the UI, install packages from npm and pip, and add custom stacks and LLM providers. Fragments enables users to build web apps with Python interpreter, Next.js, Vue.js, Streamlit, and Gradio, utilizing providers like OpenAI, Anthropic, Google AI, and more.


enferno
Enferno is a modern Flask framework optimized for AI-assisted development workflows. It combines carefully crafted development patterns, smart Cursor Rules, and modern libraries to enable developers to build sophisticated web applications with unprecedented speed. Enferno's intelligent patterns and contextual guides help create production-ready SAAS applications faster than ever. It includes features like modern stack, authentication, OAuth integration, database support, task queue, frontend components, security measures, Docker readiness, and more.

mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It allows users to write UI in Python code, offers reactive UI paradigm, ready-to-use components, hot reload feature, rich IDE support, and the ability to build custom UIs without writing Javascript/CSS/HTML. Mesop is intuitive for UI novices, provides frictionless developer workflows, and is flexible for creating delightful demos. It is used at Google for rapid internal app development.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.