
lihil
2X faster ASGI web framework for Python, offering high-level development with low-level performance.
Stars: 199

Lihil is a performant, productive, and professional web framework designed to make Python the mainstream programming language for web development. It is 100% test covered and strictly typed, offering fast performance, ergonomic API, and built-in solutions for common problems. Lihil is suitable for enterprise web development, delivering robust and scalable solutions with best practices in microservice architecture and related patterns. It features dependency injection, OpenAPI docs generation, error response generation, data validation, message system, testability, and strong support for AI features. Lihil is ASGI compatible and uses starlette as its ASGI toolkit, ensuring compatibility with starlette classes and middlewares. The framework follows semantic versioning and has a roadmap for future enhancements and features.
README:
Lihil /ˈliːhaɪl/ — a performant, productive, and professional web framework with a vision:
Making Python the mainstream programming language for web development.
lihil is 100% test covered and strictly typed.
📚 Docs: https://lihil.cc
- Performant: Blazing fast across tasks and conditions. Lihil ranks among the fastest Python web frameworks, outperforming comparable frameworks by 50%-100%; see the reproducible, automated lihil benchmarks and the independent benchmarks.
- Designed to be tested: Built with testability in mind, making it easy for users to write unit, integration, and e2e tests. Lihil supports Starlette's TestClient and provides a LocalClient that allows testing at different levels: endpoint, route, middleware, and application.
- Built for large scale applications: Architected to handle enterprise-level applications with robust dependency injection and modular design
- AI Agent Friendly: Designed to work seamlessly with AI coding assistants - see LIHIL_COPILOT.md for comprehensive guidance on using Lihil with AI agents
- Productive: Provides extensive typing information for superior developer experience, complemented by detailed error messages and docstrings for effortless debugging
- Not a microframework: Lihil has an ever-growing and prosperous ecosystem that provides industrial, enterprise-ready features such as throttler, timeout, auth, and more
- Not a one-man project: Lihil is open-minded and contributions are always welcome. You can safely assume that your PR will be carefully reviewed.
- Not experimental: Lihil optimizes based on real-world use cases rather than benchmarks
lihil requires python>=3.10
pip install "lihil[standard]"
The standard version comes with uvicorn
from lihil import Lihil, Route, EventStream, SSE
from openai import AsyncOpenAI
from openai.types.chat import ChatCompletionChunk as Chunk
from openai.types.chat import ChatCompletionUserMessageParam as MessageIn

gpt = Route("/gpt", deps=[AsyncOpenAI])

def chunk_to_str(chunk: Chunk) -> str:
    if not chunk.choices:
        return ""
    return chunk.choices[0].delta.content or ""

@gpt.sub("/messages").post
async def add_new_message(
    client: AsyncOpenAI, question: MessageIn, model: str
) -> EventStream:
    yield SSE(event="open")
    chat_iter = await client.chat.completions.create(
        messages=[question], model=model, stream=True
    )
    async for chunk in chat_iter:
        yield SSE(event="token", data={"text": chunk_to_str(chunk)})
    yield SSE(event="close")
What the frontend would receive:

event: open

event: token
data: {"text":"Hello"}

event: token
data: {"text":" world"}

event: token
data: {"text":"!"}

event: close
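To serve the route above, a minimal sketch that assembles the app and hands it to uvicorn; the Lihil(gpt) constructor form is an assumption based on patterns elsewhere in this README:

import uvicorn

from lihil import Lihil

lhl = Lihil(gpt)  # register the /gpt route on the app (assumed constructor form)

if __name__ == "__main__":
    # lihil is ASGI compatible, so any ASGI server works;
    # uvicorn ships with the standard install.
    uvicorn.run(lhl)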
Param Parsing & Validation

Lihil provides a high-level abstraction for parsing requests and validating request data against endpoint type hints. Various models are supported, including:

- msgspec.Struct
- pydantic.BaseModel
- dataclasses.dataclass
- typing.TypedDict

By default, lihil uses msgspec to serialize/deserialize JSON data, which is extremely fast. We maintain first-class support for pydantic.BaseModel as well, no plugin required; see benchmarks.

- Param Parsing: Automatically parse parameters from query strings, path parameters, headers, cookies, and request bodies
- Validation: Parameters are automatically converted to and validated against their annotated types and constraints
- Custom Decoders: Apply custom decoders for maximum control over how your params are parsed and validated
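A minimal sketch of param parsing and validation, using the Route and decorator patterns shown elsewhere in this README (the exact endpoint shape here is illustrative):

from msgspec import Struct

from lihil import Route

class UserUpdate(Struct):  # pydantic.BaseModel, dataclasses, or TypedDict also work
    name: str
    age: int

users = Route("/users")

@users.sub("/{user_id}").post
async def update_user(user_id: str, payload: UserUpdate, notify: bool = False) -> UserUpdate:
    # user_id is parsed from the path, `payload` is decoded from the JSON body,
    # and `notify` comes from the query string; each is validated against its
    # annotation before the endpoint body runs.
    return payload

A body that fails validation (e.g. a non-integer age) is rejected before the endpoint is invoked.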
- Dependency Injection: Inject factories, functions, sync/async, scoped/singleton dependencies based on type hints, blazingly fast (see the sketch after this list).
- WebSocket: lihil supports websockets; use WebSocketRoute.ws_handler to register a function that handles them.
- OpenAPI Docs & Error Response Generator: Lihil creates smart, accurate OpenAPI schemas based on your routes/endpoints; union types and oneOf responses are all supported.
- Powerful Plugin System: Lihil features a sophisticated plugin architecture that allows seamless integration of external libraries as if they were built-in components. Create custom plugins to extend functionality or integrate third-party services effortlessly.
- Strong Support for AI Features: lihil treats AI as a main use case; AI-related features such as SSE, MCP, and remote handlers will be implemented in the next few patches. There will also be tutorials on how to develop your own AI agent/chatbot using lihil.
- ASGI Compatibility & Vendored Types from Starlette:
  - Lihil is ASGI compatible and works well with uvicorn and other ASGI servers.
  - ASGI middlewares that work for any ASGIApp should also work with lihil, including those from Starlette.
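As referenced in the dependency-injection bullet above, a minimal sketch following the deps=[...] pattern from the quickstart (the Cache class and its method are illustrative stand-ins):

from lihil import Route

class Cache:
    # a stand-in dependency; in the quickstart, AsyncOpenAI fills this role
    async def get(self, key: str) -> str | None:
        return None

items = Route("/items", deps=[Cache])  # declare the dependency on the route

@items.sub("/{key}").get
async def read_item(key: str, cache: Cache) -> dict:
    # `cache` is resolved from the route's dependency graph via its type hint,
    # while `key` is parsed from the path.
    return {"key": key, "cached": await cache.get(key)}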
Lihil's plugin system enables you to integrate external libraries seamlessly into your application as if they were built-in features. Any plugin that implements the IPlugin protocol can access endpoint information and wrap functionality around your endpoints.
When you apply multiple plugins like @app.sub("/api/data").get(plugins=[plugin1.dec, plugin2.dec]), here's how they execute:
Plugin Application (Setup Time - Left to Right)
┌─────────────────────────────────────────────────────────────┐
│ original_func → plugin1(ep_info) → plugin2(ep_info) │
│ │
│ Result: plugin2(plugin1(original_func)) │
└─────────────────────────────────────────────────────────────┘
Request Execution (Runtime - Nested/Onion Pattern)
┌────────────────────────────────────────────────────────────┐
│ │
│ Request │
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Plugin2 (Outermost) │ │
│ │ ┌─────────────────────────────────────────────────┐ │ │
│ │ │ Plugin1 (Middle) │ │ │
│ │ │ ┌─────────────────────────────────────────────┐ │ │ │
│ │ │ │ Original Function (Core) │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ │ async def get_data(): │ │ │ │
│ │ │ │ return {"data": "value"} │ │ │ │
│ │ │ │ │ │ │ │
│ │ │ └─────────────────────────────────────────────┘ │ │ │
│ │ └─────────────────────────────────────────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ▼ │
│ Response │
│ │
└────────────────────────────────────────────────────────────┘
Request → Plugin2 → Plugin1 → get_data() → Plugin1 → Plugin2 → Response
@app.sub("/api").get(plugins=[
plugin.timeout(5), # Applied 1st → Executes Outermost
plugin.retry(max_attempts=3), # Applied 2nd → Executes Middle
plugin.cache(expire_s=60), # Applied 3rd → Executes Innermost
])
Flow: Request → timeout → retry → cache → endpoint → cache → retry → timeout → Response
A plugin is anything that implements the IPlugin protocol: either a callable or a class with a decorate method.
from typing import Awaitable, Callable

from lihil.plugins.interface import IPlugin, IEndpointInfo
from lihil.interface import IAsyncFunc, P, R

class MyCustomPlugin:
    """Plugin that integrates external libraries with lihil endpoints"""

    def __init__(self, external_service):
        self.service = external_service

    def decorate(self, ep_info: IEndpointInfo[P, R]) -> Callable[P, Awaitable[R]]:
        """
        Access endpoint info and wrap functionality around it.

        ep_info contains:
        - ep_info.func: The original endpoint function
        - ep_info.sig: Parsed signature with type information
        - ep_info.graph: Dependency injection graph
        """
        original_func = ep_info.func

        async def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
            # Pre-processing with external library
            await self.service.before_request(ep_info.sig)
            try:
                result = await original_func(*args, **kwargs)
                # Post-processing with external library
                return await self.service.process_result(result)
            except Exception as e:
                # Error handling with external library
                await self.service.handle_error(e)
                raise

        return wrapper

# Usage - integrate any external library
from some_external_lib import ExternalService

plugin = MyCustomPlugin(ExternalService())

@app.sub("/api/data").get(plugins=[plugin.decorate])
async def get_data() -> dict:
    return {"data": "value"}
Interface

class IEndpointInfo(Protocol, Generic[P, R]):
    @property
    def graph(self) -> Graph: ...

    @property
    def func(self) -> IAsyncFunc[P, R]: ...

    @property
    def sig(self) -> EndpointSignature[R]: ...

class EndpointSignature(Base, Generic[R]):
    route_path: str

    query_params: ParamMap[QueryParam[Any]]
    path_params: ParamMap[PathParam[Any]]
    header_params: ParamMap[HeaderParam[Any] | CookieParam[Any]]
    body_param: tuple[str, BodyParam[bytes | FormData, Struct]] | None

    dependencies: ParamMap[DependentNode]
    transitive_params: set[str]
    """
    Transitive params are parameters required by dependencies,
    but not directly required by the endpoint function.
    """
    plugins: ParamMap[PluginParam]

    scoped: bool
    form_meta: FormMeta | None

    return_params: dict[int, EndpointReturn[R]]

    @property
    def default_return(self) -> EndpointReturn[R]: ...

    @property
    def status_code(self) -> int: ...

    @property
    def encoder(self) -> Callable[[Any], bytes]: ...

    @property
    def static(self) -> bool: ...

    @property
    def media_type(self) -> str: ...
This architecture allows you to:
- Integrate any external library as if it were built-in to lihil
- Access full endpoint context - signatures, types, dependency graphs
- Wrap functionality around endpoints with full control
- Compose multiple plugins for complex integrations
- Zero configuration - plugins work automatically based on decorators
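Because IPlugin also admits a plain callable (not only a class with a decorate method), here is a hedged sketch of a function-style plugin; the timing behavior is an illustrative assumption, but the IEndpointInfo fields it touches are the ones documented in the interface above:

import time
from typing import Awaitable, Callable

from lihil.interface import P, R
from lihil.plugins.interface import IEndpointInfo

def timing_plugin(ep_info: IEndpointInfo[P, R]) -> Callable[P, Awaitable[R]]:
    func = ep_info.func  # the original endpoint function

    async def timed(*args: P.args, **kwargs: P.kwargs) -> R:
        start = time.perf_counter()
        try:
            return await func(*args, **kwargs)
        finally:
            # route_path comes from EndpointSignature (see the interface above)
            print(f"{ep_info.sig.route_path} took {time.perf_counter() - start:.4f}s")

    return timed

# Usage: @app.sub("/api/data").get(plugins=[timing_plugin])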
Lihil provides a powerful and flexible error handling system based on the RFC 9457 Problem Details specification. The HTTPException class extends DetailBase and allows you to create structured, consistent error responses with rich metadata.
By default, Lihil automatically generates problem details from your exception class:
from lihil import HTTPException

class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

# Usage in endpoint
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
    if not user_exists(user_id):
        raise UserNotFound(f"User with ID {user_id} not found")
    return get_user_data(user_id)
This will produce a JSON response like:
{
  "type": "user-not-found",
  "title": "The user you are looking for does not exist",
  "status": 404,
  "detail": "User with ID 123 not found",
  "instance": "/users/123"
}
- Problem Type: Automatically generated from the class name in kebab-case (UserNotFound → user-not-found)
- Problem Title: Taken from the class docstring
- Status Code: Set via the __status__ class attribute (defaults to 422)
You can customize the problem type and title using class attributes:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404
    __problem_type__ = "user-lookup-failed"
    __problem_title__ = "User Lookup Failed"
You can also override problem details at runtime:
@app.sub("/users/{user_id}").get
async def get_user(user_id: str):
if not user_exists(user_id):
raise UserNotFound(
detail=f"User with ID {user_id} not found",
problem_type="custom-user-error",
problem_title="Custom User Error",
status=404
)
return get_user_data(user_id)
For fine-grained control over how your exception transforms into a ProblemDetail object:
from lihil.interface.problem import ProblemDetail

class ValidationError(HTTPException[dict]):
    """Request validation failed"""
    __status__ = 400

    def __problem_detail__(self, instance: str) -> ProblemDetail[dict]:
        return ProblemDetail(
            type_="validation-error",
            title="Request Validation Failed",
            status=400,
            detail=self.detail,
            instance=f"users/{instance}",
        )

# Usage
@app.sub("/users/{user_id}").post
async def update_user(user_data: UserUpdate):
    validation_errors = validate_user_data(user_data)
    if validation_errors:
        raise ValidationError(title="Updating user failed")
    return create_user_in_db(user_data)
Customize how your exceptions appear in OpenAPI documentation:
class UserNotFound(HTTPException[str]):
    """The user you are looking for does not exist"""
    __status__ = 404

    @classmethod
    def __json_example__(cls) -> ProblemDetail[str]:
        return ProblemDetail(
            type_="user-not-found",
            title="User Not Found",
            status=404,
            detail="User with ID 'user123' was not found in the system",
            instance="/api/v1/users/user123"
        )
This is especially useful for providing realistic examples in your API documentation, including specific detail and instance values that Lihil cannot automatically resolve from class attributes.
from typing import Generic, TypeVar

T = TypeVar('T')

class ResourceNotFound(HTTPException[T], Generic[T]):
    """The requested resource was not found"""
    __status__ = 404

    def __init__(self, detail: T, resource_type: str):
        super().__init__(detail)
        self.resource_type = resource_type

    def __problem_detail__(self, instance: str) -> ProblemDetail[T]:
        return ProblemDetail(
            type_=f"{self.resource_type}-not-found",
            title=f"{self.resource_type.title()} Not Found",
            status=404,
            detail=self.detail,
            instance=instance
        )

# Usage
@app.sub("/posts/{post_id}").get
async def get_post(post_id: str):
    if not post_exists(post_id):
        raise ResourceNotFound(
            detail=f"Post {post_id} does not exist",
            resource_type="post"
        )
    return get_post_data(post_id)
- Consistency: All error responses follow RFC 9457 Problem Details specification
- Developer Experience: Rich type information and clear error messages
- Documentation: Automatic OpenAPI schema generation with examples
- Flexibility: Multiple levels of customization from simple to advanced
- Traceability: Built-in problem page links in OpenAPI docs for debugging
The error handling system integrates seamlessly with Lihil's OpenAPI documentation generation, providing developers with comprehensive error schemas and examples in the generated API docs.
Using AI coding assistants with Lihil? Check out LIHIL_COPILOT.md for:
- AI Agent Best Practices - Comprehensive guide for AI assistants working with Lihil
- Common Mistakes & Solutions - Learn from real AI agent errors and how to avoid them
- Complete Templates - Ready-to-use patterns that AI agents can follow
- Lihil vs FastAPI Differences - Critical syntax differences AI agents must know
- How to Use as Prompt - Instructions for Claude Code, Cursor, ChatGPT, and GitHub Copilot
Quick Setup: Copy the entire LIHIL_COPILOT.md content and paste it as system context in your AI tool. This ensures your AI assistant understands Lihil's unique syntax and avoids FastAPI assumptions.
Check our detailed tutorials at https://lihil.cc, covering
- Core concepts: creating endpoints, routes, middlewares, etc.
- Configuring your app via pyproject.toml or via command-line arguments
- Dependency Injection & Plugins
- Testing (see the sketch after this list)
- Type-Based Message System: event listeners, atomic event handling, etc.
- Error Handling
- ...and much more
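On the testing front, a minimal sketch using Starlette's TestClient, which lihil supports; the Lihil(route) constructor and @route.get decorator forms are assumptions based on patterns elsewhere in this README:

from starlette.testclient import TestClient

from lihil import Lihil, Route

hello = Route("/hello")

@hello.get
async def say_hello() -> dict:
    return {"msg": "hello"}

lhl = Lihil(hello)  # assemble the app from the route (assumed constructor form)

def test_say_hello():
    # TestClient drives the ASGI app in-process; no running server needed
    client = TestClient(lhl)
    resp = client.get("/hello")
    assert resp.status_code == 200
    assert resp.json() == {"msg": "hello"}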
See how lihil works in practice in lihil-fullstack-solopreneur-template, a production-ready full-stack template that uses React and lihil, covering real-world usage and best practices. A full-stack template for my fellow solopreneurs, it uses shadcn + tailwindcss + react + lihil + sqlalchemy + supabase + vercel + cloudflare to end modern slavery.
lihil follows semantic versioning after v1.0.0, where a version in x.y.z represents:
- x: major, breaking change
- y: minor, feature updates
- z: patch, bug fixes, typing updates
We welcome all contributions! Whether you're fixing bugs, adding features, improving documentation, or enhancing tests - every contribution matters.
- Fork & Clone: Fork the repository and clone your fork
-
Find Latest Branch: Use
git branch -r | grep "version/"
to find the latest development branch (e.g.,version/0.2.23
) - Create Feature Branch: Branch from the latest version branch
- Make Changes: Follow existing code conventions and add tests
- Submit PR: Target your PR to the latest development branch
For detailed contributing guidelines, workflow, and project conventions, see our Contributing Guide.
- [x] v0.1.x: Feature parity (alpha stage)
Implementing core functionalities of lihil, with feature parity with FastAPI
- [x] v0.2.x: Official plugins (current stage)
We will keep adding new features & plugins to lihil without making breaking changes. This might be the last minor version before v1.0.0.
- [ ] v0.3.x: Performance boost
The plan is to rewrite some components in C, roll out a server in C, or make other performance optimizations in 0.3.x.
If we can do this without affecting the current implementations in 0.2.x at all, 0.3.x may never occur and we will go straight to v1.0.0 from v0.2.x.
Similar Open Source Tools


WebAI-to-API
This project implements a web API that offers a unified interface to Google Gemini and Claude 3. It provides a self-hosted, lightweight, and scalable solution for accessing these AI models through a streaming API. The API supports both Claude and Gemini models, allowing users to interact with them in real-time. The project includes a user-friendly web UI for configuration and documentation, making it easy to get started and explore the capabilities of the API.

aichildedu
AICHILDEDU is a microservice-based AI education platform for children that integrates LLMs, image generation, and speech synthesis to provide personalized storybook creation, intelligent conversational learning, and multimedia content generation. It offers features like personalized story generation, educational quiz creation, multimedia integration, age-appropriate content, multi-language support, user management, parental controls, and asynchronous processing. The platform follows a microservice architecture with components like API Gateway, User Service, Content Service, Learning Service, and AI Services. Technologies used include Python, FastAPI, PostgreSQL, MongoDB, Redis, LangChain, OpenAI GPT models, TensorFlow, PyTorch, Transformers, MinIO, Elasticsearch, Docker, Docker Compose, and JWT-based authentication.

LightRAG
LightRAG is a repository hosting the code for LightRAG, a system that supports seamless integration of custom knowledge graphs, Oracle Database 23ai, Neo4J for storage, and multiple file types. It includes features like entity deletion, batch insert, incremental insert, and graph visualization. LightRAG provides an API server implementation for RESTful API access to RAG operations, allowing users to interact with it through HTTP requests. The repository also includes evaluation scripts, code for reproducing results, and a comprehensive code structure.

agent-sdk-go
Agent Go SDK is a powerful Go framework for building production-ready AI agents that seamlessly integrates memory management, tool execution, multi-LLM support, and enterprise features into a flexible, extensible architecture. It offers core capabilities like multi-model intelligence, modular tool ecosystem, advanced memory management, and MCP integration. The SDK is enterprise-ready with built-in guardrails, complete observability, and support for enterprise multi-tenancy. It provides a structured task framework, declarative configuration, and zero-effort bootstrapping for development experience. The SDK supports environment variables for configuration and includes features like creating agents with YAML configuration, auto-generating agent configurations, using MCP servers with an agent, and CLI tool for headless usage.

polyfire-js
Polyfire is an all-in-one managed backend for AI apps that allows users to build AI apps directly from the frontend, eliminating the need for a separate backend. It simplifies the process by providing most backend services in just a few lines of code. With Polyfire, users can easily create chatbots, transcribe audio files to text, generate simple text, create a long-term memory, and generate images with Dall-E. The tool also offers starter guides and tutorials to help users get started quickly and efficiently.

memento-mcp
Memento MCP is a scalable, high-performance knowledge graph memory system designed for LLMs. It offers semantic retrieval, contextual recall, and temporal awareness to any LLM client supporting the model context protocol. The system is built on core concepts like entities and relations, utilizing Neo4j as its storage backend for unified graph and vector search capabilities. With advanced features such as semantic search, temporal awareness, confidence decay, and rich metadata support, Memento MCP provides a robust solution for managing knowledge graphs efficiently and effectively.

g4f.dev
G4f.dev is the official documentation hub for GPT4Free, a free and convenient AI tool with endpoints that can be integrated directly into apps, scripts, and web browsers. The documentation provides clear overviews, quick examples, and deeper insights into the major features of GPT4Free, including text and image generation. Users can choose between Python and JavaScript for installation and setup, and can access various API endpoints, providers, models, and client options for different tasks.

pilottai
PilottAI is a Python framework for building autonomous multi-agent systems with advanced orchestration capabilities. It provides enterprise-ready features for building scalable AI applications. The framework includes hierarchical agent systems, production-ready features like asynchronous processing and fault tolerance, advanced memory management with semantic storage, and integrations with multiple LLM providers and custom tools. PilottAI offers specialized agents for various tasks such as customer service, document processing, email handling, knowledge acquisition, marketing, research analysis, sales, social media, and web search. The framework also provides documentation, example use cases, and advanced features like memory management, load balancing, and fault tolerance.

mcp-omnisearch
mcp-omnisearch is a Model Context Protocol (MCP) server that acts as a unified gateway to multiple search providers and AI tools. It integrates Tavily, Perplexity, Kagi, Jina AI, Brave, Exa AI, and Firecrawl to offer a wide range of search, AI response, content processing, and enhancement features through a single interface. The server provides powerful search capabilities, AI response generation, content extraction, summarization, web scraping, structured data extraction, and more. It is designed to work flexibly with the API keys available, enabling users to activate only the providers they have keys for and easily add more as needed.

aider-desk
AiderDesk is a desktop application that enhances coding workflow by leveraging AI capabilities. It offers an intuitive GUI, project management, IDE integration, MCP support, settings management, cost tracking, structured messages, visual file management, model switching, code diff viewer, one-click reverts, and easy sharing. Users can install it by downloading the latest release and running the executable. AiderDesk also supports Python version detection and auto update disabling. It includes features like multiple project management, context file management, model switching, chat mode selection, question answering, cost tracking, MCP server integration, and MCP support for external tools and context. Development setup involves cloning the repository, installing dependencies, running in development mode, and building executables for different platforms. Contributions from the community are welcome following specific guidelines.

mcp-documentation-server
The mcp-documentation-server is a lightweight server application designed to serve documentation files for projects. It provides a simple and efficient way to host and access project documentation, making it easy for team members and stakeholders to find and reference important information. The server supports various file formats, such as markdown and HTML, and allows for easy navigation through the documentation. With mcp-documentation-server, teams can streamline their documentation process and ensure that project information is easily accessible to all involved parties.

UnrealGenAISupport
The Unreal Engine Generative AI Support Plugin is a tool designed to integrate various cutting-edge LLM/GenAI models into Unreal Engine for game development. It aims to simplify the process of using AI models for game development tasks, such as controlling scene objects, generating blueprints, running Python scripts, and more. The plugin currently supports models from organizations like OpenAI, Anthropic, XAI, Google Gemini, Meta AI, Deepseek, and Baidu. It provides features like API support, model control, generative AI capabilities, UI generation, project file management, and more. The plugin is still under development but offers a promising solution for integrating AI models into game development workflows.

quantalogic
QuantaLogic is a ReAct framework for building advanced AI agents that seamlessly integrates large language models with a robust tool system. It aims to bridge the gap between advanced AI models and practical implementation in business processes by enabling agents to understand, reason about, and execute complex tasks through natural language interaction. The framework includes features such as ReAct Framework, Universal LLM Support, Secure Tool System, Real-time Monitoring, Memory Management, and Enterprise Ready components.

klavis
Klavis AI is a production-ready solution for managing Multiple Communication Protocol (MCP) servers. It offers self-hosted solutions and a hosted service with enterprise OAuth support. With Klavis AI, users can easily deploy and manage over 50 MCP servers for various services like GitHub, Gmail, Google Sheets, YouTube, Slack, and more. The tool provides instant access to MCP servers, seamless authentication, and integration with AI frameworks, making it ideal for individuals and businesses looking to streamline their communication and data management workflows.

search_with_ai
Build your own conversation-based search with AI, a simple implementation with Node.js & Vue3 (live demo available). Features: built-in support for LLMs (OpenAI, Google, Lepton, Ollama (free)); built-in support for search engines (Bing, Sogou, Google, SearXNG (free)); a customizable, pretty UI; dark mode; mobile display; local LLMs with Ollama; i18n; and continued Q&A with context.
For similar tasks

spatz
Spatz is a complete, fullstack template for Svelte that includes features such as Sveltekit for building fast web apps, Pocketbase for User Auth and Database, OpenAI for chatbots, Vercel AI SDK for AI/ML models, TailwindCSS for UI development, DaisyUI for components, and Zod for schema declaration and validation. The template provides a structured project setup with components, stores, routes, and APIs. It also offers theming and styling options with pre-loaded themes from DaisyUI. Contributions are welcomed through feature requests or pull requests.

mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It offers an intuitive interface for UI novices, frictionless developer workflows with hot reload and IDE support, and flexibility to build custom UIs without the need for JavaScript/CSS/HTML. Mesop allows users to write UI in idiomatic Python code and compose UI into components using Python functions. It is used at Google for internal app development and provides a quick way to build delightful web apps in Python.

spatz-2
Spatz-2 is a complete, fullstack template for Svelte, utilizing technologies such as Sveltekit, Pocketbase, OpenAI, Vercel AI SDK, TailwindCSS, svelte-animations, and Zod. It offers features like user authentication, admin dashboard, dark/light mode themes, AI chatbot, guestbook, and forms with client/server validation. The project structure includes components, stores, routes, APIs, and icons. Spatz-2 aims to provide a futuristic web framework for building fast web apps with advanced functionalities and easy customization.

ryoma
Ryoma is an AI Powered Data Agent framework that offers a comprehensive solution for data analysis, engineering, and visualization. It leverages cutting-edge technologies like Langchain, Reflex, Apache Arrow, Jupyter Ai Magics, Amundsen, Ibis, and Feast to provide seamless integration of language models, build interactive web applications, handle in-memory data efficiently, work with AI models, and manage machine learning features in production. Ryoma also supports various data sources like Snowflake, Sqlite, BigQuery, Postgres, MySQL, and different engines like Apache Spark and Apache Flink. The tool enables users to connect to databases, run SQL queries, and interact with data and AI models through a user-friendly UI called Ryoma Lab.

fragments
Fragments is an open-source tool that leverages Anthropic's Claude Artifacts, Vercel v0, and GPT Engineer. It is powered by E2B Sandbox SDK and Code Interpreter SDK, allowing secure execution of AI-generated code. The tool is based on Next.js 14, shadcn/ui, TailwindCSS, and Vercel AI SDK. Users can stream in the UI, install packages from npm and pip, and add custom stacks and LLM providers. Fragments enables users to build web apps with Python interpreter, Next.js, Vue.js, Streamlit, and Gradio, utilizing providers like OpenAI, Anthropic, Google AI, and more.


enferno
Enferno is a modern Flask framework optimized for AI-assisted development workflows. It combines carefully crafted development patterns, smart Cursor Rules, and modern libraries to enable developers to build sophisticated web applications with unprecedented speed. Enferno's intelligent patterns and contextual guides help create production-ready SAAS applications faster than ever. It includes features like modern stack, authentication, OAuth integration, database support, task queue, frontend components, security measures, Docker readiness, and more.

mesop
Mesop is a Python-based UI framework designed for rapid web app development, particularly for demos and internal apps. It allows users to write UI in Python code, offers reactive UI paradigm, ready-to-use components, hot reload feature, rich IDE support, and the ability to build custom UIs without writing Javascript/CSS/HTML. Mesop is intuitive for UI novices, provides frictionless developer workflows, and is flexible for creating delightful demos. It is used at Google for rapid internal app development.
For similar jobs

sweep
Sweep is an AI junior developer that turns bugs and feature requests into code changes. It automatically handles developer experience improvements like adding type hints and improving test coverage.

teams-ai
The Teams AI Library is a software development kit (SDK) that helps developers create bots that can interact with Teams and Microsoft 365 applications. It is built on top of the Bot Framework SDK and simplifies the process of developing bots that interact with Teams' artificial intelligence capabilities. The SDK is available for JavaScript/TypeScript, .NET, and Python.

ai-guide
This guide is dedicated to Large Language Models (LLMs) that you can run on your home computer. It assumes your PC is a lower-end, non-gaming setup.

classifai
Supercharge WordPress Content Workflows and Engagement with Artificial Intelligence. Tap into leading cloud-based services like OpenAI, Microsoft Azure AI, Google Gemini and IBM Watson to augment your WordPress-powered websites. Publish content faster while improving SEO performance and increasing audience engagement. ClassifAI integrates Artificial Intelligence and Machine Learning technologies to lighten your workload and eliminate tedious tasks, giving you more time to create original content that matters.

chatbot-ui
Chatbot UI is an open-source AI chat app that allows users to create and deploy their own AI chatbots. It is easy to use and can be customized to fit any need. Chatbot UI is perfect for businesses, developers, and anyone who wants to create a chatbot.

BricksLLM
BricksLLM is a cloud native AI gateway written in Go. Currently, it provides native support for OpenAI, Anthropic, Azure OpenAI and vLLM. BricksLLM aims to provide enterprise level infrastructure that can power any LLM production use cases. Here are some use cases for BricksLLM: * Set LLM usage limits for users on different pricing tiers * Track LLM usage on a per user and per organization basis * Block or redact requests containing PIIs * Improve LLM reliability with failovers, retries and caching * Distribute API keys with rate limits and cost limits for internal development/production use cases * Distribute API keys with rate limits and cost limits for students

uAgents
uAgents is a Python library developed by Fetch.ai that allows for the creation of autonomous AI agents. These agents can perform various tasks on a schedule or take action on various events. uAgents are easy to create and manage, and they are connected to a fast-growing network of other uAgents. They are also secure, with cryptographically secured messages and wallets.

griptape
Griptape is a modular Python framework for building AI-powered applications that securely connect to your enterprise data and APIs. It offers developers the ability to maintain control and flexibility at every step. Griptape's core components include Structures (Agents, Pipelines, and Workflows), Tasks, Tools, Memory (Conversation Memory, Task Memory, and Meta Memory), Drivers (Prompt and Embedding Drivers, Vector Store Drivers, Image Generation Drivers, Image Query Drivers, SQL Drivers, Web Scraper Drivers, and Conversation Memory Drivers), Engines (Query Engines, Extraction Engines, Summary Engines, Image Generation Engines, and Image Query Engines), and additional components (Rulesets, Loaders, Artifacts, Chunkers, and Tokenizers). Griptape enables developers to create AI-powered applications with ease and efficiency.