
starter-monorepo
Monorepo with 🤖 AI goodies | LLM Chat | 🔥Hono + OpenAPI & RPC, Nuxt, Convex, SST Ion, Kinde Auth, Tanstack Query, Shadcn, UnoCSS, Spreadsheet I18n, Lingo.dev
Stars: 153

Starter Monorepo is a template repository for setting up a monorepo structure in your project. It provides a basic setup with configurations for managing multiple packages within a single repository. This template includes tools for package management, versioning, testing, and deployment. By using this template, you can streamline your development process, improve code sharing, and simplify dependency management across your project. Whether you are working on a small project or a large-scale application, Starter Monorepo can help you organize your codebase efficiently and enhance collaboration among team members.
README:
- starter-monorepo
This is a base monorepo starter template to kick-start your beautifully organized project, whether it's a fullstack project, a monorepo of multiple libraries and applications, or even just one API server and its related infrastructure deployment and utilities.
Out-of-the-box with the included apps, we have a fullstack project: a `frontend` Nuxt 4 app, a main `backend` using Hono, and a `backend-convex` Convex app.
- General APIs, such as authentication, are handled by the main `backend`, which is designed to be serverless-compatible and can be deployed anywhere, allowing for the best possible latency, performance, and cost according to your needs.
- `backend-convex` is a modular, add-in backend, used to power features like the AI Chat.
It is recommended to use an AI Agent (Roo Code recommended) to help you set up the monorepo according to your needs; see Utilities.
⏩ This template is powered by Turborepo.
🚀 Out-of-the-box, this repo is configured for an SSG `frontend` Nuxt app and a `backend` Hono app that will be the main API, to optimize for cost and simplicity.
- The starter kit is still configured for 100% SSR support: simply change the `apps/frontend` build script to `nuxt build` to enable SSR building, as sketched below.
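A minimal sketch of that script change, assuming the SSG setup currently uses `nuxt generate` (the template's actual script contents may differ):

```jsonc
// apps/frontend/package.json — illustrative fragment only
{
  "scripts": {
    // was something like "nuxt generate" for the SSG setup
    "build": "nuxt build"
  }
}
```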
🌩️ SST Ion, an Infrastructure-as-Code solution, with powerful Live development.
- SST is 100% opt-in, by using the `sst` CLI commands yourself, like `sst dev`; simply remove the `sst` dependency and `sst.config.ts` if you want to use another solution.
- Currently only the `backend` app is configured, which will deploy a Lambda with Function URL enabled.
🔐 Comes with a starter-kit for Kinde typescript-sdk, see: /apps/backend/api/auth
- Add your env variables, activate the auth routes, profit$
- Please note that by default `backend` comes with a cookies-based session manager, which has great DX and security and does not require an external database (which also means great performance), but as the `backend` is decoupled from Nuxt's SSR server, it will not work well with SSR (the session/auth state is not shared).
So if you use SSR, you could use the official Nuxt Kinde module or implement your own way to manage the session at `apps/backend/src/middlewares/session.ts`; a rough sketch follows.
- If you have a good session manager implementation, a PR is greatly appreciated!
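For orientation, a minimal sketch of what a cookie-backed session manager could look like; the method names follow Kinde's `SessionManager` interface as I recall it (verify against the SDK's types), and the storage details here are assumptions, not this repo's actual implementation:

```ts
// Hypothetical sketch for apps/backend/src/middlewares/session.ts.
import type { Context } from 'hono'
import { deleteCookie, getCookie, setCookie } from 'hono/cookie'

// Returns an object matching @kinde-oss/kinde-typescript-sdk's SessionManager
// shape (getSessionItem / setSessionItem / removeSessionItem / destroySession).
export function cookieSessionManager(c: Context) {
  return {
    async getSessionItem(key: string) {
      return getCookie(c, key) ?? null
    },
    async setSessionItem(key: string, value: unknown) {
      const raw = typeof value === 'string' ? value : JSON.stringify(value)
      setCookie(c, key, raw, { httpOnly: true, secure: true, sameSite: 'Lax' })
    },
    async removeSessionItem(key: string) {
      deleteCookie(c, key)
    },
    async destroySession() {
      // Illustrative only: a real implementation would clear every session cookie it set.
    },
  }
}
```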
🎯 JS is always TypeScript where possible.
Work started on 2025-06-12 for the T3 Chat Cloneathon competition, with no prior AI SDK and chat-streams experience, but I think I did an amazing job 🫡!
The focus of the project is broader adoption, prioritizing easy-to-access UI/UX; bleeding-edge features like workflows are a low priority, though advanced per-model capabilities and fine-tuning are still expected to be elegantly supported via the model's interface. #48
A super efficient and powerful, yet friendly LLM Chat system, featuring:
- Business-ready: supports a `hosted` provider that you can control the billing of.
- Supports other add-in BYOK providers, like `OpenAI`, `OpenRouter`,...
- Seamless authentication integration with the main `backend`.
- Beautiful syntax highlighting 🌈.
- Thread branching, freezing, and sharing.
- Real-time, multi-agent, multi-user support ¹.
  - Invite your family and friends, and play with the Agents together in real-time.
  - Or maybe invite your colleagues, and brainstorm together with the help and power of AI.
- Resumable and multi-stream ¹.
  - Ask follow-up questions while the previous one isn't done; the model is able to pick up what's currently available 😳😳.
  - Multiple users can send messages at the same time 😲😲.
- Easy and private: guest, anonymous usage supported.
  - Your dad can just join and chat with a shared link 😁, no setup needed.
- Mobile-friendly.
- Fully internationalized, with AI-powered translations and smooth switching between languages.
- Blazingly fast ⚡ with local caching and optimistic updates.
- Designed to be scalable.
  - Things are isolated and common interfaces are defined and utilized where possible; there are no tightly coupled hacks that prevent future scaling, things just work, elegantly.
- Any AI provider that is compatible with the `@ai-sdk` interface can be added in a few lines of code, I just don't want to bloat the UI by adding all of them (see the sketch after these notes).
- ¹: currently the "stream" received when resuming, or for other real-time users in the same thread, is implemented via a custom polling mechanism, not SSE. It is intentionally chosen to be this way for a more minimal infrastructure setup and wider hosting support, so smaller user groups can host their own version easily; it is still very performant and efficient.
  - There is boilerplate code for SSE resume support; you can simply add a pub-sub to the backend and switch to using SSE resume in `ChatInterface`.
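To illustrate the provider point above, a hedged sketch of how an `@ai-sdk`-compatible provider could be wired in; the registry function and model ids below are hypothetical, not this repo's actual code:

```ts
// Hypothetical model registry sketch; any @ai-sdk-compatible provider plugs in alike.
import { createOpenAI } from '@ai-sdk/openai'
import { createOpenRouter } from '@openrouter/ai-sdk-provider'
import type { LanguageModel } from 'ai'

// Assumed shape: map a UI-facing model id to an @ai-sdk LanguageModel.
export function resolveModel(id: string, apiKey: string): LanguageModel {
  switch (id) {
    case 'openai:gpt-4o-mini':
      return createOpenAI({ apiKey })('gpt-4o-mini')
    case 'openrouter:llama-3.3-70b':
      return createOpenRouter({ apiKey })('meta-llama/llama-3.3-70b-instruct')
    default:
      throw new Error(`Unknown model id: ${id}`)
  }
}
```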
- By default, the frontend `/api/*` routes are proxied to the `backendUrl`.
- The `rpcApi` plugin will call the `/api/*` proxy if they're on the same domain but different ports (e.g. 127.0.0.1).
  - This mimics a production environment where the static frontend and the backend live on the same domain at /api, which is the most efficient configuration for CloudFront + Lambda Function URL, or Cloudflare Workers.
- If the `frontend` and `backend` are on different domains, then the backend will be called directly without the proxy.
- This can be configured in the frontend's `app.config.ts`; a generic sketch of the proxy idea follows.
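As a generic illustration of the same-domain /api proxy (this template wires it through the frontend's `app.config.ts`; the Nitro route-rule form and URL below are just one assumed way to do it in a Nuxt app):

```ts
// nuxt.config.ts — illustrative only; placeholder backend URL.
export default defineNuxtConfig({
  routeRules: {
    // Forward /api/* to the backend so frontend and API share one domain.
    '/api/**': { proxy: 'https://backend.example.com/api/**' },
  },
})
```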
- `backend-convex`: a Convex app.
- `@local/locales`: a shared central locales/i18n data library powered by spreadsheet-i18n.
  - 🌐✨🤖 AUTOMATIC localization with AI, powered by lingo.dev, just `pnpm run i18n`.
  - 🔄️ Hot-reload and automatic-reload supported, changes are reflected in apps (`frontend`, `backend`) instantly.
- `@local/common`: a shared library that can contain constants, functions, types.
- `@local/common-vue`: a shared library that can contain components, constants, functions, types for Vue-based apps.
- `tsconfig`: `tsconfig.json`s used throughout the monorepo.
This Turborepo has some additional tools already set up for you:
- 🔧 ESLint + stylistic formatting rules (antfu)
- 📦🚢 `repo-release` script: easily generate a changelog, bump versions, create a GitHub release, and publish your packages to npm.
- 🎁 A few more goodies like:
  - lint-staged pre-commit hook
- 🤖 Initialization prompt for AI Agents to modify the monorepo according to your needs.
  - To start, open the chat with your AI Agent, and include the `INIT_PROMPT.md` file in your prompt.
To build all apps and packages, run the following command:
pnpm run build
If you just want a quick look, without having to set up anything, you can use `pnpm run dev:noConvex`; this will skip `backend-convex`, which is the only component that requires initial setup, though this of course means related features are disabled.
To develop all apps and packages, run the following command:
pnpm run dev
For local development environment variables / secrets, create a copy of `.env.dev` as `.env.dev.local`.
Guide to set up local development for Cloudflare `workerd` runtime testing:
- (Optional) Run the Convex dev server if you use Convex.
- Configure `wrangler.jsonc`, specifically the `vars` block, so that it properly targets the local env (see the illustrative fragment after this list).
- (Optional) Configure `apps/frontend/.env.workerd.dev` if you use Convex or a different IP/port.
- Build a new static dist for local dev:
  - Start the local `backend` server.
  - Run the `build:workerdLocal` script for `frontend`.
- Run `pnpm dlx wrangler dev` to start the wrangler dev server.
  - `workerd` does not work with Alpine Linux, so if you use the included Dev Container, change the base image to some other distro.
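An illustrative `vars` fragment, assuming a local Convex dev server on its default port; the values below are placeholders, not the template's actual config:

```jsonc
// wrangler.jsonc — illustrative fragment only
{
  "vars": {
    // Placeholder: point the frontend at the local Convex dev server.
    "NUXT_PUBLIC_CONVEX_URL": "http://127.0.0.1:3210"
  }
}
```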
- You can add your custom deploy instructions in the `deploy` script and `scripts/deploy.sh` in each app; it could be a full script that deploys to a platform, or the necessary actions before some platform integration deploys it. `frontend` will only start building and deploying after all backends are deployed, to have context for SSG.
- The repo also contains some deployment preset samples:
  - Action to deploy frontend to GitHub Pages
  - Wrangler configured to deploy fullstack to Cloudflare, just run `npx wrangler deploy` or connect and deploy it through the Cloudflare Dashboard.
    - Wrangler will deploy `backend` and `frontend` at the same time, which might cause `frontend` to have old context for SSG; you should trigger a redeploy in such a case.
  - Deploy backend to Lambda via SST
- Some more deployment notes:
  - To enable deploying with Convex in production, simply rename the `_deploy` script to `deploy` in the `backend-convex` app, run the deploy script once manually to get Convex's production URL, and set it as the `NUXT_PUBLIC_CONVEX_URL` env in `frontend`'s `.env.prod` file or a CI / build machine env variable (a minimal sketch follows).
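A minimal sketch of that rename, assuming the script body is the standard Convex production deploy command (the template's actual `_deploy` contents may differ):

```jsonc
// apps/backend-convex/package.json — illustrative fragment only
{
  "scripts": {
    // was "_deploy"; renaming it to "deploy" opts it into the deploy pipeline
    "deploy": "convex deploy"
  }
}
```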
Imports should not be separated by empty lines, and should be sorted automatically by eslint.
The project comes with a `localcert` SSL certificate at `locals/common/dev` to enable HTTPS for local development, generated with mkcert. You can install mkcert, generate your own certificate and replace it, or install the `localcert.crt` into your trusted CAs to remove the untrusted-SSL warning; typical mkcert usage is sketched below.
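For reference, regenerating such a certificate with mkcert typically looks like the following; the output file names and hostnames are examples, so match them to what the template expects:
mkcert -install
mkcert -cert-file localcert.crt -key-file localcert.key localhost 127.0.0.1 ::1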
Turborepo can use a technique known as Remote Caching to share cache artifacts across machines, enabling you to share build caches with your team and CI/CD pipelines.
By default, Turborepo will cache locally. To enable Remote Caching you will need an account with Vercel. If you don't have an account you can create one, then enter the following commands:
npx turbo login
This will authenticate the Turborepo CLI with your Vercel account.
Next, you can link your Turborepo to your Remote Cache by running the following command from the root of your Turborepo:
npx turbo link
Learn more about the power of Turborepo:
Alternative AI tools for starter-monorepo
Similar Open Source Tools


ai-town
AI Town is a virtual town where AI characters live, chat, and socialize. This project provides a deployable starter kit for building and customizing your own version of AI Town. It features a game engine, database, vector search, auth, text model, deployment, pixel art generation, background music generation, and local inference. You can customize your own simulation by creating characters and stories, updating spritesheets, changing the background, and modifying the background music.

cagent
cagent is a powerful and easy-to-use multi-agent runtime that orchestrates AI agents with specialized capabilities and tools, allowing users to quickly build, share, and run a team of virtual experts to solve complex problems. It supports creating agents with YAML configuration, improving agents with MCP servers, and delegating tasks to specialists. Key features include multi-agent architecture, rich tool ecosystem, smart delegation, YAML configuration, advanced reasoning tools, and support for multiple AI providers like OpenAI, Anthropic, Gemini, and Docker Model Runner.

aider-composer
Aider Composer is a VSCode extension that integrates Aider into your development workflow. It allows users to easily add and remove files, toggle between read-only and editable modes, review code changes, use different chat modes, and reference files in the chat. The extension supports multiple models, code generation, code snippets, and settings customization. It has limitations such as lack of support for multiple workspaces, Git repository features, linting, testing, voice features, in-chat commands, and configuration options.

warc-gpt
WARC-GPT is an experimental retrieval augmented generation pipeline for web archive collections. It allows users to interact with WARC files, extract text, generate text embeddings, visualize embeddings, and interact with a web UI and API. The tool is highly customizable, supporting various LLMs, providers, and embedding models. Users can configure the application using environment variables, ingest WARC files, start the server, and interact with the web UI and API to search for content and generate text completions. WARC-GPT is designed for exploration and experimentation in exploring web archives using AI.

TypeGPT
TypeGPT is a Python application that enables users to interact with ChatGPT or Google Gemini from any text field in their operating system using keyboard shortcuts. It provides global accessibility, keyboard shortcuts for communication, and clipboard integration for larger text inputs. Users need to have Python 3.x installed along with specific packages and API keys from OpenAI for ChatGPT access. The tool allows users to run the program normally or in the background, manage processes, and stop the program. Users can use keyboard shortcuts like `/ask`, `/see`, `/stop`, `/chatgpt`, `/gemini`, `/check`, and `Shift + Cmd + Enter` to interact with the application in any text field. Customization options are available by modifying files like `keys.txt` and `system_prompt.txt`. Contributions are welcome, and future plans include adding support for other APIs and a user-friendly GUI.

spec-kit
Spec Kit is a tool designed to enable organizations to focus on product scenarios rather than writing undifferentiated code through Spec-Driven Development. It flips the script on traditional software development by making specifications executable, directly generating working implementations. The tool provides a structured process emphasizing intent-driven development, rich specification creation, multi-step refinement, and heavy reliance on advanced AI model capabilities for specification interpretation. Spec Kit supports various development phases, including 0-to-1 Development, Creative Exploration, and Iterative Enhancement, and aims to achieve experimental goals related to technology independence, enterprise constraints, user-centric development, and creative & iterative processes. The tool requires Linux/macOS (or WSL2 on Windows), an AI coding agent (Claude Code, GitHub Copilot, Gemini CLI, or Cursor), uv for package management, Python 3.11+, and Git.

bolt-python-ai-chatbot
The 'bolt-python-ai-chatbot' is a Slack chatbot app template that allows users to integrate AI-powered conversations into their Slack workspace. Users can interact with the bot in conversations and threads, send direct messages for private interactions, use commands to communicate with the bot, customize bot responses, and store user preferences. The app supports integration with Workflow Builder, custom language models, and different AI providers like OpenAI, Anthropic, and Google Cloud Vertex AI. Users can create user objects, manage user states, and select from various AI models for communication.

aides-jeunes
The user interface (and the main server) of the simulator of aids and social benefits for young people. It is based on the free socio-fiscal simulator Openfisca.

airbyte_serverless
AirbyteServerless is a lightweight tool designed to simplify the management of Airbyte connectors. It offers a serverless mode for running connectors, allowing users to easily move data from any source to their data warehouse. Unlike the full Airbyte-Open-Source-Platform, AirbyteServerless focuses solely on the Extract-Load process without a UI, database, or transform layer. It provides a CLI tool, 'abs', for managing connectors, creating connections, running jobs, selecting specific data streams, handling secrets securely, and scheduling remote runs. The tool is scalable, allowing independent deployment of multiple connectors. It aims to streamline the connector management process and provide a more agile alternative to the comprehensive Airbyte platform.

llm-subtrans
LLM-Subtrans is an open source subtitle translator that utilizes LLMs as a translation service. It supports translating subtitles between any language pairs supported by the language model. The application offers multiple subtitle formats support through a pluggable system, including .srt, .ssa/.ass, and .vtt files. Users can choose to use the packaged release for easy usage or install from source for more control over the setup. The tool requires an active internet connection as subtitles are sent to translation service providers' servers for translation.

unstructured
The `unstructured` library provides open-source components for ingesting and pre-processing images and text documents, such as PDFs, HTML, Word docs, and many more. The use cases of `unstructured` revolve around streamlining and optimizing the data processing workflow for LLMs. `unstructured` modular functions and connectors form a cohesive system that simplifies data ingestion and pre-processing, making it adaptable to different platforms and efficient in transforming unstructured data into structured outputs.

leptonai
A Pythonic framework to simplify AI service building. The LeptonAI Python library allows you to build an AI service from Python code with ease. Key features include a Pythonic abstraction Photon, simple abstractions to launch models like those on HuggingFace, prebuilt examples for common models, AI tailored batteries, a client to automatically call your service like native Python functions, and Pythonic configuration specs to be readily shipped in a cloud environment.

aioli
Aioli is a library for running genomics command-line tools in the browser using WebAssembly. It creates a single WebWorker to run all WebAssembly tools, shares a filesystem across modules, and efficiently mounts local files. The tool encapsulates each module for loading, does WebAssembly feature detection, and communicates with the WebWorker using the Comlink library. Users can deploy new releases and versions, and benefit from code reuse by porting existing C/C++/Rust/etc tools to WebAssembly for browser use.

Open-LLM-VTuber
Open-LLM-VTuber is a project in early stages of development that allows users to interact with Large Language Models (LLM) using voice commands and receive responses through a Live2D talking face. The project aims to provide a minimum viable prototype for offline use on macOS, Linux, and Windows, with features like long-term memory using MemGPT, customizable LLM backends, speech recognition, and text-to-speech providers. Users can configure the project to chat with LLMs, choose different backend services, and utilize Live2D models for visual representation. The project supports perpetual chat, offline operation, and GPU acceleration on macOS, addressing limitations of existing solutions on macOS.

redbox-copilot
Redbox Copilot is a retrieval augmented generation (RAG) app that uses GenAI to chat with and summarise civil service documents. It increases organisational memory by indexing documents and can summarise reports read months ago, supplement them with current work, and produce a first draft that lets civil servants focus on what they do best. The project uses a microservice architecture with each microservice running in its own container defined by a Dockerfile. Dependencies are managed using Python Poetry. Contributions are welcome, and the project is licensed under the MIT License.
For similar tasks

code-companion
CodeCompanion.AI is an AI coding assistant desktop app that helps with various coding tasks. It features an interactive chat interface, file system operations, web search capabilities, semantic code search, a fully functional terminal, code preview and approval, unlimited context window, dynamic context management, and more. Users can save chat conversations and set custom instructions per project.


nx
Nx is a build system optimized for monorepos, featuring AI-powered architectural awareness and advanced CI capabilities. It provides faster task scheduling, caching, and more for existing workspaces. Nx Cloud enhances CI by offering remote caching, task distribution, automated e2e test splitting, and task flakiness detection. The tool aims to scale monorepos efficiently and improve developer productivity.

Figma-Context-MCP
Figma-Context-MCP is a plugin for Figma that allows users to easily manage and switch between multiple design contexts within a single Figma file. This tool simplifies the process of working on different design variations or versions by providing a seamless way to organize and switch between them. With Figma-Context-MCP, designers can streamline their workflow and improve collaboration by keeping all design contexts in one place and easily accessible. This plugin enhances productivity and efficiency for Figma users who frequently work on multiple design iterations or versions within a project.

mcp-unity
MCP Unity is an implementation of the Model Context Protocol for Unity Editor, enabling AI assistants like Claude, Windsurf, and Cursor to interact with Unity projects. It provides tools to execute Unity menu items, select game objects, manage packages, run tests, and display messages in the Unity Editor. The package bridges Unity with a Node.js server implementing the MCP protocol, offering resources to retrieve menu items, game objects, console logs, packages, assets, and tests. Requirements include Unity 2022.3 or later, Node.js 18 or later for the server, and npm 9 or later for building. Installation involves adding the Unity MCP Server package via Unity Package Manager and installing Node.js. Configuration settings for AI clients like Cursor IDE, Claude Desktop, and Windsurf IDE are provided. Running the server requires starting the Node.js server and Unity Editor MCP Server. Debugging and troubleshooting guidelines are included for server issues. Contributions are welcome under the MIT license.

commanddash
Dash AI is an open-source coding assistant for Flutter developers. It is designed to not only write code but also run and debug it, allowing it to assist beyond code completion and automate routine tasks. Dash AI is powered by Gemini, integrated with the Dart Analyzer, and specifically tailored for Flutter engineers. The vision for Dash AI is to create a single-command assistant that can automate tedious development tasks, enabling developers to focus on creativity and innovation. It aims to assist with the entire process of engineering a feature for an app, from breaking down the task into steps to generating exploratory tests and iterating on the code until the feature is complete. To achieve this vision, Dash AI is working on providing LLMs with the same access and information that human developers have, including full contextual knowledge, the latest syntax and dependencies data, and the ability to write, run, and debug code. Dash AI welcomes contributions from the community, including feature requests, issue fixes, and participation in discussions. The project is committed to building a coding assistant that empowers all Flutter developers.

ollama4j
Ollama4j is a Java library that serves as a wrapper or binding for the Ollama server. It facilitates communication with the Ollama server and provides models for deployment. The tool requires Java 11 or higher and can be installed locally or via Docker. Users can integrate Ollama4j into Maven projects by adding the specified dependency. The tool offers API specifications and supports various development tasks such as building, running unit tests, and integration tests. Releases are automated through GitHub Actions CI workflow. Areas of improvement include adhering to Java naming conventions, updating deprecated code, implementing logging, using lombok, and enhancing request body creation. Contributions to the project are encouraged, whether reporting bugs, suggesting enhancements, or contributing code.

crewAI-tools
The crewAI Tools repository provides a guide for setting up tools for crewAI agents, enabling the creation of custom tools to enhance AI solutions. Tools play a crucial role in improving agent functionality. The guide explains how to equip agents with a range of tools and how to create new tools. Tools are designed to return strings for generating responses. There are two main methods for creating tools: subclassing BaseTool and using the tool decorator. Contributions to the toolset are encouraged, and the development setup includes steps for installing dependencies, activating the virtual environment, setting up pre-commit hooks, running tests, static type checking, packaging, and local installation. Enhance AI agent capabilities with advanced tooling.
For similar jobs

resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can: * Chat with Open-Source LLMs: Create prompt controllers to directly answer user's prompts. LLM takes care of determining user's intention, so you can focus on taking appropriate action. * Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes. No elaborate configuration is needed. * Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in the synchronous code. Controllers have new exciting features that take advantage of the asynchronous environment. * Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service dependencies registries. Every relation between code modules is local to those modules. * Promises in PHP: Resonance provides a partial implementation of Promise/A+ spec to handle various asynchronous tasks. * GraphQL Out of the Box: You can build elaborate GraphQL schemas by using just the PHP attributes. Resonance takes care of reusing SQL queries and optimizing the resources' usage. All fields can be resolved asynchronously.

aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.

pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently** , resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript** , **directly defining and utilizing the cloud resources necessary for the application within their code base** , such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resources creation and application deployment process**.

pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.

aiohttp-pydantic
Aiohttp pydantic is an aiohttp view to easily parse and validate requests. You define using function annotations what your methods for handling HTTP verbs expect, and Aiohttp pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides features like query string, request body, URL path, and HTTP headers validation, as well as Open API Specification generation.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.

aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.

aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.