aiomisc - miscellaneous utils for asyncio
=========================================
.. image:: https://coveralls.io/repos/github/aiokitchen/aiomisc/badge.svg?branch=master
   :target: https://coveralls.io/github/aiokitchen/aiomisc
   :alt: Coveralls

.. image:: https://github.com/aiokitchen/aiomisc/workflows/tox/badge.svg
   :target: https://github.com/aiokitchen/aiomisc/actions?query=workflow%3Atox
   :alt: Actions

.. image:: https://img.shields.io/pypi/v/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/
   :alt: Latest Version

.. image:: https://img.shields.io/pypi/wheel/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/

.. image:: https://img.shields.io/pypi/pyversions/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/

.. image:: https://img.shields.io/pypi/l/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/
Miscellaneous utils for asyncio.
As a programmer, you are no stranger to the challenges of building and maintaining software applications. One area that can be particularly difficult is designing the architecture of software that uses asynchronous I/O.
This is where aiomisc comes in. aiomisc is a Python library that provides a
collection of utility functions and classes for working with asynchronous I/O
in a more intuitive and efficient way. It is built on top of the asyncio
library and is designed to make it easier for developers to write
asynchronous code that is both reliable and scalable.
With aiomisc, you can take advantage of powerful features like
worker pools, connection pools, the circuit breaker pattern,
and retry mechanisms such as asyncbackoff and asyncretry to make your
asyncio code more robust and easier to maintain. In this documentation,
we'll take a closer look at what aiomisc has to offer and how it can
help you streamline your asyncio service development.
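For example, retrying a flaky coroutine takes a single decorator. A minimal sketch (the ``fetch_data`` coroutines are hypothetical; the ``asyncretry`` and ``asyncbackoff`` parameters follow the aiomisc documentation):

.. code-block:: python

    import aiomisc

    # Retry up to three times, pausing one second between attempts.
    @aiomisc.asyncretry(max_tries=3, pause=1)
    async def fetch_data():
        ...

    # Give each attempt 10 seconds, with a 60-second deadline for
    # the whole operation, pausing one second between attempts.
    @aiomisc.asyncbackoff(attempt_timeout=10, deadline=60, pause=1)
    async def fetch_data_with_deadline():
        ...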
aiomisc can be installed in the standard ways: from PyPI_ or directly from a git repository.
Installing from PyPI_:
.. code-block:: bash

    pip3 install aiomisc
Installing from github.com:
.. code-block:: bash

    # Using the git tool
    pip3 install git+https://github.com/aiokitchen/aiomisc.git

    # Alternative way using http
    pip3 install \
        https://github.com/aiokitchen/aiomisc/archive/refs/heads/master.zip
The package contains several extras; you can pull in optional dependencies by specifying them like this.
With uvloop_:
.. code-block:: bash

    pip3 install "aiomisc[uvloop]"
With aiohttp_:
.. code-block:: bash

    pip3 install "aiomisc[aiohttp]"
The complete table of extras is below:
+-----------------------------------+------------------------------------------------+
| example                           | description                                    |
+===================================+================================================+
| pip install aiomisc[aiohttp]      | For running aiohttp_ applications.             |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[asgi]         | For running ASGI_ applications.                |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[carbon]       | Sending metrics to carbon_ (part of graphite_).|
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[cron]         | Uses croniter_ for scheduling tasks.           |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[raven]        | Sending exceptions to sentry_ using raven_.    |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[rich]         | Uses rich_ for logging.                        |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[uvicorn]      | For running ASGI_ applications using uvicorn_. |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[uvloop]       | Uses uvloop_ as the default event loop.        |
+-----------------------------------+------------------------------------------------+
.. _ASGI: https://asgi.readthedocs.io/en/latest/
.. _PyPI: https://pypi.org/
.. _aiohttp: https://pypi.org/project/aiohttp
.. _carbon: https://pypi.org/project/carbon
.. _croniter: https://pypi.org/project/croniter
.. _graphite: http://graphiteapp.org
.. _raven: https://pypi.org/project/raven
.. _rich: https://pypi.org/project/rich
.. _sentry: https://sentry.io/
.. _uvloop: https://pypi.org/project/uvloop
.. _uvicorn: https://pypi.org/project/uvicorn
You can combine extras by separating them with commas, for example:
.. code-block:: bash

    pip3 install "aiomisc[aiohttp,cron,rich,uvloop]"
This section covers how this library creates and uses the event loop and
creates services. Of course, not everything can be described here, but you
can read about a lot in the Tutorial_ section, and you can
always refer to the Modules_ and `API reference`_ sections for help.
Event-loop and entrypoint
+++++++++++++++++++++++++
Let's look at this simple example first:
.. code-block:: python

    import asyncio
    import logging

    import aiomisc

    log = logging.getLogger(__name__)

    async def main():
        log.info('Starting')
        await asyncio.sleep(3)
        log.info('Exiting')

    if __name__ == '__main__':
        with aiomisc.entrypoint(log_level="info", log_format="color") as loop:
            loop.run_until_complete(main())
This code declares an asynchronous main() function that exits after
3 seconds. It might seem unremarkable, but the whole point is in
the entrypoint.

At first glance the entrypoint does little: it creates an
event loop and transfers control to the user. Under the hood, however, the
logger is configured in a separate thread, a pool of threads is created, and
services are started. More on services later; there are none
in this example.
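That thread pool is also easy to use directly. A minimal sketch (the blocking function is hypothetical; ``aiomisc.threaded`` is the documented decorator for offloading blocking code to the pool):

.. code-block:: python

    import time

    import aiomisc

    # aiomisc.threaded wraps a blocking function so that calling it
    # returns an awaitable executed in the default thread pool.
    @aiomisc.threaded
    def blocking_io():
        time.sleep(1)
        return 42

    async def main():
        print(await blocking_io())

    if __name__ == '__main__':
        with aiomisc.entrypoint() as loop:
            loop.run_until_complete(main())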
Alternatively, you can choose not to use an entrypoint: just create an event loop and set it as the default event loop for the current thread:
.. code-block:: python
    :name: test_index_get_loop

    import asyncio

    import aiomisc

    # * Installs the uvloop event loop if it has been installed.
    # * Creates and sets `aiomisc.thread_pool.ThreadPoolExecutor`
    #   as the default executor.
    # * Sets the newly created event loop as the current event loop
    #   for this thread.
    aiomisc.new_event_loop()

    async def main():
        await asyncio.sleep(1)

    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
The example above is useful if your code already relies on an implicitly
created event loop: you will have to modify less code. Just add
aiomisc.new_event_loop(), and all calls to asyncio.get_event_loop()
will return the created instance.
However, you can get by with a single call. The following example closes the implicitly created asyncio event loop and installs a new one:
.. code-block:: python
    :name: test_index_new_loop

    import asyncio

    import aiomisc

    async def main():
        await asyncio.sleep(3)

    if __name__ == '__main__':
        loop = aiomisc.new_event_loop()
        loop.run_until_complete(main())
Services
++++++++
The main thing that an entrypoint does is start and gracefully
stop services.

Within this library, a service is a class derived from
aiomisc.Service that implements the
``async def start(self) -> None:`` method and, optionally, the
``async def stop(self, exc: Optional[Exception]) -> None`` method.

Stopping a service does not necessarily mean the user pressed Ctrl+C;
it simply means the entrypoint context manager is exiting.
The example below shows what your service might look like:
.. code-block:: python

    from aiomisc import entrypoint, Service

    class MyService(Service):
        async def start(self):
            do_something_when_start()

        async def stop(self, exc):
            do_graceful_shutdown()

    with entrypoint(MyService()) as loop:
        loop.run_forever()
The entrypoint can start any number of service instances, and all of them will start concurrently, as the sketch below shows.
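A minimal sketch, assuming hypothetical ``Service`` subclasses ``ServiceA`` and ``ServiceB`` defined along the lines of ``MyService`` above:

.. code-block:: python

    # All three instances start concurrently; the entrypoint stops
    # them all gracefully when the context manager exits.
    with entrypoint(ServiceA(), ServiceA(), ServiceB()) as loop:
        loop.run_forever()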
There is also another option: when the ``start`` method is itself the
service's payload, there is no need to implement the ``stop`` method, since
the running task with the ``start`` function will be cancelled at the stop
stage. In this case, however, you have to notify the entrypoint that the
initialization of the service instance is complete and it can continue.
Like this:
.. code-block:: python

    import asyncio
    from threading import Event

    from aiomisc import entrypoint, Service

    event = Event()

    class MyService(Service):
        async def start(self):
            # Send a signal to the entrypoint to continue running
            self.start_event.set()
            event.set()
            await asyncio.sleep(3600)

    with entrypoint(MyService()) as loop:
        assert event.is_set()
.. note::

    The ``entrypoint`` passes control to the body of the context manager only
    after all service instances have started. As mentioned above, a start is
    considered to be the completion of the ``start`` method or the setting of
    a start event with ``self.start_event.set()``.
Much of the power of this library lies in its set of already implemented or
abstract services, such as AIOHTTPService, ASGIService, TCPServer,
UDPServer, TCPClient, PeriodicService, CronService, and so on.
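For instance, a periodic service can be sketched like this (the subclass and its callback body are hypothetical; ``PeriodicService``, its ``callback`` method, and the ``interval``/``delay`` arguments follow the documentation):

.. code-block:: python

    from aiomisc import entrypoint
    from aiomisc.service.periodic import PeriodicService

    class HealthCheckService(PeriodicService):
        # Called every ``interval`` seconds after an initial ``delay``.
        async def callback(self):
            print('still alive')

    with entrypoint(HealthCheckService(interval=60, delay=0)) as loop:
        loop.run_forever()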
Unfortunately, it is not possible to give this more attention in this
section; please see the Tutorial_ section, where there are more
examples and explanations, and of course you can always find an answer in
the `API reference`_ or in the source code. The authors have tried to make
the source code as clear and simple as possible, so feel free to explore it.
This software follows `Semantic Versioning`_.
Summary: given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes
- MINOR version when you add functionality in a backwards compatible manner
- PATCH version when you make backwards compatible bug fixes
- Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
In this case, the package version is assigned automatically with poem-plugins_: it uses the repository tag for the major and minor numbers, and the patch number is the count of commits between the tag and the head of the branch.
.. _poem-plugins: https://pypi.org/project/poem-plugins
This project, like most open-source projects, is developed by enthusiasts; you can join the development, submit issues, or send your merge requests.
In order to start developing in this repository, the following should be
installed:

- Python 3.7+ as ``python3``
- Poetry_ as ``poetry``

.. _Poetry: https://python-poetry.org/docs/
For setting up the developer environment, just execute:
.. code-block:: bash

    # installing all dependencies
    poetry install

    # setting up pre-commit hooks
    poetry run pre-commit install

    # adding poem-plugins to the poetry
    poetry self add poem-plugins
.. _Semantic Versioning: http://semver.org/
.. _API reference: https://docs.aiomisc.com/api/index.html
.. _Modules: https://docs.aiomisc.com/modules.html
.. _Tutorial: https://docs.aiomisc.com/tutorial.html