aiomisc
aiomisc - miscellaneous utils for asyncio
aiomisc is a Python library that provides a collection of utility functions and classes for working with asynchronous I/O in a more intuitive and efficient way. It offers features like worker pools, connection pools, circuit breaker pattern, and retry mechanisms to make asyncio code more robust and easier to maintain. The library simplifies the architecture of software using asynchronous I/O, making it easier for developers to write reliable and scalable asynchronous code.
README:
.. image:: https://coveralls.io/repos/github/aiokitchen/aiomisc/badge.svg?branch=master
   :target: https://coveralls.io/github/aiokitchen/aiomisc
   :alt: Coveralls

.. image:: https://github.com/aiokitchen/aiomisc/workflows/tox/badge.svg
   :target: https://github.com/aiokitchen/aiomisc/actions?query=workflow%3Atox
   :alt: Actions

.. image:: https://img.shields.io/pypi/v/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/
   :alt: Latest Version

.. image:: https://img.shields.io/pypi/wheel/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/

.. image:: https://img.shields.io/pypi/pyversions/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/

.. image:: https://img.shields.io/pypi/l/aiomisc.svg
   :target: https://pypi.python.org/pypi/aiomisc/
Miscellaneous utils for asyncio.
As a programmer, you are no stranger to the challenges that come with building and maintaining software applications. One area that can be particularly difficult is designing the architecture of software that uses asynchronous I/O.
This is where aiomisc comes in. aiomisc is a Python library that provides a
collection of utility functions and classes for working with asynchronous I/O
in a more intuitive and efficient way. It is built on top of the asyncio
library and is designed to make it easier for developers to write
asynchronous code that is both reliable and scalable.
With aiomisc, you can take advantage of powerful features like worker pools, connection pools, the circuit breaker pattern, and retry mechanisms such as asyncbackoff and asyncretry to make your asyncio code more robust and easier to maintain. In this documentation, we'll take a closer look at what aiomisc has to offer and how it can help you streamline your asyncio service development.
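For a quick taste of the retry helpers just mentioned, here is a minimal sketch. The decorator names come from the aiomisc API, but the specific parameter values are illustrative, and the exact keyword arguments should be checked against the API reference:

.. code-block:: python

    import aiomisc


    # Retry with a timeout per attempt and an overall deadline;
    # the values below are illustrative only.
    @aiomisc.asyncbackoff(attempt_timeout=0.5, deadline=5, pause=0.1)
    async def fetch_from_db():
        ...


    # Retry up to five times, but only on connection errors
    # (parameter names assumed from common usage).
    @aiomisc.asyncretry(max_tries=5, exceptions=(ConnectionError,))
    async def try_to_connect():
        ...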
You can install the library in the standard ways: from PyPI or directly from the git repository.
Installing from PyPI_:
.. code-block:: bash

    pip3 install aiomisc
Installing from github.com:
.. code-block:: bash

    # Using git tool
    pip3 install git+https://github.com/aiokitchen/aiomisc.git

    # Alternative way using http
    pip3 install \
        https://github.com/aiokitchen/aiomisc/archive/refs/heads/master.zip
The package contains several extras, and you can install additional dependencies by specifying them like this.
With uvloop_:

.. code-block:: bash

    pip3 install "aiomisc[uvloop]"

With aiohttp_:

.. code-block:: bash

    pip3 install "aiomisc[aiohttp]"
The complete table of extras is below:
+-----------------------------------+------------------------------------------------+
| example                           | description                                    |
+===================================+================================================+
| pip install aiomisc[aiohttp]      | For running aiohttp_ applications              |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[asgi]         | For running ASGI_ applications                 |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[carbon]       | Sending metrics to carbon_ (part of graphite_) |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[cron]         | Use croniter_ for scheduling tasks             |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[raven]        | Sending exceptions to sentry_ using raven_     |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[rich]         | Use rich_ for logging                          |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[uvicorn]      | For running ASGI_ applications using uvicorn_  |
+-----------------------------------+------------------------------------------------+
| pip install aiomisc[uvloop]       | Use uvloop_ as the default event loop          |
+-----------------------------------+------------------------------------------------+
.. _ASGI: https://asgi.readthedocs.io/en/latest/
.. _PyPI: https://pypi.org/
.. _aiohttp: https://pypi.org/project/aiohttp
.. _carbon: https://pypi.org/project/carbon
.. _croniter: https://pypi.org/project/croniter
.. _graphite: http://graphiteapp.org
.. _raven: https://pypi.org/project/raven
.. _rich: https://pypi.org/project/rich
.. _sentry: https://sentry.io/
.. _uvloop: https://pypi.org/project/uvloop
.. _uvicorn: https://pypi.org/project/uvicorn
You can combine extras values by separating them with commas, for example:
.. code-block:: bash

    pip3 install "aiomisc[aiohttp,cron,rich,uvloop]"
This section covers how this library creates and uses the event loop and creates services. Of course, not everything can be described here, but you can read about a lot in the Tutorial_ section, and you can always refer to the Modules_ and `API reference`_ sections for help.
Event-loop and entrypoint
+++++++++++++++++++++++++
Let's look at this simple example first:
.. code-block:: python

    import asyncio
    import logging

    import aiomisc

    log = logging.getLogger(__name__)


    async def main():
        log.info('Starting')
        await asyncio.sleep(3)
        log.info('Exiting')


    if __name__ == '__main__':
        with aiomisc.entrypoint(log_level="info", log_format="color") as loop:
            loop.run_until_complete(main())
This code declares an asynchronous ``main()`` function that exits after 3 seconds. It would seem nothing interesting, but the whole point is in the ``entrypoint``.
What does the ``entrypoint`` do? Seemingly not much: it creates an event loop and transfers control to the user. However, under the hood, the logger is configured in a separate thread, a thread pool is created, and services are started. There are no services in this example, but more on that later.
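To make that hidden setup concrete, here is a hedged sketch of tuning it. The ``pool_size`` keyword is believed to control the default thread pool, but treat the exact parameter names as assumptions to verify against the API reference:

.. code-block:: python

    import asyncio

    import aiomisc


    async def main():
        await asyncio.sleep(1)


    if __name__ == '__main__':
        # log_level/log_format configure the logger thread; pool_size
        # (assumed name) sets the size of the default thread pool.
        with aiomisc.entrypoint(
            log_level="info",
            log_format="color",
            pool_size=4,
        ) as loop:
            loop.run_until_complete(main())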
Alternatively, you can choose not to use an entrypoint at all: just create an event loop and set it as the default event loop for the current thread:
.. code-block:: python
    :name: test_index_get_loop

    import asyncio

    import aiomisc

    # * Installs the uvloop event loop if it has been installed.
    # * Creates an `aiomisc.thread_pool.ThreadPoolExecutor` and sets it
    #   as the default executor.
    # * Sets the newly created event loop as the current event loop
    #   for this thread.
    aiomisc.new_event_loop()


    async def main():
        await asyncio.sleep(1)


    if __name__ == '__main__':
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())
The example above is useful if your code already relies on an implicitly created event loop: you will have to modify less code. Just add ``aiomisc.new_event_loop()``, and all subsequent calls to ``asyncio.get_event_loop()`` will return the created instance.
However, you can do it with a single call. The following example closes the implicitly created asyncio event loop and installs a new one:
.. code-block:: python
    :name: test_index_new_loop

    import asyncio

    import aiomisc


    async def main():
        await asyncio.sleep(3)


    if __name__ == '__main__':
        loop = aiomisc.new_event_loop()
        loop.run_until_complete(main())
Services
++++++++
The main thing that an ``entrypoint`` does is start and gracefully stop services.

The service concept within this library means a class derived from the ``aiomisc.Service`` class that implements the ``async def start(self) -> None:`` method and, optionally, the ``async def stop(self, exc: Optional[Exception]) -> None`` method.

Stopping a service does not necessarily mean the user pressing Ctrl+C; it is actually just exiting the ``entrypoint`` context manager.
The example below shows what your service might look like:
.. code-block:: python

    from aiomisc import entrypoint, Service


    class MyService(Service):
        async def start(self):
            do_something_when_start()

        async def stop(self, exc):
            do_graceful_shutdown()


    with entrypoint(MyService()) as loop:
        loop.run_forever()
The entrypoint can start as many service instances as you like, and all of them will start concurrently.
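For example, here is a minimal sketch of starting several services with one entrypoint; the ``RestService`` and ``GrpcService`` classes are hypothetical placeholders:

.. code-block:: python

    from aiomisc import entrypoint, Service


    class RestService(Service):
        async def start(self):
            ...


    class GrpcService(Service):
        async def start(self):
            ...


    # Both instances are started concurrently; the entrypoint yields
    # control only after every service has finished starting.
    with entrypoint(RestService(), GrpcService()) as loop:
        loop.run_forever()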
There is also another way: if the ``start`` method is the payload of the service, there is no need to implement the ``stop`` method, since the running task with the ``start`` function will be cancelled at the stop stage. In this case, however, you have to notify the ``entrypoint`` that the initialization of the service instance is complete and it can continue. Like this:
.. code-block:: python

    import asyncio
    from threading import Event

    from aiomisc import entrypoint, Service

    event = Event()


    class MyService(Service):
        async def start(self):
            # Send a signal to the entrypoint to continue running
            self.start_event.set()
            event.set()
            await asyncio.sleep(3600)


    with entrypoint(MyService()) as loop:
        # The service has completed initialization by this point
        assert event.is_set()
.. note::

    The ``entrypoint`` passes control to the body of the context manager only
    after all service instances have started. As mentioned above, a start is
    considered to be the completion of the ``start`` method or the setting of
    a start event with ``self.start_event.set()``.
The whole power of this library is in the set of already implemented or abstract services, such as ``AIOHTTPService``, ``ASGIService``, ``TCPServer``, ``UDPServer``, ``TCPClient``, ``PeriodicService``, ``CronService``, and so on.
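As a quick illustration, here is a minimal sketch of a periodic service. The ``interval`` keyword and the ``callback`` hook follow the usual shape of ``PeriodicService``, but verify the exact signature against the API reference:

.. code-block:: python

    from aiomisc import entrypoint
    from aiomisc.service.periodic import PeriodicService


    class HealthCheckService(PeriodicService):
        # callback() is invoked every `interval` seconds
        async def callback(self):
            print("still alive")


    # `interval` (in seconds) is assumed to be a constructor keyword
    with entrypoint(HealthCheckService(interval=30)) as loop:
        loop.run_forever()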
Unfortunately, it is not possible to pay more attention to this in this section; please see the Tutorial_ section, where there are more examples and explanations, and of course you can always find an answer in the `API reference`_ or in the source code. The authors have tried to make the source code as clear and simple as possible, so feel free to explore it.
This software follows `Semantic Versioning`_.
Summary: given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes
- MINOR version when you add functionality in a backwards compatible manner
- PATCH version when you make backwards compatible bug fixes
- Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.
In this case, the package version is assigned automatically with poem-plugins_: it uses the repository tag for the major and minor parts, and a counter of the number of commits between the tag and the head of the branch for the patch part.
.. _poem-plugins: https://pypi.org/project/poem-plugins
This project, like most open source projects, is developed by enthusiasts; you can join the development, submit issues, or send your merge requests.

To start developing in this repository, the following must be installed:

- Python 3.7+ as ``python3``
- Poetry_ installed as ``poetry``
.. _Poetry: https://python-poetry.org/docs/
To set up the developer environment, just execute:
.. code-block::

    # installing all dependencies
    poetry install

    # setting up pre-commit hooks
    poetry run pre-commit install

    # adding poem-plugins to the poetry
    poetry self add poem-plugins
.. _Semantic Versioning: http://semver.org/
.. _API reference: https://aiomisc.readthedocs.io/en/latest/api/index.html
.. _Modules: https://aiomisc.readthedocs.io/en/latest/modules.html
.. _Tutorial: https://aiomisc.readthedocs.io/en/latest/tutorial.html
Similar Open Source Tools
paxml
Pax is a framework to configure and run machine learning experiments on top of Jax.
aiohttp-session
aiohttp_session is a Python library that provides session management for aiohttp.web applications. It allows storing user-specific data in session objects with a dict-like interface. The library offers different session storage options, including SimpleCookieStorage for testing, EncryptedCookieStorage for secure data storage, and RedisStorage for storing data in Redis. Users can easily integrate session management into their aiohttp.web applications by registering the session middleware. The library is designed to simplify session handling and enhance the security of web applications.
AgentKit
AgentKit is a framework for constructing complex human thought processes from simple natural language prompts. It offers a unified way to represent and execute these processes as graphs, making it easy to design and tune agents without any programming experience. AgentKit can be used for a variety of tasks, including generating text, answering questions, and making decisions.
DeepPavlov
DeepPavlov is an open-source conversational AI library built on PyTorch. It is designed for the development of production-ready chatbots and complex conversational systems, as well as for research in the area of NLP and dialog systems. The library offers a wide range of models for tasks such as Named Entity Recognition, Intent/Sentence Classification, Question Answering, Sentence Similarity/Ranking, Syntactic Parsing, and more. DeepPavlov also provides embeddings like BERT, ELMo, and FastText for various languages, along with AutoML capabilities and integrations with REST API, Socket API, and Amazon AWS.
LLM-Pruner
LLM-Pruner is a tool for structural pruning of large language models, allowing task-agnostic compression while retaining multi-task solving ability. It supports automatic structural pruning of various LLMs with minimal human effort. The tool is efficient, requiring only 3 minutes for pruning and 3 hours for post-training. Supported LLMs include Llama-3.1, Llama-3, Llama-2, LLaMA, BLOOM, Vicuna, and Baichuan. Updates include support for new LLMs like GQA and BLOOM, as well as fine-tuning results achieving high accuracy. The tool provides step-by-step instructions for pruning, post-training, and evaluation, along with a Gradio interface for text generation. Limitations include issues with generating repetitive or nonsensical tokens in compressed models and manual operations for certain models.
clarifai-python-grpc
This is the official Clarifai gRPC Python client for interacting with their recognition API. Clarifai offers a platform for data scientists, developers, researchers, and enterprises to utilize artificial intelligence for image, video, and text analysis through computer vision and natural language processing. The client allows users to authenticate, predict concepts in images, and access various functionalities provided by the Clarifai API. It follows a versioning scheme that aligns with the backend API updates and includes specific instructions for installation and troubleshooting. Users can explore the Clarifai demo, sign up for an account, and refer to the documentation for detailed information.
extractor
Extractor is an AI-powered data extraction library for Laravel that leverages OpenAI's capabilities to effortlessly extract structured data from various sources, including images, PDFs, and emails. It features a convenient wrapper around OpenAI Chat and Completion endpoints, supports multiple input formats, includes a flexible Field Extractor for arbitrary data extraction, and integrates with Textract for OCR functionality. Extractor utilizes JSON Mode from the latest GPT-3.5 and GPT-4 models, providing accurate and efficient data extraction.
react-native-fast-tflite
A high-performance TensorFlow Lite library for React Native that utilizes JSI for power, zero-copy ArrayBuffers for efficiency, and low-level C/C++ TensorFlow Lite core API for direct memory access. It supports swapping out TensorFlow Models at runtime and GPU-accelerated delegates like CoreML/Metal/OpenGL. Easy VisionCamera integration allows for seamless usage. Users can load TensorFlow Lite models, interpret input and output data, and utilize GPU Delegates for faster computation. The library is suitable for real-time object detection, image classification, and other machine learning tasks in React Native applications.
LightRAG
LightRAG is a PyTorch library designed for building and optimizing Retriever-Agent-Generator (RAG) pipelines. It follows principles of simplicity, quality, and optimization, offering developers maximum customizability with minimal abstraction. The library includes components for model interaction, output parsing, and structured data generation. LightRAG facilitates tasks like providing explanations and examples for concepts through a question-answering pipeline.
GOLEM
GOLEM is an open-source AI framework focused on optimization and learning of structured graph-based models using meta-heuristic methods. It emphasizes the potential of meta-heuristics in complex problem spaces where gradient-based methods are not suitable, and the importance of structured models in various problem domains. The framework offers features like structured model optimization, metaheuristic methods, multi-objective optimization, constrained optimization, extensibility, interpretability, and reproducibility. It can be applied to optimization problems represented as directed graphs with defined fitness functions. GOLEM has applications in areas like AutoML, Bayesian network structure search, differential equation discovery, geometric design, and neural architecture search. The project structure includes packages for core functionalities, adapters, graph representation, optimizers, genetic algorithms, utilities, serialization, visualization, examples, and testing. Contributions are welcome, and the project is supported by ITMO University's Research Center Strong Artificial Intelligence in Industry.
litserve
LitServe is a high-throughput serving engine for deploying AI models at scale. It generates an API endpoint for a model, handles batching, streaming, autoscaling across CPU/GPUs, and more. Built for enterprise scale, it supports every framework like PyTorch, JAX, Tensorflow, and more. LitServe is designed to let users focus on model performance, not the serving boilerplate. It is like PyTorch Lightning for model serving but with broader framework support and scalability.
LLamaSharp
LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) on your local device. Based on llama.cpp, inference with LLamaSharp is efficient on both CPU and GPU. With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.
languagemodels
Language Models is a Python package that provides building blocks to explore large language models with as little as 512MB of RAM. It simplifies the usage of large language models from Python, ensuring all inference is performed locally to keep data private. The package includes features such as text completions, chat capabilities, code completions, external text retrieval, semantic search, and more. It outperforms Hugging Face transformers for CPU inference and offers sensible default models with varying parameters based on memory constraints. The package is suitable for learners and educators exploring the intersection of large language models with modern software development.
For similar jobs
resonance
Resonance is a framework designed to facilitate interoperability and messaging between services in your infrastructure and beyond. It provides AI capabilities and takes full advantage of asynchronous PHP, built on top of Swoole. With Resonance, you can: * Chat with Open-Source LLMs: Create prompt controllers to directly answer user's prompts. LLM takes care of determining user's intention, so you can focus on taking appropriate action. * Asynchronous Where it Matters: Respond asynchronously to incoming RPC or WebSocket messages (or both combined) with little overhead. You can set up all the asynchronous features using attributes. No elaborate configuration is needed. * Simple Things Remain Simple: Writing HTTP controllers is similar to how it's done in the synchronous code. Controllers have new exciting features that take advantage of the asynchronous environment. * Consistency is Key: You can keep the same approach to writing software no matter the size of your project. There are no growing central configuration files or service dependencies registries. Every relation between code modules is local to those modules. * Promises in PHP: Resonance provides a partial implementation of Promise/A+ spec to handle various asynchronous tasks. * GraphQL Out of the Box: You can build elaborate GraphQL schemas by using just the PHP attributes. Resonance takes care of reusing SQL queries and optimizing the resources' usage. All fields can be resolved asynchronously.
aiogram_bot_template
Aiogram bot template is a boilerplate for creating Telegram bots using Aiogram framework. It provides a solid foundation for building robust and scalable bots with a focus on code organization, database integration, and localization.
pluto
Pluto is a development tool dedicated to helping developers **build cloud and AI applications more conveniently** , resolving issues such as the challenging deployment of AI applications and open-source models. Developers are able to write applications in familiar programming languages like **Python and TypeScript** , **directly defining and utilizing the cloud resources necessary for the application within their code base** , such as AWS SageMaker, DynamoDB, and more. Pluto automatically deduces the infrastructure resource needs of the app through **static program analysis** and proceeds to create these resources on the specified cloud platform, **simplifying the resources creation and application deployment process**.
pinecone-ts-client
The official Node.js client for Pinecone, written in TypeScript. This client library provides a high-level interface for interacting with the Pinecone vector database service. With this client, you can create and manage indexes, upsert and query vector data, and perform other operations related to vector search and retrieval. The client is designed to be easy to use and provides a consistent and idiomatic experience for Node.js developers. It supports all the features and functionality of the Pinecone API, making it a comprehensive solution for building vector-powered applications in Node.js.
aiohttp-pydantic
Aiohttp pydantic is an aiohttp view to easily parse and validate requests. You define using function annotations what your methods for handling HTTP verbs expect, and Aiohttp pydantic parses the HTTP request for you, validates the data, and injects the parameters you want. It provides features like query string, request body, URL path, and HTTP headers validation, as well as Open API Specification generation.
gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.
aioconsole
aioconsole is a Python package that provides asynchronous console and interfaces for asyncio. It offers asynchronous equivalents to input, print, exec, and code.interact, an interactive loop running the asynchronous Python console, customization and running of command line interfaces using argparse, stream support to serve interfaces instead of using standard streams, and the apython script to access asyncio code at runtime without modifying the sources. The package requires Python version 3.8 or higher and can be installed from PyPI or GitHub. It allows users to run Python files or modules with a modified asyncio policy, replacing the default event loop with an interactive loop. aioconsole is useful for scenarios where users need to interact with asyncio code in a console environment.
aiosqlite
aiosqlite is a Python library that provides a friendly, async interface to SQLite databases. It replicates the standard sqlite3 module but with async versions of all the standard connection and cursor methods, along with context managers for automatically closing connections and cursors. It allows interaction with SQLite databases on the main AsyncIO event loop without blocking execution of other coroutines while waiting for queries or data fetches. The library also replicates most of the advanced features of sqlite3, such as row factories and total changes tracking.