
aiodns
Simple DNS resolver for asyncio
Stars: 570

aiodns is a simple DNS resolver for asyncio that performs asynchronous DNS resolutions using pycares. It offers methods like query, gethostbyname, gethostbyaddr, cancel, and close for DNS resolution and reverse lookup. The library supports various query types such as A, AAAA, CNAME, MX, NS, PTR, SOA, SRV, and TXT. Note that on Windows a SelectorEventLoop is only required with pycares builds that link a non-thread-safe c-ares; the official wheels (4.7.0+) work with the default loop. The tool is licensed under MIT and welcomes contributions.
README:
.. image:: https://badge.fury.io/py/aiodns.png
   :target: https://pypi.org/project/aiodns/

.. image:: https://github.com/saghul/aiodns/workflows/CI/badge.svg
   :target: https://github.com/saghul/aiodns/actions
aiodns provides a simple way for doing asynchronous DNS resolutions using `pycares <https://github.com/saghul/pycares>`_.
.. code:: python

    import asyncio
    import aiodns

    loop = asyncio.get_event_loop()
    resolver = aiodns.DNSResolver(loop=loop)

    async def query(name, query_type):
        return await resolver.query(name, query_type)

    coro = query('google.com', 'A')
    result = loop.run_until_complete(coro)
The following query types are supported: A, AAAA, ANY, CAA, CNAME, MX, NAPTR, NS, PTR, SOA, SRV, TXT.
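As an illustrative sketch (using the modern ``asyncio.run()`` entry point rather than the explicit loop above; ``example.com`` is a placeholder domain), several record types can be queried with the same resolver:

.. code:: python

    import asyncio
    import aiodns

    async def main():
        resolver = aiodns.DNSResolver()
        # Results are pycares namedtuple-like objects; their fields
        # depend on the query type (e.g. host and ttl for A records).
        a_records = await resolver.query('example.com', 'A')
        mx_records = await resolver.query('example.com', 'MX')
        txt_records = await resolver.query('example.com', 'TXT')
        print(a_records, mx_records, txt_records)

    asyncio.run(main())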
The API is pretty simple; the following methods are provided by the ``DNSResolver`` class (a usage sketch follows the list):

- ``query(host, type)``: Do a DNS resolution of the given type for the given hostname. It returns an instance of ``asyncio.Future``. The actual result of the DNS query is taken directly from pycares. As of version 1.0.0 of aiodns (and pycares, for that matter), results are always namedtuple-like objects with different attributes. Please check the `documentation <http://pycares.readthedocs.org/latest/channel.html#pycares.Channel.query>`_ for the result fields.
- ``gethostbyname(host, socket_family)``: Do a DNS resolution for the given hostname and the desired type of address family (i.e. ``socket.AF_INET``). While ``query()`` always performs a request to a DNS server, ``gethostbyname()`` first looks into ``/etc/hosts`` and thus can resolve local hostnames (such as ``localhost``). Please check `the documentation <http://pycares.readthedocs.io/latest/channel.html#pycares.Channel.gethostbyname>`_ for the result fields. The actual result of the call is an ``asyncio.Future``.
- ``gethostbyaddr(name)``: Make a reverse lookup for an address.
- ``cancel()``: Cancel all pending DNS queries. All futures will get a ``DNSError`` exception set, with ``ARES_ECANCELLED`` errno.
- ``close()``: Close the resolver. This releases all resources and cancels any pending queries. It must be called when the resolver is no longer needed (e.g. at application shutdown). The resolver should only be closed from the event loop that created it.
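A hedged sketch tying these methods together (the hostnames and address are placeholders; see the pycares documentation linked above for the exact result fields):

.. code:: python

    import asyncio
    import socket
    import aiodns

    async def main():
        resolver = aiodns.DNSResolver()
        try:
            # gethostbyname() consults /etc/hosts first, so local names resolve.
            host = await resolver.gethostbyname('localhost', socket.AF_INET)
            print(host.name, host.addresses)
            # Reverse lookup for an address.
            rev = await resolver.gethostbyaddr('127.0.0.1')
            print(rev.name)
        finally:
            # close() releases resources; call it from the loop that
            # created the resolver, once it is no longer needed.
            resolver.close()

    asyncio.run(main())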
While not recommended for typical use cases, ``DNSResolver`` can be used as an async context manager for scenarios where automatic cleanup is desired:
.. code:: python

    async with aiodns.DNSResolver() as resolver:
        result = await resolver.query('example.com', 'A')
        # resolver.close() is called automatically when exiting the context
Important: This pattern is discouraged for most applications because ``DNSResolver`` instances are designed to be long-lived and reused for many queries. Creating and destroying resolvers frequently adds unnecessary overhead. Use the context manager pattern only when you specifically need automatic cleanup for short-lived resolver instances, such as in tests or one-off scripts.
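By contrast, the recommended long-lived pattern might look like the following sketch (the loop over hostnames is illustrative):

.. code:: python

    import asyncio
    import aiodns

    async def main():
        # Create the resolver once and reuse it for many queries.
        resolver = aiodns.DNSResolver()
        try:
            for name in ('example.com', 'example.org', 'example.net'):
                result = await resolver.query(name, 'A')
                print(name, result)
        finally:
            # Explicit cleanup at application shutdown.
            resolver.close()

    asyncio.run(main())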
This library requires the use of an ``asyncio.SelectorEventLoop``, or winloop on Windows, only when using a custom build of pycares that links against a system-provided c-ares library without thread-safety support. This is because non-thread-safe builds of c-ares are incompatible with the default ``ProactorEventLoop`` on Windows.

If you're using the official prebuilt pycares wheels on PyPI (version 4.7.0 or later), which include a thread-safe version of c-ares, this limitation does not apply and can be safely ignored.
The default event loop can be changed as follows (do this very early in your application):
.. code:: python

    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
This may have other implications for the rest of your codebase, so make sure to test thoroughly.
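One way to keep the change contained is to guard it by platform; a minimal sketch:

.. code:: python

    import asyncio
    import sys

    # Only relevant on Windows, and only for pycares builds linking a
    # non-thread-safe c-ares; official wheels >= 4.7.0 do not need this.
    if sys.platform == 'win32':
        asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())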
To run the test suite: ``python tests.py``
Saúl Ibarra Corretgé [email protected]
aiodns uses the MIT license; check the LICENSE file.
If you'd like to contribute, fork the project, make a patch and send a pull request. Have a look at the surrounding code and please, make yours look alike :-)
Alternative AI tools for aiodns
Similar Open Source Tools


mlstm_kernels
This repository provides fast and efficient mLSTM training and inference Triton kernels built on Tiled Flash Linear Attention (TFLA). It includes implementations in JAX, PyTorch, and Triton, with chunkwise, parallel, and recurrent kernels for mLSTM. The repository also contains a benchmark library for runtime benchmarks and full mLSTM Huggingface models.

blinkid-android
The BlinkID Android SDK is a comprehensive solution for implementing secure document scanning and extraction. It offers powerful capabilities for extracting data from a wide range of identification documents. The SDK provides features for integrating document scanning into Android apps, including camera requirements, SDK resource pre-bundling, customizing the UX, changing default strings and localization, troubleshooting integration difficulties, and using the SDK through various methods. It also offers options for completely custom UX with low-level API integration. The SDK size is optimized for different processor architectures, and API documentation is available for reference. For any questions or support, users can contact the Microblink team at help.microblink.com.

probsem
ProbSem is a repository that provides a framework to leverage large language models (LLMs) for assigning context-conditional probability distributions over queried strings. It supports OpenAI engines and HuggingFace CausalLM models, and is flexible for research applications in linguistics, cognitive science, program synthesis, and NLP. Users can define prompts, contexts, and queries to derive probability distributions over possible completions, enabling tasks like cloze completion, multiple-choice QA, semantic parsing, and code completion. The repository offers CLI and API interfaces for evaluation, with options to customize models, normalize scores, and adjust temperature for probability distributions.

MiniAgents
MiniAgents is an open-source Python framework designed to simplify the creation of multi-agent AI systems. It offers a parallelism and async-first design, allowing users to focus on building intelligent agents while handling concurrency challenges. The framework, built on asyncio, supports LLM-based applications with immutable messages and seamless asynchronous token and message streaming between agents.

aiorun
aiorun is a Python package that provides a `run()` function as the starting point of your `asyncio`-based application. The `run()` function handles everything needed during the shutdown sequence of the application, such as creating a `Task` for the given coroutine, running the event loop, adding signal handlers for `SIGINT` and `SIGTERM`, cancelling tasks, waiting for the executor to complete shutdown, and closing the loop. It automates standard actions for asyncio apps, eliminating the need to write boilerplate code. The package also offers error handling options and tools for specific scenarios like TCP server startup and smart shield for shutdown.
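Assuming aiorun's documented ``run()`` entry point (a hedged sketch, not taken from this page):

.. code:: python

    import asyncio
    from aiorun import run

    async def main():
        # Runs until SIGINT/SIGTERM arrives; aiorun then cancels tasks
        # and closes the loop as part of its shutdown sequence.
        while True:
            print('tick')
            await asyncio.sleep(1.0)

    run(main())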

LLMUnity
LLM for Unity enables seamless integration of Large Language Models (LLMs) within the Unity engine, allowing users to create intelligent characters for immersive player interactions. The tool supports major LLM models, runs locally without internet access, offers fast inference on CPU and GPU, and is easy to set up with a single line of code. It is free for both personal and commercial use, tested on Unity 2021 LTS, 2022 LTS, and 2023. Users can build multiple AI characters efficiently, use remote servers for processing, and customize model settings for text generation.

py-vectara-agentic
The `vectara-agentic` Python library is designed for developing powerful AI assistants using Vectara and Agentic-RAG. It supports various agent types, includes pre-built tools for domains like finance and legal, and enables easy creation of custom AI assistants and agents. The library provides tools for summarizing text, rephrasing text, legal tasks like summarizing legal text and critiquing as a judge, financial tasks like analyzing balance sheets and income statements, and database tools for inspecting and querying databases. It also supports observability via LlamaIndex and Arize Phoenix integration.

paper-qa
PaperQA is a minimal package for question answering over PDFs or text files, providing very good answers with in-text citations. It uses OpenAI Embeddings to embed and search documents, following a pipeline of embedding docs and queries, searching for top passages, creating summaries, using an LLM to re-score and select relevant summaries, putting summaries into the prompt, and generating answers. The tool can be used to answer specific questions related to scientific research by leveraging citations and relevant passages from documents.

crb
CRB (Composable Runtime Blocks) is a unique framework that implements hybrid workloads by seamlessly combining synchronous and asynchronous activities, state machines, routines, the actor model, and supervisors. It is ideal for building massive applications and serves as a low-level framework for creating custom frameworks, such as AI-agents. The core idea is to ensure high compatibility among all blocks, enabling significant code reuse. The framework allows for the implementation of algorithms with complex branching, making it suitable for building large-scale applications or implementing complex workflows, such as AI pipelines. It provides flexibility in defining structures, implementing traits, and managing execution flow, allowing users to create robust and nonlinear algorithms easily.

paper-qa
PaperQA is a minimal package for question answering over PDFs or text files, providing very good answers with in-text citations. It uses OpenAI Embeddings to embed and search documents, and follows a process of embedding docs and queries, searching for top passages, creating summaries, scoring and selecting relevant summaries, putting summaries into the prompt, and generating answers. Users can customize prompts and use various models for embeddings and LLMs. The tool can be used asynchronously and supports adding documents from paths, files, or URLs.

mirage
Mirage Persistent Kernel (MPK) is a compiler and runtime system that automatically transforms LLM inference into a single megakernel—a fused GPU kernel that performs all necessary computation and communication within a single kernel launch. This end-to-end GPU fusion approach reduces LLM inference latency by 1.2× to 6.7×, all while requiring minimal developer effort.

neocodeium
NeoCodeium is a free AI completion plugin powered by Codeium, designed for Neovim users. It aims to provide a smoother experience by eliminating flickering suggestions and allowing for repeatable completions using the `.` key. The plugin offers performance improvements through cache techniques, displays suggestion count labels, and supports Lua scripting. Users can customize keymaps, manage suggestions, and interact with the AI chat feature. NeoCodeium enhances code completion in Neovim, making it a valuable tool for developers seeking efficient coding assistance.

aire
Aire is a modern Laravel form builder with a focus on expressive and beautiful code. It allows easy configuration of form components using fluent method calls or Blade components. Aire supports customization through config files and custom views, data binding with Eloquent models or arrays, method spoofing, CSRF token injection, server-side and client-side validation, and translations. It is designed to run on Laravel 5.8.28 and higher, with support for PHP 7.1 and higher. Aire is actively maintained and under consideration for additional features like read-only plain text, cross-browser support for custom checkboxes and radio buttons, support for Choices.js or similar libraries, improved file input handling, and better support for content prepending or appending to inputs.

aiomisc
aiomisc is a Python library that provides a collection of utility functions and classes for working with asynchronous I/O in a more intuitive and efficient way. It offers features like worker pools, connection pools, circuit breaker pattern, and retry mechanisms to make asyncio code more robust and easier to maintain. The library simplifies the architecture of software using asynchronous I/O, making it easier for developers to write reliable and scalable asynchronous code.

web-llm
WebLLM is a modular and customizable JavaScript package that brings language model chats directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU. WebLLM is fully compatible with the OpenAI API: you can use the same OpenAI API on any open-source model locally, with functionalities including JSON mode, function calling, streaming, and more. This opens up many opportunities to build AI assistants for everyone and enables privacy while enjoying GPU acceleration.
For similar jobs

AirGo
AirGo is a simple, easy-to-use proxy service management system with a separated front end and back end, supporting multiple users and multiple protocols. It supports vless, vmess, shadowsocks, and hysteria2.

mosec
Mosec is a high-performance and flexible model serving framework for building ML model-enabled backends and microservices. It bridges the gap between any machine learning model you just trained and an efficient online service API.

* **Highly performant**: web layer and task coordination built with Rust 🦀, which offers blazing speed in addition to efficient CPU utilization powered by async I/O
* **Ease of use**: user interface purely in Python 🐍, by which users can serve their models in an ML framework-agnostic manner using the same code as they do for offline testing
* **Dynamic batching**: aggregate requests from different users for batched inference and distribute results back
* **Pipelined stages**: spawn multiple processes for pipelined stages to handle CPU/GPU/IO mixed workloads
* **Cloud friendly**: designed to run in the cloud, with model warmup, graceful shutdown, and Prometheus monitoring metrics, easily managed by Kubernetes or any container orchestration system
* **Do one thing well**: focus on the online serving part; users can pay attention to model optimization and business logic
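If mosec's worker API is as described in its documentation (``Worker`` subclasses implementing ``forward()``, registered on a ``Server``), a minimal echo service might look like this sketch:

.. code:: python

    from mosec import Server, Worker

    class Echo(Worker):
        def forward(self, data: dict) -> dict:
            # forward() receives the deserialized request payload and
            # returns the response payload.
            return data

    if __name__ == '__main__':
        server = Server()
        server.append_worker(Echo)
        server.run()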

llm-code-interpreter
The 'llm-code-interpreter' repository is a deprecated plugin that provides a code interpreter on steroids for ChatGPT by E2B. It gives ChatGPT access to a sandboxed cloud environment with capabilities like running any code, accessing Linux OS, installing programs, using filesystem, running processes, and accessing the internet. The plugin exposes commands to run shell commands, read files, and write files, enabling various possibilities such as running different languages, installing programs, starting servers, deploying websites, and more. It is powered by the E2B API and is designed for agents to freely experiment within a sandboxed environment.

pezzo
Pezzo is a fully cloud-native and open-source LLMOps platform that allows users to observe and monitor AI operations, troubleshoot issues, save costs and latency, collaborate, manage prompts, and deliver AI changes instantly. It supports various clients for prompt management, observability, and caching. Users can run the full Pezzo stack locally using Docker Compose, with prerequisites including Node.js 18+, Docker, and a GraphQL Language Feature Support VSCode Extension. Contributions are welcome, and the source code is available under the Apache 2.0 License.

learn-generative-ai
Learn Cloud Applied Generative AI Engineering (GenEng) is a course focusing on the application of generative AI technologies in various industries. The course covers topics such as the economic impact of generative AI, the role of developers in adopting and integrating generative AI technologies, and the future trends in generative AI. Students will learn about tools like OpenAI API, LangChain, and Pinecone, and how to build and deploy Large Language Models (LLMs) for different applications. The course also explores the convergence of generative AI with Web 3.0 and its potential implications for decentralized intelligence.

gcloud-aio
This repository contains shared codebase for two projects: gcloud-aio and gcloud-rest. gcloud-aio is built for Python 3's asyncio, while gcloud-rest is a threadsafe requests-based implementation. It provides clients for Google Cloud services like Auth, BigQuery, Datastore, KMS, PubSub, Storage, and Task Queue. Users can install the library using pip and refer to the documentation for usage details. Developers can contribute to the project by following the contribution guide.
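A hedged sketch of the asyncio client (the bucket and object names are placeholders, and the ``Storage`` API shape is assumed from gcloud-aio's storage package):

.. code:: python

    import asyncio
    import aiohttp
    from gcloud.aio.storage import Storage

    async def main():
        async with aiohttp.ClientSession() as session:
            # Credentials are discovered from the environment by default.
            storage = Storage(session=session)
            blob = await storage.download('my-bucket', 'path/to/object.txt')
            print(len(blob), 'bytes')

    asyncio.run(main())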

fluid
Fluid is an open source Kubernetes-native Distributed Dataset Orchestrator and Accelerator for data-intensive applications, such as big data and AI applications. It implements dataset abstraction, scalable cache runtime, automated data operations, elasticity and scheduling, and is runtime platform agnostic. Key concepts include Dataset and Runtime. Prerequisites include Kubernetes version > 1.16, Golang 1.18+, and Helm 3. The tool offers features like accelerating remote file accessing, machine learning, accelerating PVC, preloading dataset, and on-the-fly dataset cache scaling. Contributions are welcomed, and the project is under the Apache 2.0 license with a vendor-neutral approach.

aiges
AIGES is a core component of the Athena Serving Framework, designed as a universal encapsulation tool for AI developers to deploy AI algorithm models and engines quickly. By integrating AIGES, you can deploy AI algorithm models and engines rapidly and host them on the Athena Serving Framework, utilizing supporting auxiliary systems for networking, distribution strategies, data processing, etc. The Athena Serving Framework aims to accelerate the cloud service of AI algorithm models and engines, providing multiple guarantees for cloud service stability through cloud-native architecture. You can efficiently and securely deploy, upgrade, scale, operate, and monitor models and engines without focusing on underlying infrastructure and service-related development, governance, and operations.