aiohttp-cors
CORS support for aiohttp
Stars: 211
The aiohttp_cors library provides Cross Origin Resource Sharing (CORS) support for aiohttp, an asyncio-powered asynchronous HTTP server. CORS allows overriding the Same-origin policy for specific resources, enabling web pages to access resources from different origins. The library helps configure CORS settings for resources and routes in aiohttp applications, allowing control over origins, credentials passing, headers, and preflight requests.
README:
The ``aiohttp_cors`` library implements
`Cross Origin Resource Sharing (CORS) <cors_>`__
support for `aiohttp <aiohttp_>`__,
an asyncio-powered asynchronous HTTP server.
Jump directly to the Usage_ part to see how to use ``aiohttp_cors``.
The web security model is tightly connected to the
`Same-origin policy (SOP) <sop_>`__.
In short: web pages cannot *Read* resources whose origin
doesn't match the origin of the page, but can *Embed* (or *Execute*)
such resources and have a limited ability to *Write* them.
The origin of a page is defined in the `Standard <cors_>`__ as the tuple
``(schema, host, port)``
(there is a notable exception with Internet Explorer: it doesn't use the port
to define the origin, but uses its own
`Security Zones <https://msdn.microsoft.com/en-us/library/ms537183.aspx>`__).
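The following short sketch (not part of ``aiohttp_cors``, only an illustration
of the definition above) shows how the ``(schema, host, port)`` origin tuple of
a URL can be computed and compared in Python:

.. code-block:: python

    # Illustrative helper: compute the (schema, host, port) origin tuple of
    # a URL and compare origins the way the SOP/CORS model does.
    from urllib.parse import urlsplit

    def origin(url):
        parts = urlsplit(url)
        # Fall back to the scheme's default port when none is given, so
        # "https://a.example" and "https://a.example:443" compare equal.
        default_ports = {"http": 80, "https": 443}
        port = parts.port or default_ports.get(parts.scheme)
        return (parts.scheme, parts.hostname, port)

    same = origin("https://client.example.com/page")
    also_same = origin("https://client.example.com:443/other")
    assert same == also_same                     # same schema, host and port

    assert origin("https://x.example.com") != origin("http://x.example.com")    # schema differs
    assert origin("https://x.example.com") != origin("https://api.example.com") # host differs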
*Can Embed* means that a resource from another origin can be embedded into
the page,
e.g. by using ``<script src="...">``, ``<img src="...">``,
or ``<iframe src="...">``.
*Cannot Read* means that the source of a resource from another origin cannot be
obtained by the page
(source here means any information that would allow reconstructing the resource).
E.g. the page can *Embed* an image with ``<img src="...">``,
but it can't get information about specific pixels, so the page can't
reconstruct the original image
(though some information about the other resource may still be leaked:
e.g. the page can read the embedded image's dimensions).
*Limited ability to Write* means that the page can send POST requests to
another origin with a limited set of Content-Type values and headers.
The restriction on *Reading* a resource from another origin is related to the
authentication mechanism used by browsers: when the browser reads (downloads)
a resource, it automatically sends all security credentials that the user has
previously authorized for that resource (e.g. cookies, HTTP Basic Authentication).
For example, if *Read* were allowed and the user were authenticated
in some internet banking application,
a malicious page would be able to embed the internet banking page in an iframe
(since authentication is done by the browser, it would be embedded as if
the user had directly navigated to the internet banking page)
and then read the user's private information by reading the source of the
embedded page
(which may be not only source code, but, for example,
a screenshot of the embedded internet banking page).
`Cross-origin Resource Sharing (CORS) <cors_>`__ allows overriding the
SOP for specific resources.
In short, CORS works in the following way.
When the page ``https://client.example.com`` requests (*Reads*) the resource
``https://server.example.com/resource`` that has a different origin,
the browser implicitly appends an ``Origin: https://client.example.com`` header
to the HTTP request,
effectively asking the server to grant read permission for
the resource to the ``https://client.example.com`` page::

    GET /resource HTTP/1.1
    Origin: https://client.example.com
    Host: server.example.com

If the server allows access from the page to the resource, it responds with the
resource and an ``Access-Control-Allow-Origin: https://client.example.com``
HTTP header
(optionally also exposing custom server headers to the page and
enabling use of the user's credentials on the server resource)::

    Access-Control-Allow-Origin: https://client.example.com
    Access-Control-Allow-Credentials: true
    Access-Control-Expose-Headers: X-Server-Header

The browser checks whether the server responded with a proper
``Access-Control-Allow-Origin`` header and accordingly allows or denies
the page access to the obtained resource.
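For illustration only (the handler and constant names below are made up, and
this is not how ``aiohttp_cors`` is implemented internally), a plain aiohttp
handler could grant such read access by setting the response headers itself;
``aiohttp_cors`` exists to automate exactly this kind of bookkeeping:

.. code-block:: python

    import asyncio

    from aiohttp import web

    # Hypothetical constant for this sketch.
    ALLOWED_ORIGIN = "https://client.example.com"

    @asyncio.coroutine
    def resource_handler(request):
        response = web.Response(text="resource body",
                                headers={"X-Server-Header": "value"})
        origin = request.headers.get("Origin")
        if origin == ALLOWED_ORIGIN:
            # Grant the requesting page read access to this response.
            response.headers["Access-Control-Allow-Origin"] = origin
            response.headers["Access-Control-Allow-Credentials"] = "true"
            response.headers["Access-Control-Expose-Headers"] = "X-Server-Header"
        return response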
The CORS specification is designed so that servers that are not aware of CORS
will not expose any additional information beyond what is allowed by the SOP.
To request resources with custom headers or with HTTP methods
(e.g. PUT, DELETE) that are not allowed by the SOP,
a CORS-enabled browser first sends a *preflight request* to the
resource using the OPTIONS method, in which it queries access to the resource
with the specific method and headers::

    OPTIONS / HTTP/1.1
    Origin: https://client.example.com
    Access-Control-Request-Method: PUT
    Access-Control-Request-Headers: X-Client-Header

The CORS-enabled server responds whether the requested method is allowed and
which of the specified headers are allowed::

    Access-Control-Allow-Origin: https://client.example.com
    Access-Control-Allow-Credentials: true
    Access-Control-Allow-Methods: PUT
    Access-Control-Allow-Headers: X-Client-Header
    Access-Control-Max-Age: 3600

The browser checks the response to the preflight request and, if the actual
request is allowed, performs the actual request.
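Again purely as an illustration of the protocol (the names below are made up;
the library described in this README generates this OPTIONS handling for you
from the configured options), a hand-written preflight responder in aiohttp
might look roughly like this:

.. code-block:: python

    import asyncio

    from aiohttp import web

    # Hypothetical constants for this sketch.
    ALLOWED_ORIGIN = "https://client.example.com"
    ALLOWED_METHODS = {"PUT"}
    ALLOWED_HEADERS = {"X-Client-Header"}

    @asyncio.coroutine
    def preflight_handler(request):
        origin = request.headers.get("Origin")
        method = request.headers.get("Access-Control-Request-Method")
        if origin != ALLOWED_ORIGIN or method not in ALLOWED_METHODS:
            # Not an allowed cross-origin request: answer without CORS
            # headers, so the browser will deny the actual request.
            return web.Response(status=403)
        return web.Response(headers={
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
            "Access-Control-Allow-Methods": ", ".join(ALLOWED_METHODS),
            "Access-Control-Allow-Headers": ", ".join(ALLOWED_HEADERS),
            "Access-Control-Max-Age": "3600",
        })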
You can install ``aiohttp_cors`` as a typical Python library from PyPI or
from git:

.. code-block:: bash

    $ pip install aiohttp_cors

Note that ``aiohttp_cors`` requires Python >= 3.4.1 and
aiohttp >= 1.1.
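As a quick, purely illustrative sanity check (not part of the library's
documentation), you can verify after installation that the interpreter and
aiohttp versions meet these requirements:

.. code-block:: python

    # Illustrative sanity check of the installed environment.
    import sys

    import aiohttp
    import aiohttp_cors  # imported only to verify the installation

    print("Python:", sys.version.split()[0])   # should be >= 3.4.1
    print("aiohttp:", aiohttp.__version__)     # should be >= 1.1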
To use ``aiohttp_cors`` you need to configure the application and
enable CORS on the
`resources and routes <https://aiohttp.readthedocs.org/en/stable/web.html#resources-and-routes>`__
that you want to expose:
.. code-block:: python

    import asyncio

    from aiohttp import web

    import aiohttp_cors


    @asyncio.coroutine
    def handler(request):
        return web.Response(
            text="Hello!",
            headers={
                "X-Custom-Server-Header": "Custom data",
            })


    app = web.Application()

    # `aiohttp_cors.setup` returns `aiohttp_cors.CorsConfig` instance.
    # The `cors` instance will store CORS configuration for the
    # application.
    cors = aiohttp_cors.setup(app)

    # To enable CORS processing for specific route you need to add
    # that route to the CORS configuration object and specify its
    # CORS options.
    resource = cors.add(app.router.add_resource("/hello"))
    route = cors.add(
        resource.add_route("GET", handler), {
            "http://client.example.org": aiohttp_cors.ResourceOptions(
                allow_credentials=True,
                expose_headers=("X-Custom-Server-Header",),
                allow_headers=("X-Requested-With", "Content-Type"),
                max_age=3600,
            )
        })
Each route has its own CORS configuration passed to the ``CorsConfig.add()``
method.
The CORS configuration is a mapping from origins to options for those origins.
In the example above CORS is configured for the resource under path ``/hello``
and HTTP method GET, and in the context of CORS:

- This resource will be available using CORS only to the
  ``http://client.example.org`` origin.

- Passing of credentials to this resource will be allowed.

- The resource will expose the ``X-Custom-Server-Header`` server header
  to the client.

- The client will be allowed to pass the ``X-Requested-With`` and
  ``Content-Type`` headers to the server.

- Preflight requests will be allowed to be cached by the client for
  3600 seconds.
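The example above only configures CORS; to actually serve the application you
start aiohttp as usual. A minimal sketch (the host and port are arbitrary
choices):

.. code-block:: python

    # Serve the application configured above; aiohttp_cors hooks into
    # request handling only for the resources added to `cors`.
    web.run_app(app, host="127.0.0.1", port=8080)

A page served from ``http://client.example.org`` could then fetch
``http://127.0.0.1:8080/hello`` and the browser would see the
``Access-Control-Allow-Origin`` header in the response.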
A resource will be available only to the explicitly specified origins.
You can specify "all other origins" using the special ``*`` origin:
.. code-block:: python

    cors.add(route, {
            "*":
                aiohttp_cors.ResourceOptions(allow_credentials=False),
            "http://client.example.org":
                aiohttp_cors.ResourceOptions(allow_credentials=True),
        })
Here the resource specified by ``route`` will be available to all origins
without credentials passing, and with credentials passing allowed only for
``http://client.example.org``.
By default ``ResourceOptions`` is constructed without any allowed CORS
options.
This means that the resource will be available using CORS to the specified
origin, but the client will not be allowed to send credentials,
send non-simple headers, or read non-simple headers from the server.
To enable sending or receiving all headers you can specify the special value
``*`` instead of a sequence of headers:
.. code-block:: python

    cors.add(route, {
            "http://client.example.org":
                aiohttp_cors.ResourceOptions(
                    expose_headers="*",
                    allow_headers="*"),
        })
You can specify default CORS-enabled resource options using
``aiohttp_cors.setup()``'s ``defaults`` argument:
.. code-block:: python

    cors = aiohttp_cors.setup(app, defaults={
            # Allow all to read all CORS-enabled resources from
            # http://client.example.org.
            "http://client.example.org": aiohttp_cors.ResourceOptions(),
        })

    # Enable CORS on routes.

    # According to defaults POST and PUT will be available only to
    # "http://client.example.org".
    hello_resource = cors.add(app.router.add_resource("/hello"))
    cors.add(hello_resource.add_route("POST", handler_post))
    cors.add(hello_resource.add_route("PUT", handler_put))

    # In addition to "http://client.example.org", GET requests will be
    # allowed from the "http://other-client.example.org" origin.
    cors.add(hello_resource.add_route("GET", handler), {
            "http://other-client.example.org":
                aiohttp_cors.ResourceOptions(),
        })

    # CORS will be enabled only on the resources added to `CorsConfig`,
    # so the following resource will NOT be CORS-enabled.
    app.router.add_route("GET", "/private", handler)
You can also specify default options for resources:
.. code-block:: python

    # Allow POST and PUT requests from "http://client.example.org" origin.
    hello_resource = cors.add(app.router.add_resource("/hello"), {
            "http://client.example.org": aiohttp_cors.ResourceOptions(),
        })
    cors.add(hello_resource.add_route("POST", handler_post))
    cors.add(hello_resource.add_route("PUT", handler_put))
The resource CORS configuration allows using the ``allow_methods`` option,
which explicitly specifies the list of allowed HTTP methods for an origin
(or ``*`` for all HTTP methods).
When this option is used, it is not required to add all resource routes to
the CORS configuration object:
.. code-block:: python

    # Allow POST and PUT requests from "http://client.example.org" origin.
    hello_resource = cors.add(app.router.add_resource("/hello"), {
            "http://client.example.org":
                aiohttp_cors.ResourceOptions(allow_methods=["POST", "PUT"]),
        })

    # No need to add POST and PUT routes into CORS configuration object.
    hello_resource.add_route("POST", handler_post)
    hello_resource.add_route("PUT", handler_put)

    # Still you can add additional methods to CORS configuration object:
    cors.add(hello_resource.add_route("DELETE", handler_delete))
Here is an example of how to enable CORS for all origins with all CORS features:
.. code-block:: python

    cors = aiohttp_cors.setup(app, defaults={
        "*": aiohttp_cors.ResourceOptions(
                allow_credentials=True,
                expose_headers="*",
                allow_headers="*",
            )
    })

    # Add all resources to `CorsConfig`.
    resource = cors.add(app.router.add_resource("/hello"))
    cors.add(resource.add_route("GET", handler_get))
    cors.add(resource.add_route("PUT", handler_put))
    cors.add(resource.add_route("POST", handler_post))
    cors.add(resource.add_route("DELETE", handler_delete))
The old routes API is also supported: you can use ``router.add_route`` and
``router.register_route`` as before, though this usage is discouraged:
.. code-block:: python

    cors.add(
        app.router.add_route("GET", "/hello", handler), {
            "http://client.example.org": aiohttp_cors.ResourceOptions(
                allow_credentials=True,
                expose_headers=("X-Custom-Server-Header",),
                allow_headers=("X-Requested-With", "Content-Type"),
                max_age=3600,
            )
        })
You can enable CORS for all added routes by iterating over the routes list in
the router:
.. code-block:: python

    # Setup application routes.
    app.router.add_route("GET", "/hello", handler_get)
    app.router.add_route("PUT", "/hello", handler_put)
    app.router.add_route("POST", "/hello", handler_post)
    app.router.add_route("DELETE", "/hello", handler_delete)

    # Configure default CORS settings.
    cors = aiohttp_cors.setup(app, defaults={
        "*": aiohttp_cors.ResourceOptions(
                allow_credentials=True,
                expose_headers="*",
                allow_headers="*",
            )
    })

    # Configure CORS on all routes.
    for route in list(app.router.routes()):
        cors.add(route)
You can also use ``CorsViewMixin`` on ``web.View``:
.. code-block:: python

    import asyncio

    from aiohttp import web

    import aiohttp_cors
    from aiohttp_cors import CorsViewMixin, ResourceOptions, custom_cors


    class CorsView(web.View, CorsViewMixin):

        cors_config = {
            "*": ResourceOptions(
                allow_credentials=True,
                allow_headers="X-Request-ID",
            )
        }

        @asyncio.coroutine
        def get(self):
            return web.Response(text="Done")

        @custom_cors({
            "*": ResourceOptions(
                allow_credentials=True,
                allow_headers="*",
            )
        })
        @asyncio.coroutine
        def post(self):
            return web.Response(text="Done")


    cors = aiohttp_cors.setup(app, defaults={
        "*": aiohttp_cors.ResourceOptions(
                allow_credentials=True,
                expose_headers="*",
                allow_headers="*",
            )
    })

    cors.add(
        app.router.add_route("*", "/resource", CorsView),
        webview=True)
TODO: fill this
To set up the development environment:

.. code-block:: bash

    git clone https://github.com/aio-libs/aiohttp_cors.git .
    python3 -m venv env
    source env/bin/activate
    pip install -r requirements-dev.txt
To run the tests:

.. code-block:: bash

    tox

To run only the runtime tests in the current environment:

.. code-block:: bash

    py.test

To run only the static code analysis checks:

.. code-block:: bash

    tox -e check
To run Selenium tests with the Firefox web driver you need to install Firefox.

To run Selenium tests with the Chromium web driver you need to:

- Install the Chrome driver. On Ubuntu 14.04 it's in the
  ``chromium-chromedriver`` package.

- Either add ``chromedriver`` to ``PATH`` or set the
  ``WEBDRIVER_CHROMEDRIVER_PATH`` environment variable to the ``chromedriver``
  location, e.g. on Ubuntu 14.04
  ``WEBDRIVER_CHROMEDRIVER_PATH=/usr/lib/chromium-browser/chromedriver``.
To release version vA.B.C from the current state of the master branch
you need to:

- Create a local branch ``vA.B.C``.

- In ``CHANGES.rst`` set the release date to today.

- In ``aiohttp_cors/__about__.py`` change the version from ``A.B.Ca0``
  to ``A.B.C``.

- Create a pull request with the ``vA.B.C`` branch and wait for all checks
  to finish successfully (Travis and Appveyor).

- Merge the pull request into master.

- Update and check out the ``master`` branch.

- Create and push a tag for the release version to GitHub:

  .. code-block:: bash

      git tag vA.B.C
      git push --tags

Now Travis should run the tests again, and build and deploy the wheel to PyPI.
If the Travis release doesn't work for some reason, use the following steps
for a manual release upload.

- Install fresh versions of setuptools and pip.
  Install ``wheel`` for building wheels and ``twine`` for uploading to PyPI:

  .. code-block:: bash

      pip install -U pip setuptools twine wheel

- Configure PyPI credentials in ``~/.pypirc``.

- Build the distribution:

  .. code-block:: bash

      rm -rf build dist; python setup.py sdist bdist_wheel

- Upload the new release to PyPI:

  .. code-block:: bash

      twine upload dist/*

- Edit the release description on GitHub if needed.

- Announce the new release on the aio-libs mailing list:
  https://groups.google.com/forum/#!forum/aio-libs.
Post-release steps:

- In ``CHANGES.rst`` add a template for the next release.

- In ``aiohttp_cors/__about__.py`` change the version from ``A.B.C``
  to ``A.(B + 1).0a0``.
Please report bugs, issues, feature requests, etc. on
`GitHub <https://github.com/aio-libs/aiohttp_cors/issues>`__.

Copyright 2015 Vladimir Rutsky [email protected].

Licensed under the
`Apache License, Version 2.0 <https://www.apache.org/licenses/LICENSE-2.0>`__,
see the LICENSE file for details.
.. _cors: http://www.w3.org/TR/cors/

.. _aiohttp: https://github.com/KeepSafe/aiohttp/

.. _sop: https://en.wikipedia.org/wiki/Same-origin_policy