
aiven-client
aiven-client (avn) is the official command-line client for Aiven
Stars: 86

Aiven Client is the official command-line client for Aiven, a next-generation managed cloud services platform. It focuses on ease of adoption, high fault resilience, customer's peace of mind, and advanced features at competitive price points. The client allows users to interact with Aiven services through a command-line interface, providing functionalities such as authentication, project management, service exploration, service launching, service integrations, custom files management, team management, OAuth2 client configuration, autocomplete support, and auth helpers for connecting to services. Users can perform various tasks related to managing cloud services efficiently using the Aiven Client.
README:
Aiven Client |BuildStatus|_
###########################

.. |BuildStatus| image:: https://github.com/aiven/aiven-client/workflows/Build%20Aiven%20Client/badge.svg?branch=main
.. _BuildStatus: https://github.com/aiven/aiven-client/actions
Aiven is a next-generation managed cloud services platform. Its focus is on ease of adoption, high fault resilience, customer's peace of mind and advanced features at competitive price points. See https://aiven.io/ for more information about the backend service.
aiven-client (``avn``) is the official command-line client for Aiven.
.. contents::
.. _platform-requirements:

Platform requirements
=====================

- Python 3.8 or later
- Requests_
- For Windows and OSX, certifi_ is also needed

.. _Requests: http://www.python-requests.org/
.. _certifi: https://github.com/certifi/python-certifi
.. _installation:

Installation
============

PyPI installation is the recommended route for most users::
$ python3 -m pip install aiven-client
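If you prefer to keep the client isolated from your system packages, installing into a virtual environment also works (a minimal sketch; the environment name is arbitrary)::

$ python3 -m venv aiven-venv
$ . aiven-venv/bin/activate
$ pip install aiven-client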
It is also possible to build an RPM::
$ make rpm
To check that the tool is installed and working, run it without arguments::
$ avn
If you see usage output, you're all set.
Note: On Windows you may need to use ``python3 -m aiven.client`` instead of ``avn``.
The simplest way to use Aiven CLI is to authenticate with the username and password you use on Aiven::
$ avn user login [email protected]
The command will prompt you for your password.
You can also use an access token generated in the Aiven Console::
$ avn user login [email protected] --token
You will be prompted for your access token as above.
If you are registered on Aiven through the AWS or GCP marketplace, then you need to specify an additional argument ``--tenant``. Currently the supported values are ``aws`` and ``gcp``, for example::
$ avn user login [email protected] --tenant aws
.. _help-command:
.. _basic-usage:

Basic usage
===========

Some handy hints that work with all commands:

- The ``avn help`` command shows all commands and can search for a command, so for example ``avn help kafka topic`` shows commands with kafka and topic in their description.
- Passing ``-h`` or ``--help`` gives help output for any command. Examples: ``avn --help`` or ``avn service --help``.
- All commands will output the raw REST API JSON response with ``--json``; we use this extensively ourselves in conjunction with `jq <https://stedolan.github.io/jq/>`__ (see the sketch below).
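For example, a sketch of combining ``--json`` with jq to pull out just the service names (we assume a ``service_name`` field in the REST API response; inspect the raw ``--json`` output to confirm the fields available)::

$ avn service list --json | jq -r '.[].service_name'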
.. _login-and-users:

Logins and tokens
=================

Login::
$ avn user login [email protected]
Logout (revokes current access token, other sessions remain valid)::
$ avn user logout
Expire all authentication tokens for your user and log out all web console sessions, etc. You will need to log in again after this::
$ avn user tokens-expire
Manage individual access tokens::
$ avn user access-token list
$ avn user access-token create --description <usage_description> [--max-age-seconds <max_age_seconds>] [--extend-when-used]
$ avn user access-token update <token|token_prefix> --description <new_description>
$ avn user access-token revoke <token|token_prefix>
Note that the system has hard limits for the number of tokens you can create. If you're permanently done using a token, you should always use the ``user access-token revoke`` operation to revoke it so that it does not count towards the quota.
Alternatively, you can add two JSON files. First, create a default config in ``~/.config/aiven/aiven-credentials.json`` containing the JSON with an ``auth_token``::

{"auth_token": "ABC1+123...TOKEN==", "user_email": "[email protected]"}

Second, create a default config in ``~/.config/aiven/aiven-client.json`` containing the JSON with the ``default_project``::

{"default_project": "yourproject-abcd"}
.. _clouds:

Clouds
======

List available cloud regions::
$ avn cloud list
.. _projects:

Projects
========

List projects you are a member of::
$ avn project list
Project commands operate on the currently active project or the project specified with the ``--project NAME`` switch. The active project can be changed with the ``project switch`` command::
$ avn project switch <project>
Show active project's details::
$ avn project details
Create a project and set the default cloud region for it::
$ avn project create myproject --cloud aws-us-east-1
Delete an empty project::
$ avn project delete myproject
List authorized users in a project::
$ avn project user-list
Invite an existing Aiven user to a project::
$ avn project user-invite [email protected]
Remove a user from the project::
$ avn project user-remove [email protected]
View project management event log::
$ avn events
.. _services:

Services
========

List services (of the active project)::
$ avn service list
List services in a specific project::
$ avn service list --project proj2
List only a specific service::
$ avn service list db1
Verbose list (includes connection information, etc.)::
$ avn service list db1 -v
Full service information in JSON, as it is returned by the Aiven REST API::
$ avn service list db1 --json
Only a specific field in the output, custom formatting::
$ avn service list db1 --format "The service is at {service_uri}"
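The format string accepts any field of the JSON response, so several can be combined (``service_name`` and ``state`` are fields we assume from the API output; verify with ``--json``)::

$ avn service list db1 --format "{service_name} {state} {service_uri}"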
View service log entries (most recent entries and keep on following logs, other options can be used to get history)::
$ avn service logs db1 -f
.. _launching-services:

Launching services
==================

View available service plans::
$ avn service plans
Launch a PostgreSQL service::
$ avn service create mydb -t pg --plan hobbyist
View service type specific options, including examples on how to set them::
$ avn service types -v
Launch a PostgreSQL service of a specific version (see above command)::
$ avn service create mydb96 -t pg --plan hobbyist -c pg_version=9.6
Update a service's list of allowed client IP addresses. Note that a list of multiple values is provided as a comma separated list::
$ avn service update mydb96 -c ip_filter=10.0.1.0/24,10.0.2.0/24,1.2.3.4/32
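To double-check the applied value, you can read the user config back with ``--json`` (a sketch; we assume the list lands under ``user_config.ip_filter`` in the response)::

$ avn service get mydb96 --json | jq '.user_config.ip_filter'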
Open psql client and connect to the PostgreSQL service (also available for InfluxDB)::
$ avn service cli mydb96
Update a service to a different plan AND move it to another cloud region::
$ avn service update mydb --plan startup-4 --cloud aws-us-east-1
Power off a service::
$ avn service update mydb --power-off
Power on a service::
$ avn service update mydb --power-on
Terminate a service (all data will be gone!)::
$ avn service terminate mydb
Some service types support multiple users (e.g. PostgreSQL database users).
List, add and delete service users::
$ avn service user-list <service>
$ avn service user-create --username <username> <service>
$ avn service user-delete --username <username> <service>
For Valkey services it's possible to create users with ACLs_::
$ avn service user-create --username new_user --valkey-acl-keys="prefix* another_key" --valkey-acl-commands="+set" --valkey-acl-categories="-@all +@admin" --valkey-acl-channels="prefix* some_chan" my-valkey-service
.. _ACLs: https://valkey.io/docs/topics/acl
Service users are created with strong random passwords.
Service integrations
====================

`Service integrations <https://aiven.io/service-integrations>`_ allow you to link Aiven services to other Aiven services or to services offered by other companies, for example for logging. Some examples of various different integrations: `Google cloud logging`_, `AWS Cloudwatch logging`_, `Remote syslog integration`_ and `Getting started with Datadog`_.

.. _Google cloud logging: https://help.aiven.io/en/articles/4209837-sending-service-logs-to-google-cloud-logging
.. _AWS Cloudwatch logging: https://help.aiven.io/en/articles/4134821-sending-service-logs-to-aws-cloudwatch
.. _Remote syslog integration: https://help.aiven.io/en/articles/2933115-remote-syslog-integration
.. _Getting started with Datadog: https://help.aiven.io/en/articles/1759208-getting-started-with-datadog
List service integration endpoints::
$ avn service integration-endpoint-list
List all available integration endpoint types for a given project::
$ avn service integration-endpoint-types-list --project <project>
Create a service integration endpoint::
$ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> --user-config-json <user configuration as json>
$ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> -c <KEY=VALUE type user configuration>
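As an illustrative sketch, creating a remote syslog endpoint might look like this (the endpoint name, server and config keys shown are assumptions; use ``avn service integration-endpoint-types-list`` to see the endpoint types and settings your project actually supports)::

$ avn service integration-endpoint-create --project myproject --endpoint-name my-syslog --endpoint-type rsyslog -c server=logs.example.com -c port=514 -c format=rfc5424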
Update a service integration endpoint::
$ avn service integration-endpoint-update --project <project> --user-config-json <user configuration as json> <endpoint id>
$ avn service integration-endpoint-update --project <project> -c <KEY=VALUE type user configuration> <endpoint id>
Delete a service integration endpoint::
$ avn service integration-endpoint-delete --project <project> <endpoint_id>
List service integrations::
$ avn service integration-list <service name>
List all available integration types for a given project::
$ avn service integration-types-list --project <project>
Create a service integration::
$ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> --user-config-json <user configuration as json>
$ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> -c <KEY=VALUE type user configuration>
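For example, shipping one service's metrics to another Aiven service might look like the following sketch (the service names are placeholders, and we assume ``metrics`` appears in the output of ``avn service integration-types-list``)::

$ avn service integration-create --project myproject -t metrics -s mypg -d myinfluxdb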
Update a service integration::
$ avn service integration-update --project <project> --user-config-json <user configuration as json> <integration_id>
$ avn service integration-update --project <project> -c <KEY=VALUE type user configuration> <integration_id>
Delete a service integration::
$ avn service integration-delete --project <project> <integration_id>
Custom files
============

Listing files::
$ avn service custom-file list --project <project> <service_name>
Reading a file::
$ avn service custom-file get --project <project> --file_id <file_id> [--target_filepath <file_path>] [--stdout_write] <service_name>
Uploading new files::
$ avn service custom-file upload --project <project> --file_type <file_type> --file_path <file_path> --file_name <file_name> <service_name>
Updating existing files::
$ avn service custom-file update --project <project> --file_path <file_path> --file_id <file_id> <service_name>
.. _teams:

Teams
=====

List account teams::
$ avn account team list <account_id>
Create a team::
$ avn account team create --team-name <team_name> <account_id>
Delete a team::
$ avn account team delete --team-id <team_id> <account_id>
Attach team to a project::
$ avn account team project-attach --team-id <team_id> --project <project_name> <account_id> --team-type <admin|developer|operator|read_only>
Detach team from project::
$ avn account team project-detach --team-id <team_id> --project <project_name> <account_id>
List projects associated with the team::
$ avn account team project-list --team-id <team_id> <account_id>
List members of the team::
$ avn account team user-list --team-id <team_id> <account_id>
Invite a new member to the team::
$ avn account team user-invite --team-id <team_id> <account_id> [email protected]
See the list of pending invitations::
$ avn account team user-list-pending --team-id <team_id> <account_id>
Remove user from the team::
$ avn account team user-delete --team-id <team_id> --user-id <user_id> <account_id>
.. _oauth2-clients:

OAuth2 clients
==============

List configured OAuth2 clients::
$ avn account oauth2-client list <account_id>
Get a configured OAuth2 client's configuration::
$ avn account oauth2-client list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client::
$ avn account oauth2-client create <account_id> --name <app_name> -d <app_description> --redirect-uri <redirect_uri>
Delete an OAuth2 client::
$ avn account oauth2-client delete <account_id> --oauth2-client-id <client_id>
List an OAuth2 client's redirect URIs::
$ avn account oauth2-client redirect-list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client redirect URI::
$ avn account oauth2-client redirect-create <account_id> --oauth2-client-id <client_id> --redirect-uri <redirect_uri>
Delete an OAuth2 client redirect URI::
$ avn account oauth2-client redirect-delete <account_id> --oauth2-client-id <client_id> --redirect-uri-id <redirect_uri_id>
List an OAuth2 client's secrets::
$ avn account oauth2-client secret-list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client secret::
$ avn account oauth2-client secret-create <account_id> --oauth2-client-id <client_id>
Delete an OAuth2 client's secret::
$ avn account oauth2-client secret-delete <account_id> --oauth2-client-id <client_id> --secret-id <secret_id>
.. _shell-completions:

Shell completions
=================

avn supports shell completions. It requires an optional dependency, argcomplete. Install it::

$ python3 -m pip install argcomplete

To use completions in bash, add the following line to ``~/.bashrc``::
eval "$(register-python-argcomplete avn)"
For more information (including completions usage in other shells) see https://kislyuk.github.io/argcomplete/.
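As a sketch for zsh users (per argcomplete's documentation; adjust to your setup), the bash completion machinery can be loaded first in ``~/.zshrc``::

autoload -U bashcompinit
bashcompinit
eval "$(register-python-argcomplete avn)"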
Auth helpers for connecting to services
=======================================

When you spin up a new service, you'll want to connect to it. The ``--json`` option combined with the `jq <https://stedolan.github.io/jq/>`_ utility is a good way to grab the fields you need for your specific service. Try this to get the connection string::

$ avn service get <service> --json | jq ".service_uri"
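For a PostgreSQL service, the URI can be fed straight into psql (a sketch; ``jq -r`` strips the JSON quotes and the service name is a placeholder)::

$ psql "$(avn service get <service> --json | jq -r '.service_uri')"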
Each project has its own CA cert, and other services (notably Kafka) use mutual TLS, so for those you will also need the ``service.key`` and ``service.cert`` files. Download all three files to the local directory::

$ avn service user-creds-download <service> --username avnadmin
For working with `kcat <https://github.com/edenhill/kcat>`_ (see also our `help article <https://developer.aiven.io/docs/products/kafka/howto/kcat.html>`_) or the command-line tools that ship with Kafka itself, a keystore and truststore are needed. By specifying which user's creds to use, and a secret, you can generate these via ``avn`` too::

$ avn service user-kafka-java-creds <service> --username avnadmin -p t0pS3cr3t
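As a rough sketch of pointing kcat at the downloaded credential files (the broker host and port are placeholders, ``ca.pem`` is assumed to be the downloaded CA file name, and the ``-X`` options are standard librdkafka settings)::

$ kcat -L -b <service>-<project>.aivencloud.com:<port> -X security.protocol=ssl -X ssl.key.location=service.key -X ssl.certificate.location=service.cert -X ssl.ca.location=ca.pem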
Contributing
============

Check the `CONTRIBUTING <https://github.com/aiven/aiven-client/blob/main/.github/CONTRIBUTING.md>`_ guide for details on how to contribute to this repository.
More resources
==============

We maintain some other resources that you may also find useful:

- `Command Line Magic with avn <https://aiven.io/blog/command-line-magic-with-the-aiven-cli>`__
- `Managing Billing Groups via CLI <https://help.aiven.io/en/articles/4720981-using-billing-groups-via-cli>`__
Similar Open Source Tools


gemini-ai-code-reviewer
Gemini AI Code Reviewer is a GitHub Action that automatically reviews pull requests using Google's Gemini AI. It analyzes code changes, consults the Gemini model, provides feedback, and delivers review comments directly to pull requests on GitHub. Users need a Gemini API key and can trigger the workflow by commenting '/gemini-review' in the PR. The tool helps improve source code quality by giving suggestions and comments for enhancement.

langstream
LangStream is a tool for natural language processing tasks, providing a CLI for easy installation and usage. Users can try sample applications like Chat Completions and create their own applications using the developer documentation. It supports running on Kubernetes for production-ready deployment, with support for various Kubernetes distributions and external components like Apache Kafka or Apache Pulsar cluster. Users can deploy LangStream locally using minikube and manage the cluster with mini-langstream. Development requirements include Docker, Java 17, Git, Python 3.11+, and PIP, with the option to test local code changes using mini-langstream.

jetson-generative-ai-playground
This repo hosts tutorial documentation for running generative AI models on NVIDIA Jetson devices. The documentation is auto-generated and hosted on GitHub Pages using their CI/CD feature to automatically generate/update the HTML documentation site upon new commits.

shinkai-apps
Shinkai apps unlock the full capabilities/automation of first-class LLM (AI) support in the web browser. It enables creating multiple agents, each connected to either local or 3rd-party LLMs (ex. OpenAI GPT), which have permissioned (meaning secure) access to act in every webpage you visit. There is a companion repo called Shinkai Node, that allows you to set up the node anywhere as the central unit of the Shinkai Network, handling tasks such as agent management, job processing, and secure communications.

tangent
Tangent is a canvas for exploring AI conversations, allowing users to resurrect and continue conversations, branch and explore different ideas, organize conversations by topics, and support archive data exports. It aims to provide a visual/textual/audio exploration experience with AI assistants, offering a 'thoughts workbench' for experimenting freely, reviving old threads, and diving into tangents. The project structure includes a modular backend with components for API routes, background task management, data processing, and more. Prerequisites for setup include Whisper.cpp, Ollama, and exported archive data from Claude or ChatGPT. Users can initialize the environment, install Python packages, set up Ollama, configure local models, and start the backend and frontend to interact with the tool.

docetl
DocETL is a tool for creating and executing data processing pipelines, especially suited for complex document processing tasks. It offers a low-code, declarative YAML interface to define LLM-powered operations on complex data. Ideal for maximizing correctness and output quality for semantic processing on a collection of data, representing complex tasks via map-reduce, maximizing LLM accuracy, handling long documents, and automating task retries based on validation criteria.

expo-stable-diffusion
The `expo-stable-diffusion` repository provides a tool for generating images using Stable Diffusion natively on iOS devices within Expo and React Native apps. Users can install and configure the module to create images based on prompts. The repository includes information on updating iOS deployment targets, enabling increased memory limits, and building iOS apps. Additionally, users can obtain Stable Diffusion models from various sources. The repository also addresses troubleshooting tips related to model load times and image generation durations. The developer seeks sponsorship to further enhance the project, including adding Android support.

dify-google-cloud-terraform
This repository provides Terraform configurations to automatically set up Google Cloud resources and deploy Dify in a highly available configuration. It includes features such as serverless hosting, auto-scaling, and data persistence. Users need a Google Cloud account, Terraform, and gcloud CLI installed to use this tool. The configuration involves setting environment-specific values and creating a GCS bucket for managing Terraform state. The tool allows users to initialize Terraform, create Artifact Registry repository, build and push container images, plan and apply Terraform changes, and cleanup resources when needed.

llama.vscode
llama.vscode is a local LLM-assisted text completion extension for Visual Studio Code. It provides auto-suggestions on input, allows accepting suggestions with shortcuts, and offers various features to enhance text completion. The extension is designed to be lightweight and efficient, enabling high-quality completions even on low-end hardware. Users can configure the scope of context around the cursor and control text generation time. It supports very large contexts and displays performance statistics for better user experience.

PSAI
PSAI is a PowerShell module that empowers scripts with the intelligence of OpenAI, bridging the gap between PowerShell and AI. It enables seamless integration for tasks like file searches and data analysis, revolutionizing automation possibilities with just a few lines of code. The module supports the latest OpenAI API changes, offering features like improved file search, vector store objects, token usage control, message limits, tool choice parameter, custom conversation histories, and model configuration parameters.

ProX
ProX is a lm-based data refinement framework that automates the process of cleaning and improving data used in pre-training large language models. It offers better performance, domain flexibility, efficiency, and cost-effectiveness compared to traditional methods. The framework has been shown to improve model performance by over 2% and boost accuracy by up to 20% in tasks like math. ProX is designed to refine data at scale without the need for manual adjustments, making it a valuable tool for data preprocessing in natural language processing tasks.

vim-ollama
The 'vim-ollama' plugin for Vim adds Copilot-like code completion support using Ollama as a backend, enabling intelligent AI-based code completion and integrated chat support for code reviews. It does not rely on cloud services, preserving user privacy. The plugin communicates with Ollama via Python scripts for code completion and interactive chat, supporting Vim only. Users can configure LLM models for code completion tasks and interactive conversations, with detailed installation and usage instructions provided in the README.

inspector-laravel
Inspector is a code execution monitoring tool specifically designed for Laravel applications. It provides simple and efficient monitoring capabilities to track and analyze the performance of your Laravel code. With Inspector, you can easily monitor web requests, test the functionality of your application, and explore data through a user-friendly dashboard. The tool requires PHP version 7.2.0 or higher and Laravel version 5.5 or above. By configuring the ingestion key and attaching the middleware, users can seamlessly integrate Inspector into their Laravel projects. The official documentation provides detailed instructions on installation, configuration, and usage of Inspector. Contributions to the tool are welcome, and users are encouraged to follow the Contribution Guidelines to participate in the development of Inspector.

Chital
Chital is a native macOS app designed for chatting with Ollama models. It offers low memory usage and fast app launch times, supports multiple chat threads, allows users to switch between different models, provides Markdown support, and automatically summarizes chat thread titles. The app requires macOS 14 Sonoma or above, the installation of Ollama, and at least one downloaded LLM model. Chital is a user-friendly tool that simplifies the process of engaging with Ollama models through chat threads on macOS systems.

Upscaler
Holloway's Upscaler is a consolidation of various compiled open-source AI image/video upscaling products for a CLI-friendly image and video upscaling program. It provides low-cost AI upscaling software that can run locally on a laptop, programmable for albums and videos, reliable for large video files, and works without GUI overheads. The repository supports hardware testing on various systems and provides important notes on GPU compatibility, video types, and image decoding bugs. Dependencies include ffmpeg and ffprobe for video processing. The user manual covers installation, setup pathing, calling for help, upscaling images and videos, and contributing back to the project. Benchmarks are provided for performance evaluation on different hardware setups.
For similar jobs

minio
MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

ai-on-gke
This repository contains assets related to AI/ML workloads on Google Kubernetes Engine (GKE). Run optimized AI/ML workloads with Google Kubernetes Engine (GKE) platform orchestration capabilities. A robust AI/ML platform considers the following layers: infrastructure orchestration that supports GPUs and TPUs for training and serving workloads at scale; flexible integration with distributed computing and data processing frameworks; and support for multiple teams on the same infrastructure to maximize utilization of resources.

kong
Kong, or Kong API Gateway, is a cloud-native, platform-agnostic, scalable API Gateway distinguished for its high performance and extensibility via plugins. It also provides advanced AI capabilities with multi-LLM support. By providing functionality for proxying, routing, load balancing, health checking, authentication (and more), Kong serves as the central layer for orchestrating microservices or conventional API traffic with ease. Kong runs natively on Kubernetes thanks to its official Kubernetes Ingress Controller.

AI-in-a-Box
AI-in-a-Box is a curated collection of solution accelerators that can help engineers establish their AI/ML environments and solutions rapidly and with minimal friction, while maintaining the highest standards of quality and efficiency. It provides essential guidance on the responsible use of AI and LLM technologies, specific security guidance for Generative AI (GenAI) applications, and best practices for scaling OpenAI applications within Azure. The available accelerators include: Azure ML Operationalization in-a-box, Edge AI in-a-box, Doc Intelligence in-a-box, Image and Video Analysis in-a-box, Cognitive Services Landing Zone in-a-box, Semantic Kernel Bot in-a-box, NLP to SQL in-a-box, Assistants API in-a-box, and Assistants API Bot in-a-box.

awsome-distributed-training
This repository contains reference architectures and test cases for distributed model training with Amazon SageMaker Hyperpod, AWS ParallelCluster, AWS Batch, and Amazon EKS. The test cases cover different types and sizes of models as well as different frameworks and parallel optimizations (Pytorch DDP/FSDP, MegatronLM, NemoMegatron...).

generative-ai-cdk-constructs
The AWS Generative AI Constructs Library is an open-source extension of the AWS Cloud Development Kit (AWS CDK) that provides multi-service, well-architected patterns for quickly defining solutions in code to create predictable and repeatable infrastructure, called constructs. The goal of AWS Generative AI CDK Constructs is to help developers build generative AI solutions using pattern-based definitions for their architecture. The patterns defined in AWS Generative AI CDK Constructs are high level, multi-service abstractions of AWS CDK constructs that have default configurations based on well-architected best practices. The library is organized into logical modules using object-oriented techniques to create each architectural pattern model.

model_server
OpenVINO™ Model Server (OVMS) is a high-performance system for serving models. Implemented in C++ for scalability and optimized for deployment on Intel architectures, the model server uses the same architecture and API as TensorFlow Serving and KServe while applying OpenVINO for inference execution. Inference service is provided via gRPC or REST API, making deploying new algorithms and AI experiments easy.

dify-helm
Deploy langgenius/dify, an LLM based chat bot app on kubernetes with helm chart.