
Aiven Client |BuildStatus|_
###########################

.. |BuildStatus| image:: https://github.com/aiven/aiven-client/workflows/Build%20Aiven%20Client/badge.svg?branch=main
.. _BuildStatus: https://github.com/aiven/aiven-client/actions
Aiven is a next-generation managed cloud services platform. Its focus is on ease of adoption, high fault resilience, customers' peace of mind, and advanced features at competitive price points. See https://aiven.io/ for more information about the backend service.
``aiven-client`` (``avn``) is the official command-line client for Aiven.
.. contents::
.. _platform-requirements:
Requirements
============

- Python 3.8 or later
- Requests_
- For Windows and OSX, certifi_ is also needed
.. _Requests: http://www.python-requests.org/
.. _certifi: https://github.com/certifi/python-certifi
.. _installation:

Installation
============
PyPI installation is the recommended route for most users::
$ python3 -m pip install aiven-client
It is also possible to build an RPM::
$ make rpm
To check that the tool is installed and working, run it without arguments::
$ avn
If you see usage output, you're all set.
Note: On Windows you may need to use ``python3 -m aiven.client`` instead of ``avn``.
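For example, logging in on Windows (a sketch; every other subcommand works the same way)::
$ python3 -m aiven.client user login [email protected]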
The simplest way to use Aiven CLI is to authenticate with the username and password you use on Aiven::
$ avn user login [email protected]
The command will prompt you for your password.
You can also use an access token generated in the Aiven Console::
$ avn user login [email protected] --token
You will be prompted for your access token as above.
If you are registered on Aiven through the AWS or GCP marketplace, then you need to specify an additional argument ``--tenant``. Currently the supported values are ``aws`` and ``gcp``, for example::
$ avn user login [email protected] --tenant aws
.. _help-command:
.. _basic-usage:

Basic usage
===========
Some handy hints that work with all commands:

- The ``avn help`` command shows all commands and can search for a command, so for example ``avn help kafka topic`` shows commands with kafka and topic in their description.
- Passing ``-h`` or ``--help`` gives help output for any command. Examples: ``avn --help`` or ``avn service --help``.
- All commands will output the raw REST API JSON response with ``--json``; we use this extensively ourselves in conjunction with `jq <https://stedolan.github.io/jq/>`__ (see the example below).
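For example (a sketch, assuming a service named ``db1`` and ``jq`` installed), you can pull a single field out of the JSON response::
$ avn service list db1 --json | jq '.[0].service_uri'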
.. _login-and-users:

Login and users
===============
Login::
$ avn user login [email protected]
Logout (revokes current access token, other sessions remain valid)::
$ avn user logout
Expire all authentication tokens for your user and log out all web console sessions. You will need to log in again after this::
$ avn user tokens-expire
Manage individual access tokens::
$ avn user access-token list
$ avn user access-token create --description <usage_description> [--max-age-seconds <max_age_seconds>] [--extend-when-used]
$ avn user access-token update <token|token_prefix> --description <new_description>
$ avn user access-token revoke <token|token_prefix>
Note that the system has hard limits for the number of tokens you can create. If you're permanently done using a token, you should always use the ``user access-token revoke`` operation to revoke it so that it does not count towards the quota.
Alternatively, you can add two JSON files. First, create a default config in ``~/.config/aiven/aiven-credentials.json`` containing the JSON with an ``auth_token``::
{ "auth_token": "ABC1+123...TOKEN==", "user_email": "[email protected]" }
Second, create a default config in ``~/.config/aiven/aiven-client.json`` containing the JSON with the ``default_project``::
{"default_project": "yourproject-abcd"}
.. _clouds:

Clouds
======
List available cloud regions::
$ avn cloud list
.. _projects:

Projects
========
List projects you are a member of::
$ avn project list
Project commands operate on the currently active project or the project specified with the ``--project NAME`` switch. The active project can be changed with the ``project switch`` command::
$ avn project switch <project>
Show active project's details::
$ avn project details
Create a project and set the default cloud region for it::
$ avn project create myproject --cloud aws-us-east-1
Delete an empty project::
$ avn project delete myproject
List authorized users in a project::
$ avn project user-list
Invite an existing Aiven user to a project::
$ avn project user-invite [email protected]
Remove a user from the project::
$ avn project user-remove [email protected]
View project management event log::
$ avn events
.. _services:

Services
========
List services (of the active project)::
$ avn service list
List services in a specific project::
$ avn service list --project proj2
List only a specific service::
$ avn service list db1
Verbose list (includes connection information, etc.)::
$ avn service list db1 -v
Full service information in JSON, as it is returned by the Aiven REST API::
$ avn service list db1 --json
Only a specific field in the output, custom formatting::
$ avn service list db1 --format "The service is at {service_uri}"
View service log entries (most recent entries and keep on following logs, other options can be used to get history)::
$ avn service logs db1 -f
.. _launching-services:

Launching services
==================
View available service plans::
$ avn service plans
Launch a PostgreSQL service::
$ avn service create mydb -t pg --plan hobbyist
View service type specific options, including examples on how to set them::
$ avn service types -v
Launch a PostgreSQL service of a specific version (see above command)::
$ avn service create mydb96 -t pg --plan hobbyist -c pg_version=9.6
Update a service's list of allowed client IP addresses. Note that a list of multiple values is provided as a comma-separated list::
$ avn service update mydb96 -c ip_filter=10.0.1.0/24,10.0.2.0/24,1.2.3.4/32
Open psql client and connect to the PostgreSQL service (also available for InfluxDB)::
$ avn service cli mydb96
Update a service to a different plan AND move it to another cloud region::
$ avn service update mydb --plan startup-4 --cloud aws-us-east-1
Power off a service::
$ avn service update mydb --power-off
Power on a service::
$ avn service update mydb --power-on
Terminate a service (all data will be gone!)::
$ avn service terminate mydb
Some service types support multiple users (e.g. PostgreSQL database users).
List, add and delete service users::
$ avn service user-list <service_name>
$ avn service user-create <service_name> --username <username>
$ avn service user-delete <service_name> --username <username>
For Valkey services it's possible to create users with ACLs_::
$ avn service user-create --username new_user --valkey-acl-keys="prefix* another_key" --valkey-acl-commands="+set" --valkey-acl-categories="-@all +@admin" --valkey-acl-channels="prefix* some_chan" my-valkey-service
.. _ACLs: https://valkey.io/docs/topics/acl
Service users are created with strong random passwords.
Service integrations
====================

`Service integrations <https://aiven.io/service-integrations>`_ allow you to link Aiven services to other Aiven services or to services offered by other companies, for example for logging. Some examples of various different integrations: `Google cloud logging`_, `AWS Cloudwatch logging`_, `Remote syslog integration`_ and `Getting started with Datadog`_.
.. _Google cloud logging: https://help.aiven.io/en/articles/4209837-sending-service-logs-to-google-cloud-logging
.. _AWS Cloudwatch logging: https://help.aiven.io/en/articles/4134821-sending-service-logs-to-aws-cloudwatch
.. _Remote syslog integration: https://help.aiven.io/en/articles/2933115-remote-syslog-integration
.. _Getting started with Datadog: https://help.aiven.io/en/articles/1759208-getting-started-with-datadog
List service integration endpoints::
$ avn service integration-endpoint-list
List all available integration endpoint types for a given project::
$ avn service integration-endpoint-types-list --project <project>
Create a service integration endpoint::
$ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> --user-config-json <user configuration as json>
$ avn service integration-endpoint-create --project <project> --endpoint-type <endpoint type> --endpoint-name <endpoint name> -c <KEY=VALUE type user configuration>
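For instance, creating a remote syslog endpoint might look like this (a sketch; the endpoint type and configuration keys shown are illustrative, check ``integration-endpoint-types-list`` for the values your project actually supports)::
$ avn service integration-endpoint-create --project myproject --endpoint-type rsyslog --endpoint-name my-syslog -c server=logs.example.com -c port=514 -c format=rfc5424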
Update a service integration endpoint::
$ avn service integration-endpoint-update --project <project> --user-config-json <user configuration as json> <endpoint id>
$ avn service integration-endpoint-update --project <project> -c <KEY=VALUE type user configuration> <endpoint id>
Delete a service integration endpoint::
$ avn service integration-endpoint-delete --project <project> <endpoint_id>
List service integrations::
$ avn service integration-list <service name>
List all available integration types for a given project::
$ avn service integration-types-list --project <project>
Create a service integration::
$ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> --user-config-json <user configuration as json>
$ avn service integration-create --project <project> -t <integration type> -s <source service> -d <dest service> -S <source endpoint id> -D <destination endpoint id> -c <KEY=VALUE type user configuration>
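As a sketch (the service names and the ``metrics`` integration type are illustrative; use ``integration-types-list`` to see what is available), sending a Kafka service's metrics to an InfluxDB service could look like::
$ avn service integration-create --project myproject -t metrics -s my-kafka -d my-influxdb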
Update a service integration::
$ avn service integration-update --project <project> --user-config-json <user configuration as json> <integration_id>
$ avn service integration-update --project <project> -c <KEY=VALUE type user configuration> <integration_id>
Delete a service integration::
$ avn service integration-delete --project <project> <integration_id>
Custom files
============

Listing files::
$ avn service custom-file list --project <project> <service_name>
Reading a file::
$ avn service custom-file get --project <project> --file_id <file_id> [--target_filepath <file_path>] [--stdout_write] <service_name>
Uploading new files::
$ avn service custom-file upload --project <project> --file_type <file_type> --file_path <file_path> --file_name <file_name> <service_name>
Updating existing files::
$ avn service custom-file update --project <project> --file_path <file_path> --file_id <file_id> <service_name>
.. _teams:

Teams
=====
List account teams::
$ avn account team list <account_id>
Create a team::
$ avn account team create --team-name <team_name> <account_id>
Delete a team::
$ avn account team delete --team-id <team_id> <account_id>
Attach team to a project::
$ avn account team project-attach --team-id <team_id> --project <project_name> <account_id> --team-type <admin|developer|operator|read_only>
Detach team from project::
$ avn account team project-detach --team-id <team_id> --project <project_name> <account_id>
List projects associated with the team::
$ avn account team project-list --team-id <team_id> <account_id>
List members of the team::
$ avn account team user-list --team-id <team_id> <account_id>
Invite a new member to the team::
$ avn account team user-invite --team-id <team_id> <account_id> [email protected]
See the list of pending invitations::
$ avn account team user-list-pending --team-id <team_id> <account_id>
Remove user from the team::
$ avn account team user-delete --team-id <team_id> --user-id <user_id> <account_id>
.. _oauth2-clients:

OAuth2 clients
==============
List configured OAuth2 clients::
$ avn account oauth2-client list <account_id>
Get a configured OAuth2 client's configuration::
$ avn account oauth2-client list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client::
$ avn account oauth2-client create <account_id> --name <app_name> -d <app_description> --redirect-uri <redirect_uri>
Delete an OAuth2 client::
$ avn account oauth2-client delete <account_id> --oauth2-client-id <client_id>
List an OAuth2 client's redirect URIs::
$ avn account oauth2-client redirect-list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client redirect URI::
$ avn account oauth2-client redirect-create <account_id> --oauth2-client-id <client_id> --redirect-uri <redirect_uri>
Delete an OAuth2 client redirect URI::
$ avn account oauth2-client redirect-delete <account_id> --oauth2-client-id <client_id> --redirect-uri-id <redirect_uri_id>
List an OAuth2 client's secrets::
$ avn account oauth2-client secret-list <account_id> --oauth2-client-id <client_id>
Create a new OAuth2 client secret::
$ avn account oauth2-client secret-create <account_id> --oauth2-client-id <client_id>
Delete an OAuth2 client's secret::
$ avn account oauth2-client secret-delete <account_id> --oauth2-client-id <client_id> --secret-id <secret_id>
.. _shell-completions:

Shell completions
=================
``avn`` supports shell completions. It requires an optional dependency: ``argcomplete``. Install it::
$ python3 -m pip install argcomplete
To use completions in bash, add the following line to ``~/.bashrc``::
eval "$(register-python-argcomplete avn)"
For more information (including completions usage in other shells) see https://kislyuk.github.io/argcomplete/.
Auth helpers
============

When you spin up a new service, you'll want to connect to it. The ``--json`` option combined with the `jq <https://stedolan.github.io/jq/>`_ utility is a good way to grab the fields you need for your specific service. Try this to get the connection string::
$ avn service get <service_name> --json | jq ".service_uri"
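Building on that (a sketch, assuming a PostgreSQL service and ``psql`` installed locally), you can connect in one step::
$ psql "$(avn service get <service_name> --json | jq -r '.service_uri')"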
Each project has its own CA cert, and other services (notably Kafka) use mutual TLS, so you will also need the ``service.key`` and ``service.cert`` files for those. Download all three files to the local directory::
$ avn service user-creds-download --username avnadmin <service_name>
For working with `kcat <https://github.com/edenhill/kcat>`_ (see also our `help article <https://developer.aiven.io/docs/products/kafka/howto/kcat.html>`_) or the command-line tools that ship with Kafka itself, a keystore and truststore are needed. By specifying which user's creds to use, and a secret, you can generate these via ``avn`` too::
$ avn service user-kafka-java-creds --username avnadmin -p t0pS3cr3t <service_name>
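As a usage sketch (broker host and port are placeholders; ``ca.pem``, ``service.cert`` and ``service.key`` are assumed to be the files downloaded above), the credentials can be passed straight to ``kcat`` to list cluster metadata::
$ kcat -L -b <host>:<port> -X security.protocol=ssl -X ssl.ca.location=ca.pem -X ssl.certificate.location=service.cert -X ssl.key.location=service.key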
Contributing
============

Check the `CONTRIBUTING <https://github.com/aiven/aiven-client/blob/main/.github/CONTRIBUTING.md>`_ guide for details on how to contribute to this repository.
Further reading
===============

We maintain some other resources that you may also find useful:
- `Command Line Magic with avn <https://aiven.io/blog/command-line-magic-with-the-aiven-cli>`__
- `Managing Billing Groups via CLI <https://help.aiven.io/en/articles/4720981-using-billing-groups-via-cli>`__
