
airbrussh
Airbrussh pretties up your SSHKit and Capistrano output
Stars: 512

Airbrussh is a Capistrano plugin that improves the output of Capistrano's commands. It replaces the default verbose logging with a concise, structured view of the deployment process, including color-coded output, elapsed-time annotations, and per-command timing. Airbrussh aims to make deployment logs easier to read and understand, helping developers troubleshoot issues and monitor deployments more effectively. It is a useful tool for teams working with Capistrano who want better visibility into the deployment process.
README:
Airbrussh is a concise log formatter for Capistrano and SSHKit. It displays well-formatted, useful log output that is easy to read. Airbrussh also saves Capistrano's verbose output to a separate log file just in case you need additional details for troubleshooting.
As of April 2016, Airbrussh is bundled with Capistrano 3.5, and is Capistrano's default formatter! There is nothing additional to install or enable. Continue reading to learn more about Airbrussh's features and configuration options.
If you aren't yet using Capistrano 3.5 (or wish to use Airbrussh with SSHKit directly), refer to the advanced/legacy usage section for installation instructions.
For more details on how exactly Airbrussh affects Capistrano's output and the reasoning behind it, check out the blog post: Introducing Airbrussh.
Airbrussh is enabled by default in Capistrano 3.5 and newer. To manually enable Airbrussh (for example, when upgrading an existing project), set the Capistrano format like this:
# In deploy.rb
set :format, :airbrussh
When you run a Capistrano command, Airbrussh provides the following information in its output:
- Name of Capistrano task being executed
- When each task started (minutes:seconds elapsed since the deploy began)
- The SSH command-line strings that are executed; for Capistrano tasks that run multiple commands, a numeric prefix indicates each command's position in the sequence, starting from 01
- Stdout and stderr output from each command
- The duration of each command execution, per server
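Put together, the output for a single task looks roughly like this (an illustrative, abridged sample with placeholder host and paths; exact spacing, symbols, and colors vary by version and terminal):

00:02 deploy:check:directories
      01 mkdir -p /var/www/app/shared /var/www/app/releases
    ✔ 01 deploy@example.com 0.058s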
For brevity, Airbrussh does not show everything that Capistrano is doing. For example, it will omit Capistrano's test commands, which can be noisy and confusing. Airbrussh also hides things like environment variables, as well as cd and env invocations. To see a full audit of Capistrano's execution, including exactly what commands were run on each server, look at log/capistrano.log.
You can customize many aspects of Airbrussh's output. In Capistrano 3.5 and newer, this is done via the :format_options variable, like this:
# Pass options to Airbrussh
set :format_options, color: false, truncate: 80
Here are the options you can use, and their effects (note that the defaults may be different depending on where Airbrussh is used; these are the defaults used by Capistrano 3.5):
Option | Default | Usage
---|---|---
banner | nil | Provide a string (e.g. "Capistrano started!") that will be printed when Capistrano starts up.
color | :auto | Use true or false to enable or disable ANSI color. If set to :auto, Airbrussh automatically uses color based on whether the output is a TTY, or if the SSHKIT_COLOR environment variable is set.
command_output | true | Set to :stdout, :stderr, or true to display the SSH output received via stdout, stderr, or both, respectively. Set to false to not show any SSH output, for a minimal look.
context | Airbrussh::Rake::Context | Defines the execution context. Targeted towards uses of Airbrussh outside of Rake/Capistrano. Alternate implementations should provide definitions for current_task_name, register_new_command, and position.
log_file | log/capistrano.log | Capistrano's verbose output is saved to this file to facilitate debugging. Set to nil to disable completely.
truncate | :auto | Set to a number (e.g. 80) to truncate the width of the output to that many characters, or false to disable truncation. If :auto, output is automatically truncated to the width of the terminal window, if it can be determined.
task_prefix | nil | A string prefixed to task output. Handy for output collapsing, like Buildkite's --- prefix.
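For instance, drawing only on the options in the table above, several of them can be combined in a single setting (the banner string here is just a placeholder):

# In deploy.rb (Capistrano 3.5+)
set :format, :airbrussh
set :format_options,
    banner: "Deploy starting!",      # placeholder startup message
    color: :auto,                    # ANSI color only for TTY output
    truncate: false,                 # never truncate long command lines
    log_file: "log/capistrano.log"   # the default verbose log location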
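The context option is the least self-explanatory. The table only names the three methods an alternate context must provide, so the skeleton below is a sketch under assumptions: the argument lists and return values are guesses, and Airbrussh::Rake::Context is the authoritative reference for the real contract.

# Hypothetical alternate context for using Airbrussh outside of
# Rake/Capistrano. Method signatures and return values here are
# assumptions inferred from the option table above, not Airbrussh's
# documented API; consult Airbrussh::Rake::Context before relying on it.
class MinimalContext
  def initialize
    @commands = []
  end

  # The label under which output is grouped (the Rake-based context
  # uses the currently executing task's name).
  def current_task_name
    "default"
  end

  # Record a command about to be displayed; this sketch returns true
  # the first time a given command string is seen.
  def register_new_command(command)
    new_command = !@commands.include?(command.to_s)
    @commands << command.to_s if new_command
    new_command
  end

  # The index used for the numeric 01, 02, ... prefixes in the output.
  def position(command)
    @commands.index(command.to_s)
  end
end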
Airbrussh is not displaying the output of my commands! For example, I run tail in one of my Capistrano tasks and Airbrussh doesn't show anything. How do I fix this?
Make sure Airbrussh is configured to show SSH output.
set :format_options, command_output: true
I haven't upgraded to Capistrano 3.5 yet. Can I still use Airbrussh?
Yes! Capistrano 3.4.x is also supported. Refer to the advanced/legacy usage section for installation instructions.
Does Airbrussh work with Capistrano 2?
No, Capistrano 3 is required. We recommend Capistrano 3.4.0 or higher. Capistrano 3.5.0 and higher have Airbrussh enabled by default, with no installation needed.
Does Airbrussh work with JRuby?
JRuby is not officially supported or tested, but may work. You must disable automatic truncation to work around a known bug in the JRuby 9.0 standard library. See #62 for more details.
set :format_options, truncate: false
I have a question that’s not answered here or elsewhere in the README.
Please open a GitHub issue and we’ll be happy to help!
Although Airbrussh is built into Capistrano 3.5.0 and higher, it is also available as a plug-in for older versions. Airbrussh has been tested with MRI 1.9+, Capistrano 3.4.0+, and SSHKit 1.6.1+.
Add this line to your application's Gemfile:
gem "airbrussh", require: false
And then execute:
$ bundle
Finally, add this line to your application's Capfile:
require "airbrussh/capistrano"
Important: explicitly setting Capistrano's :format option in your deploy.rb will override Airbrussh. Remove this line if you have it:
# Remove this
set :format, :pretty
Capistrano 3.4.x doesn't have the :format_options configuration system, so you will need to configure Airbrussh using this technique:
Airbrussh.configure do |config|
config.color = false
config.command_output = true
# etc.
end
Refer to the configuration section above for the list of supported options.
If you are using SSHKit directly (i.e. without Capistrano), you can use Airbrussh like this:
require "airbrussh"
SSHKit.config.output = Airbrussh::Formatter.new($stdout)
# You can also pass configuration options like this
SSHKit.config.output = Airbrussh::Formatter.new($stdout, color: false)
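As a slightly fuller sketch of standalone use, the formatter drops into an ordinary SSHKit script like this (the host string below is a placeholder, not a real server):

require "sshkit"
require "sshkit/dsl"
require "airbrussh"

include SSHKit::DSL

# Route all SSHKit log output through Airbrussh's formatter.
SSHKit.config.output = Airbrussh::Formatter.new($stdout)

# "deploy@example.com" is a hypothetical host; substitute your own.
on "deploy@example.com" do
  execute :uptime
end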
Airbrussh started life as custom logging code within the capistrano-mb collection of opinionated Capistrano recipes. In February 2015, the logging code was refactored into a standalone gem with its own configuration and documentation, and renamed airbrussh. In February 2016, Airbrussh was added as the default formatter in Capistrano 3.5.0.
Airbrussh now has a stable feature set and excellent test coverage, is used for production deployments, and has reached 1.0.0! If you have ideas for improvements to Airbrussh, please open a GitHub issue.
Contributions are welcome! Read CONTRIBUTING.md to get started.