
gonzo
Gonzo! The Go-based TUI log analysis tool
Stars: 1643

Gonzo is a powerful, real-time log analysis terminal UI tool inspired by k9s. It allows users to analyze log streams with beautiful charts, AI-powered insights, and advanced filtering directly from the terminal. The tool provides features like live streaming log processing, OTLP support, interactive dashboard with real-time charts, advanced filtering options including regex support, and AI-powered insights such as pattern detection, anomaly analysis, and root cause suggestions. Users can also configure AI models from providers like OpenAI, LM Studio, and Ollama for intelligent log analysis. Gonzo is built with Bubble Tea, Lipgloss, Cobra, Viper, and OpenTelemetry, following a clean architecture with separate modules for TUI, log analysis, frequency tracking, OTLP handling, and AI integration.
README:
A powerful, real-time log analysis terminal UI inspired by k9s. Analyze log streams with beautiful charts, AI-powered insights, and advanced filtering - all from your terminal.
- Live streaming - Process logs as they arrive from stdin, files, or network
- OTLP native - First-class support for OpenTelemetry log format
- OTLP receiver - Built-in gRPC server to receive logs via OpenTelemetry protocol
- Format detection - Automatically detects JSON, logfmt, and plain text
- Custom formats - Define your own log formats with YAML configuration
- Severity tracking - Color-coded severity levels with distribution charts
- k9s-inspired layout - Familiar 2x2 grid interface
- Real-time charts - Word frequency, attributes, severity distribution, and time series
- Keyboard + mouse navigation - Vim-style shortcuts plus click-to-navigate and scroll wheel support
- Smart log viewer - Auto-scroll with intelligent pause/resume behavior
- Fullscreen log viewer - Press f to open a dedicated fullscreen modal for log browsing with all navigation features
- Global pause control - Spacebar pauses entire dashboard while buffering logs
- Modal details - Deep dive into individual log entries with expandable views
- Log Counts analysis - Detailed modal with heatmap visualization, pattern analysis by severity, and service distribution
- AI analysis - Get intelligent insights about log patterns and anomalies with configurable models
- Regex support - Filter logs with regular expressions
- Attribute search - Find logs by specific attribute values
- Severity filtering - Focus on errors, warnings, or specific log levels
- Interactive selection - Click or keyboard navigate to explore logs
- Built-in skins - 11+ beautiful themes including Dracula, Nord, Monokai, GitHub Light, and more
- Light and dark modes - Themes optimized for different lighting conditions
- Custom skins - Create your own color schemes with YAML configuration
- Semantic colors - Intuitive color mapping for different UI components
- Professional themes - ControlTheory original themes included
- Pattern detection - Automatically identify recurring issues
- Anomaly analysis - Spot unusual patterns in your logs
- Root cause suggestions - Get AI-powered debugging assistance
- Configurable models - Choose from GPT-4, GPT-3.5, or any custom model
- Multiple providers - Works with OpenAI, LM Studio, Ollama, or any OpenAI-compatible API
- Local AI support - Run completely offline with local models
go install github.com/control-theory/gonzo/cmd/gonzo@latest
brew install gonzo
Download the latest release for your platform from the releases page.
nix run github:control-theory/gonzo
git clone https://github.com/control-theory/gonzo.git
cd gonzo
make build
# Read logs directly from files
gonzo -f application.log
# Read from multiple files
gonzo -f application.log -f error.log -f debug.log
# Use glob patterns to read multiple files
gonzo -f "/var/log/*.log"
gonzo -f "/var/log/app/*.log" -f "/var/log/nginx/*.log"
# Follow log files in real-time (like tail -f)
gonzo -f /var/log/app.log --follow
gonzo -f "/var/log/*.log" --follow
# Analyze logs from stdin (traditional way)
cat application.log | gonzo
# Stream logs from kubectl
kubectl logs -f deployment/my-app | gonzo
# Follow system logs
tail -f /var/log/syslog | gonzo
# Analyze Docker container logs
docker logs -f my-container 2>&1 | gonzo
# With AI analysis (requires API key)
export OPENAI_API_KEY=sk-your-key-here
gonzo -f application.log --ai-model="gpt-4"
Gonzo supports custom log formats through YAML configuration files. This allows you to parse any structured log format without modifying the source code.
Some example custom formats are included in the repo; download, copy, or modify them as you like. For the commands below to work, you must first download the formats and put them in the Gonzo config directory.
# Use a built-in custom format
gonzo --format=loki-stream -f loki_logs.json
# List available custom formats
ls ~/.config/gonzo/formats/
# Use your own custom format
gonzo --format=my-custom-format -f custom_logs.txt
Custom formats support:
- Flexible field mapping - Map any JSON/text fields to timestamp, severity, body, and attributes
- Batch processing - Automatically expand batch formats (like Loki) into individual log entries
- Auto-mapping - Automatically extract all unmapped fields as attributes
- Nested field extraction - Extract fields from deeply nested JSON structures
- Pattern-based parsing - Use regex patterns for unstructured text logs
For detailed information on creating custom formats, see the Custom Formats Guide.
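As a rough sketch of the idea, a format file maps source fields onto Gonzo's log model. The key names below are hypothetical, invented for illustration; the Custom Formats Guide and the bundled examples define the real schema.

```yaml
# Hypothetical sketch of ~/.config/gonzo/formats/my-custom-format.yaml.
# Key names are illustrative only -- consult the Custom Formats Guide
# for the actual schema Gonzo expects.
name: my-custom-format
json:
  timestamp: ts      # map the "ts" JSON field to the log timestamp
  severity: level    # map "level" to the severity
  body: msg          # map "msg" to the log body
  # With auto-mapping enabled, unmapped fields (request_id, user, ...)
  # would be extracted as attributes automatically.
```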
Gonzo can receive logs directly via OpenTelemetry Protocol (OTLP) over both gRPC and HTTP:
# Start Gonzo as an OTLP receiver (both gRPC on port 4317 and HTTP on port 4318)
gonzo --otlp-enabled
# Use custom ports
gonzo --otlp-enabled --otlp-grpc-port=5317 --otlp-http-port=5318
# gRPC endpoint: localhost:4317
# HTTP endpoint: http://localhost:4318/v1/logs
To forward logs from an OpenTelemetry Collector, add Gonzo as an exporter in the Collector configuration. Using gRPC:
exporters:
  otlp/gonzo_grpc:
    endpoint: localhost:4317
    tls:
      insecure: true
service:
  pipelines:
    logs:
      receivers: [your_receivers]
      processors: [your_processors]
      exporters: [otlp/gonzo_grpc]
Using HTTP:
exporters:
  otlphttp/gonzo_http:
    endpoint: http://localhost:4318/v1/logs
service:
  pipelines:
    logs:
      receivers: [your_receivers]
      processors: [your_processors]
      exporters: [otlphttp/gonzo_http]
To send logs from a Python application, point the OpenTelemetry SDK's log exporter at Gonzo. Using gRPC:
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
exporter = OTLPLogExporter(
    endpoint="localhost:4317",
    insecure=True
)
Using HTTP:
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
exporter = OTLPLogExporter(
    endpoint="http://localhost:4318/v1/logs",
)
See examples/send_otlp_logs.py for a complete example.
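For a self-contained picture, a minimal sender might look like the sketch below. It assumes the experimental OpenTelemetry Python logs SDK (module paths have shifted between SDK releases), so treat the repo's examples/send_otlp_logs.py as the authoritative version.

```python
# Rough sketch: send a log record to Gonzo's gRPC receiver, started
# with `gonzo --otlp-enabled`. Assumes opentelemetry-sdk and
# opentelemetry-exporter-otlp-proto-grpc are installed; the _logs
# modules are still experimental and may move between versions.
import logging

from opentelemetry._logs import set_logger_provider
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor

provider = LoggerProvider()
set_logger_provider(provider)

# Batch and export over gRPC to Gonzo's default endpoint.
provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter(endpoint="localhost:4317", insecure=True))
)

# Bridge Python's standard logging module to OTLP.
logging.getLogger().addHandler(LoggingHandler(logger_provider=provider))
logging.getLogger("demo").error("payment service returned 503")

provider.shutdown()  # flush buffered records before exiting
```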
# Auto-select best available model (recommended) - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json
# Or specify a particular model - file input
export OPENAI_API_KEY=sk-your-key-here
gonzo -f logs.json --ai-model="gpt-4"
# Follow logs with AI analysis
export OPENAI_API_KEY=sk-your-key-here
gonzo -f "/var/log/app.log" --follow --ai-model="gpt-4"
# Using local LM Studio (auto-selects first available)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"
gonzo -f logs.json
# Using Ollama (auto-selects best model like gpt-oss:20b)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"
gonzo -f logs.json --follow
# Traditional stdin approach still works
export OPENAI_API_KEY=sk-your-key-here
cat logs.json | gonzo --ai-model="gpt-4"
| Key/Mouse | Action |
|---|---|
| Tab / Shift+Tab | Navigate between panels |
| Mouse Click | Click on any section to switch to it |
| ↑/↓ or k/j | Move selection up/down |
| Mouse Wheel | Scroll up/down to navigate selections |
| ←/→ or h/l | Horizontal navigation |
| Enter | View log details or open analysis modal (Counts section) |
| ESC | Close modal/cancel |
| Key | Action |
|---|---|
| Space | Pause/unpause entire dashboard |
| / | Enter filter mode (regex supported) |
| s | Search and highlight text in logs |
| f | Open fullscreen log viewer modal |
| c | Toggle Host/Service columns in log view |
| r | Reset all data (manual reset) |
| u / U | Cycle update intervals (forward/backward) |
| i | AI analysis (in detail view) |
| m | Switch AI model (shows available models) |
| ? / h | Show help |
| q / Ctrl+C | Quit |
| Key | Action |
|---|---|
| Home | Jump to top of log buffer (stops auto-scroll) |
| End | Jump to latest logs (resumes auto-scroll) |
| PgUp / PgDn | Navigate by pages (10 entries at a time) |
| ↑/↓ or k/j | Navigate entries with smart auto-scroll |
| Key | Action |
|---|---|
| c | Start chat with AI about current log |
| Tab | Switch between log details and chat pane |
| m | Switch AI model (works in modal too) |
Press Enter on the Counts section to open a comprehensive analysis modal featuring:
- Time-series heatmap showing severity levels vs. time (1-minute resolution)
- 60-minute rolling window with automatic scaling per severity level
- Color-coded intensity using ASCII characters (░▒▓█) with gradient effects
- Precise alignment with time headers showing minutes ago (60, 50, 40, ..., 10, 0)
- Receive time architecture - visualization based on when logs were received for reliable display
- Top 3 patterns per severity using drain3 pattern extraction algorithm
- Severity-specific tracking with dedicated drain3 instances for each level
- Real-time pattern detection as logs arrive and are processed
- Accurate pattern counts maintained separately for each severity level
- Top 3 services per severity showing which services generate each log level
- Service name extraction from common attributes (service.name, service, app, etc.)
- Real-time updates as new logs are processed and analyzed
- Fallback to host information when service names are not available
- Scrollable content using mouse wheel or arrow keys
- ESC to close and return to main dashboard
- Full-width display maximizing screen real estate for data visualization
- Real-time updates - data refreshes automatically as new logs arrive
The modal uses the same receive time architecture as the main dashboard, ensuring consistent and reliable visualization regardless of log timestamp accuracy or clock skew issues.
gonzo [flags]
gonzo [command]
Commands:
version Print version information
help Help about any command
completion Generate shell autocompletion
Flags:
-f, --file stringArray Files or file globs to read logs from (can specify multiple)
--follow Follow log files like 'tail -f' (watch for new lines in real-time)
--format string Log format to use (auto-detect if not specified). Can be: otlp, json, text, or a custom format name
-u, --update-interval duration Dashboard update interval (default: 1s)
-b, --log-buffer int Maximum log entries to keep (default: 1000)
-m, --memory-size int Maximum frequency entries (default: 10000)
--ai-model string AI model for analysis (auto-selects best available if not specified)
-s, --skin string Color scheme/skin to use (default, or name of a skin file)
--stop-words strings Additional stop words to filter out from analysis (adds to built-in list)
-t, --test-mode Run without TTY for testing
-v, --version Print version information
--config string Config file (default: $HOME/.config/gonzo/config.yml)
-h, --help Show help message
Create ~/.config/gonzo/config.yml for persistent settings:
# File input configuration
files:
  - "/var/log/app.log"
  - "/var/log/error.log"
  - "/var/log/*.log"  # Glob patterns supported
follow: true  # Enable follow mode (like tail -f)
# Update frequency for dashboard refresh
update-interval: 2s
# Buffer sizes
log-buffer: 2000
memory-size: 15000
# UI customization
skin: dracula # Choose from: default, dracula, nord, monokai, github-light, etc.
# Additional stop words to filter from analysis
stop-words:
- "log"
- "message"
- "debug"
# Development/testing
test-mode: false
# AI configuration
ai-model: "gpt-4"
See examples/config.yml for a complete configuration example with detailed comments.
Gonzo supports multiple AI providers for intelligent log analysis. Configure using command line flags and environment variables. You can switch between available models at runtime using the m key.
# Set your API key
export OPENAI_API_KEY="sk-your-actual-key-here"
# Auto-select best available model (recommended)
cat logs.json | gonzo
# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-4"
# 1. Start LM Studio server with a model loaded
# 2. Set environment variables (IMPORTANT: include /v1 in URL)
export OPENAI_API_KEY="local-key"
export OPENAI_API_BASE="http://localhost:1234/v1"
# Auto-select first available model (recommended)
cat logs.json | gonzo
# Or specify the exact model name from LM Studio
cat logs.json | gonzo --ai-model="openai/gpt-oss-120b"
# 1. Start Ollama: ollama serve
# 2. Pull a model: ollama pull gpt-oss:20b
# 3. Set environment variables (note: no /v1 suffix needed)
export OPENAI_API_KEY="ollama"
export OPENAI_API_BASE="http://localhost:11434"
# Auto-select best model (prefers gpt-oss, llama3, mistral, etc.)
cat logs.json | gonzo
# Or specify a particular model
cat logs.json | gonzo --ai-model="gpt-oss:20b"
cat logs.json | gonzo --ai-model="llama3"
# For any OpenAI-compatible API endpoint
export OPENAI_API_KEY="your-api-key"
export OPENAI_API_BASE="https://api.your-provider.com/v1"
cat logs.json | gonzo --ai-model="your-model-name"
Once Gonzo is running, you can switch between available AI models without restarting:
- Press m anywhere in the interface to open the model selection modal
- Navigate with arrow keys, page up/down, or mouse wheel
- Select a model with Enter
- Cancel with Escape
The model selection modal shows:
- All available models from your configured AI provider
- Current active model (highlighted in green)
- Dynamic sizing based on terminal height
- Scroll indicators when there are many models
Note: Model switching requires the AI service to be properly configured and running. The modal will only appear if models are available from your AI provider.
When you don't specify the --ai-model flag, Gonzo automatically selects the best available model:
Selection Priority:
- OpenAI: Prefers gpt-4 → gpt-3.5-turbo → first available
- Ollama: Prefers gpt-oss:20b → llama3 → mistral → codellama → first available
- LM Studio: Uses first available model from the server
- Other providers: Uses first available model
Benefits:
- ✅ No need to know model names beforehand
- ✅ Works immediately with any AI provider
- ✅ Intelligent defaults for better performance
- ✅ Still allows manual model selection with the m key
Example: Instead of gonzo --ai-model="llama3", simply run gonzo and it will auto-select llama3 if available.
LM Studio Issues:
- ✅ Ensure server is running and model is loaded
- ✅ Use full model name: --ai-model="openai/model-name"
- ✅ Include /v1 in base URL: http://localhost:1234/v1
- ✅ Check available models: curl http://localhost:1234/v1/models
Ollama Issues:
- ✅ Start server: ollama serve
- ✅ Verify model: ollama list
- ✅ Test API: curl http://localhost:11434/api/tags
- ✅ Use correct URL: http://localhost:11434 (no /v1 suffix)
- ✅ Model names include tags: gpt-oss:20b, llama3:8b
OpenAI Issues:
- ✅ Verify API key is valid and has credits
- ✅ Check model availability (gpt-4 requires API access)
| Variable | Description |
|---|---|
| OPENAI_API_KEY | API key for AI analysis (required for AI features) |
| OPENAI_API_BASE | Custom API endpoint (default: https://api.openai.com/v1) |
| GONZO_FILES | Comma-separated list of files/globs to read (equivalent to -f flags) |
| GONZO_FOLLOW | Enable follow mode (true/false) |
| GONZO_UPDATE_INTERVAL | Override update interval |
| GONZO_LOG_BUFFER | Override log buffer size |
| GONZO_MEMORY_SIZE | Override memory size |
| GONZO_AI_MODEL | Override default AI model |
| GONZO_TEST_MODE | Enable test mode |
| NO_COLOR | Disable colored output |
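For example, a run can be configured entirely through the environment instead of flags (variable names from the table above; the values are illustrative):

```bash
export GONZO_FILES="/var/log/app.log,/var/log/error.log"
export GONZO_FOLLOW=true
export GONZO_UPDATE_INTERVAL=2s
export GONZO_LOG_BUFFER=2000
gonzo
```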
Enable shell completion for better CLI experience:
# Bash
source <(gonzo completion bash)
# Zsh
source <(gonzo completion zsh)
# Fish
gonzo completion fish | source
# PowerShell
gonzo completion powershell | Out-String | Invoke-Expression
For permanent setup, save the completion script to your shell's completion directory.
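For instance (standard Cobra-style locations; paths vary by shell and OS, so adjust for your setup):

```bash
# Bash (Linux)
gonzo completion bash | sudo tee /etc/bash_completion.d/gonzo > /dev/null
# Zsh: write into a directory on your $fpath
gonzo completion zsh > "${fpath[1]}/_gonzo"
# Fish
gonzo completion fish > ~/.config/fish/completions/gonzo.fish
```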
By leveraging the k9s plugin system, Gonzo integrates seamlessly with k9s for real-time Kubernetes log analysis.
Add this plugin to your $XDG_CONFIG_HOME/k9s/plugins.yaml file:
plugins:
  gonzo:
    shortCut: Ctrl-L
    description: "Gonzo log analysis"
    scopes:
      - po
    command: sh
    background: false
    args:
      - -c
      - "kubectl logs -f $NAME -n $NAMESPACE --context $CONTEXT | gonzo"
⚠️ NOTE: On macOS, although it is not required, defining XDG_CONFIG_HOME=~/.config is recommended in order to maintain consistency with Linux configuration practices.
- Launch k9s and navigate to pods
- Select a pod and press Ctrl-L
- Gonzo opens with live log streaming and analysis
Gonzo is built with:
- Bubble Tea - Terminal UI framework
- Lipgloss - Styling and layout
- Bubbles - TUI components
- Cobra - CLI framework
- Viper - Configuration management
- OpenTelemetry - Native OTLP support
- Large amounts of ☕
The architecture follows a clean separation:
cmd/gonzo/ # Main application entry
internal/
├── tui/       # Terminal UI implementation
├── analyzer/  # Log analysis engine
├── memory/    # Frequency tracking
├── otlplog/   # OTLP format handling
└── ai/        # AI integration
- Go 1.21 or higher
- Make (optional, for convenience)
# Quick build
make build
# Run tests
make test
# Build for all platforms
make cross-build
# Development mode (format, vet, test, build)
make dev
# Run unit tests
make test
# Run with race detection
make test-race
# Integration tests
make test-integration
# Test with sample data
make demo
Gonzo supports beautiful, customizable color schemes to match your terminal environment and personal preferences.
Be sure to download the skin files and place them in the Gonzo config directory so Gonzo can find them.
# Use a dark theme
gonzo --skin=dracula
gonzo --skin=nord
gonzo --skin=monokai
# Use a light theme
gonzo --skin=github-light
gonzo --skin=solarized-light
gonzo --skin=vs-code-light
# Use Control Theory branded themes
gonzo --skin=controltheory-light # Light theme
gonzo --skin=controltheory-dark # Dark theme
Dark Themes 🌙: default, controltheory-dark, dracula, gruvbox, monokai, nord, solarized-dark
Light Themes ☀️: controltheory-light, github-light, solarized-light, vs-code-light, spring
See SKINS.md for complete documentation on:
- How to create custom color schemes
- Color reference and semantic naming
- Downloading community themes from GitHub
- Advanced customization options
- Design guidelines for accessibility
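As a purely hypothetical illustration of the semantic-color idea (the real key names live in SKINS.md), a skin file maps UI roles to colors:

```yaml
# Hypothetical skin sketch (~/.config/gonzo/skins/my-skin.yaml).
# Every key here is invented for illustration -- SKINS.md documents
# the actual schema and semantic color names.
colors:
  severity_error: "#ff5555"
  severity_warn: "#f1fa8c"
  severity_info: "#8be9fd"
  border: "#6272a4"
  selection: "#44475a"
```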
We love contributions! Please see CONTRIBUTING.md for details.
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Inspired by k9s for the amazing TUI patterns
- Built with Charm libraries for beautiful terminal UIs
- OpenTelemetry community for the OTLP specifications
- Usage Guide - Detailed usage instructions
- AWS CloudWatch Logs Usage Guide - Usage instructions for AWS CLI log tail and live tail with Gonzo
- Stern Usage Guide - Usage and examples for using Stern with Gonzo
- Victoria Logs Integration - Using Gonzo with Victoria Logs API
- Contributing Guide - How to contribute
- Changelog - Version history
Found a bug? Please open an issue with:
- Your OS and Go version
- Steps to reproduce
- Expected vs actual behavior
- Log samples (sanitized if needed)
If you find this project useful, please consider giving it a star! It helps others discover the tool.
Made with ❤️ by ControlTheory and the Gonzo community