
mcp-victoriametrics
The implementation of Model Context Protocol (MCP) server for VictoriaMetrics
Stars: 70

The VictoriaMetrics MCP Server is an implementation of Model Context Protocol (MCP) server for VictoriaMetrics. It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. The server allows you to use almost all read-only APIs of VictoriaMetrics, enabling monitoring, observability, and debugging tasks related to your VictoriaMetrics instances. It also contains embedded up-to-date documentation and tools for exploring metrics, labels, alerts, and more. The server can be used for advanced automation and interaction capabilities for engineers and tools.
README:
An implementation of a Model Context Protocol (MCP) server for VictoriaMetrics.
It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. It gives you a comprehensive interface for monitoring, observability, and debugging tasks related to your VictoriaMetrics instances, and enables advanced automation and interaction capabilities for engineers and tools.
This MCP server allows you to use almost all read-only APIs of VictoriaMetrics, i.e. all functions available in VMUI:
- Querying metrics and exploring data (even drawing graphs if your client supports it)
- Listing and exporting available metrics, labels, label values, and entire series
- Analyzing and testing your alerting and recording rules and alerts
- Showing parameters of your VictoriaMetrics instance
- Exploring cardinality of your data and metrics usage statistics
- Analyzing, tracing, prettifying and explaining your queries
- Debugging your relabeling rules, downsampling and retention policy configurations
- Integration with VictoriaMetrics Cloud
In addition, the MCP server contains embedded up-to-date documentation and is able to search it without online access.
More details about the exact available tools and prompts can be found in the Usage section.
You can combine the functionality of tools and docs search in your prompts and invent great usage scenarios for your VictoriaMetrics instance. Check the Dialog example section to see how it can work. Please note that the quality of the MCP Server and its responses depends heavily on the capabilities of your client and the quality of the model you are using.
You can also combine this MCP server with other observability- or documentation-search-related MCP servers and get even more powerful results.
There is a publicly available instance of the VictoriaMetrics MCP Server that you can use to test the features without installing it:
https://play-mcp.victoriametrics.com/mcp
It's available in Streamable HTTP mode and configured to work with Public VictoriaMetrics Playground.
Here is an example of a configuration for Claude Desktop that connects to this public instance.
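A minimal sketch of such a configuration is shown below. It assumes the third-party mcp-remote bridge (not part of this project) to proxy the Streamable HTTP endpoint, since Claude Desktop launches stdio servers; clients with native Streamable HTTP support may instead be able to point a URL-based server entry directly at the endpoint.
{
"mcpServers": {
"victoriametrics": {
"command": "npx",
"args": ["-y", "mcp-remote", "https://play-mcp.victoriametrics.com/mcp"]
}
}
}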
To install and run the MCP server you will need:
- A VictoriaMetrics or VictoriaMetrics Cloud instance (single-node or cluster)
- Go 1.24 or higher (if you want to build from source)
go install github.com/VictoriaMetrics-Community/mcp-victoriametrics/cmd/mcp-victoriametrics@latest
Alternatively, just download the latest release from the Releases page and put it in your PATH.
Example for Linux x86_64 (note that other architectures and platforms are also available):
latest=$(curl -s https://api.github.com/repos/VictoriaMetrics-Community/mcp-victoriametrics/releases/latest | grep 'tag_name' | cut -d\" -f4)
wget https://github.com/VictoriaMetrics-Community/mcp-victoriametrics/releases/download/$latest/mcp-victoriametrics_Linux_x86_64.tar.gz
tar axvf mcp-victoriametrics_Linux_x86_64.tar.gz
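After extraction, you can move the binary onto your PATH. For example (a sketch; it assumes the archive unpacks a binary named mcp-victoriametrics into the current directory):
# copy the extracted binary to a directory on your PATH
sudo install -m 0755 mcp-victoriametrics /usr/local/bin/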
You can run VictoriaMetrics MCP Server using Docker.
This is the easiest way to get started without needing to install Go or build from source.
docker run -d --name mcp-victoriametrics \
-e MCP_SERVER_MODE=sse \
-e VM_INSTANCE_ENTRYPOINT=https://play.victoriametrics.com \
-e VM_INSTANCE_TYPE=cluster \
ghcr.io/victoriametrics-community/mcp-victoriametrics
You should replace the environment variables with your own parameters.
Note that MCP_SERVER_MODE=sse enables Server-Sent Events mode, which is used by MCP clients to connect. Alternatively, you can use MCP_SERVER_MODE=http to enable Streamable HTTP mode. More details about server modes can be found in the Configuration section.
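For example, a Streamable HTTP variant of the command above might look like this (a sketch; it assumes you want the endpoint reachable from outside the container, so the port is published and the listener is bound to 0.0.0.0 via MCP_LISTEN_ADDR):
docker run -d --name mcp-victoriametrics \
-p 8080:8080 \
-e MCP_SERVER_MODE=http \
-e MCP_LISTEN_ADDR=0.0.0.0:8080 \
-e VM_INSTANCE_ENTRYPOINT=https://play.victoriametrics.com \
-e VM_INSTANCE_TYPE=cluster \
ghcr.io/victoriametrics-community/mcp-victoriametrics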
See the available Docker images in the GitHub registry.
Also see the Using Docker instead of binary section for more details about using Docker with the MCP server and clients in stdio mode.
To build the binary (or Docker image) from source code, you can use the following approach:
- Clone the repo:
git clone https://github.com/VictoriaMetrics-Community/mcp-victoriametrics.git
cd mcp-victoriametrics
- Build the binary from the cloned source code:
make build
# after that you can find the mcp-victoriametrics binary and copy it to your PATH or run it in place
- Build the Docker image from the cloned source code:
docker build -t mcp-victoriametrics .
# after that you can use the mcp-victoriametrics Docker image for running or pushing
To install the VictoriaMetrics MCP Server for your client automatically via Smithery, you can use the following commands:
# Get the list of supported MCP clients
npx -y @smithery/cli list clients
# Available clients:
# claude
# cline
# windsurf
# roocode
# witsy
# enconvo
# cursor
# vscode
# vscode-insiders
# boltai
# amazon-bedrock
# Install VictoriaMetrics MCP server for your client
npx -y @smithery/cli install @VictoriaMetrics-Community/mcp-victoriametrics --client <YOUR-CLIENT-NAME>
# and follow the instructions
The MCP Server for VictoriaMetrics is configured via environment variables:
Variable | Description | Required | Default | Allowed values |
---|---|---|---|---|
VM_INSTANCE_ENTRYPOINT | URL to the VictoriaMetrics instance (the root URL of vmsingle or vmselect) | Yes (if you don't use VMC_API_KEY) | - | - |
VM_INSTANCE_TYPE | Type of VictoriaMetrics instance | Yes (if you don't use VMC_API_KEY) | - | single, cluster |
VM_INSTANCE_BEARER_TOKEN | Authentication token for the VictoriaMetrics API | No | - | - |
VMC_API_KEY | API key from the VictoriaMetrics Cloud console | No | - | - |
MCP_SERVER_MODE | Server operation mode. See Modes for details. | No | stdio | stdio, sse, http |
MCP_LISTEN_ADDR | Address for the SSE or HTTP server to listen on | No | localhost:8080 | - |
MCP_DISABLED_TOOLS | Comma-separated list of tools to disable | No | - | - |
MCP_HEARTBEAT_INTERVAL | Heartbeat interval for the Streamable HTTP protocol. The MCP server sends a heartbeat to the client over the GET connection to keep it from being closed by network infrastructure (e.g. gateways). | No | 30s | - |
You can use two options to connect to your VictoriaMetrics instance:
- Using the VM_INSTANCE_ENTRYPOINT + VM_INSTANCE_TYPE + VM_INSTANCE_BEARER_TOKEN (optional) environment variables to connect to any single-node or cluster instance of VictoriaMetrics.
- Using the VMC_API_KEY environment variable to work with your VictoriaMetrics Cloud instances.
The MCP Server supports the following modes of operation (transports):
- stdio - Standard input/output mode, where the server reads commands from standard input and writes responses to standard output. This is the default mode and is suitable for local servers.
- sse - Server-Sent Events mode. The server will expose the /sse and /message endpoints for SSE connections.
- http - Streamable HTTP mode. The server will expose the /mcp endpoint for HTTP connections.
More info about transports can be found in the MCP docs.
# For a single-node instance
export VM_INSTANCE_ENTRYPOINT="http://localhost:8428"
export VM_INSTANCE_TYPE="single"
export VM_INSTANCE_BEARER_TOKEN="your-token"
# For a cluster
export VM_INSTANCE_ENTRYPOINT="https://play.victoriametrics.com"
export VM_INSTANCE_TYPE="cluster"
export MCP_DISABLED_TOOLS="export,metric_statistics,test_rules" # disable export, statistics and rules unit test tools
# For VictoriaMetrics Cloud
export VMC_API_KEY="<your-api-key>"
# Server mode
export MCP_SERVER_MODE="sse"
export MCP_LISTEN_ADDR="0.0.0.0:8080"
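With these variables exported, the server can then be started directly, since all configuration is read from the environment (a sketch; it assumes the mcp-victoriametrics binary is on your PATH):
mcp-victoriametrics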
In SSE and HTTP modes the MCP server provides the following endpoints:
Endpoint | Description |
---|---|
/sse + /message | Endpoints for messages in SSE mode (for MCP clients that support SSE) |
/mcp | HTTP endpoint for streaming messages in HTTP mode (for MCP clients that support Streamable HTTP) |
/metrics | Metrics in Prometheus format for monitoring the MCP server |
/health/liveness | Liveness check endpoint to ensure the server is running |
/health/readiness | Readiness check endpoint to ensure the server is ready to accept requests |
Go to: Settings -> Cursor Settings -> MCP -> Add new global MCP server and paste the following configuration into your Cursor ~/.cursor/mcp.json file:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
See Cursor MCP docs for more info.
Add this to your Claude Desktop claude_desktop_config.json file (you can find it by opening Settings -> Developer -> Edit config):
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
See Claude Desktop MCP docs for more info.
Run the command:
claude mcp add victoriametrics -- /path/to/mcp-victoriametrics \
-e VM_INSTANCE_ENTRYPOINT=<YOUR_VM_INSTANCE> \
-e VM_INSTANCE_TYPE=<YOUR_VM_INSTANCE_TYPE> \
-e VM_INSTANCE_BEARER_TOKEN=<YOUR_VM_BEARER_TOKEN>
See Claude Code MCP docs for more info.
Add this to your VS Code MCP config file:
{
"servers": {
"victoriametrics": {
"type": "stdio",
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
See VS Code MCP docs for more info.
Add the following to your Zed config file:
"context_servers": {
"victoriametrics": {
"command": {
"path": "/path/to/mcp-victoriametrics",
"args": [],
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
},
"settings": {}
}
}
See Zed MCP docs for more info.
- Open Settings -> Tools -> AI Assistant -> Model Context Protocol (MCP).
- Click Add (+).
- Select As JSON.
- Paste the following into the input field:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
Add the following to your Windsurf MCP config file:
{
"mcpServers": {
"victoriametrics": {
"command": "/path/to/mcp-victoriametrics",
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
See Windsurf MCP docs for more info.
You can run the VictoriaMetrics MCP server using Docker instead of a local binary.
In that case, replace the command in the configuration examples above as follows:
{
"mcpServers": {
"victoriametrics": {
"command": "docker",
"args": [
"run",
"-i", "--rm",
"-e", "VM_INSTANCE_ENTRYPOINT",
"-e", "VM_INSTANCE_TYPE",
"-e", "VM_INSTANCE_BEARER_TOKEN",
"ghcr.io/victoriametrics-community/mcp-victoriametrics"
],
"env": {
"VM_INSTANCE_ENTRYPOINT": "<YOUR_VM_INSTANCE>",
"VM_INSTANCE_TYPE": "<YOUR_VM_INSTANCE_TYPE>",
"VM_INSTANCE_BEARER_TOKEN": "<YOUR_VM_BEARER_TOKEN>"
}
}
}
}
After installing and configuring the MCP server, you can start using it with your favorite MCP client.
You can start a dialog with the AI assistant with the phrase:
Use MCP VictoriaMetrics in the following answers
But this is not required; you can just start asking questions and the assistant will automatically use the tools and documentation to provide you with the best answers. Take a look at the Dialog example section for a better understanding of what you can do with it.
MCP VictoriaMetrics provides numerous tools for interacting with your VictoriaMetrics instance.
Here is a list of the commonly available tools:
Tool | Description |
---|---|
query | Execute instant PromQL/MetricsQL queries |
query_range | Execute range PromQL/MetricsQL queries over a time period |
metrics | List available metrics |
labels | List available label names |
label_values | List values for a specific label |
series | List available time series |
export | Export raw time series data to JSON or CSV |
rules | View alerting and recording rules |
alerts | View current alerts (firing and pending) |
flags | View non-default flags of the VictoriaMetrics instance |
metric_statistics | Get metrics usage (in queries) statistics |
active_queries | View currently executing queries |
top_queries | View most frequent or slowest queries |
tsdb_status | View TSDB cardinality statistics |
tenants | List available tenants in a multi-tenant cluster setup |
documentation | Search the embedded VictoriaMetrics documentation |
metric_relabel_debug | Debug Prometheus-compatible relabeling rules |
downsampling_filters_debug | Debug downsampling configuration |
retention_filters_debug | Debug retention filters configuration |
prettify_query | Prettify and format PromQL/MetricsQL queries |
explain_query | Parse PromQL/MetricsQL queries and explain how they work |
test_rules | Unit-test alerting and recording rules using the vmalert tool |
Here are some additional tools that are available to VictoriaMetrics Cloud users (when the VMC_API_KEY parameter is specified):
Tool | Description |
---|---|
deployments | List available deployments in VictoriaMetrics Cloud |
cloud_providers | List available cloud providers in VictoriaMetrics Cloud |
regions | List available cloud provider regions in VictoriaMetrics Cloud |
tiers | List available deployment tiers in VictoriaMetrics Cloud |
access_tokens | List available deployment access tokens in VictoriaMetrics Cloud |
rule_filenames | List available alerting and recording rule filenames in VictoriaMetrics Cloud |
rule_file | Show the content of a specific alerting or recording rule file in VictoriaMetrics Cloud |
The server includes pre-defined prompts for common tasks.
These are just examples for now; the prompt library will be expanded in the future:
Prompt | Description |
---|---|
unused_metrics | Find metrics that are never queried and create relabel configs to stop ingesting them |
documentation | Search VictoriaMetrics documentation for specific topics |
rarely_used_metrics_with_high_cardinality | Identify high-cardinality metrics that are rarely queried |
This is an example dialog with an assistant using the MCP server that can help you understand how it works (it is a real dialog using the VictoriaMetrics MCP server with Claude Desktop, configured against the public playground instance):
- You: What tenants are available in my VM?
- Assistant: In your VictoriaMetrics instance, the following tenants are available: 0:0 (default tenant), 1:0, 10:0, 100:0
- You: From now on, please use only the default tenant
- Assistant: I understand. From now on, I'll use only the default tenant (0:0) for all operations with your VictoriaMetrics instance.
- You: Which metrics do we have?
- Assistant: There are many metrics available in your VictoriaMetrics instance. The list includes metrics from various systems and components, such as:
  - System metrics (node_, machine_, etc.)
  - Kubernetes metrics (kube_, container_, apiserver_, etc.)
  - VictoriaMetrics-specific metrics (vm_, vmagent_, vmalert_, etc.)
  - Go runtime metrics (go_)
  - Prometheus-related metrics (scrape_, promhttp_)
  - Various application metrics (argocd_, alertmanager_)
  This is a comprehensive list of over 2000 metrics that are available for querying. Is there a specific type of metric you're interested in exploring further?
- You: Ok, let's work with Kubernetes metrics from now on. Please show me which of these metrics are never queried.
- Assistant: Based on the results, I can see that there are two Kubernetes metrics that have never been queried (query count is 0):
  - kube_persistentvolumeclaim_deletion_timestamp - This metric provides information about when PersistentVolumeClaims were marked for deletion.
  - kube_pod_init_container_status_last_terminated_reason - This metric shows the reason why init containers in pods were last terminated.
  These metrics might be valuable for monitoring and troubleshooting purposes:
  - The PVC deletion timestamp metric could help track deletion patterns or identify stuck PVCs.
  - The init container termination reason metric could help identify common failure patterns in your initialization containers.
  Would you like me to explain more about what these metrics track or suggest some useful queries to start monitoring them?
- You: Yes, please. And also, can you create relabeling rules to stop ingesting these metrics?
- Assistant: I'll create a relabel configuration to exclude these never-queried metrics from collection. This will help reduce storage and processing overhead. Here's the relabel configuration you can add to your VictoriaMetrics scrape configuration:
  relabel_configs:
    - action: drop
      source_labels: [__name__]
      regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason
  This configuration will:
  - Look at each metric's name (the __name__ label)
  - Check if it matches either of the specified metrics
  - Drop the metrics that match, preventing them from being collected and stored
  You can add this to your VictoriaMetrics configuration file or to the relevant scrape job configurations in your Prometheus configuration if you're using Prometheus as a scraper for VictoriaMetrics. If you want to test this relabel configuration before implementing it, you can use the metric relabel debug tool to see how it would affect your metrics.
- You: Yes, please debug this relabel config on real time series to see how it would affect my metrics
- Assistant: Perfect! The relabel config works as expected:
  - The first two test cases show that both kube_persistentvolumeclaim_deletion_timestamp and kube_pod_init_container_status_last_terminated_reason metrics are correctly dropped (the resulting labels are empty {}).
  - The third test case confirms that other metrics like kube_pod_status_phase are preserved and not affected by our drop rule.
  You can confidently add this relabel configuration to your VictoriaMetrics or Prometheus configuration to stop collecting these unused metrics. This will save storage space and reduce processing overhead without affecting your current monitoring capabilities.
- You: But I'm using the vmagent resource of the VictoriaMetrics operator, could you please give me the part of the specification for my agent with this relabel config?
- Assistant: You're right, since you're using the VictoriaMetrics Operator and the VMAgent custom resource, the configuration needs to be in the appropriate format for that CR. Here's how you can add the relabel configuration to your VMAgent specification:
  apiVersion: operator.victoriametrics.com/v1beta1
  kind: VMAgent
  metadata:
    name: your-vmagent-name
  spec:
    # ... your existing configuration ...
    # Add this section to configure global metric relabeling
    inlineRelabelConfig:
      - action: drop
        sourceLabels: ["__name__"]
        regex: kube_persistentvolumeclaim_deletion_timestamp|kube_pod_init_container_status_last_terminated_reason
    # ... rest of your configuration ...
  Key points:
  - In the VMAgent spec, you use inlineRelabelConfig for inline relabeling rules
  - The field is called sourceLabels (not source_labels as in the Prometheus config)
  - This is a global relabeling that will be applied to all metrics before they're stored
- You: I decided not to remove those metrics, but just lower the sample rate for them using downsampling. What downsampling parameters do I have now?
- Assistant: I can see that you currently have the following downsampling configuration:
  -downsampling.period="7d:1m,30d:5m,180d:30m"
  This means:
  - Data newer than 7 days is kept at its original resolution
  - Data between 7-30 days old is downsampled to 1-minute resolution
  - Data between 30-180 days old is downsampled to 5-minute resolution
  - Data older than 180 days is downsampled to 30-minute resolution
  Now, if you want to add specific downsampling rules for those rarely-queried metrics, you can use the downsampling filters debug tool to test how new rules would work. Let's test a configuration where we apply more aggressive downsampling to those specific metrics:
  ...getting series and using the downsampling filter debugger...
  Great! Now our test is working correctly. Based on the current configuration and our testing, here's a suggested downsampling configuration for your VMCluster CR that would apply more aggressive downsampling to those rarely-queried metrics:
  apiVersion: operator.victoriametrics.com/v1beta1
  kind: VMCluster
  metadata:
    name: monitoring-vm-cluster
  spec:
    # ... your existing configuration ...
    # Update the vmstorage section with the new downsampling parameters
    vmstorage:
      extraArgs:
        downsampling.period: '{__name__="kube_persistentvolumeclaim_deletion_timestamp"}:1d:10m,{__name__="kube_pod_init_container_status_last_terminated_reason"}:1d:10m,7d:1m,30d:5m,180d:30m'
    # ... rest of your configuration ...
  This configuration:
  - Applies 10-minute resolution downsampling to the rarely-queried metrics after just 1 day
  - Maintains your existing downsampling policy for all other metrics
  The order of the downsampling rules is important - more specific rules should be listed first.
During this dialog, the assistant was using the following tools:
- tenants to get the list of available tenants
- documentation to get information about functionality and data formats
- metrics to get the list of available metrics
- metric_statistics to get information about metrics usage
- series to get the time series for debugging
- metric_relabel_debug to debug relabeling rules
- flags to get information about instance parameters
- downsampling_filters_debug to debug the downsampling configuration
But you can use any other tools and combine them in your own way.
In SSE and HTTP modes the MCP server provides metrics in Prometheus format (see endpoints), and you can find a simple Grafana dashboard for these metrics in the repo.
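For example, a minimal Prometheus-style scrape config for collecting these metrics might look like this (a sketch; it assumes the MCP server listens on localhost:8080, the default MCP_LISTEN_ADDR):
scrape_configs:
  - job_name: mcp-victoriametrics
    metrics_path: /metrics
    static_configs:
      - targets: ["localhost:8080"]  # address from MCP_LISTEN_ADDR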
- [x] Support "Prettify query" tool (done in v0.0.5)
- [x] Support "Explain query" tool (done in v0.0.6)
- [x] Support CI pipeline for building and pushing multiarch docker images (done in v1.0.0)
- [ ] Support tool for analysis of query execution statistics
- [ ] Support vmanomaly
- [x] Support tool for unit-testing of alerting and recording rules (done in v0.0.7)
- [x] Support optional integration with VictoriaMetrics Cloud (via API keys) (done in v0.0.9)
- [ ] Add some extra knowledge to the server in addition to the current documentation tool:
  - [x] VictoriaMetrics blog posts (done in v1.1.0)
  - [ ] GitHub issues
  - [ ] Public Slack chat history
  - [ ] CRD schemas
  - [ ] Alerting and recording rule sets
- [ ] Implement multitenant version of MCP (that will support several deployments)
- [ ] Add flags/configs validation tool
- [ ] Support tools for vmagent API
- [ ] Support new vmalert API
- [x] Enabling/disabling tools via configuration (done in v0.0.8)
- [ ] Tools for Alertmanager APIs #6
- [ ] Support for metrics metadata once it is implemented in VictoriaMetrics
- [ ] Support authentication
AI services and agents, along with MCP servers like this one, cannot guarantee the accuracy, completeness, and reliability of their results. You should double-check results obtained with AI.
The quality of the MCP Server and its responses depends very much on the capabilities of your client and the quality of the model you are using.
Contributions to the MCP VictoriaMetrics project are welcome!
Please feel free to submit issues, feature requests, or pull requests.
Alternative AI tools for mcp-victoriametrics
Similar Open Source Tools

mcp-victoriametrics
The VictoriaMetrics MCP Server is an implementation of Model Context Protocol (MCP) server for VictoriaMetrics. It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. The server allows you to use almost all read-only APIs of VictoriaMetrics, enabling monitoring, observability, and debugging tasks related to your VictoriaMetrics instances. It also contains embedded up-to-date documentation and tools for exploring metrics, labels, alerts, and more. The server can be used for advanced automation and interaction capabilities for engineers and tools.

runpod-worker-comfy
runpod-worker-comfy is a serverless API tool that allows users to run any ComfyUI workflow to generate an image. Users can provide input images as base64-encoded strings, and the generated image can be returned as a base64-encoded string or uploaded to AWS S3. The tool is built on Ubuntu + NVIDIA CUDA and provides features like built-in checkpoints and VAE models. Users can configure environment variables to upload images to AWS S3 and interact with the RunPod API to generate images. The tool also supports local testing and deployment to Docker hub using Github Actions.

avante.nvim
avante.nvim is a Neovim plugin that emulates the behavior of the Cursor AI IDE, providing AI-driven code suggestions and enabling users to apply recommendations to their source files effortlessly. It offers AI-powered code assistance and one-click application of suggested changes, streamlining the editing process and saving time. The plugin is still in early development, with functionalities like setting API keys, querying AI about code, reviewing suggestions, and applying changes. Key bindings are available for various actions, and the roadmap includes enhancing AI interactions, stability improvements, and introducing new features for coding tasks.

pentagi
PentAGI is an innovative tool for automated security testing that leverages cutting-edge artificial intelligence technologies. It is designed for information security professionals, researchers, and enthusiasts who need a powerful and flexible solution for conducting penetration tests. The tool provides secure and isolated operations in a sandboxed Docker environment, fully autonomous AI-powered agent for penetration testing steps, a suite of 20+ professional security tools, smart memory system for storing research results, web intelligence for gathering information, integration with external search systems, team delegation system, comprehensive monitoring and reporting, modern interface, API integration, persistent storage, scalable architecture, self-hosted solution, flexible authentication, and quick deployment through Docker Compose.

repomix
Repomix is a powerful tool that packs your entire repository into a single, AI-friendly file. It is designed to format your codebase for easy understanding by AI tools like Large Language Models (LLMs), Claude, ChatGPT, and Gemini. Repomix offers features such as AI optimization, token counting, simplicity in usage, customization options, Git awareness, and security-focused checks using Secretlint. It allows users to pack their entire repository or specific directories/files using glob patterns, and even supports processing remote Git repositories. The tool generates output in plain text, XML, or Markdown formats, with options for including/excluding files, removing comments, and performing security checks. Repomix also provides a global configuration option, custom instructions for AI context, and a security check feature to detect sensitive information in files.

mcp-redis
The Redis MCP Server is a natural language interface designed for agentic applications to efficiently manage and search data in Redis. It integrates seamlessly with MCP (Model Content Protocol) clients, enabling AI-driven workflows to interact with structured and unstructured data in Redis. The server supports natural language queries, seamless MCP integration, full Redis support for various data types, search and filtering capabilities, scalability, and lightweight design. It provides tools for managing data stored in Redis, such as string, hash, list, set, sorted set, pub/sub, streams, JSON, query engine, and server management. Installation can be done from PyPI or GitHub, with options for testing, development, and Docker deployment. Configuration can be via command line arguments or environment variables. Integrations include OpenAI Agents SDK, Augment, Claude Desktop, and VS Code with GitHub Copilot. Use cases include AI assistants, chatbots, data search & analytics, and event processing. Contributions are welcome under the MIT License.

agenticSeek
AgenticSeek is a voice-enabled AI assistant powered by DeepSeek R1 agents, offering a fully local alternative to cloud-based AI services. It allows users to interact with their filesystem, code in multiple languages, and perform various tasks autonomously. The tool is equipped with memory to remember user preferences and past conversations, and it can divide tasks among multiple agents for efficient execution. AgenticSeek prioritizes privacy by running entirely on the user's hardware without sending data to the cloud.

nano-graphrag
nano-GraphRAG is a simple, easy-to-hack implementation of GraphRAG that provides a smaller, faster, and cleaner version of the official implementation. It is about 800 lines of code, small yet scalable, asynchronous, and fully typed. The tool supports incremental insert, async methods, and various parameters for customization. Users can replace storage components and LLM functions as needed. It also allows for embedding function replacement and comes with pre-defined prompts for entity extraction and community reports. However, some features like covariates and global search implementation differ from the original GraphRAG. Future versions aim to address issues related to data source ID, community description truncation, and add new components.

forge
Forge is a powerful open-source tool for building modern web applications. It provides a simple and intuitive interface for developers to quickly scaffold and deploy projects. With Forge, you can easily create custom components, manage dependencies, and streamline your development workflow. Whether you are a beginner or an experienced developer, Forge offers a flexible and efficient solution for your web development needs.

mistral-inference
Mistral Inference repository contains minimal code to run 7B, 8x7B, and 8x22B models. It provides model download links, installation instructions, and usage guidelines for running models via CLI or Python. The repository also includes information on guardrailing, model platforms, deployment, and references. Users can interact with models through commands like mistral-demo, mistral-chat, and mistral-common. Mistral AI models support function calling and chat interactions for tasks like testing models, chatting with models, and using Codestral as a coding assistant. The repository offers detailed documentation and links to blogs for further information.

pr-pilot
PR Pilot is an AI-powered tool designed to assist users in their daily workflow by delegating routine work to AI with confidence and predictability. It integrates seamlessly with popular development tools and allows users to interact with it through a Command-Line Interface, Python SDK, REST API, and Smart Workflows. Users can automate tasks such as generating PR titles and descriptions, summarizing and posting issues, and formatting README files. The tool aims to save time and enhance productivity by providing AI-powered solutions for common development tasks.

python-tgpt
Python-tgpt is a Python package that enables seamless interaction with over 45 free LLM providers without requiring an API key. It also provides image generation capabilities. The name _python-tgpt_ draws inspiration from its parent project tgpt, which operates on Golang. Through this Python adaptation, users can effortlessly engage with a number of free LLMs available, fostering a smoother AI interaction experience.

roast
Roast is a convention-oriented framework for creating structured AI workflows maintained by the Augmented Engineering team at Shopify. It provides a structured, declarative approach to building AI workflows with convention over configuration, built-in tools for file operations, search, and AI interactions, Ruby integration for custom steps, shared context between steps, step customization with AI models and parameters, session replay, parallel execution, function caching, and extensive instrumentation for monitoring workflow execution, AI calls, and tool usage.

parrot.nvim
Parrot.nvim is a Neovim plugin that prioritizes a seamless out-of-the-box experience for text generation. It simplifies functionality and focuses solely on text generation, excluding integration of DALLE and Whisper. It supports persistent conversations as markdown files, custom hooks for inline text editing, multiple providers like Anthropic API, perplexity.ai API, OpenAI API, Mistral API, and local/offline serving via ollama. It allows custom agent definitions, flexible API credential support, and repository-specific instructions with a `.parrot.md` file. It does not have autocompletion or hidden requests in the background to analyze files.

aiosmtpd
aiosmtpd is an asyncio-based SMTP server implementation that provides a modern and efficient way to handle SMTP and LMTP protocols in Python 3. It replaces the outdated asyncore and asynchat modules with asyncio for improved asynchronous I/O operations. The project aims to offer a more user-friendly, extendable, and maintainable solution for handling email protocols in Python applications. It is actively maintained by experienced Python developers and offers full documentation for easy integration and usage.

syncode
SynCode is a novel framework for the grammar-guided generation of Large Language Models (LLMs) that ensures syntactically valid output based on a Context-Free Grammar (CFG). It supports various programming languages like Python, Go, SQL, Math, JSON, and more. Users can define custom grammars using EBNF syntax. SynCode offers fast generation, seamless integration with HuggingFace Language Models, and the ability to sample with different decoding strategies.
For similar tasks

mcp-victoriametrics
The VictoriaMetrics MCP Server is an implementation of Model Context Protocol (MCP) server for VictoriaMetrics. It provides access to your VictoriaMetrics instance and seamless integration with VictoriaMetrics APIs and documentation. The server allows you to use almost all read-only APIs of VictoriaMetrics, enabling monitoring, observability, and debugging tasks related to your VictoriaMetrics instances. It also contains embedded up-to-date documentation and tools for exploring metrics, labels, alerts, and more. The server can be used for advanced automation and interaction capabilities for engineers and tools.

pandas-ai
PandasAI is a Python library that makes it easy to ask questions to your data in natural language. It helps you to explore, clean, and analyze your data using generative AI.

supersonic
SuperSonic is a next-generation BI platform that integrates Chat BI (powered by LLM) and Headless BI (powered by semantic layer) paradigms. This integration ensures that Chat BI has access to the same curated and governed semantic data models as traditional BI. Furthermore, the implementation of both paradigms benefits from the integration: * Chat BI's Text2SQL gets augmented with context-retrieval from semantic models. * Headless BI's query interface gets extended with natural language API. SuperSonic provides a Chat BI interface that empowers users to query data using natural language and visualize the results with suitable charts. To enable such experience, the only thing necessary is to build logical semantic models (definition of metric/dimension/tag, along with their meaning and relationships) through a Headless BI interface. Meanwhile, SuperSonic is designed to be extensible and composable, allowing custom implementations to be added and configured with Java SPI. The integration of Chat BI and Headless BI has the potential to enhance the Text2SQL generation in two dimensions: 1. Incorporate data semantics (such as business terms, column values, etc.) into the prompt, enabling LLM to better understand the semantics and reduce hallucination. 2. Offload the generation of advanced SQL syntax (such as join, formula, etc.) from LLM to the semantic layer to reduce complexity. With these ideas in mind, we develop SuperSonic as a practical reference implementation and use it to power our real-world products. Additionally, to facilitate further development we decide to open source SuperSonic as an extensible framework.

DeepBI
DeepBI is an AI-native data analysis platform that leverages the power of large language models to explore, query, visualize, and share data from any data source. Users can use DeepBI to gain data insight and make data-driven decisions.

WrenAI
WrenAI is a data assistant tool that helps users get results and insights faster by asking questions in natural language, without writing SQL. It leverages Large Language Models (LLM) with Retrieval-Augmented Generation (RAG) technology to enhance comprehension of internal data. Key benefits include fast onboarding, secure design, and open-source availability. WrenAI consists of three core services: Wren UI (intuitive user interface), Wren AI Service (processes queries using a vector database), and Wren Engine (platform backbone). It is currently in alpha version, with new releases planned biweekly.

opendataeditor
The Open Data Editor (ODE) is a no-code application to explore, validate and publish data in a simple way. It is an open source project powered by the Frictionless Framework. The ODE is currently available for download and testing in beta.

Chat2DB
Chat2DB is an AI-driven data development and analysis platform that enables users to communicate with databases using natural language. It supports a wide range of databases, including MySQL, PostgreSQL, Oracle, SQLServer, SQLite, MariaDB, ClickHouse, DM, Presto, DB2, OceanBase, Hive, KingBase, MongoDB, Redis, and Snowflake. Chat2DB provides a user-friendly interface that allows users to query databases, generate reports, and explore data using natural language commands. It also offers a variety of features to help users improve their productivity, such as auto-completion, syntax highlighting, and error checking.

llm-datasets
LLM Datasets is a repository containing high-quality datasets, tools, and concepts for LLM fine-tuning. It provides datasets with characteristics like accuracy, diversity, and complexity to train large language models for various tasks. The repository includes datasets for general-purpose, math & logic, code, conversation & role-play, and agent & function calling domains. It also offers guidance on creating high-quality datasets through data deduplication, data quality assessment, data exploration, and data generation techniques.
For similar jobs

aiscript
AiScript is a lightweight scripting language that runs on JavaScript. It supports arrays, objects, and functions as first-class citizens, and is easy to write without the need for semicolons or commas. AiScript runs in a secure sandbox environment, preventing infinite loops from freezing the host. It also allows for easy provision of variables and functions from the host.

askui
AskUI is a reliable, automated end-to-end automation tool that only depends on what is shown on your screen instead of the technology or platform you are running on.

bots
The 'bots' repository is a collection of guides, tools, and example bots for programming bots to play video games. It provides resources on running bots live, installing the BotLab client, debugging bots, testing bots in simulated environments, and more. The repository also includes example bots for games like EVE Online, Tribal Wars 2, and Elvenar. Users can learn about developing bots for specific games, syntax of the Elm programming language, and tools for memory reading development. Additionally, there are guides on bot programming, contributing to BotLab, and exploring Elm syntax and core library.

ain
Ain is a terminal HTTP API client designed for scripting input and processing output via pipes. It allows flexible organization of APIs using files and folders, supports shell-scripts and executables for common tasks, handles url-encoding, and enables sharing the resulting curl, wget, or httpie command-line. Users can put things that change in environment variables or .env-files, and pipe the API output for further processing. Ain targets users who work with many APIs using a simple file format and uses curl, wget, or httpie to make the actual calls.

LaVague
LaVague is an open-source Large Action Model framework that uses advanced AI techniques to compile natural language instructions into browser automation code. It leverages Selenium or Playwright for browser actions. Users can interact with LaVague through an interactive Gradio interface to automate web interactions. The tool requires an OpenAI API key for default examples and offers a Playwright integration guide. Contributors can help by working on outlined tasks, submitting PRs, and engaging with the community on Discord. The project roadmap is available to track progress, but users should exercise caution when executing LLM-generated code using 'exec'.

robocorp
Robocorp is a platform that allows users to create, deploy, and operate Python automations and AI actions. It provides an easy way to extend the capabilities of AI agents, assistants, and copilots with custom actions written in Python. Users can create and deploy tools, skills, loaders, and plugins that securely connect any AI Assistant platform to their data and applications. The Robocorp Action Server makes Python scripts compatible with ChatGPT and LangChain by automatically creating and exposing an API based on function declaration, type hints, and docstrings. It simplifies the process of developing and deploying AI actions, enabling users to interact with AI frameworks effortlessly.

Open-Interface
Open Interface is a self-driving software that automates computer tasks by sending user requests to a language model backend (e.g., GPT-4V) and simulating keyboard and mouse inputs to execute the steps. It course-corrects by sending current screenshots to the language models. The tool supports MacOS, Linux, and Windows, and requires setting up the OpenAI API key for access to GPT-4V. It can automate tasks like creating meal plans, setting up custom language model backends, and more. Open Interface is currently not efficient in accurate spatial reasoning, tracking itself in tabular contexts, and navigating complex GUI-rich applications. Future improvements aim to enhance the tool's capabilities with better models trained on video walkthroughs. The tool is cost-effective, with user requests priced between $0.05 - $0.20, and offers features like interrupting the app and primary display visibility in multi-monitor setups.

AI-Case-Sorter-CS7.1
AI-Case-Sorter-CS7.1 is a project focused on building a case sorter using machine vision and machine learning AI to sort cases by headstamp. The repository includes Arduino code and 3D models necessary for the project.