
MCP-PostgreSQL-Ops
Professional MCP server for PostgreSQL operations & monitoring: 30+ extension-independent tools for performance analysis, table bloat detection, autovacuum monitoring, schema introspection, and database management. Supports PostgreSQL 12-17.
Stars: 114

MCP-PostgreSQL-Ops is an MCP (Model Context Protocol) server that exposes read-only tools for PostgreSQL operations and monitoring. It covers performance analysis, slow query identification, table bloat detection, VACUUM and autovacuum monitoring, schema introspection, and capacity reporting across PostgreSQL 12-17, all driven through natural language queries from MCP clients. By consolidating these diagnostics in one place, it helps administrators maintain and optimize their PostgreSQL deployments without hand-writing monitoring SQL.
README:
MCP-PostgreSQL-Ops is a professional MCP server for PostgreSQL database operations, monitoring, and management. It supports PostgreSQL 12-17 with comprehensive database analysis, performance monitoring, and intelligent maintenance recommendations through natural language queries. Most features work independently, but advanced query analysis is enhanced when the `pg_stat_statements` and (optionally) `pg_stat_monitor` extensions are installed.
- ✅ Zero Configuration: Works with PostgreSQL 12-17 out of the box with automatic version detection.
- ✅ Natural Language: Ask questions like "Show me slow queries" or "Analyze table bloat."
- ✅ Production Safe: Read-only operations, RDS/Aurora compatible with regular user permissions.
- ✅ Extension Enhanced: Optional `pg_stat_statements` and `pg_stat_monitor` for advanced query analytics.
- ✅ Comprehensive Database Monitoring: Performance analysis, bloat detection, and maintenance recommendations.
- ✅ Smart Query Analysis: Slow query identification with `pg_stat_statements` and `pg_stat_monitor` integration.
- ✅ Schema & Relationship Discovery: Database structure exploration with detailed relationship mapping.
- ✅ VACUUM & Autovacuum Intelligence: Real-time maintenance monitoring and effectiveness analysis.
- ✅ Multi-Database Operations: Seamless cross-database analysis and monitoring.
- ✅ Enterprise-Ready: Safe read-only operations with RDS/Aurora compatibility.
- ✅ Developer-Friendly: Simple codebase for easy customization and tool extension.
- Version-aware I/O statistics (enhanced on PostgreSQL 16+).
- Real-time connection and lock monitoring.
- Background process and checkpoint analysis.
- Replication status and WAL monitoring.
- Database capacity and bloat analysis.
Note: The `postgresql` container included in `docker-compose.yml` is intended for quickstart testing purposes only. You can connect to your own PostgreSQL instance by adjusting the environment variables as needed.
If you want to use your own PostgreSQL instance instead of the built-in test container:
- Update the target PostgreSQL connection information in your `.env` file (see POSTGRES_HOST, POSTGRES_PORT, POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB).
- In `docker-compose.yml`, comment out (disable) the `postgres` and `postgres-init-extensions` containers to avoid starting the built-in test database.
Note: While superuser privileges provide access to all databases and system information, the MCP server also works with regular user permissions for basic monitoring tasks.
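The quickstart below connects as the `postgres` superuser, but a dedicated read-only account is enough for most tools. As a rough sketch (the role name, password, and database list are placeholders, not something the project creates for you):

```sql
-- Hypothetical monitoring role; adjust the name, password, and database list to your environment.
CREATE ROLE mcp_monitor LOGIN PASSWORD 'change-me';

-- pg_monitor (PostgreSQL 10+) bundles pg_read_all_settings, pg_read_all_stats,
-- and pg_stat_scan_tables, which covers the statistics views these tools read.
GRANT pg_monitor TO mcp_monitor;

-- Allow connections to the databases you want to analyze.
GRANT CONNECT ON DATABASE ecommerce TO mcp_monitor;
```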
```bash
git clone https://github.com/call518/MCP-PostgreSQL-Ops.git
cd MCP-PostgreSQL-Ops

### Check and modify .env file
cp .env.example .env
vim .env

### No need to modify defaults, but if using your own PostgreSQL server, edit below:
POSTGRES_HOST=host.docker.internal
POSTGRES_PORT=15432         # External port for host access (mapped to internal 5432)
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme!@34
POSTGRES_DB=ecommerce       # Default connection DB. Superusers can access all DBs.
```
Note: `PGDATA=/data/db` is preconfigured for the Percona PostgreSQL Docker image, which requires this specific path for proper write permissions.
```bash
# Start all containers including built-in PostgreSQL for testing
docker-compose up -d

# Alternative: If using your own PostgreSQL instance
# Comment out postgres and postgres-init-extensions services in docker-compose.yml
# Then use the custom configuration:
# docker-compose -f docker-compose.custom-db.yml up -d
```
⏰ Wait for Environment Setup: The initial environment setup takes a few minutes as containers are started in sequence:
- PostgreSQL container starts first with database initialization
- PostgreSQL Extensions container installs extensions and creates comprehensive test data (~83K records)
- MCP Server and MCPO Proxy containers start after PostgreSQL is ready
- OpenWebUI container starts last and may take additional time to load the web interface
💡 Tip: Wait 2-3 minutes after running `docker-compose up -d` before accessing OpenWebUI to ensure all services are fully initialized.
Check Container Status (Optional):
```bash
# Monitor container startup progress
docker-compose logs -f

# Check if all containers are running
docker-compose ps

# Verify PostgreSQL is ready
docker-compose logs postgres | grep "ready to accept connections"
```
- The list of MCP tool features provided by Swagger can be found at the MCPO API Docs URL (e.g., http://localhost:8003/docs).
- Log in to OpenWebUI with an admin account.
- Go to "Settings" → "Tools" from the top menu.
- Enter the `postgresql-ops` tool address (e.g., http://localhost:8003/postgresql-ops) to connect the MCP tools.
- Set up Ollama or OpenAI.
Congratulations! Your MCP PostgreSQL Operations server is now ready for use. You can start exploring your databases with natural language queries.
- "Show me the current active connections"
- "What are the slowest queries in the system?"
- "Analyze table bloat across all databases"
- "Show me database size information"
- "What tables need VACUUM maintenance?"
- Browse the Example Queries section below for more query examples
- Check out Tool Usage Examples with Screenshots for visual guides
- Explore the Tool Compatibility Matrix to understand available features
The `create-test-data.sql` script is executed by the `postgres-init-extensions` container (defined in `docker-compose.yml`) on first startup, automatically generating comprehensive test databases for MCP tool testing:
| Database | Purpose | Schema & Tables | Scale |
|---|---|---|---|
| ecommerce | E-commerce system | public: categories, products, customers, orders, order_items | 10 categories, 500 products, 100 customers, 200 orders, 400 order items |
| analytics | Analytics & reporting | public: page_views, sales_summary | 1,000 page views, 30 sales summaries |
| inventory | Warehouse management | public: suppliers, inventory_items, purchase_orders | 10 suppliers, 100 items, 50 purchase orders |
| hr_system | HR management | public: departments, employees, payroll | 5 departments, 50 employees, 150 payroll records |
Test users created: `app_readonly`, `app_readwrite`, `analytics_user`, `backup_user`
Optimized for testing: Intentional table bloat, various indexes (used/unused), time-series data, complex relationships
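Once the init container has finished, you can spot-check the seeded data with plain SQL; the expected counts below come from the table above (a quick sanity check, not part of the project's tooling):

```sql
-- Run against the ecommerce test database, e.g.:
--   psql -h localhost -p 15432 -U postgres -d ecommerce
SELECT 'products'    AS table_name, count(*) AS row_count FROM products      -- expect ~500
UNION ALL
SELECT 'customers',   count(*) FROM customers                                -- expect ~100
UNION ALL
SELECT 'orders',      count(*) FROM orders                                   -- expect ~200
UNION ALL
SELECT 'order_items', count(*) FROM order_items;                             -- expect ~400
```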
Automatic Adaptation: All tools work transparently across supported versions - no configuration needed!
| Tool Name | Extensions Required | PG 12 | PG 13 | PG 14 | PG 15 | PG 16 | PG 17 | System Views/Tables Used |
|---|---|---|---|---|---|---|---|---|
| `get_server_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `version()`, `pg_extension` |
| `get_active_connections` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_activity` |
| `get_postgresql_config` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_settings` |
| `get_database_list` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_database` |
| `get_table_list` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `information_schema.tables` |
| `get_table_schema_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `information_schema.*`, `pg_indexes` |
| `get_database_schema_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_namespace`, `pg_class`, `pg_proc` |
| `get_table_relationships` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `information_schema.*` (constraints) |
| `get_user_list` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_user`, `pg_roles` |
| `get_index_usage_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_indexes` |
| `get_database_size_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_database_size()` |
| `get_table_size_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_total_relation_size()` |
| `get_vacuum_analyze_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_current_database_info` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_database`, `current_database()` |
| `get_table_bloat_analysis` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_database_bloat_overview` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_autovacuum_status` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_autovacuum_activity` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_running_vacuum_operations` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_activity` |
| `get_vacuum_effectiveness_analysis` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_user_tables` |
| `get_lock_monitoring` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_locks`, `pg_stat_activity` |
| `get_wal_status` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_current_wal_lsn()` |
| `get_database_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_database` |
| `get_table_io_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_statio_user_tables` |
| `get_index_io_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_statio_user_indexes` |
| `get_database_conflicts_stats` | None | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | `pg_stat_database_conflicts` |
| Tool Name | Extensions Required | PG 12 | PG 13 | PG 14 | PG 15 | PG 16 | PG 17 | Special Features |
|---|---|---|---|---|---|---|---|---|
| `get_io_stats` | None | ✅ Basic | ✅ Basic | ✅ Basic | ✅ Basic | ✅ Enhanced | ✅ Enhanced | PG16+: `pg_stat_io` support |
| `get_bgwriter_stats` | None | ✅ | ✅ | ✅ | ✅ Special | ✅ | ✅ | PG15: Separate checkpointer stats |
| `get_replication_status` | None | ✅ Compatible | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | PG13+: `wal_status`, `safe_wal_size`; PG16+: enhanced WAL receiver |
| `get_all_tables_stats` | None | ✅ Compatible | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | PG13+: `n_ins_since_vacuum` tracking for vacuum maintenance optimization |
| `get_user_functions_stats` | Config Required | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | Requires `track_functions=pl` |
| Tool Name | Required Extension | PG 12 | PG 13 | PG 14 | PG 15 | PG 16 | PG 17 | Notes |
|---|---|---|---|---|---|---|---|---|
| `get_pg_stat_statements_top_queries` | `pg_stat_statements` | ✅ Compatible | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | PG12: `total_time` → `total_exec_time`; PG13+: native `total_exec_time` |
| `get_pg_stat_monitor_recent_queries` | `pg_stat_monitor` | ✅ Compatible | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | ✅ Enhanced | PG12: `total_time` → `total_exec_time`; PG13+: native `total_exec_time` |
PostgreSQL 18 Support: PostgreSQL 18 is currently in beta and not yet supported by Percona Distribution PostgreSQL. Support will be added once PostgreSQL 18 reaches a stable release and distribution support becomes available.
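For illustration, the PG12 column mapping noted in the last table could look roughly like this; this is a sketch of the idea, not the exact SQL the server generates:

```sql
-- PostgreSQL 13+: pg_stat_statements exposes total_exec_time / mean_exec_time natively.
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

-- PostgreSQL 12: the same metrics are named total_time / mean_time,
-- so they are aliased to keep the output shape identical across versions.
SELECT query, calls, total_time AS total_exec_time, mean_time AS mean_exec_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
```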
(Recommended) Add to your Claude Desktop configuration file:
```json
{
  "mcpServers": {
    "postgresql-ops": {
      "command": "uvx",
      "args": ["--python", "3.11", "mcp-postgresql-ops"],
      "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "15432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "changeme!@34",
        "POSTGRES_DB": "ecommerce"
      }
    }
  }
}
```
"Show all active connections in a clear and readable html table format."
"Show all relationships for customers table in ecommerce database as a Mermaid diagram."
```bash
# Install the package
pip install mcp-postgresql-ops

# Or with uv (faster)
uv add mcp-postgresql-ops

# Verify installation
mcp-postgresql-ops --help
```
```bash
# Clone the repository
git clone https://github.com/call518/MCP-PostgreSQL-Ops.git
cd MCP-PostgreSQL-Ops

# Install with uv (recommended)
uv sync
uv run mcp-postgresql-ops --help

# Or with pip
pip install -e .
mcp-postgresql-ops --help
```
(Optional) Run with Local Source:
```json
{
  "mcpServers": {
    "postgresql-ops": {
      "command": "uv",
      "args": ["run", "python", "-m", "mcp_postgresql_ops"],
      "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "15432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "changeme!@34",
        "POSTGRES_DB": "ecommerce"
      }
    }
  }
}
```
```bash
# Stdio mode
uvx --python 3.11 mcp-postgresql-ops \
  --type stdio

# HTTP mode
uvx --python 3.11 mcp-postgresql-ops \
  --type streamable-http \
  --host 127.0.0.1 \
  --port 8000 \
  --log-level DEBUG
```
```json
{
  "mcpServers": {
    "Postgresql-A": {
      "command": "uvx",
      "args": ["--python", "3.11", "mcp-postgresql-ops"],
      "env": {
        "POSTGRES_HOST": "a.foo.com",
        "POSTGRES_PORT": "5432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "postgres",
        "POSTGRES_DB": "postgres"
      }
    },
    "Postgresql-B": {
      "command": "uvx",
      "args": ["--python", "3.11", "mcp-postgresql-ops"],
      "env": {
        "POSTGRES_HOST": "b.bar.com",
        "POSTGRES_PORT": "5432",
        "POSTGRES_USER": "postgres",
        "POSTGRES_PASSWORD": "postgres",
        "POSTGRES_DB": "postgres"
      }
    }
  }
}
```
```bash
# Method 1: Module execution (for development, requires PYTHONPATH)
PYTHONPATH=/path/to/MCP-PostgreSQL-Ops/src \
python -m mcp_postgresql_ops \
  --type stdio

# Method 2: Direct script (after uv installation in project directory)
uv run mcp-postgresql-ops \
  --type stdio

# Method 3: Installed package script (after pip/uv install)
mcp-postgresql-ops \
  --type stdio

# HTTP mode examples:

# Development mode
PYTHONPATH=/path/to/MCP-PostgreSQL-Ops/src \
python -m mcp_postgresql_ops \
  --type streamable-http \
  --host 127.0.0.1 \
  --port 8000 \
  --log-level DEBUG

# Production mode (after installation)
mcp-postgresql-ops \
  --type streamable-http \
  --host 127.0.0.1 \
  --port 8000 \
  --log-level DEBUG
```
- `--type`: Transport type (`stdio` or `streamable-http`). Default: `stdio`
- `--host`: Host address for HTTP transport. Default: `127.0.0.1`
- `--port`: Port number for HTTP transport. Default: `8000`
- `--auth-enable`: Enable Bearer token authentication for streamable-http mode. Default: `false`
- `--secret-key`: Secret key for Bearer token authentication (required when auth enabled)
- `--log-level`: Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL). Default: `INFO`
| Variable | Description | Default | Project Default |
|---|---|---|---|
| `PYTHONPATH` | Python module search path (only needed for development mode) | - | `/app/src` |
| `MCP_LOG_LEVEL` | Server logging verbosity (DEBUG, INFO, WARNING, ERROR) | `INFO` | `INFO` |
| `FASTMCP_TYPE` | MCP transport protocol (`stdio` for CLI, `streamable-http` for web) | `stdio` | `streamable-http` |
| `FASTMCP_HOST` | HTTP server bind address (`0.0.0.0` for all interfaces) | `127.0.0.1` | `0.0.0.0` |
| `FASTMCP_PORT` | HTTP server port for MCP communication | `8000` | `8000` |
| `REMOTE_AUTH_ENABLE` | Enable Bearer token authentication for streamable-http mode (default: `false` if undefined/null/empty) | `false` | `false` |
| `REMOTE_SECRET_KEY` | Secret key for Bearer token authentication (required when auth enabled) | - | `your-secret-key-here` |
| `PGSQL_VERSION` | PostgreSQL major version for Docker image selection | `17` | `17` |
| `PGDATA` | PostgreSQL data directory inside Docker container (do not modify) | `/var/lib/postgresql/data` | `/data/db` |
| `POSTGRES_HOST` | PostgreSQL server hostname or IP address | `127.0.0.1` | `host.docker.internal` |
| `POSTGRES_PORT` | PostgreSQL server port number | `5432` | `15432` |
| `POSTGRES_USER` | PostgreSQL connection username (needs read permissions) | `postgres` | `postgres` |
| `POSTGRES_PASSWORD` | PostgreSQL user password (supports special characters) | `changeme!@34` | `changeme!@34` |
| `POSTGRES_DB` | Default database name for connections | `testdb` | `ecommerce` |
| `POSTGRES_MAX_CONNECTIONS` | PostgreSQL `max_connections` configuration parameter | `200` | `200` |
| `DOCKER_EXTERNAL_PORT_OPENWEBUI` | Host port mapping for Open WebUI container | `8080` | `3003` |
| `DOCKER_EXTERNAL_PORT_MCP_SERVER` | Host port mapping for MCP server container | `8080` | `18003` |
| `DOCKER_EXTERNAL_PORT_MCPO_PROXY` | Host port mapping for MCPO proxy container | `8000` | `8003` |
| `DOCKER_INTERNAL_PORT_POSTGRESQL` | PostgreSQL container internal port | `5432` | `5432` |
Note: `POSTGRES_DB` serves as the default target database for operations when no specific database is specified. In Docker environments, if set to a non-default name, this database will be automatically created during initial PostgreSQL startup.
Port Configuration: The built-in PostgreSQL container uses the port mapping `15432:5432`, where:
- `POSTGRES_PORT=15432`: External port for host access and MCP server connections
- `DOCKER_INTERNAL_PORT_POSTGRESQL=5432`: Internal container port (PostgreSQL default)
- When using external PostgreSQL servers, set `POSTGRES_PORT` to match your server's actual port
Note: Most MCP tools work without any PostgreSQL extensions; for more details, see the Tool Compatibility Matrix section. Some advanced performance analysis tools require the following extensions:
```sql
-- Query performance statistics (required only for get_pg_stat_statements_top_queries)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Advanced monitoring (optional, used by get_pg_stat_monitor_recent_queries)
CREATE EXTENSION IF NOT EXISTS pg_stat_monitor;
```
Quick Setup: For new PostgreSQL installations, add to `postgresql.conf`:
```
shared_preload_libraries = 'pg_stat_statements'
```
Then restart PostgreSQL and run the CREATE EXTENSION commands above.
- `pg_stat_statements` is required only for slow query analysis tools.
- `pg_stat_monitor` is optional and used for real-time query monitoring.
- All other tools work without these extensions.
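A quick way to confirm what is installed and installable on your server (works on any supported version):

```sql
-- Extensions already created in the current database
SELECT extname, extversion
FROM pg_extension
WHERE extname IN ('pg_stat_statements', 'pg_stat_monitor');

-- Extensions available to CREATE (requires the packages to be present on the server)
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name IN ('pg_stat_statements', 'pg_stat_monitor');
```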
- PostgreSQL 12+ (tested with PostgreSQL 17)
- Python 3.11
- Network access to PostgreSQL server
- Read permissions on system catalogs
Tools affected by these settings:
- `get_user_functions_stats`: Requires `track_functions = pl` or `track_functions = all`
- `get_table_io_stats` & `get_index_io_stats`: More accurate timing with `track_io_timing = on`
- `get_database_stats`: Enhanced I/O timing with `track_io_timing = on`
Verification: After applying any method, verify the settings:
```sql
SELECT name, setting, context
FROM pg_settings
WHERE name IN ('track_activities', 'track_counts', 'track_io_timing', 'track_functions')
ORDER BY name;
```
```
       name       | setting |  context
------------------+---------+-----------
 track_activities | on      | superuser
 track_counts     | on      | superuser
 track_functions  | pl      | superuser
 track_io_timing  | on      | superuser
(4 rows)
```
Add the following to your `postgresql.conf`:
```
# Basic statistics collection (usually enabled by default)
track_activities = on
track_counts = on

# Required for function statistics tools
track_functions = pl    # Enables PL/pgSQL function statistics collection

# Optional but recommended for accurate I/O timing
track_io_timing = on    # Enables I/O timing statistics collection
```
Then restart the PostgreSQL server.
For Docker or command-line PostgreSQL startup:
```bash
# Docker example
docker run -d \
  -e POSTGRES_PASSWORD=mypassword \
  postgres:17 \
  -c track_activities=on \
  -c track_counts=on \
  -c track_functions=pl \
  -c track_io_timing=on

# Direct postgres command
postgres -D /data \
  -c track_activities=on \
  -c track_counts=on \
  -c track_functions=pl \
  -c track_io_timing=on
```
For managed PostgreSQL services where you cannot modify `postgresql.conf`, use SQL commands to change settings dynamically:
```sql
-- Enable basic statistics collection (usually enabled by default)
ALTER SYSTEM SET track_activities = 'on';
ALTER SYSTEM SET track_counts = 'on';

-- Enable function statistics collection (requires superuser privileges)
ALTER SYSTEM SET track_functions = 'pl';

-- Enable I/O timing statistics (optional but recommended)
ALTER SYSTEM SET track_io_timing = 'on';

-- Reload configuration without restart (run separately)
SELECT pg_reload_conf();
```
Alternative for session-level testing:
```sql
-- Set for current session only (temporary)
SET track_activities = 'on';
SET track_counts = 'on';
SET track_functions = 'pl';
SET track_io_timing = 'on';
```
Note: When using command-line tools, run each SQL statement separately to avoid transaction block errors.
- This server is read-only and works with regular roles on RDS/Aurora. For advanced analysis enable pg_stat_statements; pg_stat_monitor is not available on managed engines.
- On RDS/Aurora, prefer DB Parameter Group over ALTER SYSTEM for persistent settings.
```sql
-- Verify preload setting
SHOW shared_preload_libraries;

-- Enable extension in target DB
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Recommended visibility for monitoring
GRANT pg_read_all_stats TO <app_user>;
```
- `get_server_info`
  - "Show PostgreSQL server version and extension status."
  - "Check if pg_stat_statements is installed."
- `get_active_connections`
  - "Show all active connections."
  - "List current sessions with database and user."
- `get_postgresql_config`
  - "Show all PostgreSQL configuration parameters."
  - "Find all memory-related configuration settings."
- `get_database_list`
  - "List all databases and their sizes."
  - "Show database list with owner information."
- `get_table_list`
  - "List all tables in the ecommerce database."
  - "Show table sizes in the public schema."
- `get_table_schema_info`
  - "Show detailed schema information for the customers table in ecommerce database."
  - "Get column details and constraints for products table in ecommerce database."
  - "Analyze table structure with indexes and foreign keys for orders table in sales schema of ecommerce database."
  - "Show schema overview for all tables in public schema of inventory database."
  - Features: Column types, constraints, indexes, foreign keys, table metadata
  - ⚠️ Required: `database_name` parameter must be specified
- `get_database_schema_info`
  - "Show all schemas in ecommerce database with their contents."
  - "Get detailed information about sales schema in ecommerce database."
  - "Analyze schema structure and permissions for inventory database."
  - "Show schema overview with table counts and sizes for hr_system database."
  - Features: Schema owners, permissions, object counts, sizes, contents
  - ⚠️ Required: `database_name` parameter must be specified
- `get_table_relationships`
  - "Show all relationships for customers table in ecommerce database."
  - "Analyze foreign key relationships for orders table in sales schema of ecommerce database."
  - "Get database-wide relationship overview for ecommerce database."
  - "Find all tables that reference products table in ecommerce database."
  - "Show cross-schema relationships in inventory database."
  - Features: Foreign key relationships (inbound/outbound), cross-schema dependencies, constraint details
  - ⚠️ Required: `database_name` parameter must be specified
  - 💡 Usage: Leave `table_name` empty for database-wide relationship analysis
- `get_user_list`
  - "List all database users and their roles."
  - "Show user permissions for a specific database."
- `get_index_usage_stats`
  - "Analyze index usage efficiency."
  - "Find unused indexes in the current database."
- `get_database_size_info`
  - "Show database capacity analysis."
  - "Find the largest databases by size."
- `get_table_size_info`
  - "Show table and index size analysis."
  - "Find largest tables in a specific schema."
- `get_vacuum_analyze_stats`
  - "Show recent VACUUM and ANALYZE operations."
  - "List tables needing VACUUM."
- `get_current_database_info`
  - "What database am I connected to?"
  - "Show current database information and connection details."
  - "Display database encoding, collation, and size information."
  - Features: Database name, encoding, collation, size, connection limits
  - PostgreSQL 12-17: Fully compatible, no extensions required
- `get_table_bloat_analysis`
  - "Analyze table bloat in the current database."
  - "Show dead tuple ratios and bloat estimates for user_logs table pattern."
  - "Find tables with high bloat that need VACUUM maintenance."
  - "Analyze bloat in specific schema with minimum 100 dead tuples."
  - Features: Dead tuple ratios, bloat size estimates, VACUUM recommendations, pattern filtering
  - PostgreSQL 12-17: Fully compatible, no extensions required
  - 💡 Usage: Extension-independent approach using `pg_stat_user_tables` (a standalone SQL sketch of this idea appears after the tool list below)
- `get_database_bloat_overview`
  - "Show database-wide bloat summary by schema."
  - "Get high-level view of storage efficiency across all schemas."
  - "Identify schemas requiring maintenance attention."
  - Features: Schema-level aggregation, total bloat estimates, maintenance status
  - PostgreSQL 12-17: Fully compatible, no extensions required
- `get_autovacuum_status`
  - "Check autovacuum configuration and trigger conditions."
  - "Show tables needing immediate autovacuum attention."
  - "Analyze autovacuum threshold percentages for public schema."
  - "Find tables approaching autovacuum trigger points."
  - Features: Trigger threshold analysis, urgency classification, configuration status
  - PostgreSQL 12-17: Fully compatible, no extensions required
  - 💡 Usage: Extension-independent autovacuum monitoring using `pg_stat_user_tables`
- `get_autovacuum_activity`
  - "Show autovacuum activity patterns for the last 48 hours."
  - "Monitor autovacuum execution frequency and timing."
  - "Find tables with irregular autovacuum patterns."
  - "Analyze recent autovacuum and autoanalyze history."
  - Features: Activity patterns, execution frequency, timing analysis
  - PostgreSQL 12-17: Fully compatible, no extensions required
  - 💡 Usage: Historical autovacuum pattern analysis
- `get_running_vacuum_operations`
  - "Show currently running VACUUM and ANALYZE operations."
  - "Monitor active maintenance operations and their progress."
  - "Check if any VACUUM operations are blocking queries."
  - "Find long-running maintenance operations."
  - Features: Real-time operation status, elapsed time, impact level, process details
  - PostgreSQL 12-17: Fully compatible, no extensions required
  - 💡 Usage: Real-time maintenance monitoring using `pg_stat_activity`
- `get_vacuum_effectiveness_analysis`
  - "Analyze VACUUM effectiveness and maintenance patterns."
  - "Compare manual VACUUM vs autovacuum efficiency."
  - "Find tables with suboptimal maintenance patterns."
  - "Check VACUUM frequency vs table activity ratios."
  - Features: Maintenance pattern analysis, effectiveness assessment, DML-to-VACUUM ratios
  - PostgreSQL 12-17: Fully compatible, no extensions required
  - 💡 Usage: Strategic VACUUM analysis using existing statistics
- `get_table_bloat_analysis`
  - "Analyze table bloat in the public schema."
  - "Show tables with high dead tuple ratios in ecommerce database."
  - "Find tables requiring VACUUM maintenance."
  - "Check bloat for tables with more than 5000 dead tuples."
  - Features: Dead tuple ratios, estimated bloat size, VACUUM recommendations
  - ⚠️ Required: Specify `database_name` for cross-database analysis
- `get_database_bloat_overview`
  - "Show database-wide bloat summary by schema."
  - "Get bloat overview for inventory database."
  - "Identify schemas with highest bloat ratios."
  - "Database maintenance planning with bloat statistics."
  - Features: Schema-level aggregation, maintenance priorities, size recommendations
- `get_lock_monitoring`
  - "Show all current locks and blocked sessions."
  - "Show only blocked sessions with granted=false filter."
  - "Monitor locks by specific user with username filter."
  - "Check exclusive locks with mode filter."
- `get_wal_status`
  - "Show WAL status and archiving information."
  - "Monitor WAL generation and current LSN position."
- `get_replication_status`
  - "Check replication connections and lag status."
  - "Monitor replication slots and WAL receiver status."
- `get_database_stats`
  - "Show comprehensive database performance metrics."
  - "Analyze transaction commit ratios and I/O statistics."
  - "Monitor buffer cache hit ratios and temporary file usage."
- `get_bgwriter_stats`
  - "Analyze checkpoint performance and timing."
  - "Show me checkpoint performance."
  - "Show background writer efficiency statistics."
  - "Monitor buffer allocation and fsync patterns."
- `get_user_functions_stats`
  - "Analyze user-defined function performance."
  - "Show function call counts and execution times."
  - "Identify performance bottlenecks in custom functions."
  - ⚠️ Requires: `track_functions = pl` in `postgresql.conf`
- `get_table_io_stats`
  - "Analyze table I/O performance and buffer hit ratios."
  - "Identify tables with poor buffer cache performance."
  - "Monitor TOAST table I/O statistics."
  - 💡 Enhanced with: `track_io_timing = on` for accurate timing
- `get_index_io_stats`
  - "Show index I/O performance and buffer efficiency."
  - "Identify indexes causing excessive disk I/O."
  - "Monitor index cache-friendliness patterns."
  - 💡 Enhanced with: `track_io_timing = on` for accurate timing
- `get_database_conflicts_stats`
  - "Check replication conflicts on standby servers."
  - "Analyze conflict types and resolution statistics."
  - "Monitor standby server query cancellation patterns."
- "Monitor WAL generation and current LSN position."
-
get_replication_status
- "Check replication connections and lag status."
- "Monitor replication slots and WAL receiver status."
- `get_io_stats` (New!)
  - "Show comprehensive I/O statistics." (PostgreSQL 16+ provides a detailed breakdown)
  - "Analyze I/O statistics."
  - "Analyze buffer cache efficiency and I/O timing."
  - "Monitor I/O patterns by backend type and context."
  - PG16+: Full `pg_stat_io` with timing, backend types, and contexts
  - PG12-15: Basic `pg_statio_*` fallback with buffer hit ratios
- `get_bgwriter_stats` (Enhanced!)
  - "Show background writer and checkpoint performance."
  - PG15: Separate checkpointer and bgwriter statistics (unique feature)
  - PG12-14, 16+: Combined bgwriter stats (includes checkpointer data)
- `get_server_info` (Enhanced!)
  - "Show server version and compatibility features."
  - "Check server compatibility."
  - "Check what MCP tools are available on this PostgreSQL version."
  - Displays a feature availability matrix and upgrade recommendations.
- `get_all_tables_stats` (Enhanced!)
  - "Show comprehensive statistics for all tables." (version-compatible for PG12-17)
  - "Include system tables with include_system=true parameter."
  - "Analyze table access patterns and maintenance needs."
  - PG13+: Tracks insertions since vacuum (`n_ins_since_vacuum`) for optimal maintenance scheduling
  - PG12: Compatible mode with NULL for unsupported columns
- `get_pg_stat_statements_top_queries` (Requires `pg_stat_statements`)
  - "Show top 10 slowest queries."
  - "Analyze slow queries in the inventory database."
  - Version-Compatible: PG12 uses the `total_time` → `total_exec_time` mapping; PG13+ uses native columns
  - 💡 Cross-Version: Automatically adapts query structure for PostgreSQL 12-17 compatibility
- `get_pg_stat_monitor_recent_queries` (Optional, uses `pg_stat_monitor`)
  - "Show recent queries in real time."
  - "Monitor query activity for the last 5 minutes."
  - Version-Compatible: PG12 uses the `total_time` → `total_exec_time` mapping; PG13+ uses native columns
  - 💡 Cross-Version: Automatically adapts query structure for PostgreSQL 12-17 compatibility
💡 Pro Tip: All tools support multi-database operations using the `database_name` parameter. This allows PostgreSQL superusers to analyze and monitor multiple databases from a single MCP server instance.
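For reference, the extension-independent dead-tuple approach mentioned for the bloat tools can be approximated with a plain query against `pg_stat_user_tables`; this is a rough sketch of the idea, not the exact SQL the server runs:

```sql
-- Approximate per-table bloat from dead-tuple counters (no extensions required).
SELECT
    schemaname,
    relname,
    n_live_tup,
    n_dead_tup,
    round(100.0 * n_dead_tup / NULLIF(n_live_tup + n_dead_tup, 0), 2) AS dead_tuple_pct,
    last_vacuum,
    last_autovacuum
FROM pg_stat_user_tables
WHERE n_dead_tup > 1000            -- ignore negligible churn
ORDER BY n_dead_tup DESC
LIMIT 20;
```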
- Check PostgreSQL server status
- Verify connection parameters in the `.env` file
- Ensure network connectivity
- Check user permissions
- Run `get_server_info` to check extension status
- Install missing extensions: `CREATE EXTENSION pg_stat_statements;` and `CREATE EXTENSION pg_stat_monitor;`
- Restart PostgreSQL if needed
- "No data found" for function statistics: Check the `track_functions` setting:
  `SHOW track_functions; -- Should be 'pl' or 'all'`
  Quick fix for managed services (AWS RDS, etc.):
  `ALTER SYSTEM SET track_functions = 'pl'; SELECT pg_reload_conf();`
- Missing I/O timing data: Enable timing collection:
  `SHOW track_io_timing; -- Should be 'on'`
  Quick fix:
  `ALTER SYSTEM SET track_io_timing = 'on'; SELECT pg_reload_conf();`
- Apply configuration changes:
  - Self-managed: Add settings to `postgresql.conf` and restart the server
  - Managed services: Use `ALTER SYSTEM SET` + `SELECT pg_reload_conf()`
  - Temporary testing: Use `SET parameter = value` for the current session
- Generate some database activity to populate statistics
- Use `limit` parameters to reduce result size
- Run monitoring during off-peak hours
- Check database load before running analysis (see the query sketch below)
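One simple way to gauge current load before running heavier analysis (a plain query, not one of the MCP tools):

```sql
-- Current client session pressure and the longest-running active statement.
SELECT count(*) FILTER (WHERE state = 'active')                 AS active_sessions,
       count(*)                                                 AS total_sessions,
       max(now() - query_start) FILTER (WHERE state = 'active') AS longest_active_query
FROM pg_stat_activity
WHERE backend_type = 'client backend';
```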
For more details, see the Tool Compatibility Matrix section.
- Run a compatibility check first:
  "Use get_server_info to check version and available features"
- Understanding feature availability:
  - PostgreSQL 16-17: All features available
  - PostgreSQL 15+: Separate checkpointer stats
  - PostgreSQL 14+: Parallel query tracking
  - PostgreSQL 12-13: Core functionality only
- If a tool shows "Not Available":
  - The feature requires a newer PostgreSQL version
  - The tool will automatically use the best available alternative
  - Consider upgrading PostgreSQL for enhanced monitoring
```bash
# Clone and set up for development
git clone https://github.com/call518/MCP-PostgreSQL-Ops.git
cd MCP-PostgreSQL-Ops
uv sync

# Test with MCP Inspector (loads .env automatically)
./scripts/run-mcp-inspector-local.sh

# Direct execution methods:
# 1. Using uv run (recommended for development)
uv run mcp-postgresql-ops --log-level DEBUG

# 2. Module execution (requires PYTHONPATH)
PYTHONPATH=src python -m mcp_postgresql_ops --log-level DEBUG

# 3. After installation
mcp-postgresql-ops --log-level DEBUG

# Test version compatibility (requires different PostgreSQL versions)
# Modify POSTGRES_HOST in .env to point to different versions

# Run tests (if you add any)
uv run pytest
```
The MCP server automatically adapts to PostgreSQL versions 12-17. To test across versions:
- Set up test databases: Different PostgreSQL versions (12, 14, 15, 16, 17)
- Run compatibility tests: Point to each version and verify tool behavior
- Check feature detection: Ensure proper version detection and feature availability (see the query sketch below)
- Verify fallback behavior: Confirm graceful degradation on older versions
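Version detection itself needs nothing more than standard SQL; a check along these lines (shown for illustration; the server's internal logic may differ) is enough to branch on major version and on PG16-only views such as `pg_stat_io`:

```sql
-- server_version_num is e.g. 160004 for 16.4; integer division yields the major version.
SELECT current_setting('server_version_num')::int / 10000   AS major_version,
       current_setting('server_version_num')::int >= 160000 AS has_pg_stat_io,
       version()                                             AS full_version;
```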
- All tools are read-only - no data modification capabilities
- Sensitive information (passwords) is masked in outputs
- No direct SQL execution - only predefined queries
- Follows principle of least privilege
Got ideas? Found bugs? Want to add cool features?
We're always excited to welcome new contributors! Whether you're fixing a typo, adding a new monitoring tool, or improving documentation - every contribution makes this project better.
Ways to contribute:
- Report issues or bugs
- Suggest new PostgreSQL monitoring features
- Improve documentation
- Submit pull requests
- Star the repo if you find it useful!
Pro tip: The codebase is designed to be super friendly for adding new tools. Check out the existing `@mcp.tool()` functions in `mcp_main.py`.
[MCPO Swagger URL] http://localhost:8003/postgresql-ops/docs
For `streamable-http` mode, this MCP server supports Bearer token authentication to secure remote access. This is especially important when running the server in production environments.
Default Policy: `REMOTE_AUTH_ENABLE` defaults to `false` if undefined, null, or empty. This ensures backward compatibility and prevents startup errors when the variable is not set.
Enable Authentication:
```bash
# In .env file
REMOTE_AUTH_ENABLE=true
REMOTE_SECRET_KEY=your-secure-secret-key-here
```
Or via CLI:
```bash
# Module method
python -m mcp_postgresql_ops --type streamable-http --auth-enable --secret-key your-secure-secret-key-here

# Script method
mcp-postgresql-ops --type streamable-http --auth-enable --secret-key your-secure-secret-key-here
```
- stdio mode (default): Local-only access, no authentication needed
- streamable-http + REMOTE_AUTH_ENABLE=false: Remote access without authentication. ⚠️ NOT RECOMMENDED for production
- streamable-http + REMOTE_AUTH_ENABLE=true: Remote access with Bearer token authentication. ✅ RECOMMENDED for production
When authentication is enabled, MCP clients must include the Bearer token in the Authorization header:
```json
{
  "mcpServers": {
    "postgresql-ops": {
      "type": "streamable-http",
      "url": "http://your-server:8000/mcp",
      "headers": {
        "Authorization": "Bearer your-secure-secret-key-here"
      }
    }
  }
}
```
- Always enable authentication when using streamable-http mode in production
- Use strong, randomly generated secret keys (32+ characters recommended)
- Use HTTPS when possible (configure reverse proxy with SSL/TLS)
- Restrict network access using firewalls or network policies
- Rotate secret keys regularly for enhanced security
- Monitor access logs for unauthorized access attempts
When authentication fails, the server returns:
- 401 Unauthorized for missing or invalid tokens
- Detailed error messages in JSON format for debugging
Freely use, modify, and distribute under the MIT License.
Other MCP servers by the same author:
Similar Open Source Tools


deepflow
DeepFlow is an open-source project that provides deep observability for complex cloud-native and AI applications. It offers Zero Code data collection with eBPF for metrics, distributed tracing, request logs, and function profiling. DeepFlow is integrated with SmartEncoding to achieve Full Stack correlation and efficient access to all observability data. With DeepFlow, cloud-native and AI applications automatically gain deep observability, removing the burden of developers continually instrumenting code and providing monitoring and diagnostic capabilities covering everything from code to infrastructure for DevOps/SRE teams.

langwatch
LangWatch is a monitoring and analytics platform designed to track, visualize, and analyze interactions with Large Language Models (LLMs). It offers real-time telemetry to optimize LLM cost and latency, a user-friendly interface for deep insights into LLM behavior, user analytics for engagement metrics, detailed debugging capabilities, and guardrails to monitor LLM outputs for issues like PII leaks and toxic language. The platform supports OpenAI and LangChain integrations, simplifying the process of tracing LLM calls and generating API keys for usage. LangWatch also provides documentation for easy integration and self-hosting options for interested users.

open-webui-tools
Open WebUI Tools Collection is a set of tools for structured planning, arXiv paper search, Hugging Face text-to-image generation, prompt enhancement, and multi-model conversations. It enhances LLM interactions with academic research, image generation, and conversation management. Tools include arXiv Search Tool and Hugging Face Image Generator. Function Pipes like Planner Agent offer autonomous plan generation and execution. Filters like Prompt Enhancer improve prompt quality. Installation and configuration instructions are provided for each tool and pipe.

mcp-server-mysql
The MCP Server for MySQL based on NodeJS is a Model Context Protocol server that provides access to MySQL databases. It enables users to inspect database schemas and execute SQL queries. The server offers tools for executing SQL queries, providing comprehensive database information, security features like SQL injection prevention, performance optimizations, monitoring, and debugging capabilities. Users can configure the server using environment variables and advanced options. The server supports multi-DB mode, schema-specific permissions, and includes troubleshooting guidelines for common issues. Contributions are welcome, and the project roadmap includes enhancing query capabilities, security features, performance optimizations, monitoring, and expanding schema information.

claude-flow
Claude-Flow is a workflow automation tool designed to streamline and optimize business processes. It provides a user-friendly interface for creating and managing workflows, allowing users to automate repetitive tasks and improve efficiency. With features such as drag-and-drop workflow builder, customizable templates, and integration with popular business tools, Claude-Flow empowers users to automate their workflows without the need for extensive coding knowledge. Whether you are a small business owner looking to streamline your operations or a project manager seeking to automate task assignments, Claude-Flow offers a flexible and scalable solution to meet your workflow automation needs.

Hexabot
Hexabot Community Edition is an open-source chatbot solution designed for flexibility and customization, offering powerful text-to-action capabilities. It allows users to create and manage AI-powered, multi-channel, and multilingual chatbots with ease. The platform features an analytics dashboard, multi-channel support, visual editor, plugin system, NLP/NLU management, multi-lingual support, CMS integration, user roles & permissions, contextual data, subscribers & labels, and inbox & handover functionalities. The directory structure includes frontend, API, widget, NLU, and docker components. Prerequisites for running Hexabot include Docker and Node.js. The installation process involves cloning the repository, setting up the environment, and running the application. Users can access the UI admin panel and live chat widget for interaction. Various commands are available for managing the Docker services. Detailed documentation and contribution guidelines are provided for users interested in contributing to the project.

batteries-included
Batteries Included is an all-in-one platform for building and running modern applications, simplifying cloud infrastructure complexity. It offers production-ready capabilities through an intuitive interface, focusing on automation, security, and enterprise-grade features. The platform includes databases like PostgreSQL and Redis, AI/ML capabilities with Jupyter notebooks, web services deployment, security features like SSL/TLS management, and monitoring tools like Grafana dashboards. Batteries Included is designed to streamline infrastructure setup and management, allowing users to concentrate on application development without dealing with complex configurations.

aigne-hub
AIGNE Hub is a unified AI gateway that manages connections to multiple LLM and AIGC providers, eliminating the complexity of handling API keys, usage tracking, and billing across different AI services. It provides self-hosting capabilities, multi-provider management, unified security, usage analytics, flexible billing, and seamless integration with the AIGNE framework. The tool supports various AI providers and deployment scenarios, catering to both enterprise self-hosting and service provider modes. Users can easily deploy and configure AI providers, enable billing, and utilize core capabilities such as chat completions, image generation, embeddings, and RESTful APIs. AIGNE Hub ensures secure access, encrypted API key management, user permissions, and audit logging. Built with modern technologies like AIGNE Framework, Node.js, TypeScript, React, SQLite, and Blocklet for cloud-native deployment.

FastDeploy
FastDeploy is an inference and deployment toolkit for large language models and visual language models based on PaddlePaddle. It provides production-ready deployment solutions with core acceleration technologies such as load-balanced PD disaggregation, unified KV cache transmission, OpenAI API server compatibility, comprehensive quantization format support, advanced acceleration techniques, and multi-hardware support. The toolkit supports various hardware platforms like NVIDIA GPUs, Kunlunxin XPUs, Iluvatar GPUs, Enflame GCUs, and Hygon DCUs, with plans for expanding support to Ascend NPU and MetaX GPU. FastDeploy aims to optimize resource utilization, throughput, and performance for inference and deployment tasks.

context-portal
Context-portal is a versatile tool for managing and visualizing data in a collaborative environment. It provides a user-friendly interface for organizing and sharing information, making it easy for teams to work together on projects. With features such as customizable dashboards, real-time updates, and seamless integration with popular data sources, Context-portal streamlines the data management process and enhances productivity. Whether you are a data analyst, project manager, or team leader, Context-portal offers a comprehensive solution for optimizing workflows and driving better decision-making.

ml-retreat
ML-Retreat is a comprehensive machine learning library designed to simplify and streamline the process of building and deploying machine learning models. It provides a wide range of tools and utilities for data preprocessing, model training, evaluation, and deployment. With ML-Retreat, users can easily experiment with different algorithms, hyperparameters, and feature engineering techniques to optimize their models. The library is built with a focus on scalability, performance, and ease of use, making it suitable for both beginners and experienced machine learning practitioners.

WorkflowAI
WorkflowAI is a powerful tool designed to streamline and automate various tasks within the workflow process. It provides a user-friendly interface for creating custom workflows, automating repetitive tasks, and optimizing efficiency. With WorkflowAI, users can easily design, execute, and monitor workflows, allowing for seamless integration of different tools and systems. The tool offers advanced features such as conditional logic, task dependencies, and error handling to ensure smooth workflow execution. Whether you are managing project tasks, processing data, or coordinating team activities, WorkflowAI simplifies the workflow management process and enhances productivity.

hyper-mcp
hyper-mcp is a fast and secure MCP server that enables adding AI capabilities to applications through WebAssembly plugins. It supports writing plugins in various languages, distributing them via standard OCI registries, and running them in resource-constrained environments. The tool offers sandboxing with WASM for limiting access, cross-platform compatibility, and deployment flexibility. Security features include sandboxed plugins, memory-safe execution, secure plugin distribution, and fine-grained access control. Users can configure the tool for global or project-specific use, start the server with different transport options, and utilize available plugins for tasks like time calculations, QR code generation, hash generation, IP retrieval, and webpage fetching.

tools
Strands Agents Tools is a community-driven project that provides a powerful set of tools for your agents to use. It bridges the gap between large language models and practical applications by offering ready-to-use tools for file operations, system execution, API interactions, mathematical operations, and more. The tools cover a wide range of functionalities including file operations, shell integration, memory storage, web infrastructure, HTTP client, Slack client, Python execution, mathematical tools, AWS integration, image and video processing, audio output, environment management, task scheduling, advanced reasoning, swarm intelligence, dynamic MCP client, parallel tool execution, browser automation, diagram creation, RSS feed management, and computer automation.

baibot
Baibot is a versatile chatbot framework designed to simplify the process of creating and deploying chatbots. It provides a user-friendly interface for building custom chatbots with various functionalities such as natural language processing, conversation flow management, and integration with external APIs. Baibot is highly customizable and can be easily extended to suit different use cases and industries. With Baibot, developers can quickly create intelligent chatbots that can interact with users in a seamless and engaging manner, enhancing user experience and automating customer support processes.
For similar tasks


awesome-llm-courses
Awesome LLM Courses is a curated list of online courses focused on Large Language Models (LLMs). The repository aims to provide a comprehensive collection of free available courses covering various aspects of LLMs, including fundamentals, engineering, and applications. The courses are suitable for individuals interested in natural language processing, AI development, and machine learning. The list includes courses from reputable platforms such as Hugging Face, Udacity, DeepLearning.AI, Cohere, DataCamp, and more, offering a wide range of topics from pretraining LLMs to building AI applications with LLMs. Whether you are a beginner looking to understand the basics of LLMs or an intermediate developer interested in advanced topics like prompt engineering and generative AI, this repository has something for everyone.
For similar jobs

db2rest
DB2Rest is a modern low-code REST DATA API platform that simplifies the development of intelligent applications. It seamlessly integrates existing and new databases with language models (LMs/LLMs) and vector stores, enabling the rapid delivery of context-aware, reasoning applications without vendor lock-in.

mage-ai
Mage is an open-source data pipeline tool for transforming and integrating data. It offers an easy developer experience, engineering best practices built-in, and data as a first-class citizen. Mage makes it easy to build, preview, and launch data pipelines, and provides observability and scaling capabilities. It supports data integrations, streaming pipelines, and dbt integration.

airbyte
Airbyte is an open-source data integration platform that makes it easy to move data from any source to any destination. With Airbyte, you can build and manage data pipelines without writing any code. Airbyte provides a library of pre-built connectors that make it easy to connect to popular data sources and destinations. You can also create your own connectors using Airbyte's no-code Connector Builder or low-code CDK. Airbyte is used by data engineers and analysts at companies of all sizes to build and manage their data pipelines.

labelbox-python
Labelbox is a data-centric AI platform for enterprises to develop, optimize, and use AI to solve problems and power new products and services. Enterprises use Labelbox to curate data, generate high-quality human feedback data for computer vision and LLMs, evaluate model performance, and automate tasks by combining AI and human-centric workflows. The academic & research community uses Labelbox for cutting-edge AI research.

telemetry-airflow
This repository codifies the Airflow cluster that is deployed at workflow.telemetry.mozilla.org (behind SSO) and commonly referred to as "WTMO" or simply "Airflow". Some links relevant to users and developers of WTMO: * The `dags` directory in this repository contains some custom DAG definitions * Many of the DAGs registered with WTMO don't live in this repository, but are instead generated from ETL task definitions in bigquery-etl * The Data SRE team maintains a WTMO Developer Guide (behind SSO)

airflow
Apache Airflow (or simply Airflow) is a platform to programmatically author, schedule, and monitor workflows. When workflows are defined as code, they become more maintainable, versionable, testable, and collaborative. Use Airflow to author workflows as directed acyclic graphs (DAGs) of tasks. The Airflow scheduler executes your tasks on an array of workers while following the specified dependencies. Rich command line utilities make performing complex surgeries on DAGs a snap. The rich user interface makes it easy to visualize pipelines running in production, monitor progress, and troubleshoot issues when needed.

airbyte-platform
Airbyte is an open-source data integration platform that makes it easy to move data from any source to any destination. With Airbyte, you can build and manage data pipelines without writing any code. Airbyte provides a library of pre-built connectors that make it easy to connect to popular data sources and destinations. You can also create your own connectors using Airbyte's low-code Connector Development Kit (CDK). Airbyte is used by data engineers and analysts at companies of all sizes to move data for a variety of purposes, including data warehousing, data analysis, and machine learning.

chronon
Chronon is a platform that simplifies and improves ML workflows by providing a central place to define features, ensuring point-in-time correctness for backfills, simplifying orchestration for batch and streaming pipelines, offering easy endpoints for feature fetching, and guaranteeing and measuring consistency. It offers benefits over other approaches by enabling the use of a broad set of data for training, handling large aggregations and other computationally intensive transformations, and abstracting away the infrastructure complexity of data plumbing.