
gis-mcp
A Model Context Protocol (MCP) server implementation that connects Large Language Models (LLMs) to GIS operations using GIS libraries, enabling AI assistants to perform geospatial operations and transformations.
Stars: 53

This repository contains a Model Context Protocol (MCP) server that exposes Geographic Information System (GIS) operations to AI assistants. It wraps widely used Python GIS libraries (Shapely, PyProj, GeoPandas, Rasterio, and PySAL) to provide geometry operations, coordinate transformations, measurements, raster processing, and spatial statistics, along with tools for downloading administrative boundaries, climate, ecology, street-network, and satellite data. Typical applications include environmental analysis, mapping, and location intelligence driven from MCP-compatible clients such as Claude Desktop or Cursor.
README:
Website: gis-mcp.com
Current version is 0.8.0
Version 0.9.0 (Beta) is under active development. We welcome contributions and developers to join us in building this project.
- Features
- Prerequisites
- Installation
- Available Functions
- Client Development
- Planned Features
- Contributing
- License
- Related Projects
- Support
- Badges
GIS MCP Server empowers AI assistants with advanced geospatial intelligence. Key features include:
- Comprehensive Geometry Operations - Perform intersection, union, buffer, difference, and other geometric transformations with ease.
- Advanced Coordinate Transformations - Effortlessly reproject and transform geometries between coordinate reference systems.
- Accurate Measurements - Compute distances, areas, lengths, and centroids precisely.
- Spatial Analysis & Validation - Validate geometries, run proximity checks, and perform spatial overlays or joins.
- Raster & Vector Support - Process raster layers, compute indices like NDVI, clip, resample, and merge with vector data.
- Spatial Statistics & Modeling - Leverage PySAL for spatial autocorrelation, clustering, and neighborhood analysis.
- Easy Integration - Connect seamlessly with MCP-compatible clients like Claude Desktop or Cursor IDE.
- Flexible & Extensible - Supports Python-based GIS libraries and is ready for custom tools or workflow extensions.
Tip: With GIS MCP Server, your AI can now "think spatially," unlocking new capabilities for environmental analysis, mapping, and location intelligence.
Prerequisites:
- Python 3.10 or higher
- MCP-compatible client (like Claude Desktop or Cursor)
- Internet connection for package installation
Installation:
Choose the installation method that best suits your needs:
To install GIS MCP Server for Claude Desktop automatically via Smithery:
npx -y @smithery/cli install @mahdin75/gis-mcp --client claude
The pip installation is recommended for most users:
- Install uv package manager:
pip install uv
- Create the Virtual Environment (Python 3.10+):
uv venv --python=3.10
- Install the package:
uv pip install gis-mcp
- Start the server:
gis-mcp
To use the pip installation with Claude or Cursor, add the following configuration:
Claude Desktop:
Windows:
{
"mcpServers": {
"gis-mcp": {
"command": "C:\\Users\\YourUsername\\.venv\\Scripts\\gis-mcp",
"args": []
}
}
}
Linux/Mac:
{
"mcpServers": {
"gis-mcp": {
"command": "/home/YourUsername/.venv/bin/gis-mcp",
"args": []
}
}
}
Cursor IDE (create .cursor/mcp.json):
Windows:
{
"mcpServers": {
"gis-mcp": {
"command": "C:\\Users\\YourUsername\\.venv\\Scripts\\gis-mcp",
"args": []
}
}
}
Linux/Mac:
{
"mcpServers": {
"gis-mcp": {
"command": "/home/YourUsername/.venv/bin/gis-mcp",
"args": []
}
}
}
After configuration:
- Make sure to replace YourUsername with your actual username
- For development installation, replace /path/to/gis-mcp with the actual path to your project
- Restart your IDE to apply the changes
- You can now use all GIS operations through Claude or Cursor!
For contributors and developers:
- Install uv package manager:
pip install uv
- Create the Virtual Environment:
uv venv --python=3.10
- Install the package in development mode:
uv pip install -e .
- Start the server:
python -m gis_mcp
To use the development installation with Claude or Cursor, add the following configuration:
Claude Desktop:
Windows:
{
"mcpServers": {
"gis-mcp": {
"command": "C:\\path\\to\\gis-mcp\\.venv\\Scripts\\python",
"args": ["-m", "gis_mcp"]
}
}
}
Linux/Mac:
{
"mcpServers": {
"gis-mcp": {
"command": "/path/to/gis-mcp/.venv/bin/python",
"args": ["-m", "gis_mcp"]
}
}
}
Cursor IDE (create .cursor/mcp.json):
Windows:
{
"mcpServers": {
"gis-mcp": {
"command": "C:\\path\\to\\gis-mcp\\.venv\\Scripts\\python",
"args": ["-m", "gis_mcp"]
}
}
}
Linux/Mac:
{
"mcpServers": {
"gis-mcp": {
"command": "/path/to/gis-mcp/.venv/bin/python",
"args": ["-m", "gis_mcp"]
}
}
}
After configuration:
- Make sure to replace YourUsername with your actual username
- For development installation, replace /path/to/gis-mcp with the actual path to your project
- Restart your IDE to apply the changes
- You can now use all GIS operations through Claude or Cursor!
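Beyond Claude Desktop and Cursor, any MCP-compatible client can drive the server programmatically. The sketch below is a minimal, hypothetical client using the official mcp Python SDK (not part of this repository): it launches the server over stdio with the same gis-mcp command configured above and calls one tool, reusing the buffer parameters shown in the examples later in this README.

# Hypothetical client sketch using the "mcp" Python SDK (assumed installed).
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server the same way the pip installation section starts it.
    server = StdioServerParameters(command="gis-mcp", args=[])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Discover the tools the server exposes.
            tools = await session.list_tools()
            print("available tools:", [tool.name for tool in tools.tools])

            # Call one tool; parameters mirror the buffer example below.
            result = await session.call_tool(
                "buffer", {"geometry": "POINT(0 0)", "distance": 10}
            )
            print(result)


if __name__ == "__main__":
    asyncio.run(main())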
This section provides a comprehensive list of all available functions organized by library.
Basic Geometric Operations:
- buffer - Create buffer around geometry
- intersection - Find intersection of two geometries
- union - Combine two geometries
- difference - Find difference between geometries
- symmetric_difference - Find symmetric difference
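These operations map onto Shapely's standard geometry methods. A minimal library-level sketch of the set operations (illustrative only; the server's own wrappers accept WKT strings and extra options, as the examples near the end of this README show):

# Library-level sketch of the set-operation tools above; not the server's own code.
from shapely import wkt

a = wkt.loads("POLYGON((0 0, 4 0, 4 4, 0 4, 0 0))")
b = wkt.loads("POLYGON((2 2, 6 2, 6 6, 2 6, 2 2))")

print(a.intersection(b).wkt)          # shared area
print(a.union(b).wkt)                 # combined area
print(a.difference(b).wkt)            # a minus b
print(a.symmetric_difference(b).wkt)  # union minus the intersection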
Geometric Properties:
- convex_hull - Calculate convex hull
- envelope - Get bounding box
- minimum_rotated_rectangle - Get minimum rotated rectangle
- get_centroid - Get centroid point
- get_bounds - Get geometry bounds
- get_coordinates - Extract coordinate array
- get_geometry_type - Get geometry type name
Transformations:
- rotate_geometry - Rotate geometry by angle
- scale_geometry - Scale geometry by factors
- translate_geometry - Move geometry by offset
Advanced Operations:
- triangulate_geometry - Create triangulation
- voronoi - Create Voronoi diagram
- unary_union_geometries - Union multiple geometries
Measurements:
- get_length - Calculate geometry length
- get_area - Calculate geometry area
Validation & Utilities:
- is_valid - Check geometry validity
- make_valid - Fix invalid geometry
- simplify - Simplify geometry
- snap_geometry - Snap to reference geometry
- nearest_point_on_geometry - Find nearest point
- normalize_geometry - Normalize orientation
- geometry_to_geojson - Convert to GeoJSON
- geojson_to_geometry - Convert from GeoJSON
Coordinate Transformations:
- transform_coordinates - Transform point coordinates
- project_geometry - Project geometry between CRS
CRS Information:
- get_crs_info - Get detailed CRS information
- get_available_crs - List available CRS systems
- get_utm_zone - Get UTM zone for coordinates
- get_utm_crs - Get UTM CRS for coordinates
- get_geocentric_crs - Get geocentric CRS
Geodetic Calculations:
- get_geod_info - Get ellipsoid information
- calculate_geodetic_distance - Calculate distance on ellipsoid
- calculate_geodetic_point - Calculate point at distance/azimuth
- calculate_geodetic_area - Calculate area on ellipsoid
I/O Operations:
- read_file_gpd - Read geospatial file with preview
- write_file_gpd - Export GeoDataFrame to file
Join & Merge Operations:
- append_gpd - Concatenate GeoDataFrames vertically
- merge_gpd - Database-style attribute joins
- overlay_gpd - Spatial overlay operations
- dissolve_gpd - Dissolve by attribute
- explode_gpd - Split multi-part geometries
Spatial Operations:
- clip_vector - Clip geometries
- sjoin_gpd - Spatial joins
- sjoin_nearest_gpd - Nearest neighbor spatial joins
- point_in_polygon - Point-in-polygon tests
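For orientation, here is a library-level GeoPandas sketch of the kind of spatial join that sjoin_gpd and point_in_polygon expose (file names and attributes are illustrative, not the tools' actual parameters):

# Illustrative GeoPandas spatial join: attach polygon attributes to the points
# that fall inside them.
import geopandas as gpd

points = gpd.read_file("points.geojson")       # hypothetical point layer
polygons = gpd.read_file("districts.geojson")  # hypothetical polygon layer

joined = gpd.sjoin(points, polygons, how="inner", predicate="within")
print(joined.head())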
Basic Raster Operations:
- metadata_raster - Get raster metadata
- get_raster_crs - Get raster CRS
- extract_band - Extract single band
- raster_band_statistics - Calculate band statistics
- raster_histogram - Compute pixel histograms
Raster Processing:
- clip_raster_with_shapefile - Clip raster with polygons
- resample_raster - Resample by scale factor
- reproject_raster - Reproject to new CRS
- tile_raster - Split into tiles
Raster Analysis:
- compute_ndvi - Calculate vegetation index
- raster_algebra - Mathematical operations on bands
- concat_bands - Combine single-band rasters
- weighted_band_sum - Weighted band combination
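As a rough illustration of what compute_ndvi does at the library level, the sketch below reads two bands with rasterio and writes an NDVI raster; band numbers and file names are assumptions, not the tool's actual interface:

# Illustrative NDVI computation with rasterio and NumPy.
import numpy as np
import rasterio

with rasterio.open("scene.tif") as src:      # hypothetical multiband raster
    red = src.read(3).astype("float32")      # assume band 3 is red
    nir = src.read(4).astype("float32")      # assume band 4 is near-infrared
    profile = src.profile

# Guard against division by zero on empty pixels.
ndvi = np.where((nir + red) == 0, 0, (nir - red) / (nir + red))

profile.update(count=1, dtype="float32")
with rasterio.open("ndvi.tif", "w", **profile) as dst:
    dst.write(ndvi, 1)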
Advanced Analysis:
- zonal_statistics - Statistics within polygons
- reclassify_raster - Reclassify pixel values
- focal_statistics - Moving window statistics
- hillshade - Generate hillshade from DEM
- write_raster - Write array to raster file
Spatial Autocorrelation:
- morans_i - Global Moran's I statistic
- gearys_c - Global Geary's C statistic
- gamma_statistic - Gamma index
- getis_ord_g - Global Getis-Ord G statistic
Local Statistics:
- moran_local - Local Moran's I
- getis_ord_g_local - Local Getis-Ord G*
- join_counts_local - Local join counts
Global Statistics:
- join_counts - Binary join counts test
- adbscan - Adaptive density-based clustering
Spatial Weights:
- weights_from_shapefile - Create weights from shapefile
- distance_band_weights - Distance-based weights
- knn_weights - K-nearest neighbors weights
- build_transform_and_save_weights - Build, transform, and save weights
- ols_with_spatial_diagnostics_safe - OLS regression with spatial diagnostics
- build_and_transform_weights - Build and transform weights
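For context, here is a library-level PySAL sketch of the workflow behind weights_from_shapefile and morans_i: build spatial weights, row-standardise them, then compute global Moran's I for one attribute (file name and column are illustrative):

# Illustrative PySAL workflow: contiguity weights + global Moran's I.
import geopandas as gpd
from libpysal import weights
from esda.moran import Moran

gdf = gpd.read_file("regions.shp")        # hypothetical polygon layer
w = weights.Queen.from_dataframe(gdf)     # queen-contiguity weights
w.transform = "r"                         # row-standardise the weights

mi = Moran(gdf["population"].values, w)   # hypothetical numeric attribute
print(mi.I, mi.p_sim)                     # statistic and permutation p-value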
Boundary Download:
- download_boundaries - Download GADM administrative boundaries and save as GeoJSON
Climate Data Download:
- download_climate_data - Download climate data (ERA5 or other CDS datasets)
Ecology Data Download and Info:
- get_species_info - Retrieve taxonomic information for a given species name
- download_species_occurrences - Download occurrence records for a given species and save as JSON
Movement Data Download and Routing (via OSMnx):
- download_street_network - Download a street network for a given place and save as GraphML
- calculate_shortest_path - Calculate the shortest path between two points using a saved street network
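A library-level OSMnx sketch of the steps these two routing tools cover, downloading a network, saving it as GraphML, and routing between two points (place name and coordinates are illustrative):

# Illustrative OSMnx download + shortest-path routing.
import networkx as nx
import osmnx as ox

G = ox.graph_from_place("Piedmont, California, USA", network_type="drive")
ox.save_graphml(G, "piedmont.graphml")

# Snap two lon/lat points to the nearest graph nodes, then route by length.
orig = ox.distance.nearest_nodes(G, X=-122.231, Y=37.824)
dest = ox.distance.nearest_nodes(G, X=-122.243, Y=37.819)
route = nx.shortest_path(G, orig, dest, weight="length")
print(len(route), "nodes on the route")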
Land Cover from Planetary Computer:
- download_worldcover - Download ESA WorldCover for AOI/year; optional crop and reprojection
- compute_s2_ndvi - Compute NDVI from Sentinel-2 L2A; crop and reprojection supported
STAC-based Satellite Download:
- download_satellite_imagery - Download and stack bands from STAC items (e.g., Sentinel-2, Landsat), with optional crop and reprojection
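For reference, a hedged sketch of the kind of STAC query such a tool builds on, using pystac-client against the Microsoft Planetary Computer catalog (collection, bounding box, dates, and asset key are illustrative):

# Illustrative STAC search for Sentinel-2 L2A scenes over an area of interest.
import planetary_computer
import pystac_client

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,  # signs asset URLs for download
)
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-122.3, 47.5, -122.2, 47.7],
    datetime="2024-06-01/2024-06-30",
    query={"eo:cloud_cover": {"lt": 20}},
)
items = list(search.items())
print(len(items), "matching scenes")
if items:
    print(items[0].assets["B04"].href)  # red-band asset of the first scene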
Total Functions Available: 89
Example usage of the tools:
Tool: buffer
Parameters: {
"geometry": "POINT(0 0)",
"distance": 10,
"resolution": 16,
"join_style": 1,
"mitre_limit": 5.0,
"single_sided": false
}
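These parameters line up with Shapely's buffer call; a library-level sketch of what the tool computes (the server may return the result in a different form, e.g. WKT or GeoJSON):

# Library-level equivalent of the buffer call above (sketch only).
from shapely import wkt

point = wkt.loads("POINT(0 0)")
# The second positional argument is the segment resolution per quarter circle.
buffered = point.buffer(10, 16, join_style=1, mitre_limit=5.0, single_sided=False)
print(buffered.area)  # close to pi * 10**2 for a round buffer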
Tool: transform_coordinates
Parameters: {
"coordinates": [0, 0],
"source_crs": "EPSG:4326",
"target_crs": "EPSG:3857"
}
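The equivalent pyproj call, as a sketch (the tool's return format may differ):

# Library-level equivalent of the transform_coordinates call above.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
x, y = transformer.transform(0, 0)  # lon, lat -> Web Mercator metres
print(x, y)                         # approximately (0.0, 0.0) at the origin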
Tool: calculate_geodetic_distance
Parameters: {
"point1": [0, 0],
"point2": [10, 10],
"ellps": "WGS84"
}
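And the corresponding pyproj geodesic call, assuming the two points are given as lon/lat pairs (a sketch, not the tool's internals):

# Library-level equivalent of the calculate_geodetic_distance call above.
from pyproj import Geod

geod = Geod(ellps="WGS84")
# inv() returns forward azimuth, back azimuth, and distance in metres.
az12, az21, distance_m = geod.inv(0, 0, 10, 10)  # lon1, lat1, lon2, lat2
print(round(distance_m / 1000, 1), "km")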
- Implement advanced spatial indexing
- Implement network analysis capabilities
- Add support for 3D geometries
- Implement performance optimizations
- Add support for more GIS libraries
We welcome contributions! Here's how you can help:
- Fork the repository
- Create a feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a Pull Request
Please ensure your PR description clearly describes the problem and solution. Include the relevant issue number if applicable.
This project is licensed under the MIT License - see the LICENSE file for details.
Project Name | Category | Description |
---|---|---|
Model Context Protocol | MCP Related | The core MCP implementation |
Shapely | Geospatial Analysis | Python package for manipulation and analysis of geometric objects |
PyProj | Geospatial Analysis | Python interface to PROJ library |
GeoPandas | Geospatial Analysis | Python package for working with geospatial data |
Rasterio | Geospatial Analysis | Python package for reading and writing geospatial raster data |
PySAL | Geospatial Analysis | Python spatial analysis library for geospatial data science |
cdsapi | Geospatial Data Collecting | Python API to access the Copernicus Climate Data Store (CDS) |
pygadm | Geospatial Data Collecting | Easy access to administrative boundary defined by GADM from Python scripts |
pygbif | Geospatial Data Collecting | Python client for the GBIF API (ecology and biodiversity data) |
OSMnx | Geospatial Data Collecting | Python package for downloading, modeling, and analyzing street networks and urban features from OpenStreetMap |
pystac-client | Geospatial Data Collecting | Python client for STAC catalogs; search and access spatiotemporal assets |
Planetary Computer SDK for Python | Geospatial Data Collecting | Python SDK for Microsoft Planetary Computer; auth and helpers for STAC/COGs |
For support, please open an issue in the GitHub repository.
Join our Discord community for discussions, updates, and support.
Made with contrib.rocks.
The Azure-Analytics-and-AI-Engagement repository provides packaged Industry Scenario DREAM Demos with ARM templates (Containing a demo web application, Power BI reports, Synapse resources, AML Notebooks etc.) that can be deployed in a customerโs subscription using the CAPE tool within a matter of few hours. Partners can also deploy DREAM Demos in their own subscriptions using DPoC.