AIforEarthDataSets
Notebooks and documentation for AI-for-Earth-managed datasets on Azure
Stars: 263
README:
The Microsoft AI for Earth program hosts geospatial data on Azure that is important to environmental sustainability and Earth science. This repo hosts documentation and demonstration notebooks for all the data that is managed by AI for Earth. It also serves as a "staging ground" for the Planetary Computer Data Catalog.
If you have feedback about any of this data, or want to request additions to our data program, email [email protected].
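Most of these datasets are distributed through Azure Blob Storage, typically with anonymous read access, and the notebooks in this repo show how to read each one. As a generic illustration only (not one of the repo's notebooks), the sketch below lists and downloads blobs anonymously with the azure-storage-blob package; the container URL, prefix, and blob name are placeholders, so substitute the endpoints documented for the dataset you care about.

```python
# Minimal sketch of anonymous read access to a public Azure Blob Storage
# container. The container URL, prefix, and blob name below are placeholders,
# not real AI for Earth endpoints; see each dataset's notebook for actual URLs.
from azure.storage.blob import ContainerClient

# Hypothetical public container (replace with the URL from a dataset's docs).
container = ContainerClient.from_container_url(
    "https://exampleaccount.blob.core.windows.net/example-container"
)

# List a few blobs under a prefix.
for blob in container.list_blobs(name_starts_with="some/prefix/"):
    print(blob.name, blob.size)

# Download one blob to a local file.
downloader = container.download_blob("some/prefix/file.tif")
with open("file.tif", "wb") as f:
    f.write(downloader.readall())
```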
AI for Earth Data Sets

Data sets
- ALOS World 3D
- ASTER L1T (2000-2006)
- Copernicus DEM
- Daymet
- Deltares Global Flood Maps
- Deltares Global Water Availability
- Esri 10m Land Cover
- Global Biodiversity Information Facility (GBIF)
- Harmonized Global Biomass
- Harmonized Landsat Sentinel-2
- High Resolution Electricity Access (HREA)
- High Resolution Ocean Surface Wave Hindcast
- Labeled Information Library of Alexandria: Biology and Conservation (LILA BC)
- Landsat TM/MSS Collection 2
- Landsat 7 Collection 2 Level-2
- Landsat 8 Collection 2 Level-2
- MODIS (40 individual products)
- Monitoring Trends in Burn Severity Mosaics
- National Solar Radiation Database
- NASADEM
- NREL Puerto Rico 100 (PR100)
- NREL PV Rooftop Database
- NOAA Climate Data Records (CDR)
- NOAA Climate Forecast System (CFS)
- NOAA Digital Coast Imagery
- NOAA GFS Warm Start Initial Conditions
- NOAA GOES-R
- NOAA Global Ensemble Forecast System (GEFS)
- NOAA Global Forecast System (GFS)
- NOAA Global Hydro Estimator (GHE)
- NOAA High-Resolution Rapid Refresh (HRRR)
- NOAA Integrated Surface Data (ISD)
- NOAA Monthly US Climate Gridded Dataset (NClimGrid)
- NOAA National Water Model
- NOAA Rapid Refresh (RAP)
- NOAA US Climate Normals
- National Agriculture Imagery Program
- National Land Cover Database
- NatureServe Map of Biodiversity Importance (MoBI)
- Ocean Observatories Initiative CamHD
- Sentinel-1 GRD
- Sentinel-1 SLC
- Sentinel-2 L2A
- Sentinel-3 L2
- Sentinel-5P
- TerraClimate
- UK Met Office CSSP China 20CRDS
- UK Met Office Global Weather Data for COVID-19 Analysis
- University of Miami Coupled Model for Hurricanes Ike and Sandy
- USFS Forest Inventory and Analysis
- USGS 3DEP Seamless DEMs
- USGS Gap Land Cover
ALOS World 3D: Global topographic information from the JAXA ALOS PRISM instrument.

ASTER L1T (2000-2006): The ASTER instrument, launched on board NASA's Terra satellite in 1999, provides multispectral images of the Earth at 15m-90m resolution. This data set represents ASTER data from 2000-2006.

Copernicus DEM: Global topographic information from the Copernicus program.

Daymet: Estimates of daily weather parameters in North America on a one-kilometer grid, with monthly and annual summaries.
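Gridded products like Daymet are often most convenient to work with through xarray. The sketch below is only an illustration: it assumes the data is exposed as a consolidated Zarr store at a placeholder URL, with hypothetical variable and coordinate names; the Daymet notebook in this repo documents the actual store location and schema.

```python
# Sketch of reading a gridded dataset with xarray, assuming a Zarr layout.
# Requires xarray, zarr, fsspec, and aiohttp. The URL, variable name ("tmax"),
# and coordinate names (time, x, y) are assumptions for illustration only.
import fsspec
import xarray as xr

# Hypothetical consolidated Zarr store in a public blob container.
store = fsspec.get_mapper(
    "https://exampleaccount.blob.core.windows.net/example-daymet/daily.zarr"
)
ds = xr.open_zarr(store, consolidated=True)

# Select one variable on one day over a small spatial window and summarize it.
tmax = ds["tmax"].sel(time="2020-07-01").isel(x=slice(0, 100), y=slice(0, 100))
print(float(tmax.mean()))
```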
Deltares Global Flood Maps: Global estimates of coastal inundation under various sea level rise conditions and return periods at 90m, 1km, and 5km resolutions. Also includes estimated coastal inundation caused by named historical storm events going back several decades.

Deltares Global Water Availability: Simulations of historical daily reservoir variations for 3,236 locations across the globe for the period 1970-2020, using the distributed wflow_sbm model. The model outputs long-term daily information on reservoir volume, inflow and outflow dynamics, as well as information on upstream hydrological forcing.

Esri 10m Land Cover: Global estimates of 10-class land use/land cover (LULC) for 2020, derived from ESA Sentinel-2 imagery at 10m resolution, produced by Impact Observatory.

Global Biodiversity Information Facility (GBIF): Exports of global species occurrence data from the GBIF network.

Harmonized Global Biomass: Global maps of aboveground and belowground biomass carbon density for the year 2010 at 300m resolution.

Harmonized Landsat Sentinel-2: Satellite imagery from the Landsat 8 and Sentinel-2 satellites, aligned to a common grid and processed to compatible color spaces.

High Resolution Electricity Access (HREA): Settlement-level measures of electricity access, reliability, and usage, derived from VIIRS satellite imagery.

High Resolution Ocean Surface Wave Hindcast: Long-term wave hindcast data for the U.S. Exclusive Economic Zone (EEZ), developed by the U.S. Department of Energy's Water Power Technologies Office.

Labeled Information Library of Alexandria: Biology and Conservation (LILA BC): AI for Earth and partners have assembled a repository of labeled information related to wildlife conservation, particularly wildlife imagery.

Landsat TM/MSS Collection 2: Global optical imagery from the Landsat MSS and TM instruments aboard the Landsat 1-5 satellites, which imaged the Earth from 1972 to 2013. Landsat TM/MSS data are in preview; access is granted by request.

Landsat 7 Collection 2 Level-2: Global optical imagery from the Landsat 7 satellite, which has imaged the Earth since 1999. Landsat 7 data are in preview; access is granted by request.

Landsat 8 Collection 2 Level-2: Global optical imagery from the Landsat 8 satellite, which has imaged the Earth since 2013.

MODIS: Satellite imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS), covering 40 individual products.

Monitoring Trends in Burn Severity Mosaics: Annual burn severity mosaics for the continental United States and Alaska.

National Solar Radiation Database: Hourly and half-hourly values of the three most common measurements of solar radiation (global horizontal, direct normal, and diffuse horizontal irradiance), along with meteorological data.

NASADEM: Global topographic information from the NASADEM program.

NREL Puerto Rico 100 (PR100): A collection of geospatial data useful for renewable energy development in Puerto Rico, curated by the National Renewable Energy Laboratory.

NREL PV Rooftop Database: A lidar-derived, geospatially-resolved dataset of suitable roof surfaces and their PV technical potential for 128 metropolitan regions in the United States.

NOAA Climate Data Records (CDR): Historical global climate information.

NOAA Climate Forecast System (CFS): Model output data from the NOAA NCEP Climate Forecast System Version 2.

NOAA Digital Coast Imagery: High-resolution (1 meter or less) imagery collected by a number of sources and contributed to the NOAA Digital Coast.

NOAA GFS Warm Start Initial Conditions: Warm start initial conditions for the NOAA Global Forecast System.

NOAA GOES-R: Weather imagery from the GOES-16, GOES-17, and GOES-18 satellites.

NOAA Global Ensemble Forecast System (GEFS): Model output data from the NOAA Global Ensemble Forecast System.

NOAA Global Forecast System (GFS): Model output data from the NOAA Global Forecast System.

NOAA Global Hydro Estimator (GHE): Global rainfall estimates in 15-minute intervals.

NOAA High-Resolution Rapid Refresh (HRRR): Weather forecasts for North America at 3km spatial resolution and 15-minute temporal resolution.
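NOAA's forecast products (GFS, GEFS, and HRRR) are typically distributed as GRIB2 files. The sketch below, with a placeholder blob URL and file name, shows one way to pull a single HRRR forecast file and open it with xarray's cfgrib engine; the HRRR notebook in this repo documents the real container layout and naming convention.

```python
# Sketch of reading one HRRR forecast file, assuming it is distributed as a
# GRIB2 blob. The URL and file name are placeholders. Requires xarray plus
# cfgrib (and its ecCodes dependency).
import urllib.request
import xarray as xr

# Hypothetical blob URL for a single GRIB2 forecast file.
url = (
    "https://exampleaccount.blob.core.windows.net/"
    "example-hrrr/hrrr.t00z.wrfsfcf01.grib2"
)
local_path, _ = urllib.request.urlretrieve(url, "hrrr_sample.grib2")

# cfgrib needs a filter when a file mixes level types; keep surface fields here.
ds = xr.open_dataset(
    local_path,
    engine="cfgrib",
    backend_kwargs={"filter_by_keys": {"typeOfLevel": "surface"}},
)
print(list(ds.data_vars))
```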
NOAA Integrated Surface Data (ISD): Historical global weather information.

NOAA Monthly US Climate Gridded Dataset (NClimGrid): Gridded climate data for the US from 1895 to the present.

NOAA National Water Model: Data from the National Water Model.

NOAA Rapid Refresh (RAP): Weather forecasts for North America at 13km resolution.

NOAA US Climate Normals: Typical climate conditions for the United States from 1981 to the present.

National Agriculture Imagery Program: NAIP provides US-wide, high-resolution aerial imagery. This data set includes NAIP images from 2010 to the present.

National Land Cover Database: US-wide data on land cover and land cover change at a 30m resolution with a 16-class legend.

NatureServe Map of Biodiversity Importance (MoBI): Habitat information for 2,216 imperiled species occurring in the conterminous United States.

Ocean Observatories Initiative CamHD: Video data from the Ocean Observatories Initiative seafloor camera deployed at Axial Volcano on the Juan de Fuca Ridge.

Sentinel-1 GRD: Global synthetic aperture radar (SAR) data from 2017-present, projected to ground range. Sentinel-1 GRD data are in preview; access is granted by request.

Sentinel-1 SLC: Global synthetic aperture radar (SAR) data for the last 90 days. Sentinel-1 SLC data are in preview; access is granted by request.

Sentinel-2 L2A: Global optical imagery at 10m resolution from 2016-present.

Sentinel-3 L2: Global multispectral imagery at 300m resolution, with a revisit rate of less than two days, from 2016-present. Sentinel-3 data are in preview; access is granted by request.

Sentinel-5P: Global atmospheric data from 2018-present. Sentinel-5P data are in preview; access is granted by request.

TerraClimate: Monthly climate and climatic water balance for global terrestrial surfaces from 1958 to 2019.

UK Met Office CSSP China 20CRDS: Historical climate data for China from 1851 to 2010.

UK Met Office Global Weather Data for COVID-19 Analysis: Data for COVID-19 researchers exploring relationships between COVID-19 and environmental factors.

University of Miami Coupled Model for Hurricanes Ike and Sandy: Modeled wind, wave, and current data for Hurricanes Ike and Sandy, produced by the National Renewable Energy Laboratory.

USFS Forest Inventory and Analysis: Status and trends on U.S. forest location, health, growth, mortality, and production, from the US Forest Service's Forest Inventory and Analysis (FIA) program.
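Since this repo also serves as a staging ground for the Planetary Computer Data Catalog, datasets that have been promoted into that catalog can be searched through the Planetary Computer's public STAC API. The sketch below uses pystac-client and the planetary-computer package to find Sentinel-2 L2A scenes; the collection id, asset key, and search window are illustrative, so check the Planetary Computer documentation for the collections and asset names that exist today.

```python
# Sketch of searching the Planetary Computer STAC API with pystac-client.
# The collection id ("sentinel-2-l2a") and asset key ("B04") are assumptions
# based on public Planetary Computer conventions.
import planetary_computer
import pystac_client

# sign_inplace adds short-lived SAS tokens to asset URLs so the blobs are readable.
catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,
)

# Search a small area and time window.
search = catalog.search(
    collections=["sentinel-2-l2a"],
    bbox=[-122.5, 47.4, -122.2, 47.7],   # roughly the Seattle area
    datetime="2022-06-01/2022-06-30",
)
items = list(search.items())
print(f"Found {len(items)} items")
if items:
    print(items[0].assets["B04"].href)  # red band of the first matching scene
```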
Legal stuff

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.