ForAINet

Official source code for the paper "Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning".

This repository contains the official code for the paper "Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning". It provides tools for running point cloud segmentation experiments under different settings, extracting tree parameters, and handling large point clouds through a tiling, prediction, and merging workflow. It also includes commands for training, testing, and evaluating the models, along with the necessary datasets and pretrained models.

README:

Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning

This repository contains the official code for the paper "Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning".

Set up environment

Please refer to our previous repo:

https://github.com/prs-eth/PanopticSegForLargeScalePointCloud

It documents the detailed setup steps, along with issues that may arise and how to resolve them.

FOR-Instance dataset

Please replace the old raw files with our new raw files:

For example, data_set1_5classes contains the data for the "basic setting" in Table 4 of our paper.

  1. Dataset for the settings "basic setting", "+ binary semantic loss", "+ class weights", "+ height weights", "+ region weights", "+ elastic distortion and subsampling", and "+ TreeMix".

You can download it here: DOI

  2. Datasets for the other settings: to be added here.

Commands for running point cloud segmentation experiments under different settings:

cd /$YOURPATH$/ForAINet/PointCloudSegmentation
  1. Experiment for "basic setting" in the paper.
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1 models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1 job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ binary semantic loss" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1 models=panoptic/FORpartseg_3heads_BiLoss model_name=PointGroup-PAPER training=treeins_set1_addBiLoss job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ class weights" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_classweight models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_nw8_classweight job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ height weights" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_classweight models=panoptic/FORpartseg_3heads_heightweight model_name=PointGroup-PAPER training=treeins_set1_heightweight job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ region weights" setting in the paper
# Command for training

# To be added
  1. Experiment for "+ intensity" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_add_intensity models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_intensity job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ return number" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_add_return_num models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_return_num job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ scan angle rank" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_add_scan_angle_rank models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_scan_angle_rank job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ hand-crafted features" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_add_all_20010 models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_addallFea_20010 job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ elastic distortion and subsampling" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_curved_subsam models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_addCurvedSubsample job_name=#YOUR_JOB_NAME#
  1. Experiment for "+ TreeMix" setting in the paper
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_treemix3d models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=treeins_set1_mixtree job_name=#YOUR_JOB_NAME#
  12. Experiments for data with different point densities.
# Command for training
python train.py task=panoptic data=panoptic/treeins_set1_treemix3d_pd#POINT_DENSITY# models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=mixtree_#POINT_DENSITY# job_name=#YOUR_JOB_NAME#

# take point density=10 as an example
python train.py task=panoptic data=panoptic/treeins_set1_treemix3d_pd10 models=panoptic/FORpartseg_3heads model_name=PointGroup-PAPER training=mixtree_10 job_name=#YOUR_JOB_NAME#
  13. Commands for testing. Remember to change the "checkpoint_dir" parameter to your path.

Our pretrained model can be downloaded here: https://www.dropbox.com/scl/fi/mv4nxe60cco86fd2u9f3z/PointGroup-PAPER.pt?rlkey=ua6093kehk0youpo8g3a6g0nm&st=wiqv3a0u&dl=0

# Command for testing
# remember to change the following 2 parameters in eval.yaml:
# 1. "checkpoint_dir": the path to your log files
# 2. "data": the paths to your test files
python eval.py
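
For orientation, the relevant part of eval.yaml might look roughly like the sketch below. The field names follow the comments above (and the data.fold parameter mentioned later in this README); the exact layout and all paths are assumptions, so treat your own eval.yaml as authoritative.

# sketch only: field names taken from this README, values are placeholders
checkpoint_dir: /path/to/your/log/files          # directory containing the trained checkpoint
data:
  fold: ["/path/to/test_area_1.las"]             # paths to the point cloud files to test (list layout assumed)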

# Command for outputting the final evaluation file
# replace the "test_sem_path" parameter with your path
python evaluation_stats_FOR.py

Commands for running the tree parameter extraction code:

cd /$YOURPATH$/ForAINet/tree_metrics
# remember to adjust parameters based on your dataset
python measurement.py

Commands for running the code that extracts hand-crafted geometric features:

# Please note that our code is based on the Superpoint Graphs repository, which can be found at https://github.com/loicland/superpoint_graph. We have included our custom partition_FORdata.py file.
cd /$YOURPATH$/ForAINet/superpoint_graph/partition
python partition_FORdata.py

Handling large point clouds: a workflow for tiling, predicting, and merging:

For large point clouds, we provide code to process them seamlessly. The workflow involves the following steps:

  1. Splitting the point cloud: use split_largePC_to_tiles.py to divide the large point cloud into fixed-size tiles (default: 50m tiles with 5m overlap).
  2. Predicting for each tile: run predictions on each tile using generate_eval_command.py.
  3. Merging results: combine the results of all tiles back into the original point cloud using merge_tiles.py.

All of these operations can be executed with the large_PC_predict.sh script (a standalone sketch of the tiling logic follows the commands below):
# modify parameters in large_PC_predict.sh:
# base_path: your project directory
# tile_size and overlap: the tile size and overlap (in metres) used for splitting
# src_dir: specify the directory where your model is stored

# modify parameters in exampleeval.yaml:
# checkpoint_dir: the location of your model checkpoint
# data.fold: the paths of the point cloud files you want to test

bash large_PC_predict.sh
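
To make the tiling step concrete, here is a minimal standalone sketch of how a point cloud can be cut into fixed-size tiles with overlap. It is a hypothetical illustration of the idea, not the repository's split_largePC_to_tiles.py, and it assumes the points are held in an (N, 3) NumPy array.

import numpy as np

def tile_point_cloud(points, tile_size=50.0, overlap=5.0):
    # Split an (N, 3) point array into square x/y tiles of tile_size metres,
    # each extended by overlap metres on every side (hypothetical sketch only,
    # not the repository's implementation).
    xy_min = points[:, :2].min(axis=0)
    xy_max = points[:, :2].max(axis=0)
    n_tiles = np.ceil((xy_max - xy_min) / tile_size).astype(int)
    tiles = []
    for ix in range(n_tiles[0]):
        for iy in range(n_tiles[1]):
            # nominal tile footprint, padded by the overlap on all sides
            lo = xy_min + np.array([ix, iy]) * tile_size - overlap
            hi = xy_min + np.array([ix + 1, iy + 1]) * tile_size + overlap
            mask = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
            if mask.any():
                tiles.append((ix, iy, points[mask]))
    return tiles

# Example: one million random points over a 200 m x 200 m area
pts = np.random.rand(1_000_000, 3) * [200.0, 200.0, 30.0]
for ix, iy, tile in tile_point_cloud(pts):
    print(f"tile ({ix}, {iy}): {len(tile)} points")

After prediction, merge_tiles.py combines the per-tile results back into the original cloud; points falling in the overlap regions can be resolved, for example, by keeping the prediction from the tile whose core (non-overlap) region contains them.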

Citing

If you find our work useful, please do not hesitate to cite it:

@article{xiang2024automated,
  title={Automated forest inventory: analysis of high-density airborne LiDAR point clouds with 3D deep learning},
  author={Xiang, Binbin and Wielgosz, Maciej and Kontogianni, Theodora and Peters, Torben and Puliti, Stefano and Astrup, Rasmus and Schindler, Konrad},
  journal={Remote Sensing of Environment},
  volume={305},
  pages={114078},
  year={2024},
  publisher={Elsevier}
}
