Shipwreck detection using bathymetric data

  • 🔬 Data Science
  • 🥠 Deep Learning and Instance Segmentation

Introduction

In this notebook, we will use bathymetry data provided by NOAA to detect shipwrecks in the Shell Bank Basin area near New York City in the United States. A Bathymetric Attributed Grid (BAG) is a two-band imagery format where one band is elevation and the other is uncertainty, which quantifies the uncertainty of the elevation value at each pixel. We apply deep learning methods for the detection after pre-processing the data, as explained in the Preprocess bathymetric data section below.

One important pre-processing step is applying the Shaded Relief function provided in ArcGIS, which NOAA also uses in one of their BAG visualizations here. Shaded Relief is a 3D representation of the terrain that distinctly differentiates shipwrecks from the background and reveals them. It is created by merging elevation-coded images with the Hillshade method, returning a 3-band imagery that is much easier to interpret than the raw bathymetry image. Subsequently, the images are exported as "RCNN Masks" to train a MaskRCNN model, provided by ArcGIS API for Python, for detecting the shipwrecks.

The notebook presents the use of deep learning methods to automate the identification of submerged shipwrecks, which could be useful for hydrographic offices, archaeologists, and historians who would otherwise spend a lot of time doing it manually.

Necessary imports

import os
from pathlib import Path
from datetime import datetime as dt

from arcgis.gis import GIS
from arcgis.raster.functions import RFT  
from arcgis.learn import prepare_data, MaskRCNN

Connect to your GIS

gis = GIS('https://pythonapi.playground.esri.com/portal', 'arcgis_python', 'amazing_arcgis_123')

Get the data for analysis

bathymetry_img = gis.content.get('8a08107910764b6d8418204800d3f8a4')
bathymetry_img
Bathymetrydata
Imagery Layer by api_data_owner
Last Modified: March 18, 2020
training_data_wrecks = gis.content.get('3384140ab9cc40a2ac0c41b71a9b4ec9')
training_data_wrecks
training_data_wrecks
Map Image Layer by api_data_owner
Last Modified: March 16, 2020

Preprocess bathymetric data

We are applying some preprocessing to the bathymetry data so that we can export it for training a deep learning model. The preprocessing steps include mapping 'No Data' pixel values to '-1' and applying the Shaded Relief function to the output raster. The resultant raster after applying the Shaded Relief function is a 3-band imagery that we can export, using the Export Training Data For Deep Learning tool in ArcGIS Pro 2.5, for training our deep learning model.
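
For reference, the same two steps can also be chained programmatically with the remap and shaded_relief functions from arcgis.raster.functions. The sketch below is a minimal illustration, not the recorded template itself: the layer index, the input range used to catch the 'No Data' fill values, and the azimuth/altitude values are assumptions chosen for illustration.

from arcgis.raster.functions import remap, shaded_relief

bathy_layer = bathymetry_img.layers[0]            # imagery layer from the item fetched above

# Map the raster's NoData fill values to -1. The input range below is an
# assumption for illustration; use the fill value of your own data.
remapped = remap(bathy_layer,
                 input_ranges=[999999, 1000001],  # assumed NoData fill value range
                 output_values=[-1],
                 allow_unmatched=True)            # leave all other pixel values unchanged

# Apply Shaded Relief to produce a 3-band, easy-to-interpret visualization.
relief = shaded_relief(remapped, azimuth=315, altitude=45, z_factor=1)
relief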

All the preprocessing steps are recorded in the form of a Raster function template which you can use in ArcGIS Pro to generate the processed raster.

shaded_relief_rft = gis.content.get('b0e3651e936e47f8bbf4144aea59e065')
shaded_relief_rft
shaded_Relief
RFT_Shaded_Relief-RasterFunction
Raster function template by api_data_owner
Last Modified: March 18, 2020
shaded_relief_ob = RFT(shaded_relief_rft)
# ! conda install -c anaconda graphviz -y
# shaded_relief_ob.draw_graph()

We need to add this custom raster function to ArcGIS Pro using the Import functions option in the 'Custom' tab of 'Raster Functions'.

Once we apply the Raster function template to the bathymetry data, we get the output image below. We will use this image to export training data for our deep learning model.

shaded_relief = gis.content.get('f09fd4cfcfae4dba860897a3d6d52926')
shaded_relief
shaded_Relief_DetectObject_MaskRCNN_t4_130e_pm
MaskRCNN Production model for Shipwrecks detection
Map Image Layer by api_data_owner
Last Modified: March 31, 2020

Export training data

Export the training data using the 'Export Training Data For Deep Learning' tool; click here for its detailed documentation:

  • Set 'shaded_relief' as Input Raster.
  • Set a location where you want to export the training data in the Output Folder parameter; it can be an existing folder, or the tool will create it for you.
  • Set the 'training_data_wrecks' as input to the Input Feature Class Or Classified Raster parameter.
  • Set Class Field Value as 'ecode'.
  • Set Image Format as 'TIFF format'.
  • Tile Size X & Tile Size Y can be set to 256.
  • Stride X & Stride Y can be set to 50.
  • Select 'RCNN Masks' as the Meta Data Format because we are training a 'MaskRCNN Model'.
  • In the 'Environments' tab, set an optimum Cell Size. For this example, as we are performing the analysis on bathymetry data with 50 cm resolution, we used '0.5' as the cell size.
arcpy.ia.ExportTrainingDataForDeepLearning(in_raster="shaded_Relief_CopyRaster",
                                           out_folder=r"\256X256_multiple_cellsize_stride50",
                                           in_class_data="training_data_wrecks",
                                           image_chip_format="TIFF",
                                           tile_size_x=256,
                                           tile_size_y=256,
                                           stride_x=50,
                                           stride_y=50,
                                           output_nofeature_tiles="ONLY_TILES_WITH_FEATURES",
                                           metadata_format="RCNN_Masks",
                                           start_index=0,
                                           class_value_field="ecode",
                                           buffer_radius=0,
                                           in_mask_polygons=None,
                                           rotation_angle=0,
                                           reference_system="MAP_SPACE",
                                           processing_mode="PROCESS_AS_MOSAICKED_IMAGE",
                                           blacken_around_feature="NO_BLACKEN",
                                           crop_mode="FIXED_SIZE")

Train the model

As we have already exported our training data, we will now train our model using ArcGIS API for Python. We will be using the arcgis.learn module, which contains tools and deep learning capabilities. Documentation is available here to install and set up the environment.

Prepare data

We can always apply multiple transformations to our training data that can help the model generalize better. Though we apply some standard data augmentations, we can enhance them further based on the data at hand to increase the effective data size and avoid overfitting.

Let us have a look at how we can do this using fastai's image transformation library.

from fastai.vision.transform import crop, rotate, brightness, contrast, rand_zoom

train_tfms = [rotate(degrees=30,                              # rotation transform with degrees fixed to 30,
                     p=0.5),                                  # applied with probability p=0.5.

              crop(size=224,                                  # crop the image to return an image of size 224.
                   p=1.,                                      # The position is given by (col_pct, row_pct),
                   row_pct=(0, 1),                            # with col_pct and row_pct normalized
                   col_pct=(0, 1)),                           # between 0 and 1.

              brightness(change=(0.4, 0.6)),                  # apply a change in brightness of the image.

              contrast(scale=(1.0, 1.5)),                     # apply a scale to the contrast of the image.

              rand_zoom(scale=(1.,1.2))]                      # randomized version of zoom.

val_tfms = [crop(size=224,                                    # crop the image to the same size for the validation
                 p=1.0,                                       # dataset as for the training dataset.
                 row_pct=0.5,
                 col_pct=0.5)]

transforms = (train_tfms, val_tfms)                           # tuple containing transformations for data augmentation
                                                              # of training and validation datasets respectively.

We will now specify the path to our training data and a few hyperparameters.

  • path: path of folder containing training data.
  • batch_size: Number of images your model will train on in each step inside an epoch; it directly depends on the memory of your graphics card.
  • transforms: tuple containing Fast.ai transforms for data augmentation of training and validation datasets respectively.

This function will return a fastai databunch, which we will use in the next step to train the model.

gis = GIS('home')
training_data = gis.content.get('91178e9303af49b0b9ae09c0d32ec164')
training_data
shipwrecks_detection_using_bathymetric_data
Image Collection by api_data_owner
Last Modified: August 28, 2020
filepath = training_data.download(file_name=training_data.name)
import zipfile
with zipfile.ZipFile(filepath, 'r') as zip_ref:
    zip_ref.extractall(Path(filepath).parent)
data_path = Path(os.path.join(os.path.splitext(filepath)[0]))
data = prepare_data(path=data_path, batch_size=8, transforms=transforms)

Visualize a few samples from your training data

To make sense of the training data, we will use the show_batch() method in arcgis.learn. This method randomly picks a few samples from the training data and visualizes them.

rows: number of rows we want to see the results for.

data.show_batch(rows=5)
<Figure size 1080x1800 with 15 Axes>

Load model architecture

arcgis.learn provides the MaskRCNN model for instance segmentation tasks, which is based on a pretrained convnet, such as ResNet, that acts as the 'backbone'. More details about MaskRCNN can be found here.

model = MaskRCNN(data)
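
If needed, a different supported backbone can be passed via the backbone parameter instead of relying on the default. The snippet below is a sketch of that option rather than a change we make in this sample, and it assumes 'resnet101' is among the backbones supported by your version of arcgis.learn.

# Sketch: create the model with an alternative ResNet backbone.
model = MaskRCNN(data, backbone='resnet101')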

Find an optimal learning rate

The learning rate is one of the most important hyperparameters in model training. Here, we explore a range of learning rates to guide us in choosing the best one. We will use the lr_find() method to find an optimum learning rate at which we can train a robust model.

lr = model.lr_find()
<Figure size 432x288 with 1 Axes>
3.630780547701014e-05

Fit the model

To train the model, we use the fit() method. To start, we will train our model for 80 epochs. An epoch defines how many times the model is exposed to the entire training set. We have passed three parameters to the fit() method:

  • epochs: Number of cycles of training on the data.
  • lr: Learning rate to be used for training the model.
  • wd: Weight decay to be used.
model.fit(epochs=80, lr=lr, wd=0.1)
epoch  train_loss  valid_loss  time
0      0.883735    0.902166    56:25
1      0.814753    0.846428    58:18
2      0.780305    0.805268    57:48
3      0.732431    0.769769    53:34
4      0.692594    0.739427    47:35
5      0.656296    0.697303    57:13
6      0.605197    0.665202    43:11
7      0.609203    0.635087    43:09
8      0.582312    0.608184    43:13
9      0.569569    0.597905    43:17
10     0.542924    0.572707    43:30
11     0.544635    0.547287    54:32
12     0.520006    0.548388    55:43
13     0.500081    0.536543    57:37
14     0.511551    0.526891    57:28
15     0.492752    0.524570    58:10
16     0.475508    0.520521    58:07
17     0.495050    0.509737    57:58
18     0.479058    0.500814    58:06
19     0.472825    0.501527    58:05
20     0.458631    0.484302    58:17
21     0.478841    0.504961    52:08
22     0.456699    0.472860    49:15
23     0.439140    0.475529    57:53
24     0.431907    0.480451    57:59
25     0.441682    0.469926    58:14
26     0.432539    0.471899    58:36
27     0.420269    0.463864    58:15
28     0.428666    0.456262    58:31
29     0.405511    0.469793    58:14
30     0.418783    0.450741    59:26
31     0.400144    0.456605    58:34
32     0.427380    0.449641    56:08
33     0.404402    0.448000    47:59
34     0.403671    0.447599    57:55
35     0.388911    0.448730    58:26
36     0.415838    0.440210    58:16
37     0.402994    0.440201    58:21
38     0.374895    0.433149    58:23
39     0.385291    0.434547    58:14
40     0.390463    0.432285    58:18
41     0.370466    0.427367    58:03
42     0.395430    0.445681    58:21
43     0.367967    0.429725    58:06
44     0.372946    0.426311    47:39
45     0.376778    0.428242    53:26
46     0.378003    0.422538    57:56
47     0.392606    0.425642    58:30
48     0.374920    0.412655    57:56
49     0.381698    0.415867    58:11
50     0.366353    0.416311    57:48
51     0.377435    0.407092    58:35
52     0.370788    0.410003    58:43
53     0.393365    0.410419    58:21
54     0.355197    0.406449    58:09
55     0.359474    0.405332    54:09
56     0.357601    0.404893    48:12
57     0.366775    0.400052    58:24
58     0.358649    0.398841    58:31
59     0.344561    0.398987    59:00
60     0.361493    0.401714    58:18
61     0.352309    0.390297    58:27
62     0.342283    0.395422    58:41
63     0.341592    0.392883    59:12
64     0.361798    0.392384    58:28
65     0.350822    0.390853    58:33
66     0.348730    0.383924    58:35
67     0.342895    0.383599    48:18
68     0.341520    0.384532    44:22
69     0.339722    0.385112    43:54
70     0.341795    0.384323    43:48
71     0.349706    0.383976    43:54
72     0.326755    0.381668    43:45
73     0.325312    0.381942    43:54
74     0.335486    0.381422    43:43
75     0.345488    0.381084    43:56
76     0.343328    0.381468    43:54
77     0.326794    0.381019    44:05
78     0.333449    0.380597    44:00
79     0.338054    0.380587    43:51

As you can see, both the losses (valid_loss and train_loss) started at a higher value and ended at a lower value, which tells us that our model has learned well. Let us do an accuracy assessment to validate our observation.

Accuracy Assessment

We can compute the average precision score for the model we just trained in order to assess its accuracy. The average precision is computed on the validation set for each class. We can compute it by calling model.average_precision_score(), which takes the following parameters:

  • detect_thresh: The probability above which a detection will be considered for computing average precision.
  • iou_thresh: The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
  • mean: If False, returns class-wise average precision otherwise returns mean average precision.
model.average_precision_score(detect_thresh=0.3, iou_thresh=0.3, mean=False)
100.00% [270/270 03:14<00:00]
{'1': 0.9417281930692071}
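
Since this model has a single class, the mean average precision is the same value; for multi-class models, passing mean=True returns one averaged number instead of the class-wise dictionary:

model.average_precision_score(detect_thresh=0.3, iou_thresh=0.3, mean=True)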

The model has an average precision score of 0.94, which indicates that the model has learned well. Let us now see its results on the validation set.

Visualize results in validation set

The code below will pick a few random samples and show us the ground truth and the respective model predictions side by side. This allows us to validate the results of our model in the notebook itself. Once satisfied, we can save the model and use it further in our workflow. The model.show_results() method can be used to display the detected shipwrecks. Each detection is visualized as a mask by default.

model.show_results(rows=5, thresh=0.5)
<Figure size 720x1800 with 10 Axes>

Save the model

We will now save the model that we just trained as a 'Deep Learning Package', or '.dlpk' format. Deep Learning Package is the standard format used to deploy deep learning models on the ArcGIS platform.

We will use the save() method to save the model; by default, it will be saved to a folder named 'models' inside our training data folder.

model.save('Shipwrecks_80e')

The saved model can be downloaded from here for inferencing purposes.
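
A saved model can also be loaded back into arcgis.learn with the MaskRCNN.from_model() method for further training or in-notebook inference. The sketch below assumes a hypothetical local path to the saved model's .emd file.

# Sketch: load the saved model back (the path is hypothetical and depends
# on where the model was saved or downloaded).
wrecks_model = MaskRCNN.from_model(r'models\Shipwrecks_80e\Shipwrecks_80e.emd', data)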

Model inference

The saved model can be used to detect shipwreck masks using the Detect Objects Using Deep Learning tool available in ArcGIS Pro or ArcGIS Image Server. For this sample, we will use the bathymetry data processed with the shaded relief raster function template to detect shipwrecks.

arcpy.ia.DetectObjectsUsingDeepLearning(in_raster="shaded_Relief_CopyRaster",
                                        out_detected_objects=r"\\ShipwrecksDetectObjects_80e",
                                        in_model_definition=r"\\models\Shipwrecks_80e\Shipwrecks_80e.emd",
                                        model_arguments="padding 56;batch_size 4;threshold 0.3;return_bboxes False",
                                        run_nms="NMS",
                                        confidence_score_field="Confidence",
                                        class_value_field="Class",
                                        max_overlap_ratio=0,
                                        processing_mode="PROCESS_AS_MOSAICKED_IMAGE")

The output of the model is a layer of detected shipwrecks, which is shown below:

A subset of detected shipwrecks

To view the above results in a web map, click here.

Conclusion

This notebook showcased how instance segmentation models like MaskRCNN can be used to automatically detect shipwrecks using bathymetry data. It also showcased how custom transformations, in addition to the standard ones already applied, can be added while preparing the data, based on the data at hand, in order to achieve better performance.
