- URL: https://<rasteranalysistools-url>/TrainDeepLearningModel
- Methods: GET
- Version Introduced: 10.8
Description
The TrainDeepLearningModel task is used to train a deep learning model using the output from the ExportTrainingDataForDeepLearning operation. It generates a deep learning model package (*.dlpk) and adds it to an enterprise portal. You can also use this task to write the deep learning model package to a file share data store location.
New at 11.2
Cloud store and cloud raster store support was added for the input and output parameters. Portal item URLs are also supported as input for the pretrained_model parameter.
Request parameters
Parameter | Details |
---|---|
in_folder (Required) | The input location of the training sample data. It can be the path of the output location in the file share data store, file share raster store, or cloud raster store, or a shared file system path. The training sample data folder must be the output from the ExportTrainingDataForDeepLearning operation. |
out_name (Required) | The output location for the trained deep learning model package (.dlpk). |
model_type (Required) | The model type to use for training the deep learning model. This parameter supports model types for image translation, object classification, object detection, object tracking, panoptic segmentation, and pixel classification. |
arguments (Optional) | Additional deep learning parameters and arguments for experiments and refinement, such as a confidence threshold for adjusting sensitivity. The names of the arguments are populated from reading the Python module, and the available arguments depend on the model_type value. All model types support the chip_size argument, which is the chip size of the tiles in the training samples. Syntax: a JSON object of argument name-value pairs. |
batch_size (Optional) | The number of training samples to be processed for training at one time. If the server has a powerful GPU, this number can be increased to 16, 36, 64, and so on. |
max_epochs (Optional) | The maximum number of epochs for training the model. One epoch means the whole training dataset is passed forward and backward through the deep neural network once. |
learning_rate (Optional) | The rate at which the weights are updated during training. It is a small positive value in the range between 0.0 and 1.0. If the learning rate is set to 0, the optimal learning rate is extracted from the learning curve during the training process. |
backbone_model (Optional) | Specifies the preconfigured neural network to be used as an architecture for training the new model. See the Backbone model values section below for the supported values. |
validation_percentage (Optional) | The percentage of training sample data that will be used for validating the model. |
pretrained_model (Optional) | The pretrained model to be used for fine-tuning the new model. It is a .dlpk portal item or a portal item URL. |
stop_training (Optional) | Specifies whether early stopping will be implemented. If true, model training stops when the model is no longer improving, regardless of the max_epochs value. The default is true. Values: true or false |
overwrite_model (Optional) | Specifies whether an existing deep learning model package (.dlpk) portal item with the same name will be overwritten. If false and an item with the same name already exists, the task fails. Values: true or false |
context (Optional) | Environment settings that affect task operation, such as the processing extent, cell size, processor type, and parallel processing factor. |
freeze_Model (Optional) | Specifies whether the backbone layers in the pretrained model will be frozen so that the weights and biases in the backbone layers remain unaltered. If true, the predefined weights and biases of the backbone model are not altered during training. The default is true. Values: true or false |
f | The response format. The default response format is html. Values: html or json |
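As a sketch of how these parameters fit together, the following Python snippet assembles a request payload following the table above. The paths, the model type value, and the chip_size argument value are illustrative assumptions, not values taken from this page.

```python
import json
from urllib.parse import urlencode

# Hypothetical model arguments; the available names depend on model_type.
# chip_size is noted above as supported by all model types.
arguments = {"chip_size": "224"}

params = {
    "in_folder": "/fileShares/trainingSamples/landcover",  # hypothetical path
    "out_name": "landcoverModel",                          # hypothetical name
    "model_type": "UNET",          # assumed pixel classification model type
    "arguments": json.dumps(arguments),  # JSON-valued parameters are sent as strings
    "batch_size": 16,
    "max_epochs": 20,
    "learning_rate": 0,            # 0 extracts the optimal rate during training
    "backbone_model": "RESNET34",  # the default backbone
    "validation_percentage": 10,
    "f": "json",
}

# URL-encoded form of the payload, ready to send to the task endpoint
query = urlencode(params)
```

Note that the `arguments` value is itself a JSON string nested inside the form-encoded request.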
Backbone model values
The accepted preconfigured neural network values that can be submitted with the backbone_model parameter are described below.
Value | Description |
---|---|
DARKNET53 | The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images and is 53 layers deep. |
DENSENET121 | The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 121 layers deep. Unlike RESNET, which combines layers using summation, DenseNet combines layers using concatenation. |
DENSENET161 | The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 161 layers deep. Unlike RESNET, which combines layers using summation, DenseNet combines layers using concatenation. |
DENSENET169 | The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 169 layers deep. Unlike RESNET, which combines layers using summation, DenseNet combines layers using concatenation. |
DENSENET201 | The preconfigured model will be a dense network trained on the ImageNet dataset that contains more than 1 million images and is 201 layers deep. Unlike RESNET, which combines layers using summation, DenseNet combines layers using concatenation. |
MOBILENET_V2 | The preconfigured model will be trained on the ImageNet dataset and is 54 layers deep. It is geared toward edge-device computing, since it uses less memory. |
RESNET18 | The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 18 layers deep. |
RESNET34 | The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 34 layers deep. This is the default. |
RESNET50 | The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 50 layers deep. |
RESNET101 | The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 101 layers deep. |
RESNET152 | The preconfigured model will be a residual network trained on the ImageNet dataset that contains more than 1 million images and is 152 layers deep. |
VGG11 | The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 11 layers deep. |
VGG11_BN | The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and has 11 layers. |
VGG13 | The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 13 layers deep. |
VGG13_BN | The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and has 13 layers. |
VGG16 | The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 16 layers deep. |
VGG16_BN | The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and has 16 layers. |
VGG19 | The preconfigured model will be a convolutional neural network trained on the ImageNet dataset that contains more than 1 million images to classify images into 1,000 object categories and is 19 layers deep. |
VGG19_BN | The preconfigured model is based on the VGG network but with batch normalization, which normalizes each layer in the network. It was trained on the ImageNet dataset and has 19 layers. |
Example usage
The following is a sample request URL for TrainDeepLearningModel:
https://services.myserver.com/arcgis/rest/services/System/RasterAnalysisTools/GPServer/TrainDeepLearningModel
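Jobs for geoprocessing services of this kind are typically started by POSTing the task parameters to the task's submitJob endpoint. The following Python sketch builds such a request using only the standard library; the parameter values are hypothetical and token-based authentication is omitted.

```python
from urllib import parse, request

def build_submit_request(task_url: str, params: dict) -> request.Request:
    # Standard geoprocessing-service pattern: start the job by POSTing the
    # form-encoded parameters to <task_url>/submitJob, requesting JSON back.
    data = parse.urlencode({**params, "f": "json"}).encode("utf-8")
    return request.Request(f"{task_url}/submitJob", data=data, method="POST")

# Hypothetical parameter values for illustration only
req = build_submit_request(
    "https://services.myserver.com/arcgis/rest/services/System/"
    "RasterAnalysisTools/GPServer/TrainDeepLearningModel",
    {
        "in_folder": "/fileShares/trainingSamples/landcover",
        "out_name": "landcoverModel",
        "model_type": "UNET",
    },
)
# Sending the request (not executed here) would return the job JSON:
# with request.urlopen(req) as resp:
#     job = json.load(resp)  # {"jobId": "...", "jobStatus": "..."}
```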
Response
When you submit a request, the task assigns a unique job ID for the transaction.
Syntax:
{ "jobId": "<unique job identifier>", "jobStatus": "<job status>" }
After the initial request is submitted, you can use the jobId to periodically check the status of the job and messages, as described in Check job status. Once the job has successfully completed, use the jobId to retrieve the results. To track the status, you can make a request of the following form:
https://<rasterAnalysisTools-url>/TrainDeepLearningModel/jobs/<jobId>
When the status of the job request is esriJobSucceeded, you can access the results of the analysis by making a request of the following form:
https://<rasterAnalysisTools-url>/TrainDeepLearningModel/jobs/<jobId>/results/out_item
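The status-checking step can be sketched as a small polling helper that repeatedly fetches the job JSON until a terminal status such as esriJobSucceeded is reached. The status-fetching callable is injected so the loop stays independent of any particular HTTP client; the terminal status names follow the standard ArcGIS job status values.

```python
import time

# Standard ArcGIS geoprocessing job statuses that end the polling loop
TERMINAL_STATUSES = {
    "esriJobSucceeded",
    "esriJobFailed",
    "esriJobCancelled",
    "esriJobTimedOut",
}

def wait_for_job(fetch_status, interval=5.0, max_polls=120):
    # fetch_status is any callable that GETs jobs/<jobId>?f=json and
    # returns the parsed JSON; it is injected to keep this testable offline.
    for _ in range(max_polls):
        info = fetch_status()
        if info.get("jobStatus") in TERMINAL_STATUSES:
            return info
        time.sleep(interval)
    raise TimeoutError("job did not reach a terminal status in time")
```

Once the returned status is esriJobSucceeded, the client can request `results/out_item` as shown above.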
JSON Response example
The response returns the .dlpk portal item, which has title, type, filename, file, id, and folderId properties.
{
"title": "dlpk_name",
"type": "Deep Learning Package",
"multipart": true,
"tags": "imagery",
"typeKeywords": "Deep Learning, Raster",
"filename": "dlpk_name",
"file": "\\\\servername\\rasterstore\\mytrainedmodel.dlpk",
"id": "f121390b85ef419790479fc75b493efd",
"folderId": "dfwerfbd3ec25584d0d8f4"
}
However, if a data store path is specified as the output location, the output will be the data store location:
{
"paramName": "out_item",
"dataType": "GPString",
"value": {"uri": "/fileShares/yourFileShareFolderName/trainedModel/trainedModel.dlpk"}
}
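Since the out_item result can take either of the two shapes shown above, a client may want a small helper that returns the .dlpk location in both cases. This is a minimal sketch, assuming only the two response shapes documented here.

```python
def extract_model_location(out_item: dict) -> str:
    # Handles both result shapes shown above: a GPString result whose
    # "value" holds a data store URI, or a portal item with a "file" path.
    value = out_item.get("value")
    if isinstance(value, dict) and "uri" in value:
        return value["uri"]          # data store location
    if "file" in out_item:
        return out_item["file"]      # portal item file path
    raise KeyError("unrecognized out_item result shape")
```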