arcgis.learn module

Functions for calling the Deep Learning Tools.

Data Preparation Methods

export_training_data

arcgis.learn.export_training_data(input_raster, input_class_data=None, chip_format=None, tile_size=None, stride_size=None, metadata_format=None, classvalue_field=None, buffer_radius=None, output_location=None, context=None, input_mask_polygons=None, rotation_angle=0, reference_system='MAP_SPACE', process_all_raster_items=False, blacken_around_feature=False, fix_chip_size=True, *, gis=None, future=False, **kwargs)

Generates training sample image chips from input imagery using labeled vector data or classified images. The output of this service tool is the data store path where the image chips, labels, and metadata files will be stored.

Argument

Description

input_raster

Required. Raster layer that needs to be exported for training.

input_class_data

Labeled data, either a feature layer or image layer. Vector inputs should follow a training sample format as generated by the ArcGIS Pro Training Sample Manager. Raster inputs should follow a classified raster format as generated by the Classify Raster tool.

chip_format

Optional string. The raster format for the image chip outputs.

  • TIFF: TIFF format

  • PNG: PNG format

  • JPEG: JPEG format

  • MRF: MRF (Meta Raster Format)

tile_size

Optional dictionary. The size of the image chips.

Example: {“x”: 256, “y”: 256}

stride_size

Optional dictionary. The distance to move in the X and Y when creating the next image chip. When stride is equal to the tile size, there will be no overlap. When stride is equal to half of the tile size, there will be 50% overlap.

Example: {“x”: 128, “y”: 128}

metadata_format

Optional string. The format of the output metadata labels. The available options are KITTI rectangles, PASCAL VOC rectangles, Classified Tiles (a class map), RCNN Masks, Labeled Tiles, Multi-labeled Tiles, and Export Tiles. If your input training sample data is a feature class layer, such as a building layer or a standard classification training sample file, use the KITTI or PASCAL VOC rectangles option. The output metadata is a .txt or .xml file containing the training sample data contained in the minimum bounding rectangle. The name of the metadata file matches the input source image name. If your input training sample data is a class map, use Classified Tiles as your output metadata format option.

  • KITTI_rectangles: The metadata follows the same format as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Object Detection Evaluation dataset. The KITTI dataset is a vision benchmark suite. This is the default. The label files are plain text files. All values, both numerical and string, are separated by spaces, and each row corresponds to one object. This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.

  • PASCAL_VOC_rectangles: The metadata follows the same format as the Pattern Analysis, Statistical Modeling and Computational Learning, Visual Object Classes (PASCAL VOC) dataset. The PASCAL VOC dataset is a standardized image dataset for object class recognition. The label files are XML files and contain information about the image name, class value, and bounding box(es). This format can be used with FasterRCNN, RetinaNet, SingleShotDetector and YOLOv3 models.

  • Classified_Tiles: This option will output one classified image chip per input image chip, with no additional metadata for each image chip. Only the statistics output has more information on the classes, such as class names, class values, and output statistics. This format can be used with BDCNEdgeDetector, DeepLab, HEDEdgeDetector, MultiTaskRoadExtractor, PSPNetClassifier and UnetClassifier models.

  • RCNN_Masks: This option will output image chips that have a mask on the areas where the sample exists. The model generates bounding boxes and segmentation masks for each instance of an object in the image. This format can be used with MaskRCNN model.

  • Labeled_Tiles: This option will label each output tile with a specific class. This format is used for image classification. This format can be used with FeatureClassifier model.

  • Multi-labeled Tiles: Each output tile will be labeled with one or more classes. For example, a tile may be labeled agriculture and also cloudy. This format is used for object classification. This format can be used with FeatureClassifier model.

  • Export Tiles: The output will be image chips with no label. This format is used for image enhancement techniques such as Super Resolution and Change Detection. This format can be used with ChangeDetector, CycleGAN, Pix2Pix and SuperResolution models.

classvalue_field

Optional string. Specifies the field that contains the class values. If no field is specified, the system will look for a ‘value’ or ‘classvalue’ field. If the feature does not contain a class field, the system will presume all records belong to one class.

buffer_radius

Optional integer. A buffer radius for point feature classes, used to define the training sample area.

output_location

This is the output location for training sample data. It can be the server data store path or a shared file system path.

Example:

Server datastore path - any of:

/fileShares/deeplearning/rooftoptrainingsamples

/rasterStores/rasterstorename/rooftoptrainingsamples

/cloudStores/cloudstorename/rooftoptrainingsamples

File share path - \\servername\deeplearning\rooftoptrainingsamples

context

Optional dictionary. Context contains additional settings that affect task execution. Dictionary can contain value for following keys:

  • exportAllTiles - Choose whether to export all image chips or only those that overlap the labeled data.

    • True - Export all the image chips, including those that do not overlap labeled data.

    • False - Export only the image chips that overlap the labeled data. This is the default.

  • startIndex - Allows you to set the start index for the sequence of image chips. This lets you append more image chips to an existing sequence. The default value is 0.

  • cellSize - Sets the cell size to be used by the function.

  • extent - Sets the processing extent to be used by the function.

Setting the context parameter overrides the cellSize and extent values set through the arcgis.env module for this function.

Example: {"exportAllTiles": False, "startIndex": 0}

input_mask_polygons

Optional feature layer. The feature layer that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.

rotation_angle

Optional float. The rotation angle that will be used to generate additional image chips.

An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.

reference_system

Optional string. Specifies the type of reference system to be used to interpret the input image. The reference system specified should match the reference system used to train the deep learning model.

  • MAP_SPACE : The input image is in a map-based coordinate system. This is the default.

  • IMAGE_SPACE : The input image is in image space, viewed from the direction of the sensor that captured the image, and rotated such that the tops of buildings and trees point upward in the image.

  • PIXEL_SPACE : The input image is in image space, with no rotation and no distortion.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

blacken_around_feature

Optional bool. Specifies whether to blacken the pixels around each object or feature in each image tile. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.

  • False : Pixels surrounding objects or features will not be blackened. This is the default.

  • True : Pixels surrounding objects or features will be blackened.

fix_chip_size

Optional bool. Specifies whether to crop the exported tiles such that they are all the same size. This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.

  • True : Exported tiles will be the same size and will center on the feature. This is the default.

  • False : Exported tiles will be cropped such that the bounding geometry surrounds only the feature in the tile.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

Output string containing the location of the exported training data
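Example (a minimal sketch of calling export_training_data against an Enterprise GIS; the item IDs, the classvalue field name, and the output path are placeholders, not values from this reference):

from arcgis.gis import GIS
from arcgis.learn import export_training_data

gis = GIS("home")

# placeholder items: an imagery layer and a labeled feature layer
naip_raster = gis.content.get("<imagery-layer-item-id>").layers[0]
labels = gis.content.get("<training-samples-item-id>").layers[0]

out = export_training_data(
    input_raster=naip_raster,
    input_class_data=labels,
    chip_format="TIFF",
    tile_size={"x": 256, "y": 256},
    stride_size={"x": 128, "y": 128},          # 50% overlap between chips
    metadata_format="PASCAL_VOC_rectangles",
    classvalue_field="classvalue",             # assumed field name
    output_location="/rasterStores/rasterstorename/rooftoptrainingsamples",
    context={"exportAllTiles": False, "startIndex": 0},
    gis=gis,
)
print(out)   # data store path of the exported chips, labels and metadata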

export_point_dataset

arcgis.learn.export_point_dataset(data_path, output_path, block_size=50.0, max_points=8192, extra_features=[], **kwargs)

Exports LAS files into h5 blocks.

Argument

Description

data_path

Required string. Folder containing two folders with las files.

Folder structure:
train/

*.las

val/

*.las

output_path

Required string. Path where the exported files will be written. This directory should either be empty or not yet exist.

block_size

Optional float. Size of the h5 block file. The unit of this parameter is the same as that of the dataset’s coordinate system. Default: 50.0 units. The default value assumes that the dataset’s coordinate system is in metric units.

max_points

Optional integer. Maximum number of points to be included in each h5 block file. Default: 8192 points.

extra_features

Optional list of tuples. Extra features to read from the LAS files. Each tuple has three elements: the feature name, its maximum value, and its minimum value. For example, to consider extra features like intensity or number of returns while training, set extra_features=[(‘intensity’, 5000, 0), (‘num_returns’, 5, 0)]. The default behavior has changed from v1.8.0. Default: [].
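Example (a minimal sketch of export_point_dataset; the folder paths are placeholders and the folder layout follows the data_path description above):

from arcgis.learn import export_point_dataset

export_point_dataset(
    data_path=r"C:\lidar\dataset",            # contains train/ and val/ with *.las files
    output_path=r"C:\lidar\exported_blocks",  # empty or new directory
    block_size=50.0,                          # in dataset units
    max_points=8192,
    extra_features=[("intensity", 5000, 0), ("num_returns", 5, 0)],
)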

prepare_data

arcgis.learn.prepare_data(path, class_mapping=None, chip_size=224, val_split_pct=0.1, batch_size=64, transforms=None, collate_fn=<function _bb_pad_collate>, seed=42, dataset_type=None, resize_to=None, working_dir=None, **kwargs)

Prepares a data object from training samples exported by the Export Training Data tool in ArcGIS Pro or Image Server, or from training samples in the supported dataset formats. This data object consists of training and validation data sets with the specified transformations, chip size, batch size, split percentage, etc.

  • For object detection, use the PASCAL_VOC_rectangles or KITTI_rectangles format.

  • For feature categorization, use the Labeled_Tiles or ImageNet format.

  • For pixel classification, use the Classified_Tiles format.

  • For entity extraction from text, use the IOB, BILUO or ner_json format.

Argument

Description

path

Required string. Path to the data directory. Provide a list of paths for multi-folder training. Note: a list of paths is currently supported for the following dataset types: Classified_Tiles, Labeled_Tiles, MultiLabel_Tiles, PASCAL_VOC_rectangles, RCNN_Masks.

class_mapping

Optional dictionary. Mapping from id to its string label. For dataset_type=IOB, BILUO or ner_json:

Provide the address field as the class mapping in the following format: class_mapping={‘address_tag’: ‘address_field’}. The field defined as ‘address_tag’ will be treated as a location. In cases where the trained model extracts multiple locations from a single document, that document will be replicated for each location.

chip_size

Optional integer, default 224. Size of the image to train the model. Images are cropped to the specified chip_size. If the image size is less than chip_size, the image size is used as chip_size. Not supported for superres and siammask.

val_split_pct

Optional float. Percentage of training data to keep as validation.

batch_size

Optional integer. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors). Batch size is required to be greater than 1. For dataset_type = ‘ObjectTracking’, optimal batch_size value is 64.

transforms

Optional tuple. Fast.ai transforms for data augmentation of the training and validation datasets respectively (we have set good defaults that work well for satellite imagery). If transforms is set to False, no transformation will take place and the chip_size parameter will have no effect. If the dataset_type is ‘PointCloud’, use the Transform3d class from arcgis.learn.

collate_fn

Optional function. Passed to PyTorch to collate data into batches (the default usually works).

seed

Optional integer. Random seed for reproducible train-validation split.

dataset_type

Optional string. The prepare_data function will infer the dataset_type on its own if the path contains a map.txt file. If it does not, pass one of ‘PASCAL_VOC_rectangles’, ‘KITTI_rectangles’, ‘RCNN_Masks’, ‘Classified_Tiles’, ‘Labeled_Tiles’, ‘MultiLabeled_Tiles’, ‘Imagenet’, ‘PointCloud’, ‘ImageCaptioning’, ‘ChangeDetection’, ‘superres’, ‘CycleGAN’ or ‘Pix2Pix’. This parameter is mandatory for data that is not exported by ArcGIS Pro / Enterprise, which includes ‘PointCloud’, ‘ImageCaptioning’, ‘ChangeDetection’, ‘CycleGAN’, ‘Pix2Pix’ and ‘ObjectTracking’. It is also mandatory while preparing data for the ‘EntityRecognizer’ model; accepted data formats for this model are ‘ner_json’, ‘BIO’ and ‘LBIOU’.

resize_to

Optional integer. Resize the images to a given size. Works only for “PASCAL_VOC_rectangles” and “superres”. First resizes the image to the given size and then crops images of size equal to chip_size. Note: Keep chip_size < resize_to

working_dir

Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.

Keyword Arguments

Argument

Description

imagery_type

Optional string. Type of imagery used to export the training data, valid values are:

  • ‘naip’

  • ‘sentinel2’

  • ‘landsat8’

  • ‘ms’ - any other type of imagery

bands

Optional list. Bands of the imagery used to export training data. For example [‘r’, ‘g’, ‘b’, ‘nir’, ‘u’] where ‘nir’ is near infrared band and ‘u’ is a miscellaneous band.

rgb_bands

Optional list. Indices of red, green and blue bands in the imagery used to export the training data. for example: [2, 1, 0]

extract_bands

Optional list. Indices of bands to be used for training the model, same as in the imagery used to export the training data. for example: [3, 1, 0] where we will not be using the band at index 2 to train our model.

norm_pct

Optional float. Percentage of training data to be used for calculating imagery statistics for normalizing the data. Default is 0.3 (30%) of data.

downsample_factor

Optional integer. Factor to downsample the images for image SuperResolution. for example: if value is 2 and image size 256x256, it will create label images of size 128x128. Default is 4

encoding

Optional string. Applicable only when dataset_type=IOB, BILUO or ner_json: The encoding to read the csv/json file. Default is ‘UTF-8’

min_points

Optional int. Filters out blocks based on a minimum number of points. For example, set min_points=1000 to filter out blocks with fewer than 1000 points. Applicable only for dataset_type=’PointCloud’.

classes_of_interest

Optional list. Classes of interest used to filter blocks. For example, if the dataset contains classes [1, 3, 5, 7] but we are mainly interested in 1 and 3, set classes_of_interest=[1, 3]. Only the blocks that contain class 1 or 3 will be considered for training; the rest of the blocks will be filtered out. If remapping of the remaining classes is required, set background_classcode to some value. Applicable only for dataset_type=’PointCloud’.

extra_features

Optional list of strings. Specifies which extra features to use when training PointCNN. By default, only x, y and z are considered for training, irrespective of which features were exported. Set this to a subset of [‘intensity’, ‘numberOfReturns’, ‘returnNumber’, ‘red’, ‘green’, ‘blue’, ‘nearInfrared’]. For data exported from export_point_dataset, set this to [‘intensity’, ‘num_returns’, ‘return_num’, ‘red’, ‘green’, ‘blue’, ‘nir’].

remap_classes

Optional dictionary {int: int}. Mapping from class values to user-defined values. For example, if the dataset contains classes [1, 3, 5, 7] and we want to map class 5 to class 3, set remap_classes={5: 3}. During training, class 5 will then also be treated as class 3. Applicable only for dataset_type=’PointCloud’.

background_classcode

Optional int. Default None. If defined, all classes other than the classes_of_interest will be remapped to the background_classcode value. Only applicable when classes_of_interest is specified. Applicable only for dataset_type=’PointCloud’.

Returns

data object
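Example (a minimal sketch of prepare_data for chips exported in the PASCAL_VOC_rectangles format; the path is a placeholder):

from arcgis.learn import prepare_data

data = prepare_data(
    path=r"C:\training_data\rooftops",
    dataset_type="PASCAL_VOC_rectangles",   # optional if a map.txt file is present
    chip_size=224,
    batch_size=16,
    val_split_pct=0.1,
)
data.show_batch()   # visually inspect a few training chips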

prepare_tabulardata

arcgis.learn.prepare_tabulardata(input_features=None, variable_predict=None, explanatory_variables=None, explanatory_rasters=None, date_field=None, distance_features=None, preprocessors=None, val_split_pct=0.1, seed=42, batch_size=64, index_field=None, working_dir=None)

Prepares a tabular data object from input_features and optionally rasters.

Argument

Description

input_features

Optional Feature Layer Object or spatially enabled dataframe. This contains features denoting the value of the dependent variable. Leave empty for using rasters with MLModel.

variable_predict

Optional string denoting the field name of the variable to predict. Keep None for unsupervised training using MLModel.

explanatory_variables

Optional list containing field names from input_features. By default the field type is continuous. To override the field type to categorical, pass a 2-sized tuple in the list containing:

  1. field to be taken as input from the input_features.

  2. True/False denoting Categorical/Continuous variable.

For example:

[“Field_1”, (“Field_2”, True)]

Here Field_1 is treated as continuous and Field_2 as categorical.

explanatory_rasters

Optional list containing Raster objects. By default the rasters are continuous. To mark a raster categorical, pass a 2-sized tuple containing:

  1. Raster object.

  2. True/False denoting Categorical/Continuous variable.

For example:

[raster_1, (raster_2, True)]

Here raster_1 is treated as continuous and raster_2 as categorical.

date_field

Optional field_name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into Year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.

distance_features

Optional list of Feature Layer objects. Distance is calculated from features in these layers to features in input_features. Nearest distance to each feature is added in the prepared data. Field names in the prepared data added are “NEAR_DIST_1”, “NEAR_DIST_2” etc.

preprocessors

For Fastai: optional list of transforms. For Scikit-learn, either: 1. supply a column transformer object, or 2. supply a list of tuples, for example: [(‘Col_1’, ‘Col_2’, Transform1()), (‘Col_3’, Transform2())]. Categorical data is encoded by default. If nothing is specified, default transforms are applied to fill missing values and normalize categorical data. For rasters, use raster.name for the first band, raster.name_1 for the 2nd band, raster.name_2 for the 3rd, and so on.

val_split_pct

Optional float. Percentage of training data to keep as validation. By default 10% data is kept for validation.

seed

Optional integer. Random seed for reproducible train-validation split. Default value is 42.

batch_size

Optional integer. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors). Default value is 64.

index_field

Optional string. Field Name in the input features which will be used as index field for the data. Used for Time Series, to visualize values on the x-axis.

working_dir

Optional string. Sets the default path to be used as a prefix for saving trained models and checkpoints.

Returns

TabularData object
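Example (a minimal sketch of prepare_tabulardata; the item ID and field names are placeholders, and the 2-sized tuple marks ‘soil_type’ as categorical as described above):

from arcgis.gis import GIS
from arcgis.learn import prepare_tabulardata

gis = GIS("home")
wells = gis.content.get("<feature-layer-item-id>").layers[0]

tabular_data = prepare_tabulardata(
    input_features=wells,
    variable_predict="yield",                              # placeholder field name
    explanatory_variables=["depth", ("soil_type", True)],  # (field, categorical)
    val_split_pct=0.1,
    batch_size=64,
)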

prepare_textdata

arcgis.learn.prepare_textdata(path, task, text_columns, label_columns, train_file='train.csv', valid_file=None, val_split_pct=0.1, seed=42, batch_size=8, process_labels=False, remove_html_tags=False, remove_urls=False, working_dir=None)

Prepares a text data object from the files present in the data folder.

Returns

TextData object
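Example (a minimal sketch of prepare_textdata; the folder path, the column names, and the ‘classification’ task value are assumptions for illustration):

from arcgis.learn import prepare_textdata

text_data = prepare_textdata(
    path=r"C:\text_data",        # folder containing train.csv
    task="classification",       # assumed task value
    text_columns="review",
    label_columns="sentiment",
    batch_size=8,
    val_split_pct=0.1,
)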

Object Classification Models

FeatureClassifier

class arcgis.learn.FeatureClassifier(data, backbone=None, pretrained_path=None, mixup=False, oversample=False, backend='pytorch', *args, **kwargs)

Creates an image classifier to classify the area occupied by a geographical feature based on the imagery it overlaps with.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional torchvision model. Backbone CNN model to be used for creating the base of the FeatureClassifier, which is resnet34 by default.

pretrained_path

Optional string. Path where pre-trained model is saved.

mixup

Optional boolean. If set to True, it creates new training images by randomly mixing training set images.

The default is set to False.

oversample

Optional boolean. If set to True, it oversamples unbalanced classes of the dataset during training. Not supported with MultiLabel dataset.

backend

Optional string. Controls the backend framework to be used for this model, which is ‘pytorch’ by default.

Valid options are ‘pytorch’ and ‘tensorflow’.

Returns

FeatureClassifier Object
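Example (a minimal sketch of creating and training a FeatureClassifier on chips exported in the Labeled_Tiles format; the path and model name are placeholders):

from arcgis.learn import prepare_data, FeatureClassifier

data = prepare_data(r"C:\training_data\building_damage",
                    dataset_type="Labeled_Tiles", batch_size=16)
model = FeatureClassifier(data)
model.lr_find()                     # pick a learning rate from the plot
model.fit(10, lr=0.001)
model.save("damage_classifier_10e")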

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

categorize_features(feature_layer, raster=None, class_value_field='class_val', class_name_field='prediction', confidence_field='confidence', cell_size=1, coordinate_system=None, predict_function=None, batch_size=64, overwrite=False)

Deprecated since version 1.7.1: Please use arcgis.learn.classify_objects() instead

Categorizes each feature by classifying its attachments or an image of its geographical area (using the provided Imagery Layer) and updates the feature layer with the prediction results in the output_label_field. Deprecated, please use arcgis.learn.classify_objects() instead.

Argument

Description

feature_layer

Required. Public Feature Layer or path of local feature class for classification with read, write, edit permissions.

raster

Optional. Imagery layer or path of local raster to be used for exporting image chips. (Requires arcpy)

class_value_field

Required string. Output field to be added in the layer, containing class value of predictions.

class_name_field

Required string. Output field to be added in the layer, containing class name of predictions.

confidence_field

Optional string. Output column name to be added in the layer which contains the confidence score.

cell_size

Optional float. Cell size to be used for exporting the image chips.

coordinate_system

Optional. Cartographic Coordinate System to be used for exporting the image chips.

predict_function

Optional function. Used to compute the final prediction result when a feature has more than one attachment. The predict_function takes as input a list of tuples, where the first element of each tuple is the predicted class and the second element is the confidence score. The function should return the final tuple classifying the feature along with its confidence.

batch_size

Optional integer. The number of images or tiles to process in a single batch.

The default value is 64.

overwrite

Optional boolean. If set to True the output fields will be overwritten by new values.

The default value is False.

Returns

Boolean : True if operation is successful, False otherwise

classify_features(feature_layer, labeled_tiles_directory, input_label_field, output_label_field, confidence_field=None, predict_function=None)

Classifies the exported images and updates the feature layer with the prediction results in the output_label_field. Works with RGB images only.

Argument

Description

feature_layer

Required. Feature Layer for classification.

labeled_tiles_directory

Required. Folder structure containing images and labels folder. The chips should have been generated using the export training data tool in the Labeled Tiles format, and the labels should contain the OBJECTIDs of the features to be classified.

input_label_field

Required. The value field name used to create the labeled tiles. This field should contain the OBJECTIDs of the features to be classified. This field is not used in the case of attachments.

output_label_field

Required. Output column name to be added in the layer which contains predictions.

confidence_field

Optional. Output column name to be added in the layer which contains the confidence score.

predict_function

Optional. Used to compute the final prediction result when a feature has more than one attachment. The predict_function takes as input a list of tuples, where the first element of each tuple is the predicted class and the second element is the confidence score. The function should return the final tuple classifying the feature along with its confidence.

Returns

Boolean : True if the operation is successful, False otherwise

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
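Example (a minimal sketch of fit() with early stopping and checkpointing on a monitored metric; ‘accuracy’ is only illustrative - pick a value from model.available_metrics):

model.fit(
    epochs=25,
    lr=slice(1e-4, 1e-3),      # discriminative learning rates across layer groups
    early_stopping=True,
    checkpoint=True,
    monitor="accuracy",        # assumed metric name
)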

classmethod from_model(emd_path, data=None)

Creates a Feature classifier from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

FeatureClassifier Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_confusion_matrix(**kwargs)

Plots a confusion matrix of the model predictions to evaluate accuracy. kwargs: ‘thresh’ - confidence score threshold for multilabel predictions; defaults to 0.5.

plot_hard_examples(num_examples)

Plots the hard examples with their heatmaps.

Argument

Description

num_examples

Number of hard examples to plot.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path)

Runs prediction on an Image. Works with RGB images only.

Argument

Description

img_path

Required. Path to the image file to make the predictions on.

Returns

prediction label and confidence
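Example (a minimal sketch of loading a saved FeatureClassifier from its EMD/DLPK and classifying a single RGB image; both paths are placeholders):

from arcgis.learn import FeatureClassifier

model = FeatureClassifier.from_model(r"C:\models\damage_classifier\damage_classifier.emd")
result = model.predict(r"C:\images\house_001.jpg")
print(result)   # prediction label and confidence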

predict_folder_and_create_layer(folder, feature_layer_name, gis=None, prediction_field='predict', confidence_field='confidence')

Predicts on images present in the given folder and creates a feature layer. The images stored in the folder contain GPS information as part of EXIF metadata. Works with RGB images only.

Argument

Description

folder

Required String. Folder containing images to inference on.

feature_layer_name

Required String. The name of the feature layer used to publish.

gis

Optional GIS Object, the GIS on which this tool runs. If not specified, the active GIS is used.

prediction_field

Optional String. The field name to use to add predictions.

confidence_field

Optional String. The field name to use to add confidence.

Returns

FeatureCollection Object
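Example (a minimal sketch of predicting on a folder of geotagged photos and publishing the results as a feature layer; the folder and layer name are placeholders, and model is assumed to be a trained FeatureClassifier):

fc = model.predict_folder_and_create_layer(
    folder=r"C:\field_photos",                     # images with GPS EXIF metadata
    feature_layer_name="classified_field_photos",
    prediction_field="predict",
    confidence_field="confidence",
)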

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the ‘torch_scripts’ directory (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: overwrite (boolean) - if True, it will overwrite the item on ArcGIS Online/Enterprise. Default: False.
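Example (a minimal sketch of saving and publishing a trained model as a DLPK item; the model name is a placeholder and model is assumed to be a trained FeatureClassifier):

from arcgis.gis import GIS

gis = GIS("home")
model.save(
    "damage_classifier_v1",
    publish=True,
    gis=gis,
    overwrite=True,   # kwargs: replace an existing item of the same name
)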

show_results(rows=5, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Object Detection Models

FasterRCNN

class arcgis.learn.FasterRCNN(data, backbone='resnet50', pretrained_path=None, **kwargs)

Model architecture from https://arxiv.org/abs/1506.01497. Creates a FasterRCNN object detection model, based on https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the FasterRCNN, which is resnet50 by default. Compatible backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

pretrained_path

Optional string. Path where pre-trained model is saved.

kwargs

Argument

Description

rpn_pre_nms_top_n_train

Optional int. Number of proposals to keep before applying NMS during training. Default: 2000

rpn_pre_nms_top_n_test

Optional int. Number of proposals to keep before applying NMS during testing. Default: 1000

rpn_post_nms_top_n_train

Optional int. Number of proposals to keep after applying NMS during training. Default: 2000

rpn_post_nms_top_n_test

Optional int. Number of proposals to keep after applying NMS during testing. Default: 1000

rpn_nms_thresh

Optional float. NMS threshold used for postprocessing the RPN proposals. Default: 0.7

rpn_fg_iou_thresh

Optional float. Minimum IoU between the anchor and the GT box so that they can be considered as positive during training of the RPN. Default: 0.7

rpn_bg_iou_thresh

Optional float. Maximum IoU between the anchor and the GT box so that they can be considered as negative during training of the RPN. Default: 0.3

rpn_batch_size_per_image

Optional int. Number of anchors that are sampled during training of the RPN for computing the loss. Default: 256

rpn_positive_fraction

Optional float. Proportion of positive anchors in a mini-batch during training of the RPN. Default: 0.5

box_score_thresh

Optional float. During inference, only return proposals with a classification score greater than box_score_thresh Default: 0.05

box_nms_thresh

Optional float. NMS threshold for the prediction head. Used during inference. Default: 0.5

box_detections_per_img

Optional int. Maximum number of detections per image, for all classes. Default: 100

box_fg_iou_thresh

Optional float. Minimum IoU between the proposals and the GT box so that they can be considered as positive during training of the classification head. Default: 0.5

box_bg_iou_thresh

Optional float. Maximum IoU between the proposals and the GT box so that they can be considered as negative during training of the classification head. Default: 0.5

box_batch_size_per_image

Optional int. Number of proposals that are sampled during training of the classification head. Default: 512

box_positive_fraction

Optional float. Proportion of positive proposals in a mini-batch during training of the classification head. Default: 0.25

Returns

FasterRCNN Object
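Example (a minimal sketch of creating a FasterRCNN detector and overriding two of the keyword arguments documented above; the training data path is a placeholder):

from arcgis.learn import prepare_data, FasterRCNN

data = prepare_data(r"C:\training_data\cars", dataset_type="PASCAL_VOC_rectangles")
model = FasterRCNN(
    data,
    backbone="resnet50",
    box_score_thresh=0.1,   # keep lower-confidence proposals at inference
    box_nms_thresh=0.4,
)
model.fit(10)
print(model.average_precision_score(mean=True))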

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.2, iou_thresh=0.1, mean=False, show_progress=True)

Computes average precision on the validation set for each class.

Argument

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False returns class-wise average precision otherwise returns mean average precision.

Returns

dict if mean is False otherwise float

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a FasterRCNN object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

FasterRCNN Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path, threshold=0.5, nms_overlap=0.1, return_scores=False, visualize=False, resize=False)

Runs prediction on an Image.

Argument

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns

Returns a tuple with predictions, labels and optionally confidence scores if return_scores=True. The predicted bounding boxes are returned as a list of lists containing the xmin, ymin, width and height of each predicted object in each image. The labels are returned as a list of class values and the confidence scores are returned as a list of floats indicating the confidence of each prediction.
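Example (a minimal sketch of single-image inference with a trained FasterRCNN; the image path is a placeholder and model is assumed to be the detector trained above):

boxes, labels, scores = model.predict(
    r"C:\images\scene_001.tif",
    threshold=0.5,
    nms_overlap=0.1,
    return_scores=True,
    resize=True,
)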

predict_video(input_video_path, metadata_file, threshold=0.5, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.

Argument

Description

input_video_path

Required. Path to the video file to make the predictions on.

metadata_file

Required. Path to the metadata csv file where the predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

track

Optional bool. Set this parameter as True to enable object tracking.

visualize

Optional boolean. If True a video is saved with prediction results.

output_file_path

Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking: assignment_iou_thrd is the IoU threshold used for assigning detections to trackers, vanish_frames is the number of frames an object must be absent before it is considered vanished, and detect_frames is the number of frames an object must be detected before it is tracked.

visual_options

Optional dictionary. Set different parameters for visualization: show_scores (boolean) - show scores on predictions; show_labels (boolean) - show labels on predictions; thickness (integer) - thickness level of the box; fontface (integer) - font face value from OpenCV; color (tuple) - a (B, G, R) tuple with values between 0-255.

resize

Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
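Example (a minimal sketch of predict_video with object tracking enabled; both file paths are placeholders, and the CSV is where the VMTI predictions will be appended):

model.predict_video(
    input_video_path=r"C:\videos\traffic.mp4",
    metadata_file=r"C:\videos\traffic.csv",
    threshold=0.5,
    track=True,
    visualize=True,
)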

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the ‘torch_scripts’ directory (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: overwrite (boolean) - if True, it will overwrite the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, thresh=0.5, nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

RetinaNet

class arcgis.learn.RetinaNet(data, scales=None, ratios=None, backbone=None, pretrained_path=None, *args, **kwargs)

Creates a RetinaNet Object Detector with the specified zoom scales and aspect ratios. Based on the Fast.ai notebook at https://github.com/fastai/fastai_dev/blob/master/dev_nb/102a_coco.ipynb

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

scales

Optional list of float values. Zoom scales of anchor boxes.

ratios

Optional list of float values. Aspect ratios of anchor boxes.

backbone

Optional function. Backbone CNN model to be used for creating the base of the RetinaNet, which is resnet50 by default. Compatible backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

RetinaNet Object
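Example (a minimal sketch of a RetinaNet detector with custom anchor zoom scales and aspect ratios; data is assumed to be a databunch returned by prepare_data on exported object-detection chips):

from arcgis.learn import RetinaNet

model = RetinaNet(
    data,
    scales=[0.5, 1.0, 2.0],
    ratios=[0.5, 1.0, 2.0],
    backbone="resnet50",
)
model.lr_find()
model.fit(20, early_stopping=True)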

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.5, iou_thresh=0.1, mean=False, show_progress=True)

Computes average precision on the validation set for each class.

Argument

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False returns class-wise average precision otherwise returns mean average precision.

Returns

dict if mean is False otherwise float

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a RetinaNet Object Detector from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

RetinaNet Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path, threshold=0.5, nms_overlap=0.1, return_scores=True, visualize=False, resize=False)

Predicts and displays the results of a trained model on a single image.

Argument

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image

predict_video(input_video_path, metadata_file, threshold=0.5, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.

Argument

Description

input_video_path

Required. Path to the video file to make the predictions on.

metadata_file

Required. Path to the metadata csv file where the predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

track

Optional bool. Set this parameter as True to enable object tracking.

visualize

Optional boolean. If True a video is saved with prediction results.

output_file_path

Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.

tracker_options

Optional dictionary. Set different parameters for object tracking: assignment_iou_thrd is the IoU threshold used for assigning detections to trackers, vanish_frames is the number of frames an object must be absent before it is considered vanished, and detect_frames is the number of frames an object must be detected before it is tracked.

visual_options

Optional dictionary. Set different parameters for visualization: show_scores (boolean) - show scores on predictions; show_labels (boolean) - show labels on predictions; thickness (integer) - thickness level of the box; fontface (integer) - font face value from OpenCV; color (tuple) - a (B, G, R) tuple with values between 0-255.

resize

Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the ‘torch_scripts’ directory (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: overwrite (boolean) - if True, it will overwrite the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, thresh=0.5, nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Argument

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

YOLOv3

class arcgis.learn.YOLOv3(data=None, pretrained_path=None, **kwargs)

Creates a YOLOv3 object detector.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

YOLOv3 Object
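Example (a minimal sketch of a YOLOv3 detector; data is assumed to be a databunch returned by prepare_data, and note the lower default detection threshold of 0.1 used by this model’s predict and average_precision_score methods):

from arcgis.learn import YOLOv3

model = YOLOv3(data)
model.fit(30)
model.average_precision_score(detect_thresh=0.1, mean=True)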

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.1, iou_thresh=0.1, mean=False, show_progress=True)

Computes average precision on the validation set for each class.

Argument

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision. Defaults to 0.1. To be modified according to the dataset and training.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False returns class-wise average precision otherwise returns mean average precision.

Returns

dict if mean is False otherwise float

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.
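Example (a short training sketch using the parameters above; the epoch count and monitored metric are illustrative):

    # inspect the learning-rate curve, then train; lr=None lets an optimal rate be deduced automatically
    yolo.lr_find()
    yolo.fit(epochs=20,
             early_stopping=True,   # stop if the monitored metric stalls for 5 epochs
             checkpoint=True,       # keep the best model based on `monitor`
             monitor='valid_loss')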

classmethod from_model(emd_path, data=None)

Creates a YOLOv3 Object Detector from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

YOLOv3 Object
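Example (loading a trained model for inferencing; the DLPK path is an illustrative assumption):

    from arcgis.learn import YOLOv3

    # data=None is sufficient when the model is only used for inference
    yolo = YOLOv3.from_model(r'C:\models\tree_detector\tree_detector.dlpk')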

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path, threshold=0.1, nms_overlap=0.1, return_scores=True, visualize=False, resize=False)

Predicts and displays the results of a trained model on a single image.

Argument

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image
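Example (single-image inference; the image path and threshold value are illustrative):

    # returns the predicted boxes, plus scores when return_scores=True
    results = yolo.predict(r'C:\images\scene.tif',
                           threshold=0.3, nms_overlap=0.1,
                           return_scores=True, visualize=True)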

predict_video(input_video_path, metadata_file, threshold=0.1, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.

Argument

Description

input_video_path

Required. Path to the video file to make the predictions on.

metadata_file

Required. Path to the metadata csv file where the predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

track

Optional bool. Set this parameter as True to enable object tracking.

visualize

Optional boolean. If True a video is saved with prediction results.

output_file_path

Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.

tracker_options

Optional dictionary. Parameters for object tracking. assignment_iou_thrd: the IOU threshold used for assigning detections to trackers; vanish_frames: the number of frames an object must be absent before it is considered vanished; detect_frames: the number of frames an object must be detected before it is tracked.

visual_options

Optional dictionary. Parameters for visualization. show_scores (boolean): show scores on predictions; show_labels (boolean): show labels on predictions; thickness (integer): thickness level of the box; fontface (integer): an OpenCV font face value; color (tuple): (B, G, R) values between 0 and 255.

resize

Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
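Example (video inference with tracking; all paths and option values are illustrative):

    yolo.predict_video(input_video_path=r'C:\videos\survey.mp4',
                       metadata_file=r'C:\videos\survey.csv',   # VMTI rows are appended here
                       threshold=0.3,
                       track=True,        # enable object tracking
                       visualize=True,    # also save an annotated video
                       tracker_options={'assignment_iou_thrd': 0.3,
                                        'vanish_frames': 40,
                                        'detect_frames': 10})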

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.
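Example (saving and publishing; the model name and GIS connection are illustrative assumptions):

    from arcgis.gis import GIS

    gis = GIS('home')   # an authenticated connection (illustrative)
    # writes the EMD/DLPK to disk, publishes the DLPK as an item, overwriting an existing item
    yolo.save('tree_detector', publish=True, gis=gis, overwrite=True)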

show_results(rows=5, thresh=0.1, nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

Argument

Description

rows

Optional int. Number of rows of results to be displayed.

thresh

Optional float. The probability above which a detection will be considered valid. Defaults to 0.1. To be modified according to the dataset and training.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

property supported_backbones

Supported backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

SingleShotDetector

class arcgis.learn.SingleShotDetector(data, grids=None, zooms=[1.0], ratios=[[1.0, 1.0]], backbone=None, drop=0.3, bias=- 4.0, focal_loss=False, pretrained_path=None, location_loss_factor=None, ssd_version=2, backend='pytorch', *args, **kwargs)

Creates a Single Shot Detector with the specified grid sizes, zoom scales and aspect ratios. Based on Fast.ai MOOC Version2 Lesson 9.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

grids

Required list. Grid sizes used for creating anchor boxes.

zooms

Optional list. Zooms of anchor boxes.

ratios

Optional list of tuples. Aspect ratios of anchor boxes.

backbone

Optional function. Backbone CNN model to be used for creating the base of the SingleShotDetector, which is resnet34 by default.

drop

Optional float. Dropout probability. Increase it to reduce overfitting.

bias

Optional float. Bias for SSD head.

focal_loss

Optional boolean. Uses Focal Loss if True.

pretrained_path

Optional string. Path where pre-trained model is saved.

location_loss_factor

Optional float. Sets the weight of the bounding box loss. Should be strictly between 0 and 1. The default is None, which gives equal weight to both location and classification loss. This factor adjusts the model's focus on the location of the bounding box.

ssd_version

Optional int within [1, 2]. Use ssd_version=1 for arcgis v1.6.2 or earlier.

backend

Optional string. Controls the backend framework to be used for this model, which is ‘pytorch’ by default.

Valid options are 'pytorch' and 'tensorflow'.

Returns

SingleShotDetector Object
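Example (a minimal sketch of creating the detector; the data path and anchor-box settings are illustrative assumptions):

    from arcgis.learn import prepare_data, SingleShotDetector

    data = prepare_data(r'C:\data\training_chips', batch_size=16)   # illustrative path
    # a 4x4 anchor grid with two zoom levels and two aspect ratios
    ssd = SingleShotDetector(data, grids=[4],
                             zooms=[1.0, 2.0],
                             ratios=[[1.0, 1.0], [1.0, 0.5]],
                             focal_loss=True)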

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.2, iou_thresh=0.1, mean=False, show_progress=True)

Computes average precision on the validation set for each class.

Argument

Description

detect_thresh

Optional float. The probability above which a detection will be considered for computing average precision.

iou_thresh

Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.

mean

Optional bool. If False returns class-wise average precision otherwise returns mean average precision.

Returns

dict if mean is False otherwise float
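Example (the thresholds and class names shown are illustrative):

    # class-wise average precision on the validation set, e.g. {'building': 0.87, 'pool': 0.74}
    ap = ssd.average_precision_score(detect_thresh=0.2, iou_thresh=0.1)

    # single mean-average-precision value
    map_score = ssd.average_precision_score(mean=True)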

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_emd(data, emd_path)

Creates a Single Shot Detector from an Esri Model Definition (EMD) file.

Argument

Description

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

emd_path

Required string. Path to Esri Model Definition file.

Returns

SingleShotDetector Object

classmethod from_model(emd_path, data=None)

Creates a Single Shot Detector from an Esri Model Definition (EMD) file.

Note: Only supported for Pytorch models.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

SingleShotDetector Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(image_path, threshold=0.5, nms_overlap=0.1, return_scores=False, visualize=False, resize=False)

Runs prediction on an image.

Argument

Description

image_path

Required. Path to the image file to make the predictions on.

threshold

Optional float. The probability above which a detection will be considered valid.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

return_scores

Optional boolean. Will return the probability scores of the bounding box predictions if True.

visualize

Optional boolean. Displays the image with predicted bounding boxes if True.

resize

Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).

Returns

‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image

predict_video(input_video_path, metadata_file, threshold=0.5, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)

Runs prediction on a video and appends the output VMTI predictions in the metadata file.

Argument

Description

input_video_path

Required. Path to the video file to make the predictions on.

metadata_file

Required. Path to the metadata csv file where the predictions will be saved in VMTI format.

threshold

Optional float. The probability above which a detection will be considered.

nms_overlap

Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.

track

Optional bool. Set this parameter as True to enable object tracking.

visualize

Optional boolean. If True a video is saved with prediction results.

output_file_path

Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.

multiplex

Optional boolean. Runs Multiplex using the VMTI detections.

multiplex_file_path

Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.

tracker_options

Optional dictionary. Parameters for object tracking. assignment_iou_thrd: the IOU threshold used for assigning detections to trackers; vanish_frames: the number of frames an object must be absent before it is considered vanished; detect_frames: the number of frames an object must be detected before it is tracked.

visual_options

Optional dictionary. Parameters for visualization. show_scores (boolean): show scores on predictions; show_labels (boolean): show labels on predictions; thickness (integer): thickness level of the box; fontface (integer): an OpenCV font face value; color (tuple): (B, G, R) values between 0 and 255.

resize

Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.

By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, thresh=0.5, nms_overlap=0.1)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MaskRCNN

class arcgis.learn.MaskRCNN(data, backbone=None, pretrained_path=None, pointrend=False, *args, **kwargs)

Model architecture from https://arxiv.org/abs/1703.06870. Creates a MaskRCNN Instance segmentation model, based on https://github.com/pytorch/vision/blob/master/torchvision/models/detection/mask_rcnn.py.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the MaskRCNN, which is resnet50 by default. Compatible backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

pretrained_path

Optional string. Path where pre-trained model is saved.

pointrend

Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.

kwargs

Argument

Description

rpn_pre_nms_top_n_train

Optional int. Number of proposals to keep before applying NMS during training. Default: 2000

rpn_pre_nms_top_n_test

Optional int. Number of proposals to keep before applying NMS during testing. Default: 1000

rpn_post_nms_top_n_train

Optional int. Number of proposals to keep after applying NMS during training. Default: 2000

rpn_post_nms_top_n_test

Optional int. Number of proposals to keep after applying NMS during testing. Default: 1000

rpn_nms_thresh

Optional float. NMS threshold used for postprocessing the RPN proposals. Default: 0.7

rpn_fg_iou_thresh

Optional float. Minimum IoU between the anchor and the GT box so that they can be considered as positive during training of the RPN. Default: 0.7

rpn_bg_iou_thresh

Optional float. Maximum IoU between the anchor and the GT box so that they can be considered as negative during training of the RPN. Default: 0.3

rpn_batch_size_per_image

Optional int. Number of anchors that are sampled during training of the RPN for computing the loss. Default: 256

rpn_positive_fraction

Optional float. Proportion of positive anchors in a mini-batch during training of the RPN. Default: 0.5

box_score_thresh

Optional float. During inference, only return proposals with a classification score greater than box_score_thresh. Default: 0.05

box_nms_thresh

Optional float. NMS threshold for the prediction head. Used during inference. Default: 0.5

box_detections_per_img

Optional int. Maximum number of detections per image, for all classes. Default: 100

box_fg_iou_thresh

Optional float. Minimum IoU between the proposals and the GT box so that they can be considered as positive during training of the classification head. Default: 0.5

box_bg_iou_thresh

Optional float. Maximum IoU between the proposals and the GT box so that they can be considered as negative during training of the classification head. Default: 0.5

box_batch_size_per_image

Optional int. Number of proposals that are sampled during training of the classification head. Default: 512

box_positive_fraction

Optional float. Proportion of positive proposals in a mini-batch during training of the classification head. Default: 0.25

Returns

MaskRCNN Object
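Example (a minimal sketch; the data path, backbone choice and kwargs values are illustrative assumptions):

    from arcgis.learn import prepare_data, MaskRCNN

    data = prepare_data(r'C:\data\rcnn_masks_chips', batch_size=8)   # illustrative path
    # fewer proposals at test time and a higher score cut-off than the defaults
    model = MaskRCNN(data, backbone='resnet50', pointrend=True,
                     rpn_pre_nms_top_n_test=500,
                     box_score_thresh=0.3)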

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

average_precision_score(detect_thresh=0.5, iou_thresh=0.5, mean=False, show_progress=True)

Computes average precision on the validation set for each class.

Returns

dict if mean is False otherwise float

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None, **kwargs)

Creates a MaskRCNN Instance segmentation object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

MaskRCNN Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=4, mode='mask', mask_threshold=0.5, box_threshold=0.7, imsize=5, index=0, alpha=0.5, cmap='tab20', **kwargs)

Displays the results of a trained model on a part of the validation set.

Argument

Description

mode

Required string within ['bbox', 'mask', 'bbox_mask'].
  • bbox - for visualizing only bounding boxes.

  • mask - for visualizing only masks.

  • bbox_mask - for visualizing both masks and bounding boxes.

mask_threshold

Optional float. The probability above which a pixel will be considered mask.

box_threshold

Optional float. The probability above which a detection will be considered valid.

rows

Optional int. Number of rows of results to be displayed.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Pixel Classification Models

UnetClassifier

class arcgis.learn.UnetClassifier(data, backbone=None, pretrained_path=None, backend='pytorch', *args, **kwargs)

Creates a U-Net-like classifier based on the given pretrained encoder.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the UnetClassifier, which is resnet34 by default.

pretrained_path

Optional string. Path where pre-trained model is saved.

backend

Optional string. Controls the backend framework to be used for this model, which is ‘pytorch’ by default.

Valid options are 'pytorch' and 'tensorflow'.

kwargs

Argument

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False

focal_loss

Optional boolean. If True, it will use focal loss Default: False

dice_loss_fraction

Optional float between 0 and 1. If > 0, the model will use a combination of the default (or focal, if focal_loss=True) loss with the specified fraction of dice loss. E.g., for dice_loss_fraction = 0.3, loss = 0.7 * default loss + 0.3 * dice loss. Default: 0

dice_loss_average

Optional string. 'micro': the micro dice coefficient is used for loss calculation. 'macro': the macro dice coefficient is used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

Returns

UnetClassifier Object
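Example (a minimal sketch; the data path, loss settings and ignored class value are illustrative assumptions):

    from arcgis.learn import prepare_data, UnetClassifier

    data = prepare_data(r'C:\data\classified_tiles', batch_size=8)   # illustrative path
    # balance rare classes and blend 30% dice loss into the default loss
    unet = UnetClassifier(data, class_balancing=True,
                          dice_loss_fraction=0.3,
                          ignore_classes=[0])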

accuracy()

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_emd(data, emd_path)

Creates a Unet like classifier from an Esri Model Definition (EMD) file.

Argument

Description

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

emd_path

Required string. Path to Esri Model Definition file.

Returns

UnetClassifier Object

classmethod from_model(emd_path, data=None)

Creates a Unet like classifier from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

UnetClassifier Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

mIOU(mean=False, show_progress=True)

Computes mean IOU on the validation set for each class.

Argument

Description

mean

Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns

dict if mean is False otherwise float

per_class_metrics(ignore_classes=[])

Computes per-class precision, recall and f1-score on the validation set.

Argument

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

Returns per class precision, recall and f1 scores
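Example (continuing the sketch above; the printed class values are illustrative):

    # class-wise IOU, e.g. {'1': 0.72, '2': 0.64}, or one combined value with mean=True
    print(unet.mIOU())
    print(unet.mIOU(mean=True))

    # per-class precision, recall and f1-score on the validation set
    print(unet.per_class_metrics())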

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

PSPNetClassifier

class arcgis.learn.PSPNetClassifier(data, backbone=None, use_unet=True, pyramid_sizes=[1, 2, 3, 6], pretrained_path=None, unet_aux_loss=False, pointrend=False, *args, **kwargs)

Model architecture from https://arxiv.org/abs/1612.01105. Creates a PSPNet Image Segmentation/ Pixel Classification model.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the PSPNetClassifier, which is resnet50 by default. It supports the ResNet, DenseNet, and VGG families.

use_unet

Optional bool. Specify whether to use a U-Net decoder or not. Default: True.

pyramid_sizes

Optional list. The sizes at which the feature map is pooled. Currently set to the best set reported in the paper, i.e. (1, 2, 3, 6).

pretrained

Optional Bool. If True, use the pretrained backbone

pretrained_path

Optional string. Path where pre-trained PSPNet model is saved.

unet_aux_loss

Optional bool. If True, an auxiliary loss will be used for PSUnet. Default: False. This flag is applicable only when use_unet is True.

pointrend

Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.

kwargs

Argument

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False

focal_loss

Optional boolean. If True, it will use focal loss. Default: False

dice_loss_fraction

Optional float between 0 and 1. If > 0, the model will use a combination of the default (or focal, if focal_loss=True) loss with the specified fraction of dice loss. E.g., for dice_loss_fraction = 0.3, loss = 0.7 * default loss + 0.3 * dice loss. Default: 0

dice_loss_average

Optional string. 'micro': the micro dice coefficient is used for loss calculation. 'macro': the macro dice coefficient is used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

keep_dilation

Optional boolean. When the PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False

Returns

PSPNetClassifier Object
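Example (a minimal sketch; the data path and keyword values are illustrative assumptions):

    from arcgis.learn import prepare_data, PSPNetClassifier

    data = prepare_data(r'C:\data\classified_tiles', batch_size=8)   # illustrative path
    # plain PSPNet head (no U-Net decoder) with the default pyramid pooling sizes
    psp = PSPNetClassifier(data, use_unet=False, pyramid_sizes=[1, 2, 3, 6])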

accuracy(input=None, target=None, void_code=0, class_mapping=None)

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

freeze()

Freezes the pretrained backbone.

classmethod from_model(emd_path, data=None)

Creates a PSPNet classifier from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

PSPNetClassifier Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

mIOU(mean=False, show_progress=True)

Computes mean IOU on the validation set for each class.

Argument

Description

mean

Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns

dict if mean is False otherwise float

per_class_metrics(ignore_classes=[])

Computes per-class precision, recall and f1-score on the validation set.

Argument

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

Returns per class precision, recall and f1 scores

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

DeepLab

class arcgis.learn.DeepLab(data, backbone=None, pretrained_path=None, pointrend=False, *args, **kwargs)

Model architecture from https://arxiv.org/abs/1706.05587. Creates a DeepLab Image Segmentation/ Pixel Classification model, based on https://github.com/pytorch/vision/tree/master/torchvision/models/segmentation.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the DeepLab, which is resnet101 by default since it is pretrained in torchvision. It supports the ResNet, DenseNet, and VGG families.

pretrained_path

Optional string. Path where pre-trained model is saved.

pointrend

Optional boolean. If True, it will use PointRend architecture on top of the segmentation head. Default: False. PointRend architecture from https://arxiv.org/pdf/1912.08193.pdf.

kwargs

Argument

Description

class_balancing

Optional boolean. If True, it will balance the cross-entropy loss inverse to the frequency of pixels per class. Default: False.

mixup

Optional boolean. If True, it will use mixup augmentation and mixup loss. Default: False

focal_loss

Optional boolean. If True, it will use focal loss. Default: False

dice_loss_fraction

Optional float between 0 and 1. If > 0, the model will use a combination of the default (or focal, if focal_loss=True) loss with the specified fraction of dice loss. E.g., for dice_loss_fraction = 0.3, loss = 0.7 * default loss + 0.3 * dice loss. Default: 0

dice_loss_average

Optional string. 'micro': the micro dice coefficient is used for loss calculation. 'macro': the macro dice coefficient is used for loss calculation. A macro-average computes the metric independently for each class and then takes the average (treating all classes equally), whereas a micro-average aggregates the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect class imbalance (i.e. you may have many more examples of one class than of the other classes). Default: 'micro'

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

keep_dilation

Optional boolean. When the PointRend architecture is used, keep_dilation=True can potentially improve accuracy at the cost of memory consumption. Default: False

Returns

DeepLab Object

accuracy()

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a DeepLab semantic segmentation object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

DeepLab Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

mIOU(mean=False, show_progress=True)

Computes mean IOU on the validation set for each class.

Argument

Description

mean

Optional bool. If False returns class-wise mean IOU, otherwise returns mean iou of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns

dict if mean is False otherwise float

per_class_metrics(ignore_classes=[])

Computes per-class precision, recall and f1-score on the validation set.

Argument

Description

self

segmentation model object -> [PSPNetClassifier | UnetClassifier | DeepLab]

ignore_classes

Optional list. It will contain the list of class values on which model will not incur loss. Default: []

Returns per class precision, recall and f1 scores

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS connection is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters. overwrite (boolean): if True, overwrites the item on ArcGIS Online/Enterprise. Default: False.

show_results(rows=5, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

BDCNEdgeDetector

class arcgis.learn.BDCNEdgeDetector(data, backbone='vgg19', pretrained_path=None)

Model architecture from https://arxiv.org/pdf/1902.10903.pdf. Creates a Bi-Directional Cascade Network for Perceptual Edge Detection model

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the Bi-Directional Cascade Network for Perceptual Edge Detection, which is vgg19 by default. Compatible backbone families: ResNet and VGG.

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

Bi-Directional Cascade Network for Perceptual Edge Detection Object
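Example (a minimal sketch of the edge-detection workflow; the data path and threshold values are illustrative assumptions):

    from arcgis.learn import prepare_data, BDCNEdgeDetector

    data = prepare_data(r'C:\data\classified_tiles', batch_size=4)   # illustrative path
    edge = BDCNEdgeDetector(data, backbone='vgg19')
    edge.fit(10)
    # precision, recall and f1 for edge pixels, allowing a 3-pixel buffer
    print(edge.compute_precision_recall(thresh=0.5, buffer=3))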

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_precision_recall(thresh=0.5, buffer=3, show_progress=True)

Computes precision, recall and f1 score on validation set.

Argument

Description

thresh

Optional float. The probability above which a detection will be considered an edge pixel.

buffer

Optional int. The number of pixels in the neighborhood within which a detection is considered true.

Returns

dict

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Train the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. Value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a Bi-Directional Cascade Network for Perceptual Edge Detection object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

Bi-Directional Cascade Network for Perceptual Edge Detection Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (only supported by SingleShotDetector, currently). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside the 'torch_scripts' folder (only supported by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.
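
Example (illustrative sketch): saving and publishing in one call. The GIS connection, model name and overwrite flag below are placeholders, not prescribed values.

from arcgis.gis import GIS

gis = GIS('home')                  # placeholder: any authenticated GIS connection
model.save('edge_model_v1',        # hypothetical model name
           publish=True,           # also publish the DLPK as an item
           gis=gis,
           overwrite=True)         # overwrite an existing item of the same name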

show_results(rows=5, thresh=0.5, thinning=True, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

HEDEdgeDetector

class arcgis.learn.HEDEdgeDetector(data, backbone='vgg19', pretrained_path=None, **kwargs)

Model architecture from https://arxiv.org/pdf/1504.06375.pdf. Creates a Holistically-Nested Edge Detection model

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the Holistically-Nested Edge Detection, which is vgg19 by default. Compatible backbones: resnet and VGG

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

Holistically-Nested Edge Detection Object
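
Example (illustrative sketch): a rough end-to-end workflow. The chip folder, batch size and hyperparameter values are placeholders.

from arcgis.learn import prepare_data, HEDEdgeDetector

data = prepare_data(r'C:\edge_chips', batch_size=8)   # hypothetical export folder

hed = HEDEdgeDetector(data, backbone='vgg19')
hed.lr_find()                                         # pick a learning rate from the plot
hed.fit(epochs=20, lr=0.001)                          # placeholder values
hed.compute_precision_recall(thresh=0.5, buffer=3)
hed.save('hed_edges_v1')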

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_precision_recall(thresh=0.5, buffer=3, show_progress=True)

Computes precision, recall and f1 score on validation set.

Argument

Description

thresh

Optional float. The probability threshold above which a detection is considered an edge pixel.

buffer

Optional int. Number of pixels in the neighborhood within which a detection is considered a true detection.

Returns

dict

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a Holistically-Nested Edge Detection object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

Holistically-Nested Edge Detection Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=5, thresh=0.5, thinning=True, **kwargs)

Displays the results of a trained model on a part of the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MultiTaskRoadExtractor

class arcgis.learn.MultiTaskRoadExtractor(data, backbone=None, pretrained_path=None, *args, **kwargs)

Creates a Multi-Task Learning model for binary segmentation of roads. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional String. Backbone CNN model to be used for creating the base. If hourglass is chosen as the mtl_model (architecture), then this parameter is ignored, as hourglass uses a special customised architecture. This parameter is used with the linknet model. Default: ‘resnet34’

Supported backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

mtl_model

Optional String. It is used to create model from linknet or hourglass based neural architectures. Supported: ‘linknet’, ‘hourglass’. Default: ‘hourglass’

pretrained_path

Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

kwargs

Argument

Description

gaussian_thresh

Optional float. Sets the Gaussian threshold, which allows setting the required road width. Range: 0.0 to 1.0. Default: 0.76

orient_bin_size

Optional int. Sets the bin size for orientation angles. Default: 20

orient_theta

Optional int. Sets the width of the orientation mask. Default: 8

Returns

MultiTaskRoadExtractor Object
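
Example (illustrative sketch): constructing the model with the keyword arguments described above. The chip folder and parameter values are placeholders.

from arcgis.learn import prepare_data, MultiTaskRoadExtractor

data = prepare_data(r'C:\road_chips', batch_size=4)   # hypothetical export folder

roads = MultiTaskRoadExtractor(data,
                               mtl_model='linknet',   # or 'hourglass' (the default)
                               backbone='resnet34',   # used only with the linknet model
                               gaussian_thresh=0.76,  # controls the assumed road width
                               orient_bin_size=20,
                               orient_theta=8)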

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a Multi-Task Learning model for binary segmentation from a Deep Learning Package(DLPK) or Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

Multi-Task Road Extractor Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

mIOU(mean=False, show_progress=True)

Computes mean IOU on the validation set for each class.

Argument

Description

mean

Optional bool. If False, returns class-wise mean IOU; otherwise, returns the mean IOU of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns

dict if mean is False otherwise float
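
Example (illustrative sketch): assuming roads is the model created above; the returned numbers are purely illustrative.

roads.mIOU()            # class-wise IOU, e.g. {'0': 0.97, '1': 0.58} (illustrative values)
roads.mIOU(mean=True)   # single float averaged over all classes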

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=2, **kwargs)

Shows the ground truth and predictions of model side by side.

kwargs

Argument

Description

rows

Number of rows of data to be displayed. If the batch size is smaller, only as many rows as the batch size will be displayed.

alpha

Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0] Default: 0.6
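
Example (illustrative sketch): assuming roads is the model created above.

roads.show_results(rows=2, alpha=0.4)   # 2 rows of ground truth vs. predictions, lighter overlay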

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ConnectNet

class arcgis.learn.ConnectNet(data, backbone=None, pretrained_path=None, *args, **kwargs)

Creates a ConnectNet model for binary segmentation of linear features. Supports RGB and Multispectral Imagery. Implementation based on https://doi.org/10.1109/CVPR.2019.01063.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional String. Backbone CNN model to be used for creating the base. If hourglass is chosen as the mtl_model (architecture), then this parameter is ignored, as hourglass uses a special customised architecture. This parameter is used with the linknet model. Default: ‘resnet34’. Supported backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

mtl_model

Optional String. It is used to create model from linknet or hourglass based neural architectures. Supported: ‘linknet’, ‘hourglass’. Default: ‘hourglass’

pretrained_path

Optional String. Path where a compatible pre-trained model is saved. Accepts a Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

kwargs

Argument

Description

gaussian_thresh

Optional float. Sets the Gaussian threshold, which allows setting the required width of the linear feature. Range: 0.0 to 1.0. Default: 0.76

orient_bin_size

Optional int. Sets the bin size for orientation angles. Default: 20

orient_theta

Optional int. Sets the width of the orientation mask. Default: 8

Returns

ConnectNet Object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a Multi-Task Learning model for binary segmentation from a Deep Learning Package(DLPK) or Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

Multi-Task Road Extractor Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

mIOU(mean=False, show_progress=True)

Computes mean IOU on the validation set for each class.

Argument

Description

mean

Optional bool. If False, returns class-wise mean IOU; otherwise, returns the mean IOU of all classes combined.

show_progress

Optional bool. Displays the progress bar if True.

Returns

dict if mean is False otherwise float

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=2, **kwargs)

Shows the ground truth and predictions of model side by side.

kwargs

Argument

Description

rows

Number of rows of data to be displayed. If the batch size is smaller, only as many rows as the batch size will be displayed.

alpha

Optional Float. Opacity parameter for label overlay on image. Float [0.0 - 1.0] Default: 0.6

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ChangeDetector

class arcgis.learn.ChangeDetector(data, backbone=None, attention_type='PAM', pretrained_path=None, **kwargs)

Creates a Change Detection model.

A Spatial-Temporal Attention-Based Method and a New Dataset for Remote Sensing Image Change Detection - https://www.mdpi.com/2072-4292/12/10/1662

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the encoder of the ChangeDetector, which is resnet18 by default. It supports the ResNet family of backbones.

attention_type

Optional string. Its value can be either “PAM” (Pyramid Attention Module) or “BAM” (Basic Attention Module). Defaults to “PAM”.

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

ChangeDetector object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a ChangeDetector model from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Optional fastai Databunch. Returned data object from prepare_data function or None for inferencing.

Returns

ChangeDetector Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

precision_recall_score()

Computes precision, recall and f1 score.

predict(before_image, after_image, **kwargs)

Predict on a pair of images.

Argument

Description

before_image

Required string. Path to image from before.

after_image

Required string. Path to image from later.

Kwargs

Argument

Description

crop_predict

Optional boolean. If True, prediction uses a sliding-window strategy. Typically used when the image size is larger than the chip_size the model was trained on. Default False.

visualize

Optional boolean. If True, plots the predictions in the notebook. Default False.

save

Optional boolean. If True, writes the prediction file to disk. Default False.

Returns

PyTorch Tensor of the change mask.
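
Example (illustrative sketch): calling predict, assuming detector is a trained ChangeDetector; the image paths are placeholders.

mask = detector.predict(before_image=r'C:\scenes\tile_2015.tif',
                        after_image=r'C:\scenes\tile_2020.tif',
                        crop_predict=True,   # sliding-window prediction for large images
                        visualize=True,      # plot the prediction in the notebook
                        save=False)
# `mask` is a PyTorch tensor containing the predicted change mask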

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=4, **kwargs)

Displays the results of a trained model on the validation set.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Image Translation Models

CycleGAN

class arcgis.learn.CycleGAN(data, pretrained_path=None, gen_blocks=9, lsgan=True, *args, **kwargs)

Creates a model object which generates images of type A from type B or type B from type A.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

pretrained_path

Optional string. Path where pre-trained model is saved.

gen_blocks

Optional integer. Number of ResNet blocks to use in generator.

lsgan

Optional boolean. If True, Mean Squared Error is used; otherwise, Binary Cross Entropy is used.

Returns

CycleGAN Object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_metrics()

Computes Frechet Inception Distance (FID) on validation set.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a CycleGAN object from an Esri Model Definition (EMD) file.

Argument

Description

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

Returns

CycleGAN Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path, convert_to)

Predicts and displays the image.

Argument

Description

img_path

Required path of an image.

convert_to

‘A’ to generate an image of type ‘A’ from type ‘B’, or ‘B’ to generate an image of type ‘B’ from type ‘A’, where A and B are the domain specifications that were used while training.
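
Example (illustrative sketch): assuming cyclegan is a trained CycleGAN; the image path is a placeholder.

cyclegan.predict(r'C:\images\tile_b.tif', convert_to='A')   # generate a type 'A' image from a type 'B' input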

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Pix2Pix

class arcgis.learn.Pix2Pix(data, pretrained_path=None, *args, **kwargs)

Creates a model object which generates fake images of type B from type A.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

Pix2Pix Object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_metrics(accuracy=True, show_progress=True)

Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a Pix2Pix object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

Pix2Pix Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path)

Predicts and displays the image.

Argument

Description

img_path

Required path of an image.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

SuperResolution

class arcgis.learn.SuperResolution(data, backbone=None, pretrained_path=None, *args, **kwargs)

Creates a model object which increases the resolution and improves the quality of images. Based on Fast.ai MOOC Lesson 7.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the base of the model, which is resnet34 by default. Compatible backbones: ‘resnet18’, ‘resnet34’, ‘resnet50’, ‘resnet101’, ‘resnet152’

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

SuperResolution Object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_metrics(accuracy=True, show_progress=True)

Computes Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on validation set.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_emd(data, emd_path)

Creates a SuperResolution object from an Esri Model Definition (EMD) file.

Argument

Description

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

emd_path

Required string. Path to Esri Model Definition file.

Returns

SuperResolution Object

classmethod from_model(emd_path, data=None)

Creates a SuperResolution object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

SuperResolution Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(img_path, width=None, height=None)

Predicts and displays the image.

Argument

Description

img_path

Required path of an image.

width

Optional int. Width of the predicted output image.

height

Optional int. Height of the predicted output image.
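
Example (illustrative sketch): assuming sr is a trained SuperResolution model; the path and output size are placeholders.

sr.predict(r'C:\images\low_res.tif', width=1024, height=1024)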

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set.

Argument

Description

rows

Optional int. Number of rows of results to be displayed.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

ImageCaptioner

class arcgis.learn.ImageCaptioner(data, backbone=None, pretrained_path=None, **kwargs)

Creates an Image Captioning model.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

backbone

Optional function. Backbone CNN model to be used for creating the encoder of the ImageCaptioner, which is resnet34 by default. It supports the ResNet family of backbones.

pretrained_path

Optional string. Path where pre-trained model is saved.

kwargs

Argument

Description

decoder_params

Optional dictionary. The keys of the dictionary are embed_size, hidden_size, attention_size, teacher_forcing, dropout and pretrained_embeddings.

Default values:

decoder_params={'embed_size':100, 'hidden_size':100, 'attention_size':100, 'teacher_forcing':1, 'dropout':0.1, 'pretrained_emb':False}

Parameter Explanation
  • ‘embed_size’: Size of embedding to be used during training.

  • ‘hidden_size’: Size of hidden layer.

  • ‘attention_size’: Size of intermediate attention layer.

  • ‘teacher_forcing’: Probability of teacher forcing.

  • ‘dropout’: Dropout probability.

  • ‘pretrained_emb’: If true, it will use fasttext embeddings.

Returns

ImageCaptioner Object
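
Example (illustrative sketch): constructing the model with decoder_params. The chip folder and parameter values are placeholders.

from arcgis.learn import prepare_data, ImageCaptioner

data = prepare_data(r'C:\caption_chips', batch_size=4)   # hypothetical export folder

captioner = ImageCaptioner(data,
                           backbone='resnet34',
                           decoder_params={'embed_size': 100,
                                           'hidden_size': 100,
                                           'attention_size': 100,
                                           'teacher_forcing': 1,
                                           'dropout': 0.1,
                                           'pretrained_emb': False})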

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

bleu_score(**kwargs)

Computes bleu score over validation set.

kwargs

Argument

Description

beam_width

Optional int. The size of beam to be used during beam search decoding. Default is 5.

max_len

Optional int. The maximum length of the sentence to be decoded. Default is 20.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

classmethod from_model(emd_path, data=None)

Creates a ImageCaptioner model from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Optional fastai Databunch. Returned data object from prepare_data function or None for inferencing.

Returns

ImageCaptioner Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(path, visualize=True, **kwargs)

Predicts a caption for the image at the given path; if visualize is True, the image is displayed along with the predicted caption.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.

framework

Optional string. Defines the framework of the model (framework conversion is currently supported only by SingleShotDetector). If the framework used is ‘TF-ONNX’, batch_size can be passed as an optional keyword argument. If the framework used is ‘torchscript’, PyTorch (.pt) files will be generated for the model inside a ‘torch_scripts’ directory (currently supported only by SiamMask).

Framework choice: ‘PyTorch’, ‘TF-ONNX’ and ‘torchscript’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional keyword arguments: overwrite (boolean). If True, the existing item on ArcGIS Online/Enterprise will be overwritten. Default False.

show_results(rows=4, **kwargs)

Shows the ground truth and predictions of model side by side.

kwargs

Argument

Description

beam_width

Optional int. The size of beam to be used during beam search decoding. Default is 5.

max_len

Optional int. The maximum length of the sentence to be decoded. Default is 20.

property supported_backbones

Supported torchvision backbones for this model.

property supported_datasets

Supported dataset types for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

3D Models

PointCNN

class arcgis.learn.PointCNN(data, pretrained_path=None, *args, **kwargs)

Model architecture from https://arxiv.org/abs/1801.07791. Creates a Point Cloud classification model.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

pretrained_path

Optional String. Path where pre-trained model is saved.

kwargs

Argument

Description

encoder_params

Optional dictionary. The keys of the dictionary are out_channels, P, K, D and m.

Examples:

{‘out_channels’:[16, 32, 64, 96], ‘P’:[-1, 768, 384, 128], ‘K’:[12, 16, 16, 16], ‘D’:[1, 1, 2, 2], ‘m’:8 }

Length of out_channels, P, K, D should be same. The length denotes the number of layers in encoder.

Parameter Explanation
  • ‘out_channels’: Number of channels produced by each layer,

  • ‘P’: Number of points in each layer,

  • ‘K’: Number of K-nearest neighbor in each layer,

  • ‘D’: Dilation in each layer,

  • ‘m’: Multiplier which is multiplied by each element of out_channel.

dropout

Optional float. This parameter will control overfitting. The range of this parameter is [0,1).

sample_point_num

Optional integer. The number of points that the model will actually process.

Returns

PointCNN Object
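
Example (illustrative sketch): constructing the model with encoder_params. The data path, dataset_type value and parameter values are assumptions for illustration.

from arcgis.learn import prepare_data, PointCNN

# the dataset_type value below is an assumption for illustration
data = prepare_data(r'C:\lidar_blocks', dataset_type='PointCloud', batch_size=2)

pc = PointCNN(data,
              encoder_params={'out_channels': [16, 32, 64, 96],
                              'P': [-1, 768, 384, 128],
                              'K': [12, 16, 16, 16],
                              'D': [1, 1, 2, 2],
                              'm': 8},
              dropout=0.2,              # placeholder value in [0, 1)
              sample_point_num=8192)    # placeholder number of points processed per block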

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

compute_precision_recall()

Computes precision, recall and f1-score on the validation sets.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)

Trains the model for the specified number of epochs using the specified learning rates. The precision, recall and f1 scores shown in the training table are macro-averaged over all classes.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if parameter monitor value stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoint during training. If set to True the best model based on monitor will be saved during training. If set to ‘all’, all checkpoints are saved. If set to False, checkpointing will be off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to ‘True’, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 2.1.

The default value is ‘False’. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to ‘valid_loss’. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the metrics available to set here.

kwargs

Argument

Description

iters_per_epoch

Optional integer. The number of iterations to run during the training phase.

classmethod from_model(emd_path, data=None)

Creates a PointCNN model object from a Deep Learning Package (DLPK) or Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

PointCNN Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict_h5(path, output_path=None, **kwargs)

Predicts and writes the resulting h5 block files to disk. The block size used for training will be used for prediction. The coordinate system of the inferencing data and of the trained model's training data should be the same.

Argument

Description

path

Required string. The path to the folder containing the h5 files to be predicted.

output_path

Optional string. The path to the folder where the resulting h5 block files will be written. Defaults to a results folder inside the input path.

Returns

Path where files are dumped.

predict_las(path, output_path=None, print_metrics=False, **kwargs)

Predicts and writes the resulting las files to disk. The block size used for training will be used for prediction. The coordinate system of the inferencing data and of the trained model's training data should be the same.

Argument

Description

path

Required string. The path to the folder containing the las files to be predicted.

output_path

Optional string. The path to the folder where the resulting las files will be written. Defaults to a results folder inside the input path.

print_metrics

Optional boolean. If True, precision, recall and f1_score are also calculated and reported. Defaults to False.

kwargs

Argument

Description

remap_classes

Optional dictionary {int:int}. Mapping from class values to user defined values. Please query pointcnn._data.classes to get the class values on which the model is trained on. Default is {}.

selective_classify

Optional list of integers. If passed, predict_las will selectively classify only those points belonging to the specified class-codes. Other points in the input point clouds will retain their class-codes. Please query pointcnn._data.classes to get the class values on which the model is trained on. If remap_classes is specified, the new mapped values will be used for classification. Default value is [].

preserve_classes

Optional list of integers. A list of classes from the input data that should be preserved in the predicted output. If a point in the input data belongs to any of the classes mentioned in this list, its class-code won't be updated with the model's predicted class. Example: if preserve_classes=[2,6], the class-code of a point won't be updated with the predicted class if it is 2 or 6. Default: [].

Returns

Path where files are dumped.
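
Example (illustrative sketch): classifying a folder of las files, assuming pc is the trained PointCNN from above. The paths and class codes are placeholders.

out_dir = pc.predict_las(r'C:\lidar\unclassified',
                         output_path=r'C:\lidar\classified',
                         print_metrics=False,
                         remap_classes={1: 2},        # map model class value 1 to output code 2
                         selective_classify=[2, 6],   # only points with these codes are (re)classified
                         preserve_classes=[7])        # points with code 7 keep their class-code
print(out_dir)                                        # folder where the predicted las files were written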

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, the model is stored at the specified path, with the model name as the directory name, and all intermediate directories are created.

framework

Optional string. Defines the framework of the model (TF-ONNX is currently supported only by SingleShotDetector). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside 'torch_scripts' (currently supported only by SiamMask).

Framework choice: 'PyTorch', 'TF-ONNX' and 'torchscript'

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: Boolean overwrite. If True, the item will be overwritten on ArcGIS Online/Enterprise. Default is False.

show_results(rows=2, **kwargs)

Displays the results from your model on the validation set with ground truth on the left and predictions on the right.

Argument

Description

rows

Optional integer. Number of rows to show. Default value is 2; the maximum value is the batch_size passed in prepare_data.

kwargs

Argument

Description

color_mapping

Optional dictionary. Mapping from class value to RGB values. Default value example: {0:[220,220,220], 2:[255,0,0], 6:[0,255,0]}.

mask_class

Optional list of integers. Class values to mask. Use this parameter to display only the classes of interest. Default value is []. Example: if all the classes are [0, 1, 2], set mask_class to [1, 2] to display only class 0. The list of all classes can be accessed from the data.classes attribute, where data is the Databunch object returned by the prepare_data function.

width

Optional integer. Width of the plot. Default value is 750.

height

Optional integer. Height of the plot. Default value is 512.

max_display_point

Optional integer. Maximum number of points to display. Default is 20000. A warning will be raised if the total points to display exceeds this parameter. Setting this parameter will randomly sample the specified number of points and once set, it will be used for future uses.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning. Not implemented for PointCNN as none of the layers are frozen by default.

Transform3d

class arcgis.learn.Transform3d(rotation_range=[0.04363323129985824, 3.141592653589793, 0.04363323129985824, 'u'], scaling_range=[0.05, 0.05, 0.05, 'g'], jitter=0.0)

Creates a 3D transformation that can be used in prepare_data to apply data augmentation to point cloud blocks, each with a 50% probability.

Argument

Description

rotation_range

Optional tuple of length 4. The first three values are rotation ranges, i.e. angles (in radians) for the X, Z and Y axes respectively; the point cloud block is rotated by angles randomly selected from these ranges. The fourth value is the sampling method, where 'u' means uniform and 'g' means gaussian. Intrinsic rotation is applied. Default: [math.pi / 72, math.pi, math.pi / 72, 'u'].

scaling_range

Optional tuple of length 4. The first three values are scaling ranges [0-1] used to scale the points. Keep these values very small, otherwise the point cloud block may get distorted. The fourth value is the sampling method, where 'u' means uniform and 'g' means gaussian. Default: [0.05, 0.05, 0.05, 'g'].

jitter

Optional float. The scale to which randomly jitter the points in the point cloud block. Default: 0.0.

Returns

Transform3d object
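A minimal sketch of pairing Transform3d with prepare_data follows; the folder path is a placeholder and passing the transform through prepare_data's transforms argument is an assumption made for illustration.

import math
from arcgis.learn import prepare_data, Transform3d

# small random rotations about X and Y, full rotation about Z, gaussian scaling, light jitter
tfm = Transform3d(rotation_range=[math.pi / 72, math.pi, math.pi / 72, 'u'],
                  scaling_range=[0.05, 0.05, 0.05, 'g'],
                  jitter=0.005)

# passing the transform through prepare_data's transforms argument is assumed here
data = prepare_data(r"C:\data\point_cloud_blocks",
                    dataset_type="PointCloud",
                    transforms=tfm,
                    batch_size=2)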

Object Tracking Models

SiamMask

class arcgis.learn.SiamMask(data=None, **kwargs)

Creates a SiamMask object.

Argument

Description

data

Optional fastai Databunch. Returned data object from prepare_data function. Default value is None.

Returns

SiamMask Object

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on the monitored metric is saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log is saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

freeze()

Freezes the pretrained backbone.

classmethod from_model(emd_path, data=None)

Creates a SiamMask Object tracker from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

SiamMask Object

init(frame, detections, labels=None, reset=True)

Initializes the position of the object in the frame/Image using detections.

Argument

Description

frame

Required numpy array. frame is used to initialize the object to track.

detections

Required list. A list of bounding boxes to initialize the object.

labels

Optional list. A list of class labels to initialize the object.

Returns

Track list

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

remove(track_ids)

Removes the tracks from the track list using track_ids

Argument

Description

track_ids

Required List. List of track ids to be removed from the track list.

Returns

Updated track list

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, the model is stored at the specified path, with the model name as the directory name, and all intermediate directories are created.

framework

Optional string. Defines the framework of the model (TF-ONNX is currently supported only by SingleShotDetector). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside 'torch_scripts' (currently supported only by SiamMask).

Framework choice: 'PyTorch', 'TF-ONNX' and 'torchscript'

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: Boolean overwrite. If True, the item will be overwritten on ArcGIS Online/Enterprise. Default is False.

show_results(rows=5)

Displays the results of a trained model on a part of the validation set

Argument

Description

rows

Optional int. Number of rows to display.

property supported_backbones

Supported torchvision backbones for this model.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

update(frame)

Tracks the position of the object in the frame/Image

Argument

Description

frame

Required numpy array. frame is used to update the object track.

Returns

Updated track list
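A minimal tracking sketch with a saved SiamMask model follows. OpenCV is used here only to read video frames and is not part of arcgis.learn; the model path, video path and the bounding-box format are placeholders/assumptions.

import cv2
from arcgis.learn import SiamMask

tracker = SiamMask.from_model(r"C:\models\siammask\siammask.emd")

cap = cv2.VideoCapture(r"C:\videos\traffic.mp4")
ok, frame = cap.read()

# initialize with one bounding box (format assumed) and an optional label
tracks = tracker.init(frame, detections=[[150, 200, 60, 40]], labels=["car"])

while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracks = tracker.update(frame)   # returns the updated track list

cap.release()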

ObjectTracker

class arcgis.learn.ObjectTracker(tracker, detector=None, tracker_options={'detect_fail_interval': 5, 'detect_track_failure': True, 'detection_interval': 5, 'detection_threshold': 0.3, 'knn_distance_ratio': 0.75, 'min_obj_size': 10, 'recover_conf_threshold': 0.1, 'recover_iou_threshold': 0.1, 'recover_track': True, 'search_period': 60, 'stab_period': 6, 'status_fail_threshold': 0.6, 'status_history': 60, 'template_history': 25})

Creates ObjectTracker Object.

Argument

Description

tracker

Required. Returned tracker object from from_model API of object tracking models.

detector

Optional. Returned detector object from from_model API of object detection models.

tracker_options

Optional dictionary. A dictionary with keys as parameter names and values as parameter values.

“detection_interval” refers to the interval in frames at which the detector is invoked.

“detection_threshold” refers to the lower threshold for selecting the detections.

“detect_track_failure” refers to the flag which enables/disables the logic to detect whether the object appearance has changed.

“recover_track” refers to the flag which enables/disables track recovery post failure.

“stab_period” refers to the number of frames after which post processing starts.

“detect_fail_interval” refers to the number of frames after which to detect track failure.

“min_obj_size” refers to the size in pixels below which tracking is assumed to have failed.

“template_history” refers to the number of frames before the current frame at which template image is fetched.

“status_history” refers to the number of frames over which status of the track is used to detect track failure.

“status_fail_threshold” refers to the threshold for the ratio between the number of frames for which the object is searched and the total number of frames; track failure is detected once this ratio is crossed.

“search_period” refers to the number of frames for which the object is searched before declaring it lost.

“knn_distance_ratio” refers to the threshold for the ratio of the distances between the template descriptor and the two best-matched detection descriptors, used for filtering the best matches.

“recover_conf_threshold” refers to the minimum confidence value over which recovery logic is enabled.

“recover_iou_threshold” refers to the minimum overlap between template and detection for successful recovery.

Returns

ObjectTracker Object

init(frame, detections=None, labels=None, reset=True)

Initializes tracks based on the detections returned by detector/ manually fed to the function.

Argument

Description

frame

Required numpy array. frame is used to initialize the objects to track.

detections

Optional list. A list of bounding boxes to initialize the tracks.

labels

Optional list. A list of labels corresponding to the detections.

reset

Optional flag. Indicates whether to reset the tracker and remove all existing tracks before initialization.

Returns

list of active track objects

remove(tracks_ids)

Removes the tracks corresponding to track_ids parameter

Argument

Description

tracks_ids

Required list. List of track ids to be removed.

update(frame)

Tracks the position of the object in the frame/Image

Argument

Description

frame

Required numpy array. frame is the current frame to be used to track the objects.

Returns

list of active track objects
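A sketch combining an object detection model with a tracking model follows; model and video paths are placeholders, the default tracker_options are used, and OpenCV is used only for frame access.

import cv2
from arcgis.learn import RetinaNet, SiamMask, ObjectTracker

detector = RetinaNet.from_model(r"C:\models\retinanet\retinanet.emd")
tracker_model = SiamMask.from_model(r"C:\models\siammask\siammask.emd")

# the default tracker_options listed above are used here
tracker = ObjectTracker(tracker_model, detector=detector)

cap = cv2.VideoCapture(r"C:\videos\parking_lot.mp4")
ok, frame = cap.read()
tracks = tracker.init(frame)            # detections come from the detector

while True:
    ok, frame = cap.read()
    if not ok:
        break
    tracks = tracker.update(frame)
    for t in tracks:                    # each Track carries id, label, bbox and mask
        print(t.id, t.label, t.bbox)

cap.release()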

Track

class arcgis.learn.Track(id, label, bbox, mask)

Creates a Track object, used to maintain the state of a track

Argument

Description

id

Required int. ID for each track initialized

label

Required String. label/class name of the track

bbox

Required list. Bounding box of the track

mask

Required numpy array. Mask for the track

Returns

Track Object

Scanned Maps

ScannedMapDigitizer

class arcgis.learn.ScannedMapDigitizer(input_folder, output_folder)

Creates the object for ScannedMapDigitizer class

Argument

Description

input_folder

Path to the folder that contains extracted maps

output_folder

Path to the folder where intermediate results should get generated

classmethod create_mask(color_list, color_delta=60, kernel_size=None, kernel_type='rect', show_result=True)

Generates the binary masked images

Argument

Description

color_list

A list containing different color inputs (r, g, b).

color_delta

A value which defines the range around the threshold value for a specific color used for creating the mask images. Default value is 60

kernel_size

A list of 2 integers corresponding to size of the morphological filter operations closing and opening respectively.

kernel_type

A string value defining the type/shape of the kernel. kernel type can be “rect”, “elliptical” or “cross”. Default value is “rect”.

show_result

A boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod create_template_image(color, color_delta=10, kernel_size=2, show_result=True)

This method generates templates and color masks from scanned maps which are used in the subsequent step of template matching.

Argument

Description

color

r, g, b value representing land color. The color parameter is required for extracting the land region and generating the binary mask

color_delta

A value which defines the range around the threshold value for a specific color used for creating the mask images.

Default value is 10

kernel_size

A list of 2 integers corresponding to size of the morphological filter operations closing and opening respectively.

show_result

A Boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod digitize_image(show_result=True)

This method is the final step in the pipeline that maps the species regions on the search image using the computed transformations. Also, it generates the shapefiles for the species region that can be visualized using ArcGIS Pro and further edited.

Argument

Description

show_result

A Boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod georeference_image(padding_param, show_result=True)

This method estimates the control point pairs by traversing the contours of template image and finding the corresponding matches on the search region ROI image

Argument

Description

padding_param

A tuple that contains x-padding and y-padding at 0th and 1st index respectively

show_result

A Boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod get_search_region_extent()

Getter function for search region extent

classmethod match_template_multiscale(min_scale, max_scale, num_scales, show_result=True)

This method finds the location of the best match of a smaller image (template) in a larger image (search image), assuming it exists in the larger image.

Argument

Description

min_scale

An integer representing the minimum scale at which template matching is performed

max_scale

An integer representing maximum scale at which template matching is performed

num_scales

An integer representing the number of scales at which template matching is performed.

show_result

A Boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod prepare_search_region(search_image, color, extent, image_height, image_width, show_result=True)

This method prepares the search region in which the prepared templates are to be searched.

Argument

Description

color

r, g, b value representing water color

search_image

Path to the bigger image/shapefile.

extent

Extent defines the extreme longitude/latitude of the search region.

image_height

Height of the search region.

image_width

Width of the search region.

show_result

A boolean value. Set to “True” to visualize results and set to “False” otherwise

classmethod set_search_region_extent(extent)

Setter function for the search region extent

Argument

Description

extent

Extent defines the extreme longitude/latitude of the search region.
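A possible end-to-end sketch follows, with the call order inferred from the method descriptions above. The colors, extent format, scales and paths are illustrative assumptions rather than documented values.

from arcgis.learn import ScannedMapDigitizer

digitizer = ScannedMapDigitizer(r"C:\scanned_maps\input", r"C:\scanned_maps\output")

# template and mask from the scanned map (land color as r, g, b)
digitizer.create_template_image(color=[230, 220, 180], color_delta=10,
                                kernel_size=2, show_result=False)

# search region from a reference image/shapefile (water color and extent are placeholders)
digitizer.prepare_search_region(search_image=r"C:\scanned_maps\reference.shp",
                                color=[180, 210, 240],
                                extent={"xmin": -125.0, "ymin": 24.0,
                                        "xmax": -66.0, "ymax": 50.0},
                                image_height=2000, image_width=3000,
                                show_result=False)

digitizer.match_template_multiscale(min_scale=1, max_scale=4, num_scales=10, show_result=False)

# mask the map colors of interest, then georeference and digitize to shapefiles
digitizer.create_mask(color_list=[[200, 60, 60]], color_delta=60,
                      kernel_size=[5, 5], kernel_type="rect", show_result=False)
digitizer.georeference_image(padding_param=(50, 50), show_result=False)
digitizer.digitize_image(show_result=False)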

Feature, Tabular and Timeseries Models

FullyConnectedNetwork

class arcgis.learn.FullyConnectedNetwork(data, layers=None, emb_szs=None, **kwargs)

Creates a FullyConnectedNetwork Object, based on Fast.ai's Tabular Learner.

Argument

Description

data

Required TabularDataObject. Returned data object from prepare_tabulardata function.

layers

Optional list specifying the number of nodes in each layer. Default: [500, 100], i.e. 2 layers with 500 and 100 nodes respectively.

emb_szs

Optional dict of variable names to embedding sizes for categorical variables. If not specified, the sizes are calculated using fastai.

Returns

FullyConnectedNetwork Object
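A minimal sketch follows; the feature layer, field names and the (field, True) categorical-variable convention passed to prepare_tabulardata are placeholders/assumptions.

from arcgis.gis import GIS
from arcgis.learn import prepare_tabulardata, FullyConnectedNetwork

gis = GIS("home")
lyr = gis.content.get("<feature layer item id>").layers[0]

# predict a numeric field from one continuous and one categorical explanatory variable
data = prepare_tabulardata(input_features=lyr,
                           variable_predict="house_price",
                           explanatory_variables=["sqft", ("zone", True)])

fcn = FullyConnectedNetwork(data, layers=[500, 100])
fcn.lr_find()
fcn.fit(epochs=20)
fcn.show_results(rows=5)
print(fcn.score())

# write predictions back out as a hosted feature layer
pred_layer = fcn.predict(input_features=lyr,
                         output_layer_name="house_price_prediction",
                         prediction_type="features",
                         gis=gis)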

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on the monitored metric is saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log is saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a FullyConnectedNetwork Object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_tabulardata function or None for inferencing.

Returns

FullyConnectedNetwork Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(input_features=None, explanatory_rasters=None, datefield=None, distance_features=None, output_layer_name='Prediction Layer', gis=None, prediction_type='features', output_raster_path=None, match_field_names=None)

Predict on data from feature layer, dataframe and or raster data.

Argument

Description

input_features

Optional Feature Layer or spatially enabled dataframe. Required if prediction_type=’features’. Contains features with location and some or all fields required to infer the dependent variable value.

explanatory_rasters

Optional list of Raster Objects. If prediction_type=’raster’, must contain all rasters required to make predictions.

datefield

Optional string. Field name from feature layer that contains the date, time for the input features. Same as prepare_tabulardata().

distance_features

Optional List of Feature Layer objects. These layers are used for calculation of field “NEAR_DIST_1”, “NEAR_DIST_2” etc in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

prediction_type

Optional string. Set 'features' or 'dataframe' to output feature layer or dataframe predictions; in this case the input_features argument is required.

Set 'raster' to output a prediction raster; in this case explanatory_rasters must be specified.

output_raster_path

Optional path. Required when prediction_type=’raster’, saves the output raster to this path.

match_field_names

Optional dictionary. Specify mapping of field names from prediction set to training set. For example:

{

“Field_Name_1”: “Field_1”, “Field_Name_2”: “Field_2”

}

Returns a Feature Layer if prediction_type='features', a dataframe if prediction_type='dataframe'; otherwise creates an output raster.

save(name_or_path, framework='PyTorch', publish=False, gis=None, save_optimizer=False, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Folder path to save the model.

framework

Optional string. Defines the framework of the model (TF-ONNX is currently supported only by SingleShotDetector). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.

Framework choice: ‘PyTorch’ and ‘TF-ONNX’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

kwargs

Optional parameters: Boolean overwrite. If True, the item will be overwritten on ArcGIS Online/Enterprise. Default is False.

score()

Returns the R2 score for regression models and Accuracy for classification models.

show_results(rows=5)

Prints the rows of the dataframe with target and prediction columns.

Argument

Description

rows

Optional Integer. Number of rows to print.

Returns

dataframe

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

MLModel

class arcgis.learn.MLModel(data, model_type, **kwargs)

Creates a machine learning model based on its implementation from scikit-learn. For supervised learning, refer to https://scikit-learn.org/stable/supervised_learning.html#supervised-learning. For unsupervised learning (1. Clustering Models, 2. Gaussian Mixture Models, 3. Novelty and outlier detection), refer to https://scikit-learn.org/stable/unsupervised_learning.html

Argument

Description

data

Required TabularDataObject. Returned data object from prepare_tabulardata function.

model_type

Required string path to the module. For example for SVM:

sklearn.svm.SVR or sklearn.svm.SVC

For tree:

sklearn.tree.DecisionTreeRegressor or sklearn.tree.DecisionTreeClassifier

**kwargs

Arguments specific to the chosen model_type. Refer to the Parameters section at https://scikit-learn.org/stable/supervised_learning.html#supervised-learning

Returns

MLModel Object
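A minimal sketch using one of the scikit-learn estimators mentioned above follows; the feature layer, field names and estimator keyword arguments are placeholders.

from arcgis.gis import GIS
from arcgis.learn import prepare_tabulardata, MLModel

gis = GIS("home")
lyr = gis.content.get("<feature layer item id>").layers[0]

data = prepare_tabulardata(input_features=lyr,
                           variable_predict="house_price",
                           explanatory_variables=["sqft", ("zone", True)])

# model_type is the dotted path to the scikit-learn estimator;
# extra keyword arguments are passed through to that estimator
model = MLModel(data, "sklearn.tree.DecisionTreeRegressor", max_depth=8)
model.fit()
print(model.score())
model.show_results(rows=5)
model.save("decision_tree_house_price")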

decision_function()

Returns the output from scikit-learn's model.decision_function().

property feature_importances_

Returns the output from scikit-learn's model.feature_importances_.

fit()

classmethod from_model(emd_path, data=None)

Creates a MLModel Object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Esri Model Definition file.

data

Required TabularDataObject or None. Returned data object from prepare_tabulardata function or None for inferencing.

Returns

MLModel Object

kneighbors(X=None, n_neighbors=None, return_distance=True)

Returns the output from scikit-learn's model.kneighbors().

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Esri Model Definition(EMD) file.

mahalanobis()

Returns the output from scikit-learn's model.mahalanobis().

predict(input_features=None, explanatory_rasters=None, datefield=None, distance_features=None, output_layer_name='Prediction Layer', gis=None, prediction_type='features', output_raster_path=None, match_field_names=None)

Predict on data from feature layer, dataframe and or raster data.

Argument

Description

input_features

Optional Feature Layer or spatial dataframe. Required if prediction_type=’features’. Contains features with location and some or all fields required to infer the dependent variable value.

explanatory_rasters

Optional list. Required if prediction_type=’raster’. Contains a list of raster objects containing some or all fields required to infer the dependent variable value.

datefield

Optional string. Field name from feature layer that contains the date, time for the input features. Same as prepare_tabulardata().

distance_features

Optional List of Feature Layer objects. These layers are used for calculation of field “NEAR_DIST_1”, “NEAR_DIST_2” etc in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

prediction_type

Optional string. Set 'features' or 'dataframe' to output feature layer or dataframe predictions; in this case the input_features argument is required.

Set 'raster' to output a prediction raster; in this case explanatory_rasters must be specified.

output_raster_path

Optional path. Required when prediction_type=’raster’, saves the output raster to this path.

match_field_names

Optional dictionary. Specify mapping of field names from prediction set to training set. For example:

{

“Field_Name_1”: “Field_1”, “Field_Name_2”: “Field_2”

}

Returns a Feature Layer if prediction_type='features', a dataframe if prediction_type='dataframe'; otherwise creates an output raster.

predict_proba()

Returns the output from scikit-learn's model.predict_proba().

save(name_or_path)

Saves the model and creates an Esri Model Definition. The model is saved using pickle with protocol level 2, which is backward compatible.

Returns a dataframe.

score()

Returns the output from scikit-learn's model.score(): the R2 score in case of regression and Accuracy in case of classification. For KMeans, returns the opposite of the value of X on the K-means objective.

show_results(rows=5)

Shows sample results for the model.

Returns a dataframe.

TimeSeriesModel

class arcgis.learn.TimeSeriesModel(data, seq_len, model_arch='InceptionTime', **kwargs)

Creates a TimeSeriesModel Object, based on Fast.ai's timeseriesAI (https://github.com/timeseriesAI/timeseriesAI).

Argument

Description

data

Required TabularDataObject. Returned data object from prepare_tabulardata function.

seq_len

Required Integer. Sequence Length for the series. In case of raster only, seq_len = number of rasters, any other passed value will be ignored.

model_arch

Optional string. Model Architecture. Allowed “InceptionTime”, “ResCNN”, “Resnet”, “FCN”

**kwargs

Optional kwargs.

Returns

TimeSeriesModel Object
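A minimal univariate sketch follows; the CSV file, column name, seq_len and the assumption that prepare_tabulardata accepts a plain dataframe here are all illustrative.

import pandas as pd
from arcgis.learn import prepare_tabulardata, TimeSeriesModel

df = pd.read_csv(r"C:\data\monthly_sales.csv")        # a single 'sales' column is assumed

data = prepare_tabulardata(input_features=df, variable_predict="sales")

ts = TimeSeriesModel(data, seq_len=12, model_arch="InceptionTime")
ts.lr_find()
ts.fit(epochs=30)
ts.show_results(rows=5)
print(ts.score())

# forecast the next 6 values of the univariate series
forecast = ts.predict(input_features=df,
                      prediction_type="dataframe",
                      number_of_predictions=6)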

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on the monitored metric is saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log is saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a TimeSeriesModel Object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_tabulardata function or None for inferencing.

Returns

TimeSeriesModel Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

predict(input_features=None, explanatory_rasters=None, datefield=None, distance_features=None, output_layer_name='Prediction Layer', gis=None, prediction_type='features', output_raster_path=None, match_field_names=None, number_of_predictions=None)

Predict on data from feature layer and or raster data.

Argument

Description

input_features

Optional Feature Layer or spatially enabled dataframe. Contains features with location of the input data. Required if prediction_type is ‘features’ or ‘dataframe’

explanatory_rasters

Optional list of Raster Objects. Required if prediction_type is ‘rasters’

datefield

Optional field_name. This field contains the date in the input_features. The field type can be a string or date time field. If specified, the field will be split into Year, month, week, day, dayofweek, dayofyear, is_month_end, is_month_start, is_quarter_end, is_quarter_start, is_year_end, is_year_start, hour, minute, second, elapsed and these will be added to the prepared data as columns. All fields other than elapsed and dayofyear are treated as categorical.

distance_features

Optional List of Feature Layer objects. These layers are used for calculation of field “NEAR_DIST_1”, “NEAR_DIST_2” etc in the output dataframe. These fields contain the nearest feature distance from the input_features. Same as prepare_tabulardata().

output_layer_name

Optional string. Used for publishing the output layer.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

prediction_type

Optional String. Set ‘features’ or ‘dataframe’ to make output predictions.

output_raster_path

Optional path. Required when prediction_type=’raster’, saves the output raster to this path.

match_field_names

Optional string. Specify mapping of the original training set with prediction set.

number_of_predictions

Optional int for univariate time series. Specifies the number of predictions to make; new rows are added to the dataframe. For multivariate series, or if None, the dataframe is expected to have empty rows. For prediction_type='raster', a new raster is created.

Returns a Feature Layer or dataframe if prediction_type='features'/'dataframe'; otherwise returns True and saves the output raster at the specified path.

save(name_or_path, framework='PyTorch', publish=False, gis=None, save_optimizer=False, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Folder path to save the model.

framework

Optional string. Defines the framework of the model (TF-ONNX is currently supported only by SingleShotDetector). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.

Framework choice: ‘PyTorch’ and ‘TF-ONNX’

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. Default is set to False

kwargs

Optional parameters: Boolean overwrite. If True, the item will be overwritten on ArcGIS Online/Enterprise. Default is False.

score()

Returns the R2 score for regression models and Accuracy for classification models.

show_results(rows=5)

Prints the graph with predictions.

Argument

Description

rows

Optional Integer. Number of rows to print.

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

Inferencing Methods (Image Server)

detect_objects

arcgis.learn.detect_objects(input_raster, model, model_arguments=None, output_name=None, run_nms=False, confidence_score_field=None, class_value_field=None, max_overlap_ratio=0, context=None, process_all_raster_items=False, *, gis=None, future=False, **kwargs)

This function can be used to generate a feature service that contains polygons around the objects detected in the imagery data using the designated deep learning model. Note that the deep learning libraries need to be installed separately, in addition to the server's built-in Python 3.x library.

Argument

Description

input_raster

Required. Raster layer that contains the objects that need to be detected.

model

Required model object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

output_name

Optional. If not provided, a Feature layer is created by the method and used as the output . You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists

run_nms

Optional bool. Default value is False. If set to True, runs the Non Maximum Suppression tool.

confidence_score_field

Optional string. The field in the feature class that contains the confidence scores as output by the object detection method. This parameter is required when you set the run_nms to True

class_value_field

Optional string. The class value field in the input feature class. If not specified, the function will use the standard class value fields Classvalue and Value. If these fields do not exist, all features will be treated as the same object class. Set only if run_nms is set to True

max_overlap_ratio

Optional integer. The maximum overlap ratio for two overlapping features. Defined as the ratio of intersection area over union area. Set only if run_nms is set to True

context

Optional dictionary. Context contains additional settings that affect task execution. Dictionary can contain value for following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting context parameter will override the values set using arcgis.env variable for this particular function.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

The output feature layer item containing the detected objects
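A usage sketch follows; the portal URL, item ids, model arguments and field names are placeholders, and wrapping a dlpk portal item with arcgis.learn.Model is assumed here.

# Usage Example: detect objects in an imagery layer with an installed dlpk model.
from arcgis.gis import GIS
from arcgis.learn import Model, detect_objects

gis = GIS("https://yourportal.domain.com/portal", "username", "password")

image_layer = gis.content.get("<imagery layer item id>").layers[0]
model = Model(gis.content.get("<dlpk item id>"))
model.install()

detected = detect_objects(input_raster=image_layer,
                          model=model,
                          model_arguments={"padding": "0", "threshold": "0.5"},
                          output_name="detected_objects_layer",
                          run_nms=True,
                          confidence_score_field="Confidence",
                          context={"processorType": "GPU", "cellSize": 0.3},
                          gis=gis)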

classify_objects

arcgis.learn.classify_objects(input_raster, model, model_arguments=None, input_features=None, class_label_field=None, process_all_raster_items=False, output_name=None, context=None, *, gis=None, future=False, **kwargs)

This function can be used to output a feature service with an assigned class label for each feature, based on information from the overlapping imagery data, using the designated deep learning model.

Argument

Description

input_raster

Required. Raster layer that contains the objects that need to be classified.

model

Required model object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

input_features

Optional feature layer. The point, line, or polygon input feature layer that identifies the location of each object to be classified and labelled. Each row in the input feature layer represents a single object.

If no input feature layer is specified, the function assumes that each input image contains a single object to be classified. If the input image or images use a spatial reference, the output from the function is a feature layer, where the extent of each image is used as the bounding geometry for each labelled feature layer. If the input image or images are not spatially referenced, the output from the function is a table containing the image ID values and the class labels for each image.

class_label_field

Optional str. The name of the field that will contain the classification label in the output feature layer.

If no field name is specified, a new field called ClassLabel will be generated in the output feature layer.

Example:

“ClassLabel”

process_all_raster_items

Optional bool.

If set to False, all raster items in the image service will be mosaicked together and processed. This is the default.

If set to True, all raster items in the image service will be processed as separate images.

output_name

Optional. If not provided, a Feature layer is created by the method and used as the output . You can pass in an existing Feature Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Feature Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists

context

Optional dictionary. Context contains additional settings that affect task execution. Dictionary can contain value for following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting context parameter will override the values set using arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

Returns

The output feature layer item containing the classified objects
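A usage sketch follows; item ids and the class label field name are placeholders.

# Usage Example: label existing building footprints using a feature classifier dlpk.
from arcgis.gis import GIS
from arcgis.learn import Model, classify_objects

gis = GIS("https://yourportal.domain.com/portal", "username", "password")

classified = classify_objects(input_raster=gis.content.get("<imagery layer item id>").layers[0],
                              model=Model(gis.content.get("<feature classifier dlpk item id>")),
                              input_features=gis.content.get("<building footprints item id>").layers[0],
                              class_label_field="ClassLabel",
                              output_name="classified_buildings",
                              context={"processorType": "GPU"},
                              gis=gis)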

classify_pixels

arcgis.learn.classify_pixels(input_raster, model, model_arguments=None, output_name=None, context=None, process_all_raster_items=False, *, gis=None, future=False, **kwargs)

Function to classify input imagery data using a deep learning model. Note that the deep learning library needs to be installed separately, in addition to the server’s built in Python 3.x library.

Argument

Description

input_raster

Required. Raster layer that needs to be classified.

model

Required model object.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

eg: {“name1”:”value1”, “name2”: “value2”}

output_name

Optional. If not provided, an imagery layer is created by the method and used as the output . You can pass in an existing Image Service Item from your GIS to use that instead. Alternatively, you can pass in the name of the output Image Service that should be created by this method to be used as the output for the tool. A RuntimeError is raised if a service by that name already exists

context

Optional dictionary. Context contains additional settings that affect task execution.

Dictionary can contain value for following keys:

  • outSR - (Output Spatial Reference) Saves the result in the specified spatial reference

  • snapRaster - Function will adjust the extent of output rasters so that they match the cell alignment of the specified snap raster.

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“outSR” : {spatial reference}}

Setting context parameter will override the values set using arcgis.env variable for this particular function.

process_all_raster_items

Optional bool. Specifies how all raster items in an image service will be processed.

  • False : all raster items in the image service will be mosaicked together and processed. This is the default.

  • True : all raster items in the image service will be processed as separate images.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

tiles_only

Keyword only parameter. Optional boolean. In ArcGIS Online, the default output image service for this function would be a Tiled Imagery Layer. To create Dynamic Imagery Layer as output in ArcGIS Online, set tiles_only parameter to False.

Function will not honor tiles_only parameter in ArcGIS Enterprise and will generate Dynamic Imagery Layer by default.

Returns

The classified imagery layer item
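A usage sketch follows; item ids, model arguments and context values are placeholders.

# Usage Example: per-pixel classification (e.g. land cover) with a pixel classification dlpk.
from arcgis.gis import GIS
from arcgis.learn import Model, classify_pixels

gis = GIS("https://yourportal.domain.com/portal", "username", "password")

landcover = classify_pixels(input_raster=gis.content.get("<imagery layer item id>").layers[0],
                            model=Model(gis.content.get("<pixel classifier dlpk item id>")),
                            model_arguments={"padding": "64", "batch_size": "4"},
                            output_name="landcover_classified",
                            context={"processorType": "GPU",
                                     "parallelProcessingFactor": "80%"},
                            gis=gis)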

compute_accuracy_for_object_detection

arcgis.learn.compute_accuracy_for_object_detection(detected_features, ground_truth_features, detected_class_value_field=None, ground_truth_class_value_field=None, min_iou=None, mask_features=None, out_accuracy_table_name=None, out_accuracy_report_name=None, context=None, *, gis=None, future=False, **kwargs)

Function can be used to calculate the accuracy of a deep learning model by comparing the detected objects from the detect_objects function to ground truth data. Function available in ArcGIS Image Server 10.9 and higher.

Argument

Description

detected_features

Required. The input polygon feature layer containing the objects detected from the detect_objects function.

ground_truth_features

Required. The polygon feature layer containing ground truth data.

detected_class_value_field

Optional string. The field in the detected objects feature class that contains the class names or class values.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the ground truth feature class exactly.

Syntax: A string describing the detected class value field.

Example: “class”

ground_truth_class_value_field

Optional string. The field in the ground truth feature class that contains the class names or class values.

If a field name is not specified, a Classvalue or Value field will be used. If these fields do not exist, all records will be identified as belonging to one class.

The class values or class names must match those in the detected objects feature class exactly.

Example: “class”

min_iou

The Intersection over Union (IoU) ratio to use as a threshold to evaluate the accuracy of the object-detection model. The numerator is the area of overlap between the predicted bounding box and the ground truth bounding box. The denominator is the area of union or the area encompassed by both bounding boxes.

The min_iou value should be in the range 0 to 1, i.e. [0, 1]. Example:

0.5

mask_features

Optional feature layer. A polygon feature service layer that delineates the area where accuracy will be computed. Only the image area that falls completely within the polygons will be assessed for accuracy.

out_accuracy_table_name

Optional. Name of the output accuracy table item to be created. If not provided, a random name is generated by the method and used as the output name.

out_accuracy_report_name

Optional. The accuracy report can either be added as an item to the portal or written to a datastore. To add it as an item, specify the name of the output report item (PDF item) to be created. Example:

“accuracyReport”

To write the accuracy report to a datastore, specify the datastore path as the value of the uri key.

Example -

“/fileShares/yourFileShareFolderName/accuracyReport”

context

Optional dictionary. Context contains additional settings that affect task execution. Dictionary can contain value for following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Eg: {“processorType” : “CPU”}

Setting context parameter will override the values set using arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

Returns

The output accuracy table item or/and accuracy report item (or datastore path to accuracy report)

# Usage Example: This example generates an accuracy table for a specified minimum IoU value.

compute_accuracy_op = compute_accuracy_for_object_detection(detected_features=detected_features, 
                                                            ground_truth_features=ground_truth_features, 
                                                            detected_class_value_field="ClassValue", 
                                                            ground_truth_class_value_field="Class", 
                                                            min_iou=0.5, 
                                                            mask_features=None,
                                                            out_accuracy_table_name="accuracy_table", 
                                                            out_accuracy_report_name="accuracy_report", 
                                                            gis=gis)

Model Management

Model

class arcgis.learn.Model(model=None)
from_json(model)

Function is used to initialise Model object from model definition JSON

eg usage:

model = Model()

model.from_json({"Framework": "TensorFlow",
                 "ModelConfiguration": "DeepLab",
                 "InferenceFunction": "[functions]System\\DeepLearning\\ImageClassifier.py",
                 "ModelFile": "\\\\folder_path_of_pb_file\\frozen_inference_graph.pb",
                 "ExtractBands": [0, 1, 2],
                 "ImageWidth": 513,
                 "ImageHeight": 513,
                 "Classes": [
                     {"Value": 0, "Name": "Evergreen Forest", "Color": [0, 51, 0]},
                     {"Value": 1, "Name": "Grassland/Herbaceous", "Color": [241, 185, 137]},
                     {"Value": 2, "Name": "Bare Land", "Color": [236, 236, 0]},
                     {"Value": 3, "Name": "Open Water", "Color": [0, 0, 117]},
                     {"Value": 4, "Name": "Scrub/Shrub", "Color": [102, 102, 0]},
                     {"Value": 5, "Name": "Impervious Surface", "Color": [236, 236, 236]}
                 ]})

from_model_path(model)

Function is used to initialise a Model object from the url of a model package or the path of a model definition file. eg usage:

model = Model()

model.from_model_path("https://xxxportal.esri.com/sharing/rest/content/items/<itemId>")

or model = Model()

model.from_model_path("\\sharedstorage\sharefolder\findtrees.emd")

install(*, gis=None, future=False, **kwargs)

Function is used to install the uploaded model package (*.dlpk). Optionally, after inferencing with the model, it can be uninstalled with uninstall().

Argument

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

Path where model is installed

query_info(*, gis=None, future=False, **kwargs)

Function is used to extract the deep learning model specific settings from the model package item or model definition file.

Argument

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

The key model information in dictionary format that describes what the settings are essential for this type of deep learning model.

uninstall(*, gis=None, future=False, **kwargs)

Function is used to uninstall the uploaded model package that was installed using install(). This function will delete the named deep learning model from the server but not the portal item.

Argument

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

itemId of the uninstalled model package item
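A lifecycle sketch for a portal-hosted model package follows; the portal URL and item id are placeholders.

# Usage Example: install a dlpk, inspect its settings, then uninstall it.
from arcgis.gis import GIS
from arcgis.learn import Model

gis = GIS("https://yourportal.domain.com/portal", "username", "password")

model = Model()
model.from_model_path("https://yourportal.domain.com/sharing/rest/content/items/<itemId>")

install_path = model.install(gis=gis)    # path where the model is installed on the server
settings = model.query_info(gis=gis)     # dictionary of model-specific settings
print(install_path)
print(settings)

# ... run detect_objects / classify_pixels / classify_objects with this model ...

model.uninstall(gis=gis)                 # removes the model from the server; the portal item remains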

ModelExtension

class arcgis.learn.ModelExtension(data, model_conf, backbone=None, pretrained_path=None, **kwargs)

Creates a ModelExtension object, to train the model for object detection, semantic segmentation, and edge detection.

Argument

Description

data

Required fastai Databunch. Returned data object from prepare_data function.

model_conf

A class definition containing the following methods (a skeleton sketch follows the argument table below):

  • get_model(self, data, backbone=None): for model definition,

  • on_batch_begin(self, learn, model_input_batch, model_target_batch): for feeding input to the model during training,

  • transform_input(self, xb): for feeding input to the model during inferencing/validation,

  • transform_input_multispectral(self, xb): for feeding input to the model during inferencing/validation in case of multispectral data,

  • loss(self, model_output, *model_target): to return loss value of the model, and

  • post_process(self, pred, nms_overlap, thres, chip_size, device): to post-process the output of the object-detection model.

  • post_process(self, pred, thres): to post-process the output of the segmentation model.

backbone

Optional function. If custom model requires any backbone.

pretrained_path

Optional string. Path where pre-trained model is saved.

Returns

ModelExtension Object
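A skeleton sketch of a custom model configuration class with the methods listed above follows. The method bodies are placeholders, not a working detector, and passing the class itself to ModelExtension is assumed from the documentation's wording.

from arcgis.learn import ModelExtension

class MyDetectorConfig:
    """Skeleton custom-model configuration; every body below is a placeholder."""

    def get_model(self, data, backbone=None):
        # build and return the custom torch.nn.Module here
        raise NotImplementedError

    def on_batch_begin(self, learn, model_input_batch, model_target_batch):
        # rearrange the fastai batch into the (input, target) form the model expects
        return model_input_batch, model_target_batch

    def transform_input(self, xb):
        # prepare a batch for inferencing/validation
        return xb

    def transform_input_multispectral(self, xb):
        # same as transform_input, for multispectral imagery
        return xb

    def loss(self, model_output, *model_target):
        # compute and return the loss value for the model
        raise NotImplementedError

    def post_process(self, pred, nms_overlap, thres, chip_size, device):
        # convert raw predictions into boxes/labels/scores (object detection variant)
        raise NotImplementedError

# model = ModelExtension(data, MyDetectorConfig)   # 'data' comes from prepare_data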

property available_metrics

List of available metrics that are displayed in the training table. Set monitor value to be one of these while calling the fit method.

fit(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, monitor='valid_loss', **kwargs)

Trains the model for the specified number of epochs, using the specified learning rates.

Argument

Description

epochs

Required integer. Number of cycles of training on the data. Increase it if underfitting.

lr

Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.

one_cycle

Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.

early_stopping

Optional boolean. Parameter to add early stopping. If set to True, training will stop if the monitored metric stops improving for 5 epochs.

checkpoint

Optional boolean or string. Parameter to save checkpoints during training. If set to True, the best model based on the monitored metric is saved during training. If set to 'all', all checkpoints are saved. If set to False, checkpointing is turned off. Setting this parameter loads the best model at the end of training.

tensorboard

Optional boolean. Parameter to write the training log. If set to True, the log is saved at <dataset-path>/training_log, which can be visualized in tensorboard. Requires tensorboardx version 2.1.

The default value is False. Note: not applicable for Text Models.

monitor

Optional string. Specifies which metric to monitor while checkpointing and early stopping. Defaults to 'valid_loss'. The value should be one of the metrics displayed in the training table. Use {model_name}.available_metrics to list the available metrics.

classmethod from_model(emd_path, data=None)

Creates a ModelExtension object from an Esri Model Definition (EMD) file.

Argument

Description

emd_path

Required string. Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

data

Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.

Returns

ModelExtension Object

load(name_or_path)

Loads a compatible saved model for inferencing or fine tuning from the disk.

Argument

Description

name_or_path

Required string. Name or Path to Deep Learning Package (DLPK) or Esri Model Definition(EMD) file.

lr_find(allow_plot=True)

Runs the Learning Rate Finder, and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.

plot_losses()

Plot validation and training losses after fitting the model.

save(name_or_path, framework='PyTorch', publish=False, gis=None, compute_metrics=True, save_optimizer=False, save_inference_file=True, **kwargs)

Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.

Argument

Description

name_or_path

Required string. Name of the model to save. The model is stored at the pre-defined location. If a path is passed, the model is stored at the specified path, with the model name as the directory name, and all intermediate directories are created.

framework

Optional string. Defines the framework of the model (TF-ONNX is currently supported only by SingleShotDetector). If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument. If the framework used is torchscript, PyTorch (.pt) files will be generated for the model inside 'torch_scripts' (currently supported only by SiamMask).

Framework choice: 'PyTorch', 'TF-ONNX' and 'torchscript'

publish

Optional boolean. Publishes the DLPK as an item.

gis

Optional GIS object. Used for publishing the item. If not specified, the active GIS is used.

compute_metrics

Optional boolean. Used for computing model metrics.

save_optimizer

Optional boolean. Used for saving the model-optimizer state along with the model. The default is False.

save_inference_file

Optional boolean. Used for saving the inference file along with the model. If False, the model will not work with ArcGIS Pro 2.6 or earlier. Default is set to True.

kwargs

Optional parameters: overwrite - Optional boolean. If True, the item on ArcGIS Online/Enterprise is overwritten. The default is False.
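
A hedged sketch of save() that also publishes the DLPK as a portal item; the model name and the portal connection (GIS('home') assumes an active ArcGIS Pro or hosted notebook session) are illustrative assumptions.

from arcgis.gis import GIS

gis = GIS('home')                       # connection to the active portal (assumption)
model.save('my_saved_model',
           framework='PyTorch',
           publish=True,                # publish the DLPK as a portal item
           gis=gis,
           compute_metrics=True,
           overwrite=True)              # kwargs: overwrite an existing item of the same name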

unfreeze()

Unfreezes the earlier layers of the model for fine-tuning.

list_models

arcgis.learn.list_models(*, gis=None, future=False, **kwargs)

Function is used to list all the installed deep learning models.

Argument

Description

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

future

Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.

Returns

list of deep learning models installed
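
A minimal sketch of list_models(); the portal URL and credentials are placeholders.

from arcgis.gis import GIS
from arcgis.learn import list_models

gis = GIS('https://your.portal.url', 'username', 'password')   # placeholder credentials
installed = list_models(gis=gis)
for model in installed:
    print(model)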

train_model

arcgis.learn.train_model(input_folder, model_type, model_arguments=None, batch_size=2, max_epochs=None, learning_rate=None, backbone_model=None, validation_percent=None, pretrained_model=None, stop_training=True, freeze_model=True, overwrite_model=False, output_name=None, context=None, *, gis=None, future=False, **kwargs)

Function can be used to train a deep learning model using the output from the export_training_data function. It generates the deep learning model package (*.dlpk) and adds it to your enterprise portal. The train_model function performs the training using the Raster Analytics server.

Argument

Description

input_folder

Required string. This is the input location for the training sample data. It can be the path of the output location on the file share raster data store or a shared file system path. The training sample data folder needs to be the output of the export_training_data function, containing the “images” and “labels” folders, as well as the JSON model definition file written out by the function.

File share raster store and datastore path examples:
  • /rasterStores/yourRasterStoreFolderName/trainingSampleData

  • /fileShares/yourFileShareFolderName/trainingSampleData

Shared path example:
  • \\serverName\deepLearning\trainingSampleData

model_type

Required string. The model type to use for training the deep learning model. Possible values: SSD, UNET, FEATURE_CLASSIFIER, PSPNET, RETINANET, MASKRCNN

  • SSD - The Single Shot Detector (SSD) is used for object detection.

  • UNET - U-Net is used for pixel classification.

  • FEATURE_CLASSIFIER - The Feature Classifier is used for object classification.

  • PSPNET - The Pyramid Scene Parsing Network (PSPNET) is used for pixel classification.

  • RETINANET - The RetinaNet is used for object detection.

  • MASKRCNN - The MaskRCNN is used for object detection.

model_arguments

Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.

Example:

{“name1”:”value1”, “name2”: “value2”}

batch_size

Optional int. The number of training samples to be processed for training at one time. If the server has a powerful GPU, this number can be increased to 16, 36, 64, and so on.

Example:

4

max_epochs

Optional int. The maximum number of epochs for which the model will be trained. One epoch means the whole training dataset is passed forward and backward through the deep neural network once.

Example:

20

learning_rate

Optional float. The rate at which the weights are updated during the training. It is a small positive value in the range between 0.0 and 1.0. If the learning rate is set to 0, the optimal learning rate is extracted from the learning curve during the training process.

Example:

0.0

backbone_model

Optional string. Specifies the preconfigured neural network to be used as an architecture for training the new model. Possible values: DENSENET121, DENSENET161, DENSENET169, DENSENET201, MOBILENET_V2, MASKRCNN50_FPN, RESNET18, RESNET34, RESNET50, RESNET101, RESNET152, VGG11, VGG11_BN, VGG13, VGG13_BN, VGG16, VGG16_BN, VGG19, VGG19_BN

Example:

RESNET34

validation_percent

Optional float. The percentage (in %) of training sample data that will be used for validating the model.

Example:

10

pretrained_model

Optional dlpk portal item.

The pretrained model to be used for fine tuning the new model. It is a deep learning model package (dlpk) portal item.

stop_training

Optional bool. Specifies whether early stopping will be implemented.

  • True - The model training will stop when the model is no longer improving, regardless of the maximum epochs specified. This is the default.

  • False - The model training will continue until the maximum epochs is reached.

freeze_model

Optional bool. Specifies whether to freeze the backbone layers in the pretrained model, so that the weights and biases in the backbone layers remain unchanged.

  • True - The predefined weights and biases will not be altered in the backboneModel. This is the default.

  • False - The weights and biases of the backboneModel may be altered to better fit your training samples. This may take more time to process but usually produces better results.

overwrite_model

Optional bool. Overwrites an existing deep learning model package (.dlpk) portal item with the same name.

If the output_name parameter uses the file share data store path, the overwrite_model parameter is not applied.

  • True - The portal .dlpk item will be overwritten.

  • False - The portal .dlpk item will not be overwritten. This is the default.

output_name

Optional string. The trained deep learning model package can either be added as an item to the portal or written to a datastore.

To add as an item, specify the name of the output deep learning model package (item) to be created.

Example -

“trainedModel”

To write the dlpk to a file share datastore, specify the datastore path.

Example -

“/fileShares/filesharename/folder”

context

Optional dictionary. The context contains additional settings that affect task execution. The dictionary can contain values for the following keys:

  • cellSize - Set the output raster cell size, or resolution

  • extent - Sets the processing extent used by the function

  • parallelProcessingFactor - Sets the parallel processing factor. Default is “80%”

  • processorType - Sets the processor type. “CPU” or “GPU”

Example -

{“processorType” : “CPU”}

Setting the context parameter will override the values set using the arcgis.env variable for this particular function.

gis

Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.

Returns

Returns the dlpk portal item that has properties for title, type, filename, file, id and folderId.
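
To close, a hedged end-to-end sketch of train_model(); the datastore path, model arguments and portal connection are illustrative assumptions, not prescribed values.

from arcgis.gis import GIS
from arcgis.learn import train_model

gis = GIS('https://your.portal.url', 'username', 'password')   # placeholder credentials

trained_dlpk = train_model(
    input_folder='/fileShares/yourFileShareFolderName/trainingSampleData',
    model_type='UNET',
    model_arguments={'class_balancing': 'True'},   # hypothetical name-value pair
    batch_size=8,
    max_epochs=20,
    learning_rate=0.0,            # 0 extracts an optimal rate from the learning curve
    backbone_model='RESNET34',
    validation_percent=10,
    output_name='trainedModel',
    context={'processorType': 'GPU'},
    gis=gis)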