arcgis.learn module¶
Functions for calling the Deep Learning Tools.
detect_objects¶
-
arcgis.learn.
detect_objects
(input_raster, model, model_arguments=None, output_name=None, run_nms=False, confidence_score_field=None, class_value_field=None, max_overlap_ratio=0, context=None, process_all_raster_items=False, *, gis=None, future=False, **kwargs)¶ Generates a feature service containing polygons around the objects detected in the imagery data using the designated deep learning model. Note that the deep learning libraries need to be installed separately, in addition to the server's built-in Python 3.x environment.
Argument
Description
input_raster
Required. Raster layer that contains the objects that need to be detected.
model
Required model object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {"name1": "value1", "name2": "value2"}
output_name
Optional. If not provided, a Feature layer is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use as the output, or pass in the name of an output Feature Service to be created by this method. A RuntimeError is raised if a service by that name already exists.
run_nms
Optional bool. Default value is False. If set to True, runs the Non Maximum Suppression tool.
confidence_score_field
Optional string. The field in the feature class that contains the confidence scores as output by the object detection method. This parameter is required when run_nms is set to True.
class_value_field
Optional string. The class value field in the input feature class. If not specified, the function will use the standard class value fields Classvalue and Value. If these fields do not exist, all features will be treated as the same object class. Set only if run_nms is set to True
max_overlap_ratio
Optional integer. The maximum overlap ratio for two overlapping features. Defined as the ratio of intersection area over union area. Set only if run_nms is set to True
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Sets the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is "80%"
processorType - Sets the processor type, "CPU" or "GPU"
Eg: {"processorType": "CPU"}
Setting the context parameter will override the values set using the arcgis.env variables for this particular function.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
The output feature layer item containing the detected objects
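Example usage (a minimal sketch; the portal URL, item IDs, field names and model arguments below are placeholders, not values from this reference):
from arcgis.gis import GIS
from arcgis.learn import Model, detect_objects

gis = GIS("https://example.org/portal", "username", "password")     # placeholder portal connection
model = Model()
model.from_model_path("https://example.org/portal/sharing/rest/content/items/<itemId>")  # placeholder dlpk item
imagery = gis.content.get("<imagery_item_id>").layers[0]            # placeholder imagery layer item

detected = detect_objects(input_raster=imagery,
                          model=model,
                          model_arguments={"padding": "0", "threshold": "0.5"},  # illustrative, model-dependent
                          output_name="detected_objects",
                          run_nms=True,
                          confidence_score_field="Confidence",       # illustrative field name
                          context={"processorType": "GPU"},
                          gis=gis)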
classify_objects¶
-
arcgis.learn.
classify_objects
(input_raster, model, model_arguments=None, input_features=None, class_label_field=None, process_all_raster_items=False, output_name=None, context=None, *, gis=None, future=False, **kwargs)¶ Generates a feature service that assigns a class label to each input feature based on the overlapping imagery data, using the designated deep learning model.
Argument
Description
input_raster
Required. Raster layer that contains the objects that need to be classified.
model
Required model object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {"name1": "value1", "name2": "value2"}
input_features
Optional feature layer. The point, line, or polygon input feature layer that identifies the location of each object to be classified and labelled. Each row in the input feature layer represents a single object.
If no input feature layer is specified, the function assumes that each input image contains a single object to be classified. If the input image or images use a spatial reference, the output from the function is a feature layer, where the extent of each image is used as the bounding geometry for each labeled feature. If the input image or images are not spatially referenced, the output from the function is a table containing the image ID values and the class labels for each image.
class_label_field
Optional str. The name of the field that will contain the classification label in the output feature layer.
If no field name is specified, a new field called ClassLabel will be generated in the output feature layer.
- Example:
"ClassLabel"
process_all_raster_items
Optional bool.
If set to False, all raster items in the image service will be mosaicked together and processed. This is the default.
If set to True, all raster items in the image service will be processed as separate images.
output_name
Optional. If not provided, a Feature layer is created by the method and used as the output. You can pass in an existing Feature Service Item from your GIS to use as the output, or pass in the name of an output Feature Service to be created by this method. A RuntimeError is raised if a service by that name already exists.
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
cellSize - Sets the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is "80%"
processorType - Sets the processor type, "CPU" or "GPU"
Eg: {"processorType": "CPU"}
Setting the context parameter will override the values set using the arcgis.env variables for this particular function.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
- Returns
The output feature layer item containing the classified objects
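Example usage (a minimal sketch; the portal URL, item IDs and model arguments are placeholders):
from arcgis.gis import GIS
from arcgis.learn import Model, classify_objects

gis = GIS("https://example.org/portal", "username", "password")       # placeholder portal connection
model = Model()
model.from_model_path("https://example.org/portal/sharing/rest/content/items/<itemId>")  # placeholder dlpk item
imagery = gis.content.get("<imagery_item_id>").layers[0]              # placeholder imagery layer item
buildings = gis.content.get("<feature_layer_item_id>").layers[0]      # placeholder feature layer item

classified = classify_objects(input_raster=imagery,
                              model=model,
                              model_arguments={"batch_size": "4"},    # illustrative, model-dependent
                              input_features=buildings,
                              class_label_field="ClassLabel",
                              output_name="classified_buildings",
                              context={"processorType": "GPU"},
                              gis=gis)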
classify_pixels¶
-
arcgis.learn.
classify_pixels
(input_raster, model, model_arguments=None, output_name=None, context=None, process_all_raster_items=False, *, gis=None, future=False, **kwargs)¶ Classifies the input imagery data using a deep learning model. Note that the deep learning libraries need to be installed separately, in addition to the server's built-in Python 3.x environment.
Argument
Description
input_raster
Required. Raster layer that needs to be classified.
model
Required model object.
model_arguments
Optional dictionary. Name-value pairs of arguments and their values that can be customized by the clients.
eg: {"name1": "value1", "name2": "value2"}
output_name
Optional. If not provided, an imagery layer is created by the method and used as the output. You can pass in an existing Image Service Item from your GIS to use as the output, or pass in the name of an output Image Service to be created by this method. A RuntimeError is raised if a service by that name already exists.
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
outSR - (Output Spatial Reference) Saves the result in the specified spatial reference
snapRaster - The function will adjust the extent of output rasters so that they match the cell alignment of the specified snap raster.
cellSize - Sets the output raster cell size, or resolution
extent - Sets the processing extent used by the function
parallelProcessingFactor - Sets the parallel processing factor. Default is "80%"
processorType - Sets the processor type, "CPU" or "GPU"
Eg: {"outSR": {spatial reference}}
Setting the context parameter will override the values set using the arcgis.env variables for this particular function.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
The classified imagery layer item
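Example usage (a minimal sketch; the portal URL, item IDs and model arguments are placeholders):
from arcgis.gis import GIS
from arcgis.learn import Model, classify_pixels

gis = GIS("https://example.org/portal", "username", "password")     # placeholder portal connection
model = Model()
model.from_model_path("https://example.org/portal/sharing/rest/content/items/<itemId>")  # placeholder dlpk item
imagery = gis.content.get("<imagery_item_id>").layers[0]            # placeholder imagery layer item

land_cover = classify_pixels(input_raster=imagery,
                             model=model,
                             model_arguments={"padding": "0"},      # illustrative, model-dependent
                             output_name="classified_landcover",
                             context={"processorType": "GPU", "parallelProcessingFactor": "50%"},
                             gis=gis)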
export_training_data¶
-
arcgis.learn.
export_training_data
(input_raster, input_class_data=None, chip_format=None, tile_size=None, stride_size=None, metadata_format=None, classvalue_field=None, buffer_radius=None, output_location=None, context=None, input_mask_polygons=None, rotation_angle=0, reference_system='MAP_SPACE', process_all_raster_items=False, blacken_around_feature=False, fix_chip_size=True, *, gis=None, future=False, **kwargs)¶ Generates training sample image chips from the input imagery, using labeled vector data or classified images. The output of this tool is the data store path where the image chips, labels, and metadata files are stored.
Argument
Description
input_raster
Required. Raster layer that needs to be exported for training
input_class_data
Labeled data, either a feature layer or image layer. Vector inputs should follow a training sample format as generated by the ArcGIS Pro Training Sample Manager. Raster inputs should follow a classified raster format as generated by the Classify Raster tool.
chip_format
Optional string. The raster format for the image chip outputs.
TIFF: TIFF format
PNG: PNG format
JPEG: JPEG format
MRF: MRF (Meta Raster Format)
tile_size
Optional dictionary. The size of the image chips.
Example: {“x”: 256, “y”: 256}
stride_size
Optional dictionary. The distance to move in the X and Y when creating the next image chip. When stride is equal to the tile size, there will be no overlap. When stride is equal to half of the tile size, there will be 50% overlap.
Example: {“x”: 128, “y”: 128}
metadata_format
Optional string. The format of the output metadata labels. The options for the output metadata labels are KITTI rectangles, PASCAL VOC rectangles, Classified Tiles (a class map), RCNN Masks, and Labeled Tiles. If your input training sample data is a feature class layer, such as a building layer or a standard classification training sample file, use the KITTI or PASCAL VOC rectangles option.
The output metadata is a .txt file or .xml file containing the training sample data contained in the minimum bounding rectangle. The name of the metadata file matches the input source image name. If your input training sample data is a class map, use Classified Tiles as your output metadata format option.
KITTI_rectangles: The metadata follows the same format as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) Object Detection Evaluation dataset. The KITTI dataset is a vision benchmark suite. This is the default. The label files are plain text files. All values, both numerical and strings, are separated by spaces, and each row corresponds to one object.
PASCAL_VOC_rectangles: The metadata follows the same format as the Pattern Analysis, Statistical Modeling and Computational Learning, Visual Object Classes (PASCAL VOC) dataset. The PASCAL VOC dataset is a standardized image dataset for object class recognition. The label files are XML files and contain information about the image name, class value, and bounding box(es).
Classified_Tiles: This option will output one classified image chip per input image chip. No other metadata is created for each image chip. Only the statistics output has more information on the classes, such as class names, class values, and output statistics.
RCNN_Masks: This option will output image chips that have a mask on the areas where the sample exists. The model generates bounding boxes and segmentation masks for each instance of an object in the image. It is based on a Feature Pyramid Network (FPN) and a ResNet101 backbone.
Labeled_Tiles: This option will label each output tile with a specific class.
classvalue_field
Optional string. Specifies the field which contains the class values. If no field is specified, the system will look for a 'value' or 'classvalue' field. If the feature does not contain a class field, the system will presume all records belong to the same class.
buffer_radius
Optional integer. Specifies a radius for point feature classes to specify training sample area.
output_location
This is the output location for the training sample data. It can be a server data store path or a shared file system path.
Example:
Server data store paths -
/fileShares/deeplearning/rooftoptrainingsamples
/rasterStores/rasterstorename/rooftoptrainingsamples
/cloudStores/cloudstorename/rooftoptrainingsamples
File share path -
\\servername\deeplearning\rooftoptrainingsamples
context
Optional dictionary. Context contains additional settings that affect task execution. The dictionary can contain values for the following keys:
exportAllTiles - Choose whether the image chips with overlapped labeled data will be exported. True - Export all the image chips, including those that do not overlap labeled data. False - Export only the image chips that overlap the labeled data. This is the default.
startIndex - Allows you to set the start index for the sequence of image chips. This lets you append more image chips to an existing sequence. The default value is 0.
cellSize - The cell size can be set using this key in the context parameter
extent - Sets the processing extent used by the function
Setting the context parameter will override the values set using the arcgis.env variables (cellSize, extent) for this particular function.
eg: {"exportAllTiles": False, "startIndex": 0}
input_mask_polygons
Optional feature layer. The feature layer that delineates the area where image chips will be created. Only image chips that fall completely within the polygons will be created.
rotation_angle
Optional float. The rotation angle that will be used to generate additional image chips.
An image chip will be generated with a rotation angle of 0, which means no rotation. It will then be rotated at the specified angle to create an additional image chip. The same training samples will be captured at multiple angles in multiple image chips for data augmentation. The default rotation angle is 0.
reference_system
Optional string. Specifies the type of reference system to be used to interpret the input image. The reference system specified should match the reference system used to train the deep learning model.
MAP_SPACE : The input image is in a map-based coordinate system. This is the default.
IMAGE_SPACE : The input image is in image space, viewed from the direction of the sensor that captured the image, and rotated such that the tops of buildings and trees point upward in the image.
PIXEL_SPACE : The input image is in image space, with no rotation and no distortion.
process_all_raster_items
Optional bool. Specifies how all raster items in an image service will be processed.
False : all raster items in the image service will be mosaicked together and processed. This is the default.
True : all raster items in the image service will be processed as separate images.
blacken_around_feature
Optional bool.
Specifies whether to blacken the pixels around each object or feature in each image tile.
This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.
False : Pixels surrounding objects or features will not be blackened. This is the default.
True : Pixels surrounding objects or features will be blackened.
fix_chip_size
Optional bool. Specifies whether to crop the exported tiles such that they are all the same size.
This parameter only applies when the metadata format is set to Labeled_Tiles and an input feature class or classified raster has been specified.
True : Exported tiles will be the same size and will center on the feature. This is the default.
False : Exported tiles will be cropped such that the bounding geometry surrounds only the feature in the tile.
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
Output string containing the location of the exported training data
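Example usage (a minimal sketch; the portal URL, item IDs, class value field and data store path are placeholders):
from arcgis.gis import GIS
from arcgis.learn import export_training_data

gis = GIS("https://example.org/portal", "username", "password")      # placeholder portal connection
imagery = gis.content.get("<imagery_item_id>").layers[0]             # placeholder imagery layer item
labels = gis.content.get("<training_samples_item_id>").layers[0]     # placeholder training samples layer

export = export_training_data(input_raster=imagery,
                              input_class_data=labels,
                              chip_format="TIFF",
                              tile_size={"x": 256, "y": 256},
                              stride_size={"x": 128, "y": 128},
                              metadata_format="PASCAL_VOC_rectangles",
                              classvalue_field="classvalue",          # placeholder field name
                              output_location="/rasterStores/rasterstorename/chips",  # placeholder data store path
                              context={"exportAllTiles": False, "startIndex": 0},
                              gis=gis)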
list_models¶
-
arcgis.learn.
list_models
(*, gis=None, future=False, **kwargs)¶ Function is used to list all the installed deep learning models.
Argument
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
list of deep learning models installed
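Example usage (a minimal sketch; the portal connection is a placeholder):
from arcgis.gis import GIS
from arcgis.learn import list_models

gis = GIS("https://example.org/portal", "username", "password")   # placeholder portal connection
for dl_model in list_models(gis=gis):
    print(dl_model)   # each entry describes an installed deep learning model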
Model¶
-
class
arcgis.learn.
Model
(model=None)¶ -
from_json
(model)¶ Function is used to initialize a Model object from a model definition JSON.
eg usage:
model = Model()
model.from_json({"Framework": "TensorFlow",
                 "ModelConfiguration": "DeepLab",
                 "InferenceFunction": "[functions]System\DeepLearning\ImageClassifier.py",
                 "ModelFile": "\\folder_path_of_pb_file\frozen_inference_graph.pb",
                 "ExtractBands": [0, 1, 2],
                 "ImageWidth": 513,
                 "ImageHeight": 513,
                 "Classes": [{"Value": 0, "Name": "Evergreen Forest", "Color": [0, 51, 0]},
                             {"Value": 1, "Name": "Grassland/Herbaceous", "Color": [241, 185, 137]},
                             {"Value": 2, "Name": "Bare Land", "Color": [236, 236, 0]},
                             {"Value": 3, "Name": "Open Water", "Color": [0, 0, 117]},
                             {"Value": 4, "Name": "Scrub/Shrub", "Color": [102, 102, 0]},
                             {"Value": 5, "Name": "Impervious Surface", "Color": [236, 236, 236]}]})
-
from_model_path
(model)¶ Function is used to initialize a Model object from the url of a model package or the path of a model definition file.
eg usage:
model = Model()
model.from_model_path("https://xxxportal.esri.com/sharing/rest/content/items/<itemId>")
or
model = Model()
model.from_model_path("\\sharedstorage\sharefolder\findtrees.emd")
-
install
(*, gis=None, future=False, **kwargs)¶ Function is used to install the uploaded model package (*.dlpk). Optionally, after inferencing with the model, the model can be uninstalled using uninstall().
Argument
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
Path where model is installed
-
query_info
(*, gis=None, future=False, **kwargs)¶ Function is used to extract the deep learning model specific settings from the model package item or model definition file.
Argument
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
The key model information, in dictionary format, describing the settings that are essential for this type of deep learning model.
-
uninstall
(*, gis=None, future=False, **kwargs)¶ Function is used to uninstall the uploaded model package that was installed using install(). This function will delete the named deep learning model from the server but not the portal item.
Argument
Description
gis
Optional GIS. The GIS on which this tool runs. If not specified, the active GIS is used.
future
Keyword only parameter. Optional boolean. If True, the result will be a GPJob object and results will be returned asynchronously.
- Returns
itemId of the uninstalled model package item
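An end-to-end sketch of installing, inspecting, and uninstalling a model package (the portal URL and item ID are placeholders):
from arcgis.gis import GIS
from arcgis.learn import Model

gis = GIS("https://example.org/portal", "username", "password")   # placeholder portal connection
model = Model()
model.from_model_path("https://example.org/portal/sharing/rest/content/items/<itemId>")  # placeholder dlpk item

model.install(gis=gis)              # install the .dlpk on the raster analysis server
print(model.query_info(gis=gis))    # inspect the model-specific settings
# ... run detect_objects / classify_pixels / classify_objects with this model ...
model.uninstall(gis=gis)            # remove the installed package; the portal item is kept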
-
prepare_data¶
-
arcgis.learn.
prepare_data
(path, class_mapping=None, chip_size=224, val_split_pct=0.1, batch_size=64, transforms=None, collate_fn=<function _bb_pad_collate>, seed=42, dataset_type=None, resize_to=None, **kwargs)¶ Prepares a data object from training samples exported by the Export Training Data tool in ArcGIS Pro or Image Server, or from training samples in the supported dataset formats. This data object consists of training and validation datasets with the specified transformations, chip size, batch size, split percentage, etc.
For object detection, use the Pascal_VOC_rectangles format.
For feature categorization, use the Labeled Tiles or ImageNet format.
For pixel classification, use the Classified Tiles format.
For entity extraction from text, use the IOB, BILUO or ner_json formats.
Argument
Description
path
Required string. Path to data directory.
class_mapping
Optional dictionary. Mapping from id to its string label. For dataset_type=IOB, BILUO or ner_json:
Provide the address field as class mapping in the following format: class_mapping={'address_tag': 'address_field'}
chip_size
Optional integer. Size of the image to train the model.
val_split_pct
Optional float. Percentage of training data to keep as validation.
batch_size
Optional integer. Batch size for mini batch gradient descent (Reduce it if getting CUDA Out of Memory Errors).
transforms
Optional tuple. Fast.ai transforms for data augmentation of the training and validation datasets, respectively (good defaults that work well for satellite imagery are provided). If transforms is set to False, no transformation will take place and the chip_size parameter will not take effect.
collate_fn
Optional function. Passed to PyTorch to collate data into batches (the default usually works).
seed
Optional integer. Random seed for reproducible train-validation split.
dataset_type
Optional string. The prepare_data function will infer the dataset_type on its own if the path contains a map.txt file. If the path does not contain a map.txt file, pass one of 'PASCAL_VOC_rectangles', 'RCNN_Masks' or 'Classified_Tiles'.
resize_to
Optional integer. Resize the image to given size.
- Returns
data object
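Example usage (a minimal sketch; the folder path and batch size are placeholders):
from arcgis.learn import prepare_data

# Folder produced by Export Training Data (Pascal VOC rectangles in this sketch); the path is a placeholder
data = prepare_data(path=r"C:\data\training_chips",
                    batch_size=16,
                    dataset_type="PASCAL_VOC_rectangles")  # can be omitted when a map.txt file is present
data.show_batch()   # preview a few training chips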
SingleShotDetector¶
-
class
arcgis.learn.
SingleShotDetector
(data, grids=None, zooms=[1.0], ratios=[[1.0, 1.0]], backbone=None, drop=0.3, bias=-4.0, focal_loss=False, pretrained_path=None, location_loss_factor=None, ssd_version=2)¶ Creates a Single Shot Detector with the specified grid sizes, zoom scales and aspect ratios. Based on Fast.ai MOOC Version2 Lesson 9.
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
grids
Required list. Grid sizes used for creating anchor boxes.
zooms
Optional list. Zooms of anchor boxes.
ratios
Optional list of tuples. Aspect ratios of anchor boxes.
backbone
Optional function. Backbone CNN model to be used for creating the base of the SingleShotDetector, which is resnet34 by default.
drop
Optional float. Dropout probability. Increase it to reduce overfitting.
bias
Optional float. Bias for SSD head.
focal_loss
Optional boolean. Uses Focal Loss if True.
pretrained_path
Optional string. Path where pre-trained model is saved.
location_loss_factor
Optional float. Sets the weight of the bounding box loss. This should be strictly between 0 and 1. The default is None, which gives equal weight to both location and classification loss. This factor adjusts the focus of the model on the location of the bounding box.
ssd_version
Optional int within [1,2]. Use version=1 for arcgis v1.6.2 or earlier
- Returns
SingleShotDetector Object
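A minimal training sketch, continuing from a data object returned by prepare_data (the path, grid sizes and zooms are illustrative):
from arcgis.learn import prepare_data, SingleShotDetector

data = prepare_data(r"C:\data\training_chips", batch_size=16)   # placeholder path
ssd = SingleShotDetector(data, grids=[4], zooms=[0.7, 1.0, 1.3], ratios=[[1.0, 1.0]])
ssd.lr_find()        # plot the learning rate finder to pick a learning rate
ssd.fit(epochs=10)   # lr is deduced automatically when not supplied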
-
average_precision_score
(detect_thresh=0.2, iou_thresh=0.1, mean=False, show_progress=True)¶ Computes average precision on the validation set for each class.
Argument
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
- Returns
dict if mean is False otherwise float
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Trains the model for the specified number of epochs, using the specified learning rate.
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 1.7 (experimental support).
The default value is False.
-
classmethod
from_emd
(data, emd_path)¶ Creates a Single Shot Detector from an Esri Model Definition (EMD) file.
Argument
Description
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
emd_path
Required string. Path to Esri Model Definition file.
- Returns
SingleShotDetector Object
-
classmethod
from_model
(emd_path, data=None)¶ Creates a Single Shot Detector from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
- Returns
SingleShotDetector Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed, then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
predict
(image_path, threshold=0.5, nms_overlap=0.1, return_scores=False, visualize=False, resize=False)¶ Runs prediction on an Image.
Argument
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
- Returns
‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image
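Example inference sketch for a saved detector (the EMD and image paths are placeholders):
from arcgis.learn import SingleShotDetector

ssd = SingleShotDetector.from_model(r"C:\models\pool_detector\pool_detector.emd")  # placeholder EMD path
predictions = ssd.predict(r"C:\images\tile_001.jpg",   # placeholder image path
                          threshold=0.5,
                          nms_overlap=0.1,
                          return_scores=False,
                          visualize=True)               # also draws the predicted boxes on the image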
-
predict_video
(input_video_path, metadata_file, threshold=0.5, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)¶ Runs prediction on a video and appends the output VMTI predictions in the metadata file.
Argument
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracker_options
Optional dictionary. Set different parameters for object tracking: assignment_iou_thrd is the threshold used for assigning detections to trackers, vanish_frames is the number of frames an object should be absent before it is considered vanished, and detect_frames is the number of frames an object should be detected before it is tracked.
visual_options
Optional dictionary. Set different parameters for visualization: show_scores (boolean) to show scores on predictions, show_labels (boolean) to show labels on predictions, thickness (integer) to set the thickness of the box, fontface (integer) for an OpenCV font face value, and color, a (B, G, R) tuple with values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by SingleShotDetector, currently.) If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.
Framework choice: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.
kwargs
Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise. The default is False.
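Example save-and-publish sketch (the portal connection, data path and model name are placeholders):
from arcgis.gis import GIS
from arcgis.learn import prepare_data, SingleShotDetector

data = prepare_data(r"C:\data\training_chips", batch_size=16)     # placeholder path
ssd = SingleShotDetector(data)
ssd.fit(epochs=10)

gis = GIS("https://example.org/portal", "username", "password")   # placeholder portal connection
ssd.save("pool_detector_v1",      # stored at the pre-defined location under this name
         framework="PyTorch",
         publish=True,            # also publish the resulting .dlpk as a portal item
         gis=gis,
         overwrite=True)          # overwrite an existing item of the same name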
-
show_results
(rows=5, thresh=0.5, nms_overlap=0.1)¶ Displays the results of a trained model on a part of the validation set.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
UnetClassifier¶
-
class
arcgis.learn.
UnetClassifier
(data, backbone=None, pretrained_path=None)¶ Creates a Unet-like classifier based on the given pretrained encoder.
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
backbone
Optional function. Backbone CNN model to be used for creating the base of the UnetClassifier, which is resnet34 by default.
pretrained_path
Optional string. Path where pre-trained model is saved.
- Returns
UnetClassifier Object
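A minimal training sketch, assuming chips exported in the Classified Tiles format (the path is a placeholder):
from arcgis.learn import prepare_data, UnetClassifier

data = prepare_data(r"C:\data\landcover_chips", batch_size=8)   # placeholder path to Classified Tiles export
unet = UnetClassifier(data)
unet.lr_find()
unet.fit(epochs=10)
unet.show_results(rows=3)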
-
accuracy
()¶
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Trains the model for the specified number of epochs, using the specified learning rate.
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 1.7 (experimental support).
The default value is False.
-
classmethod
from_emd
(data, emd_path)¶ Creates a Unet like classifier from an Esri Model Definition (EMD) file.
Argument
Description
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
emd_path
Required string. Path to Esri Model Definition file.
- Returns
UnetClassifier Object
-
classmethod
from_model
(emd_path, data=None)¶ Creates a Unet like classifier from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
- Returns
UnetClassifier Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed, then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by
SingleShotDetector
, currently.) If framework used isTF-ONNX
,batch_size
can be passed as an optional keyword argument.Framework choice: ‘PyTorch’ and ‘TF-ONNX’
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.
kwargs
Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise. The default is False.
-
show_results
(rows=5, **kwargs)¶ Displays the results of a trained model on a part of the validation set.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
FeatureClassifier¶
-
class
arcgis.learn.
FeatureClassifier
(data, backbone=None, pretrained_path=None, mixup=False)¶ Creates an image classifier to classify the area occupied by a geographical feature based on the imagery it overlaps with.
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
backbone
Optional torchvision model. Backbone CNN model to be used for creating the base of the FeatureClassifier, which is resnet34 by default.
pretrained_path
Optional string. Path where pre-trained model is saved.
- Returns
FeatureClassifier Object
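A minimal training sketch, assuming chips exported in the Labeled Tiles format (the path is a placeholder):
from arcgis.learn import prepare_data, FeatureClassifier

data = prepare_data(r"C:\data\damage_assessment_chips", batch_size=16)  # placeholder path to Labeled Tiles export
clf = FeatureClassifier(data)
clf.fit(epochs=10)
clf.plot_confusion_matrix()   # per-class accuracy on the validation set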
-
categorize_features
(feature_layer, raster=None, class_value_field='class_val', class_name_field='prediction', confidence_field='confidence', cell_size=1, coordinate_system=None, predict_function=None, batch_size=64, overwrite=False)¶ Deprecated since version 1.7.1: Please use arcgis.learn.classify_objects() instead
Categorizes each feature by classifying its attachments or an image of its geographical area (using the provided imagery layer) and updates the feature layer with the prediction results in the output_label_field.
Argument
Description
feature_layer
Required. Public Feature Layer or path of local feature class for classification with read, write, edit permissions.
raster
Optional. Imagery layer or path of local raster to be used for exporting image chips. (Requires arcpy)
class_value_field
Required string. Output field to be added in the layer, containing class value of predictions.
class_name_field
Required string. Output field to be added in the layer, containing class name of predictions.
confidence_field
Optional string. Output column name to be added in the layer which contains the confidence score.
cell_size
Optional float. Cell size to be used for exporting the image chips.
coordinate_system
Optional. Cartographic Coordinate System to be used for exporting the image chips.
predict_function
Optional function. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes a list of tuples as input. Each tuple has the predicted class as its first element and the confidence score as its second element. The function should return the final tuple classifying the feature and its confidence.
batch_size
Optional integer. The number of images or tiles to process at a time.
The default value is 64.
overwrite
Optional boolean. If set to True the output fields will be overwritten by new values.
The default value is False.
- Returns
Boolean : True if operation is successful, False otherwise
-
classify_features
(feature_layer, labeled_tiles_directory, input_label_field, output_label_field, confidence_field=None, predict_function=None)¶ Classifies the exported images and updates the feature layer with the prediction results in the output_label_field.
Argument
Description
feature_layer
Required. Feature Layer for classification.
labeled_tiles_directory
Required. Folder structure containing images and labels folder. The chips should have been generated using the export training data tool in the Labeled Tiles format, and the labels should contain the OBJECTIDs of the features to be classified.
input_label_field
Required. Value field name which created the labeled tiles. This field should contain the OBJECTIDs of the features to be classified. In case of attachments this field is not used.
output_label_field
Required. Output column name to be added in the layer which contains predictions.
confidence_field
Optional. Output column name to be added in the layer which contains the confidence score.
predict_function
Optional. Used for calculation of the final prediction result when each feature has more than one attachment. The predict_function takes a list of tuples as input. Each tuple has the predicted class as its first element and the confidence score as its second element. The function should return the final tuple classifying the feature and its confidence.
- Returns
Boolean: True if the operation is successful, False otherwise.
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Trains the model for the specified number of epochs, using the specified learning rate.
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 1.7 (experimental support).
The default value is False.
-
classmethod
from_model
(emd_path, data=None)¶ Creates a Feature classifier from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
- Returns
FeatureClassifier Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed, then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
plot_confusion_matrix
()¶ Plots a confusion matrix of the model predictions to evaluate accuracy
-
plot_hard_examples
(num_examples)¶ Plots the hard examples with their heatmaps.
Argument
Description
num_examples
Number of hard examples to plot
-
predict
(img_path)¶ Runs prediction on an Image.
Argument
Description
img_path
Required. Path to the image file to make the predictions on.
-
predict_folder_and_create_layer
(folder, feature_layer_name, gis=None, prediction_field='predict', confidence_field='confidence')¶ Predicts on images present in the given folder and creates a feature layer.
Argument
Description
folder
Required String. Folder to inference on.
feature_layer_name
Required String. The name of the feature layer used to publish.
gis
Optional GIS Object, the GIS on which this tool runs. If not specified, the active GIS is used.
prediction_field
Optional String. The field name to use to add predictions.
confidence_field
Optional String. The field name to use to add confidence.
- Returns
FeatureCollection Object
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by SingleShotDetector, currently.) If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.
Framework choice: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.
kwargs
Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise. The default is False.
-
show_results
(rows=5, **kwargs)¶ Displays the results of a trained model on a part of the validation set.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
RetinaNet¶
-
class
arcgis.learn.
RetinaNet
(data, scales=None, ratios=None, backbone=None, pretrained_path=None)¶ Creates a RetinaNet Object Detector with the specified zoom scales and aspect ratios. Based on the Fast.ai notebook at https://github.com/fastai/fastai_dev/blob/master/dev_nb/102a_coco.ipynb
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
scales
Optional list of float values. Zoom scales of anchor boxes.
ratios
Optional list of float values. Aspect ratios of anchor boxes.
backbone
Optional function. Backbone CNN model to be used for creating the base of the RetinaNet, which is resnet50 by default. Compatible backbones: 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152'
pretrained_path
Optional string. Path where pre-trained model is saved.
- Returns
RetinaNet Object
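A minimal training sketch, assuming chips exported in the Pascal VOC rectangles format (the path and anchor settings are illustrative):
from arcgis.learn import prepare_data, RetinaNet

data = prepare_data(r"C:\data\well_pad_chips", batch_size=16)   # placeholder path
retinanet = RetinaNet(data, scales=[1.0, 1.26, 1.59], ratios=[0.5, 1.0, 2.0])  # illustrative anchor settings
retinanet.lr_find()
retinanet.fit(epochs=20)
print(retinanet.average_precision_score(mean=True))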
-
average_precision_score
(detect_thresh=0.5, iou_thresh=0.1, mean=False, show_progress=True)¶ Computes average precision on the validation set for each class.
Argument
Description
detect_thresh
Optional float. The probability above which a detection will be considered for computing average precision.
iou_thresh
Optional float. The intersection over union threshold with the ground truth labels, above which a predicted bounding box will be considered a true positive.
mean
Optional bool. If False returns class-wise average precision otherwise returns mean average precision.
- Returns
dict if mean is False otherwise float
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Trains the model for the specified number of epochs, using the specified learning rate.
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If lr=None, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to True, the log will be saved at <dataset-path>/training_log, which can be visualized in TensorBoard. Requires tensorboardx version 1.7 (experimental support).
The default value is False.
-
classmethod
from_model
(emd_path, data=None)¶ Creates a RetinaNet Object Detector from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
- Returns
RetinaNet Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed, then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
predict
(image_path, threshold=0.5, nms_overlap=0.1, return_scores=True, visualize=False, resize=False)¶ Predicts and displays the results of a trained model on a single image.
Argument
Description
image_path
Required. Path to the image file to make the predictions on.
threshold
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
return_scores
Optional boolean. Will return the probability scores of the bounding box predictions if True.
visualize
Optional boolean. Displays the image with predicted bounding boxes if True.
resize
Optional boolean. Resizes the image to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the image is resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the image (of the same size as the model was trained on).
- Returns
‘List’ of xmin, ymin, width, height of predicted bounding boxes on the given image
-
predict_video
(input_video_path, metadata_file, threshold=0.5, nms_overlap=0.1, track=False, visualize=False, output_file_path=None, multiplex=False, multiplex_file_path=None, tracker_options={'assignment_iou_thrd': 0.3, 'detect_frames': 10, 'vanish_frames': 40}, visual_options={'color': (255, 255, 255), 'fontface': 0, 'show_labels': True, 'show_scores': True, 'thickness': 2}, resize=False)¶ Runs prediction on a video and appends the output VMTI predictions in the metadata file.
Argument
Description
input_video_path
Required. Path to the video file to make the predictions on.
metadata_file
Required. Path to the metadata csv file where the predictions will be saved in VMTI format.
threshold
Optional float. The probability above which a detection will be considered.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
track
Optional bool. Set this parameter as True to enable object tracking.
visualize
Optional boolean. If True a video is saved with prediction results.
output_file_path
Optional path. Path of the final video to be saved. If not supplied, video will be saved at path input_video_path appended with _prediction.
multiplex
Optional boolean. Runs Multiplex using the VMTI detections.
multiplex_file_path
Optional path. Path of the multiplexed video to be saved. By default a new file with _multiplex.MOV extension is saved in the same folder.
tracker_options
Optional dictionary. Set different parameters for object tracking: assignment_iou_thrd is the threshold used for assigning detections to trackers, vanish_frames is the number of frames an object should be absent before it is considered vanished, and detect_frames is the number of frames an object should be detected before it is tracked.
visual_options
Optional dictionary. Set different parameters for visualization: show_scores (boolean) to show scores on predictions, show_labels (boolean) to show labels on predictions, thickness (integer) to set the thickness of the box, fontface (integer) for an OpenCV font face value, and color, a (B, G, R) tuple with values between 0-255.
resize
Optional boolean. Resizes the video frames to the same size (chip_size parameter in prepare_data) that the model was trained on, before detecting objects. Note that if resize_to parameter was used in prepare_data, the video frames are resized to that size instead.
By default, this parameter is false and the detections are run in a sliding window fashion by applying the model on cropped sections of the frame (of the same size as the model was trained on).
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by SingleShotDetector, currently.) If the framework used is TF-ONNX, batch_size can be passed as an optional keyword argument.
Framework choice: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS Object. Used for publishing the item. If not specified then active gis user is taken.
kwargs
Optional parameters: Boolean overwrite. If True, it will overwrite the item on ArcGIS Online/Enterprise. The default is False.
-
show_results
(rows=5, thresh=0.5, nms_overlap=0.1)¶ Displays the results of a trained model on a part of the validation set.
Argument
Description
rows
Optional int. Number of rows of results to be displayed.
thresh
Optional float. The probability above which a detection will be considered valid.
nms_overlap
Optional float. The intersection over union threshold with other predicted bounding boxes, above which the box with the highest score will be considered a true positive.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
EntityRecognizer¶
-
class
arcgis.learn.
EntityRecognizer
(data=None, lang='en')¶ Creates an entity recognition model to extract text entities from unstructured text documents. Based on spaCy's EntityRecognizer.
Argument
Description
data
Required. Data object returned from the prepare_data function.
lang
Optional string. Language-specific code, named according to the language's ISO code. The default value is 'en' for English.
- Returns
EntityRecognizer Object
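A minimal sketch, assuming training data prepared with dataset_type='ner_json' (the paths and field names are placeholders):
from arcgis.learn import prepare_data, EntityRecognizer

data = prepare_data(path=r"C:\data\ner_training_data",            # placeholder path to labeled text data
                    dataset_type="ner_json",
                    class_mapping={"address_tag": "Address"})     # placeholder address field
ner = EntityRecognizer(data, lang="en")
ner.fit(epochs=20)
results = ner.extract_entities(r"C:\data\new_reports")            # placeholder folder of documents
print(results.head())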
-
extract_entities
(text_list, drop=True)¶ Extracts the entities from the documents at the mentioned path or in text_list.
Argument
Description
text_list
Required string (path) or list (documents). List of documents for entity extraction, or path to the documents.
drop
Optional bool. Specifies whether documents without an address need to be dropped from the results.
- Returns
Pandas DataFrame
-
fit
(epochs=20, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, **kwargs)¶ Trains an EntityRecognizer model for the specified number of epochs.
Argument
Description
epochs
Optional integer. Number of times the model will train on the complete dataset.
lr
Optional float. Learning rate to be used for training the model.
one_cycle
Not implemented for this model.
early_stopping
Not implemented for this model.
checkpoint
Not implemented for this model.
-
classmethod
from_model
(emd_path, data=None)¶ Creates an EntityRecognizer from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required DatabunchNER object or None. Returned data object from prepare_data function or None for inferencing.
- Returns
EntityRecognizer Object
-
load
(name_or_path)¶ Loads a saved EntityRecognition model from disk.
Argument
Description
name_or_path
Required string. Path of the emd file.
-
lr_find
(allow_plot=True)¶ Not implemented for this model.
-
save
(name_or_path, **kwargs)¶ Saves the model weights and creates an Esri Model Definition.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If a path is passed, then it stores at the specified path with the model name as the directory name and creates all the intermediate directories.
-
show_results
(ds_type='valid')¶ Runs entity extraction on a random batch from the mentioned ds_type.
Argument
Description
ds_type
Optional string, defaults to valid.
- Returns
Pandas DataFrame
-
unfreeze
()¶ Not implemented for this model.
PSPNetClassifier¶
-
class
arcgis.learn.
PSPNetClassifier
(data, backbone=None, use_unet=True, pyramid_sizes=[1, 2, 3, 6], pretrained_path=None, unet_aux_loss=False)¶ Model architecture from https://arxiv.org/abs/1612.01105. Creates a PSPNet Image Segmentation/ Pixel Classification model.
Argument
Description
data
Required fastai Databunch. Returned data object from prepare_data function.
backbone
Optional function. Backbone CNN model to be used for creating the base of the PSPNetClassifier, which is resnet50 by default. It supports the ResNet, DenseNet, and VGG families.
use_unet
Optional Bool. Specify whether to use Unet-Decoder or not, Default True.
pyramid_sizes
Optional list. The sizes at which the feature map is pooled. Defaults to the best-performing set reported in the paper, i.e., (1, 2, 3, 6).
pretrained
Optional bool. If True, uses a pretrained backbone.
pretrained_path
Optional string. Path where pre-trained PSPNet model is saved.
unet_aux_loss
Optional bool. If True, uses an auxiliary loss for PSUnet. Default is False. This flag is applicable only when use_unet is True.
- Returns
PSPNetClassifier Object
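Example (illustrative sketch): constructing a PSPNetClassifier. The chip folder, batch size and the string form of the backbone argument are assumptions; verify accepted backbone values against the supported_backbones property of your version.

from arcgis.learn import prepare_data, PSPNetClassifier

# Hypothetical folder of exported training chips; batch size is illustrative
data = prepare_data(r"C:\land_cover_chips", batch_size=8)

psp = PSPNetClassifier(data,
                       backbone="resnet50",         # default backbone (assumed string form)
                       use_unet=True,               # U-Net style decoder
                       pyramid_sizes=[1, 2, 3, 6])  # pooling sizes from the paper
print(psp.supported_backbones)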
-
accuracy
(input=None, target=None, void_code=0, class_mapping=None)¶
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Train the model for the specified number of epochs and using the specified learning rates
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If
lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True' the log will be saved at <dataset-path>/training_log which can be visualized in tensorboard. Requires tensorboardx version 1.7 (experimental support).
The default value is ‘False’.
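Example (illustrative sketch): a typical training loop combining lr_find and fit. The epoch count and learning-rate slice are arbitrary values chosen for illustration.

# Inspect the learning-rate finder plot, then train with callbacks
psp.lr_find()

psp.fit(epochs=20,
        lr=slice(1e-5, 1e-3),   # slice of floats for discriminative rates (illustrative)
        early_stopping=True,    # stop when validation loss stops improving
        checkpoint=True)        # keep the best model by validation loss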
-
freeze
()¶ Freezes the pretrained backbone.
-
classmethod
from_model
(emd_path, data=None)¶ Creates a PSPNet classifier from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from prepare_data function or None for inferencing.
- Returns
PSPNetClassifier Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by
SingleShotDetector
, currently.) If the framework used is
TF-ONNX
,
batch_size
can be passed as an optional keyword argument.
Framework choices: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS object. Used for publishing the item. If not specified, the active GIS is used.
kwargs
Optional parameters. Boolean overwrite: if True, overwrites an existing item of the same name on ArcGIS Online/Enterprise. Default False.
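Example (illustrative sketch): saving and publishing the trained model. The model name is hypothetical, and publishing assumes a signed-in "home" profile or an active ArcGIS Pro session.

from arcgis.gis import GIS

gis = GIS("home")   # assumes an existing signed-in profile

# Save weights, EMD and DLPK, and publish the DLPK as a portal item
psp.save("land_cover_pspnet",
         framework="PyTorch",
         publish=True,
         gis=gis,
         overwrite=True)   # overwrite an existing item of the same name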
-
show_results
(rows=5, **kwargs)¶ Displays the results of a trained model on a part of the validation set.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
MaskRCNN¶
-
class
arcgis.learn.
MaskRCNN
(data, backbone=None, pretrained_path=None)¶ Creates a
MaskRCNN
instance segmentation object.
Argument
Description
data
Required fastai Databunch. Returned data object from
prepare_data
function.
backbone
Optional function. Backbone CNN model to be used for creating the base of the MaskRCNN, which is resnet50 by default. Compatible backbones: ‘resnet50’
pretrained_path
Optional string. Path where pre-trained model is saved.
- Returns
MaskRCNN
Object
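Example (illustrative sketch): constructing a MaskRCNN model. The chip folder, batch size and the dataset_type value for instance-segmentation chips are assumptions and should be checked against your exported training data.

from arcgis.learn import prepare_data, MaskRCNN

# Hypothetical chip folder; dataset_type value is an assumption for
# training data exported in an instance-segmentation (masks) format
data = prepare_data(r"C:\building_footprint_chips",
                    dataset_type="RCNN_Masks",
                    batch_size=4)

mask_model = MaskRCNN(data, backbone="resnet50")   # resnet50 is the only listed backbone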
-
average_precision_score
(detect_thresh=0.5, iou_thresh=0.5, mean=False, show_progress=True)¶ Computes average precision on the validation set for each class.
- Returns
dict if mean is False, otherwise float
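Example (illustrative sketch): per-class versus mean average precision for the mask_model object above. Threshold values are illustrative.

# Per-class average precision on the validation set (returns a dict)
ap_per_class = mask_model.average_precision_score(detect_thresh=0.5,
                                                  iou_thresh=0.5,
                                                  mean=False)
print(ap_per_class)

# Mean average precision across classes (returns a float)
print(mask_model.average_precision_score(mean=True))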
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Train the model for the specified number of epochs and using the specified learning rates
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If
lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True' the log will be saved at <dataset-path>/training_log which can be visualized in tensorboard. Requires tensorboardx version 1.7 (experimental support).
The default value is ‘False’.
-
classmethod
from_model
(emd_path, data=None)¶ Creates a
MaskRCNN
instance segmentation object from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from
prepare_data
function or None for inferencing.
- Returns
MaskRCNN Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by
SingleShotDetector
, currently.) If the framework used is
TF-ONNX
,
batch_size
can be passed as an optional keyword argument.
Framework choices: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS object. Used for publishing the item. If not specified, the active GIS is used.
kwargs
Optional parameters. Boolean overwrite: if True, overwrites an existing item of the same name on ArcGIS Online/Enterprise. Default False.
-
show_results
(rows=4, mode='mask', mask_threshold=0.5, box_threshold=0.7, imsize=5, index=0, alpha=0.5, cmap='tab20', **kwargs)¶ Displays the results of a trained model on a part of the validation set.
Argument
Description
mode
Required string, one of ['bbox', 'mask', 'bbox_mask'].
bbox
- For visualizing only bounding boxes.
mask
- For visualizing only masks.
bbox_mask
- For visualizing both masks and bounding boxes.
mask_threshold
Optional float. The probability above which a pixel will be considered part of the mask.
box_threshold
Optional float. The probability above which a detection will be considered valid.
rows
Optional int. Number of rows of results to be displayed.
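Example (illustrative sketch): the visualization modes of show_results for the mask_model object above. The threshold values are illustrative.

# Boxes and masks together, with illustrative thresholds
mask_model.show_results(rows=4,
                        mode="bbox_mask",
                        mask_threshold=0.5,
                        box_threshold=0.7)

# Masks only
mask_model.show_results(mode="mask")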
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
DeepLab¶
-
class
arcgis.learn.
DeepLab
(data, backbone=None, pretrained_path=None)¶ Creates a
DeepLab
semantic segmentation object.
Argument
Description
data
Required fastai Databunch. Returned data object from
prepare_data
function.
backbone
Optional function. Backbone CNN model to be used for creating the base of the DeepLab, which is resnet101 by default since it is pretrained in torchvision. It supports the ResNet, DenseNet, and VGG families.
pretrained_path
Optional string. Path where pre-trained model is saved.
- Returns
DeepLab
Object
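Example (illustrative sketch): constructing a DeepLab model. The chip folder and batch size are assumptions; the backbone can be left as the default resnet101 or chosen from supported_backbones.

from arcgis.learn import prepare_data, DeepLab

# Hypothetical folder of exported training chips; batch size is illustrative
data = prepare_data(r"C:\lulc_chips", batch_size=8)

deeplab = DeepLab(data)                # defaults to a resnet101 backbone
print(deeplab.supported_backbones)     # check before overriding the default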
-
accuracy
()¶
-
fit
(epochs=10, lr=None, one_cycle=True, early_stopping=False, checkpoint=True, tensorboard=False, **kwargs)¶ Train the model for the specified number of epochs and using the specified learning rates
Argument
Description
epochs
Required integer. Number of cycles of training on the data. Increase it if underfitting.
lr
Optional float or slice of floats. Learning rate to be used for training the model. If
lr=None
, an optimal learning rate is automatically deduced for training the model.
one_cycle
Optional boolean. Parameter to select 1cycle learning rate schedule. If set to False no learning rate schedule is used.
early_stopping
Optional boolean. Parameter to add early stopping. If set to ‘True’ training will stop if validation loss stops improving for 5 epochs.
checkpoint
Optional boolean. Parameter to save the best model during training. If set to True the best model based on validation loss will be saved during training.
tensorboard
Optional boolean. Parameter to write the training log. If set to 'True' the log will be saved at <dataset-path>/training_log which can be visualized in tensorboard. Requires tensorboardx version 1.7 (experimental support).
The default value is ‘False’.
-
classmethod
from_model
(emd_path, data=None)¶ Creates a
DeepLab
semantic segmentation object from an Esri Model Definition (EMD) file.
Argument
Description
emd_path
Required string. Path to Esri Model Definition file.
data
Required fastai Databunch or None. Returned data object from
prepare_data
function or None for inferencing.
- Returns
DeepLab Object
-
load
(name_or_path)¶ Loads a saved model for inferencing or fine tuning from the specified path or model name.
Argument
Description
name_or_path
Required string. Name of the model to load from the pre-defined location. If a path is passed then it loads from the specified path with the model name as the directory name. A path to a ".pth" file can also be passed.
-
lr_find
(allow_plot=True)¶ Runs the Learning Rate Finder and displays the graph of its output. Helps in choosing the optimum learning rate for training the model.
-
save
(name_or_path, framework='PyTorch', publish=False, gis=None, **kwargs)¶ Saves the model weights, creates an Esri Model Definition and Deep Learning Package zip for deployment to Image Server or ArcGIS Pro.
Argument
Description
name_or_path
Required string. Name of the model to save. It stores it at the pre-defined location. If path is passed then it stores at the specified path with model name as directory name and creates all the intermediate directories.
framework
Optional string. Defines the framework of the model. (Only supported by
SingleShotDetector
, currently.) If the framework used is
TF-ONNX
,
batch_size
can be passed as an optional keyword argument.
Framework choices: 'PyTorch' and 'TF-ONNX'
publish
Optional boolean. Publishes the DLPK as an item.
gis
Optional GIS object. Used for publishing the item. If not specified, the active GIS is used.
kwargs
Optional parameters. Boolean overwrite: if True, overwrites an existing item of the same name on ArcGIS Online/Enterprise. Default False.
-
show_results
(rows=5, **kwargs)¶ Displays the results of a trained model on a part of the validation set.
-
property
supported_backbones
¶ Supported torchvision backbones for this model.
-
unfreeze
()¶ Unfreezes the earlier layers of the model for fine-tuning.
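Example (illustrative sketch): a two-stage fine-tuning workflow using unfreeze, shown here with the DeepLab object from the construction sketch above but applicable to the other segmentation models. Epoch counts, learning rates and the saved model name are arbitrary.

# Stage 1: train with the pretrained backbone frozen (the default state)
deeplab.fit(epochs=10, early_stopping=True)

# Stage 2: unfreeze the earlier layers and fine-tune at lower learning rates
deeplab.unfreeze()
deeplab.lr_find()
deeplab.fit(epochs=10, lr=slice(1e-6, 1e-4))

# Save the fine-tuned model for deployment
deeplab.save("lulc_deeplab")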