NAME
t.pytorch.predict - Apply a pytorch model to imagery groups in a Space Time Raster Dataset (STRDS)
KEYWORDS
temporal, machine learning, deep learning, pytorch, unet, GPU, predict, imagery, raster, strds
SYNOPSIS
t.pytorch.predict
t.pytorch.predict --help
t.pytorch.predict [-cels] input=name [where=sql_query] [reference_strds=name[,name,...]] [reference_where=sql_query] [reference_suffix=string] [sampling=name[,name,...]] [offset=integer] [auxillary_group=name] [region_relation=string] output=name [title=string] [description=string] model=name model_code=name [vector_tiles=name] [tile_size=integer[,integer,...]] [overlap=integer] configuration=name [mask_json=name] [nprocs=integer] [basename=string] [--overwrite] [--help] [--verbose] [--quiet] [--ui]
Flags:
- -c
- Use CPU as device for prediction; default is to use cuda (GPU) if detected
- -e
- Extend existing STRDS (requires overwrite flag)
- -l
- Limit output to valid range (data outside the valid range is set to valid min/max)
- -s
- Skip incomplete groups (do not fail)
- --overwrite
- Allow output files to overwrite existing files
- --help
- Print usage summary
- --verbose
- Verbose module output
- --quiet
- Quiet module output
- --ui
- Force launching GUI dialog
Parameters:
- input=name [required]
- Name of the input space time raster dataset
- where=sql_query
- WHERE conditions of SQL statement without 'where' keyword used in the temporal GIS framework
- Example: start_time > '2001-01-01 12:30:00'
- reference_strds=name[,name,...]
- Names of the input space time raster datasets
- reference_where=sql_query
- WHERE conditions of SQL statement without 'where' keyword used in the temporal GIS framework
- Where clause to select reference images
- reference_suffix=string
- Suffix to be added to the semantic label of the raster maps in the reference_strds
- sampling=name[,name,...]
- The method to be used for sampling the input dataset
- Options: start, during, overlap, contain, equal, follows, precedes
- Default: start
- offset=integer
- Offset that defines a reference map (e.g. -1 for the previous map (group) in the input STRDS)
- auxillary_group=name
- Input imagery group with time-independent raster maps
- region_relation=string
- Process only maps with this spatial relation to the current computational region
- Options: overlaps, contains, is_contained
- output=name [required]
- Name of the output space time raster dataset
- title=string
- Title of the resulting STRDS
- description=string
- Description of the resulting STRDS
- model=name [required]
- Path to input deep learning model file (.pt)
- model_code=name [required]
- Path to input deep learning model code (.py)
- vector_tiles=name
- Name of input vector map
- Vector map with tiles to process (will be extended by "overlap")
- tile_size=integer[,integer,...]
- Number of rows and columns in tiles (rows, columns)
- overlap=integer
- Number of rows and columns of overlap in tiles
- configuration=name [required]
- Path to JSON file with band configuration in the input deep learning model
- mask_json=name
- JSON file with one or more mask band or map name(s) and reclass rules for masking, e.g. {"mask_band": "1 thru 12 36 = 1", "mask_map": "0"}
- nprocs=integer
- Number of threads for parallel computing
- Default: 1
- basename=string
- Name for output raster map
DESCRIPTION
t.pytorch.predict is a wrapper around the i.pytorch.predict module and supports all relevant flags and options of that module.
t.pytorch.predict compiles the imagery groups passed to i.pytorch.predict from the temporal granules in the input STRDS. The group for each granule can be complemented with raster maps from an auxillary_group and/or a reference_strds, where maps in the reference STRDS are matched with the input STRDS in space and time using the user-defined sampling. If a reference STRDS or an auxiliary group is used, it often makes sense to provide a basename for the resulting raster maps (see the sketch below).
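A minimal sketch of combining an input STRDS with a reference STRDS; the dataset, model and file names below (optical_strds, sar_reference, fusion.pt, fusion.json) are placeholders and not values from this manual:
t.pytorch.predict input=optical_strds reference_strds=sar_reference \
    reference_suffix=sar sampling=during basename=fused \
    output=fusion_result model=fusion.pt model_code=models/ \
    configuration=fusion.json tile_size=512,512 overlap=64 nprocs=4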
To run the module on tile- or orbit-wise repeat-passes, loop over tiles or orbits and use orbit or tile IDs in the where clause of the input and reference STRDS (see the sketch below). STRDS containing mosaics with equal spatial extent do not require special handling.
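A minimal sketch of such a loop, assuming (hypothetically) that the relative orbit number is encoded in the raster map names so it can be matched in the where clause; the STRDS, model and file names are placeholders:
for orbit in 008 051 094 ; do
    t.pytorch.predict -e --overwrite input=S1_GRD output=S1_prediction \
        where="name LIKE '%_R${orbit}_%'" basename=prediction_${orbit} \
        model=model.pt model_code=models/ configuration=model.json \
        tile_size=1024,1024 overlap=128 nprocs=4
done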
Currently supported use-cases are:
- only input STRDS, usually grouped ("one process per scene")
- only input STRDS, usually grouped, with reference defined by offset (e.g. for "repeat-pass"; see the sketch below)
- input STRDS and reference STRDS matched according to the temporal relation given in sampling, with single or grouped semantic labels
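For the repeat-pass use case, a sketch using offset=-1, which pairs each imagery group with the previous granule (group) in the input STRDS; all names are placeholders:
t.pytorch.predict input=S1_GRD output=S1_change offset=-1 \
    model=change.pt model_code=models/ configuration=change.json \
    tile_size=1024,1024 overlap=128 basename=change nprocs=4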
For more information on how machine learning models are applied to the imagery
groups, please consult the manual of i.pytorch.predict.
EXAMPLES
t.pytorch.predict -e --o --v input=Sentinel_3_SLSTR_RBT_L2 model=cloud.pt \
output=Sentinel_3_SLSTR_RBT_L2 tile_size=1024,1024 overlap=164 \
configuration=cloud.json model_code=S3_models/ nprocs=8 \
mask_json=mask_land.json where="start_time > '2024-02-02'"
t.info Sentinel_3_SLSTR_RBT_L2
(...)
time t.pytorch.predict -e --o --v input=Sentinel_3_SLSTR_RBT_L2 model=fsc.pt \
output=Sentinel_3_SLSTR_RBT_L2 tile_size=1024,1024 overlap=164 \
configuration=fsc.json model_code=S3_models/ nprocs=8 \
mask_json=mask_cloud_dl.json where="start_time > '2024-02-02'"
t.info Sentinel_3_SLSTR_RBT_L2
(...)
SEE ALSO
i.pytorch.predict,
Temporal data processing Wiki
AUTHOR
Stefan Blumentrath, NVE
SOURCE CODE
Available at:
t.pytorch.predict source code
(history)
Accessed: Friday Oct 25 13:33:25 2024