utilities.segmentation_utils.ImagePreprocessor module#
- class PreprocessingQueue(queue: list[Callable], arguments: list[Dict])[source]#
Bases:
object
Object that stores a queue of preprocessing functions together with the arguments to call them with.
Parameters#
- queue list[Callable]:
list of functions to be applied
- arguments list[dict]:
list of arguments to be passed to the functions
- arguments: list[Dict]#
- get_queue_length() int [source]#
Returns the length of the queue
Returns#
- return int:
length of the queue
- queue: list[Callable]#
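The class above pairs each queued callable with a dict of keyword arguments. A minimal plain-Python stand-in (not the library's implementation) sketching how the two parallel lists work together:

```python
from typing import Any, Callable

# Hypothetical stand-in mirroring the documented PreprocessingQueue fields:
# a list of callables plus a parallel list of keyword-argument dicts.
class MiniQueue:
    def __init__(self, queue: list[Callable], arguments: list[dict]):
        self.queue = queue
        self.arguments = arguments

    def get_queue_length(self) -> int:
        return len(self.queue)

    def apply(self, value: Any) -> Any:
        # Apply each queued function with its paired arguments, in order.
        for fun, kwargs in zip(self.queue, self.arguments):
            value = fun(value, **kwargs)
        return value

q = MiniQueue(
    queue=[lambda x, factor: x * factor, lambda x, offset: x + offset],
    arguments=[{"factor": 2}, {"offset": 1}],
)
print(q.get_queue_length())  # 2
print(q.apply(10))           # 21
```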
- class PreprocessorInterface(*args, **kwargs)[source]#
Bases:
Protocol
- arguments: list[Dict]#
- queue: list[Callable]#
- augmentation_pipeline(image, mask, input_size: tuple[int, int], output_size: tuple[int, int], image_queue: PreprocessingQueue, mask_queue: PreprocessingQueue, output_reshape: tuple[int, int] | None = None, channels: int = 3, seed: int = 0) tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor] [source]#
Function that executes a set of predefined augmentation functions stored in a PreprocessingQueue object. It augments both the image and the mask with the same functions and arguments.
Parameters#
- tf.Tensor image:
The image to be processed
- tf.Tensor mask:
The mask to be processed
- tuple(int, int) input_size:
Input size of the image
- tuple(int, int) output_size:
Output size of the image
Keyword Arguments#
- tuple(int, int), optional output_reshape:
In case the image is a column vector, this is the shape it should be reshaped to. Defaults to None.
- PreprocessingQueue, optional image_queue:
Augmentation processing queue for images, defaults to None
- PreprocessingQueue, optional mask_queue:
Augmentation processing queue for masks, defaults to None
- int, optional channels:
Number of bands in the image, defaults to 3
- int, optional seed:
The seed to be used in the pipeline, defaults to 0
Raises#
- raises ValueError:
If only one of the queues is passed
Returns#
- return tuple(tf.Tensor, tf.Tensor):
tuple of the processed image and mask
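The key property of the pipeline is that the image and its mask receive the same seeded random decisions, so the two stay spatially aligned. A NumPy sketch of that idea, using a hypothetical `augment_pair` helper rather than the library's API:

```python
import numpy as np

# Sketch of the shared-seed idea: one random decision drives the same
# geometric transform on both the image and the mask.
def augment_pair(image, mask, seed=0):
    rng = np.random.default_rng(seed)
    if rng.random() < 0.5:  # one shared random decision
        image = np.fliplr(image)
        mask = np.fliplr(mask)
    return image, mask

image = np.arange(9).reshape(3, 3)
mask = (image > 4).astype(int)
aug_image, aug_mask = augment_pair(image, mask, seed=0)
# The mask still marks exactly the same pixels in the augmented image.
assert np.array_equal(aug_mask, (aug_image > 4).astype(int))
```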
- flatten(image, input_size, channels=1) Tensor [source]#
Function that flattens an image while preserving the number of channels
Parameters#
- tf.Tensor image:
image to be flattened
- tuple(int, int) input_size:
input size of the image
Keyword Arguments#
- int, optional channels:
number of channels to preserve, defaults to 1
Returns#
- return tf.Tensor:
flattened image
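A NumPy sketch of the channel-preserving flatten, written here as a hypothetical stand-alone function: the spatial dimensions collapse into one axis while the channel axis is kept.

```python
import numpy as np

# Channel-preserving flatten: (H, W, C) -> (H*W, C).
def flatten(image, input_size, channels=1):
    h, w = input_size
    return image.reshape(h * w, channels)

img = np.zeros((4, 5, 3))
flat = flatten(img, (4, 5), channels=3)
print(flat.shape)  # (20, 3)
```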
- generate_default_queue(seed=0) tuple[utilities.segmentation_utils.ImagePreprocessor.PreprocessingQueue, utilities.segmentation_utils.ImagePreprocessor.PreprocessingQueue] [source]#
Generates the default processing queue
Keyword Arguments#
- int, optional seed:
seed to be used for the random functions, defaults to 0
Returns#
- return tuple(PreprocessingQueue, PreprocessingQueue):
default image and mask queues
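The default queues are not enumerated on this page; a plausible sketch of the pairing, with illustrative operation names that are assumptions rather than the library's actual defaults: geometric operations, which must stay aligned, appear in both queues with the same seed, while photometric operations appear only in the image queue.

```python
# Hypothetical sketch of a default queue pair. The operation names are
# illustrative, not the library's actual defaults.
def generate_default_queue(seed=0):
    geometric = [("random_flip_left_right", {"seed": seed})]
    photometric = [("random_brightness", {"seed": seed, "max_delta": 0.2})]
    image_queue = geometric + photometric
    mask_queue = geometric  # masks skip brightness/contrast changes
    return image_queue, mask_queue

image_q, mask_q = generate_default_queue(seed=42)
print(len(image_q), len(mask_q))  # 2 1
```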
- onehot_encode(masks, output_size, num_classes) Tensor [source]#
Function that one-hot encodes a batch of masks
Parameters#
- tf.Tensor masks:
Batch of masks to be encoded
- tuple(int, int) output_size:
Output size of the masks
- int num_classes:
Number of classes in the masks
Returns#
- return tf.Tensor:
Batch of one-hot encoded masks
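A NumPy sketch of batch one-hot encoding, using a hypothetical stand-alone function rather than the library call: each integer class label expands into a vector along a new final axis.

```python
import numpy as np

# One-hot encode a batch of integer class masks:
# (batch, H, W) -> (batch, H, W, num_classes).
def onehot_encode(masks, num_classes):
    # Advanced indexing into an identity matrix selects the one-hot row
    # for every label in the batch at once.
    return np.eye(num_classes, dtype=np.float32)[masks]

masks = np.array([[[0, 1], [2, 1]]])  # batch of one 2x2 mask
encoded = onehot_encode(masks, num_classes=3)
print(encoded.shape)     # (1, 2, 2, 3)
print(encoded[0, 0, 1])  # [0. 1. 0.]
```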