NvInferModelInput

Inheritance diagram of NvInferModelInput

class savant.deepstream.nvinfer.model.NvInferModelInput(object='auto.frame', layer_name=None, shape=None, maintain_aspect_ratio=False, symmetric_padding=False, scale_factor=1.0, offsets=(0.0, 0.0, 0.0), color_format=ModelColorFormat.RGB, preprocess_object_meta=None, preprocess_object_image=None, object_min_width=None, object_min_height=None, object_max_width=None, object_max_height=None)

NvInfer model input configuration template.

Example

model:
    # model configuration
    input:
        layer_name: input_1
        shape: [3, 544, 960]
        scale_factor: 0.0039215697906911373  # ~1/255: scales pixel values to [0, 1]
object_min_width: int | None = None

Model infers only on objects with this minimum width.

object_min_height: int | None = None

Model infers only on objects with this minimum height.

object_max_width: int | None = None

Model infers only on objects with this maximum width.

object_max_height: int | None = None

Model infers only on objects with this maximum height.
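Taken together, these four filters restrict inference to objects inside a size window. A hypothetical fragment (the object label and pixel values below are illustrative, not defaults):

model:
    input:
        object: detector.person
        object_min_width: 32
        object_min_height: 32
        object_max_width: 640
        object_max_height: 640

Objects falling outside the window are simply skipped by this model; they are not removed from the frame metadata.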

color_format: ModelColorFormat = ModelColorFormat.RGB

Color format required by the model.

Example

color_format: rgb
# color_format: bgr
# color_format: gray
property height: int | None

Input image height.

layer_name: str | None = None

Model input layer name.

maintain_aspect_ratio: bool = False

Indicates whether the input preprocessing should maintain image aspect ratio.

object: str = 'auto.frame'

A text label in the form model_name.object_label that indicates which objects will be used as input data. The special value frame specifies the entire frame as model input.
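For example (the model and object label names here are illustrative):

object: peoplenet.person
# or run inference on whole frames:
# object: frame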

offsets: Tuple[float, float, float] = (0.0, 0.0, 0.0)

Array of mean values of color components to be subtracted from each pixel.

Example

offsets: [0.0, 0.0, 0.0]
preprocess_object_image: PreprocessObjectImage | None = None

Object image preprocessing Python/C++ function configuration.

preprocess_object_meta: PyFunc | None = None

Object metadata preprocessing.

The preprocessing implementation should be written as a subclass of BasePreprocessObjectMeta.
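The idea can be sketched in plain Python. BBox below is a stand-in dataclass, not Savant's actual bounding-box class, and CropTopHalf does not derive from the real base class; an actual implementation would subclass BasePreprocessObjectMeta and use the signature defined in the Savant API (check savant.base.input_preproc for the exact interface):

```python
from dataclasses import dataclass, replace


@dataclass
class BBox:
    # Stand-in for the framework's bounding box: top-left corner plus size.
    left: float
    top: float
    width: float
    height: float


class CropTopHalf:
    """Illustrative metadata preprocessor: shrink each object's box to its
    upper half (e.g. to infer on the head region of a detected person).
    A real implementation would subclass BasePreprocessObjectMeta."""

    def __call__(self, bbox: BBox) -> BBox:
        # Return a new box with the same origin and half the height.
        return replace(bbox, height=bbox.height / 2)


crop = CropTopHalf()
cropped = crop(BBox(left=10.0, top=20.0, width=100.0, height=80.0))
print(cropped)
```

The transformation is pure: it returns a modified copy of the box rather than mutating the original, which mirrors how metadata preprocessing only affects what the model sees, not the stored object metadata.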

scale_factor: float = 1.0

Pixel scaling/normalization factor.

shape: Tuple[int, int, int] | None = None

(Channels, Height, Width) tuple that indicates input image shape.

Example

shape: [3, 224, 224]
symmetric_padding: bool = False

Indicates whether the input preprocessing should symmetrically pad the image when it’s scaled. By default the images are padded asymmetrically.
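Combining maintain_aspect_ratio with symmetric_padding gives letterbox-style preprocessing: the image is scaled to fit the input shape and padded equally on both sides. A hypothetical fragment (the shape values are illustrative):

model:
    input:
        shape: [3, 544, 960]
        maintain_aspect_ratio: true
        symmetric_padding: true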

property width: int | None

Input image width.