NvInferModelInput

- class savant.deepstream.nvinfer.model.NvInferModelInput(object='auto.frame', layer_name=None, shape=None, maintain_aspect_ratio=False, symmetric_padding=False, scale_factor=1.0, offsets=(0.0, 0.0, 0.0), color_format=ModelColorFormat.RGB, preprocess_object_meta=None, preprocess_object_image=None, object_min_width=None, object_min_height=None, object_max_width=None, object_max_height=None)
- nvinfer model input configuration template.

  Example:

      model:
        # model configuration
        input:
          layer_name: input_1
          shape: [3, 544, 960]
          scale_factor: 0.0039215697906911373

- color_format: ModelColorFormat = ModelColorFormat.RGB
- Color format required by the model.

  Example:

      color_format: rgb
      # color_format: bgr
      # color_format: gray
 - maintain_aspect_ratio: bool = False
- Indicates whether the input preprocessing should maintain image aspect ratio. 
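  For illustration, a minimal sketch of how this flag might be combined with the related symmetric_padding flag from the class signature inside the input section (the shape value is illustrative):

      input:
        shape: [3, 544, 960]
        maintain_aspect_ratio: true
        symmetric_padding: true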
 - object: str = 'auto.frame'
- A text label in the form of model_name.object_label. Indicates which objects will be used as input data. The special value frame is used to specify the entire frame as model input.
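  Example (the detector name and object label below are hypothetical, not part of the API; the second form is the default from the class signature and uses the entire frame):

      object: detector.person
      # object: auto.frame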
 - offsets: Tuple[float, float, float] = (0.0, 0.0, 0.0)
- Array of mean values of color components to be subtracted from each pixel.

  Example:

      offsets: [0.0, 0.0, 0.0]
 - preprocess_object_image: PreprocessObjectImage | None = None
- Object image preprocessing Python/C++ function configuration. 
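  Assuming the usual PyFunc-style fields (module, class_name, optional kwargs), a hedged sketch; the module, class, and kwargs names below are hypothetical:

      preprocess_object_image:
        module: samples.preprocessing.image
        class_name: MyObjectImagePreprocessor
        kwargs:
          padding: 8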
 - preprocess_object_meta: PyFunc | None = None
- Object metadata preprocessing. The preprocessing implementation should be written as a subclass of BasePreprocessObjectMeta.
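  Since the field is a PyFunc, the configuration presumably points at the BasePreprocessObjectMeta subclass by module and class name; a sketch with hypothetical names:

      preprocess_object_meta:
        module: samples.preprocessing.meta
        class_name: MyPreprocessObjectMeta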
 - shape: Tuple[int, int, int] | None = None
- A (Channels, Height, Width) tuple that indicates the input image shape.

  Example:

      shape: [3, 224, 224]