
NVIDIA TensorRT Sample "sampleINT8API"

The sampleINT8API sample demonstrates how to:

  • Use nvinfer1::ITensor::setDynamicRange to set the per-tensor dynamic range
  • Use nvinfer1::ILayer::setPrecision to set the computation precision of a layer
  • Use nvinfer1::ILayer::setOutputType to set the output tensor data type of a layer
  • Overall, the sample showcases how to perform INT8 inference without using INT8 calibration
  • Supports image classification ONNX models: resnet50, vgg19, and mobilenet
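The per-tensor dynamic range set via setDynamicRange drives INT8 quantization: TensorRT treats the range as symmetric and maps it onto the representable INT8 levels. As a rough, self-contained sketch of that mapping (illustrative only, not the sample's actual code), the scale implied by a dynamic range and the resulting quantization could look like:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

// Symmetric INT8 quantization: the dynamic range [rmin, rmax] is collapsed
// to amax = max(|rmin|, |rmax|), and real values map onto [-127, 127].
float scaleFromDynamicRange(float rmin, float rmax) {
    float amax = std::max(std::fabs(rmin), std::fabs(rmax));
    return amax / 127.0f;  // size of one INT8 step
}

int8_t quantize(float x, float scale) {
    float q = std::round(x / scale);
    q = std::min(127.0f, std::max(-127.0f, q));  // clamp to the INT8 range
    return static_cast<int8_t>(q);
}

float dequantize(int8_t q, float scale) { return q * scale; }
```

For example, a dynamic range of [-127, 127] yields a scale of exactly 1.0, so values round to the nearest integer and clamp at ±127; a tighter range gives a smaller scale and finer resolution near zero.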

Running the sample

  1. Download the model file from GitHub:
     a. wget https://s3.amazonaws.com/download.onnx/models/opset_3/resnet50.tar.gz (link source: https://github.com/onnx/models/tree/master/resnet50)
     b. tar -xvzf resnet50.tar.gz
     c. Copy resnet50/model.onnx to data/int8_api/resnet50.onnx
  2. Run the sample: ./sample_int8_api [-v or --verbose]

To use this sample with other model files and a custom configuration, perform the following steps:

  1. Download the Model files from GitHub: https://github.com/onnx/models/tree/master/models/image_classification
  2. Create an input image in PPM format with dimensions 224x224x3.
  3. Create a file called reference_labels.txt, with a single ImageNet label per line. You can download the human-readable ImageNet 1000-class labels from https://gist.github.com/yrevar/942d3a0ac09ec9e5eb3a. The reference label file contains only the label name per line; for example, 0:'tench, Tinca tinca' is represented as tench.
  4. Create a file called dynamic_ranges.txt.
     a. To generate the dynamic range file, you need to provide a dynamic range for each network tensor. The sample provides an option to write the names of the network tensors to a file, which can then be used to generate dynamic_ranges.txt. See Usage (2) to generate this file.
     b. In dynamic_ranges.txt, ensure each line contains a tensor name and its floating point dynamic range, for example <tensor_name> : <floating point dynamic range>. The tensor names generated in (a) can be used here as <tensor_name>. The dynamic range can be obtained either from training (by measuring the min/max values of activation tensors in each epoch) or through custom post-processing techniques (similar to TensorRT calibration). You can also use dummy per-tensor dynamic ranges to run the sample, but INT8 inference accuracy may degrade when dummy/random dynamic ranges are provided.

Usage

This sample can be run as follows:

  1. Print help information:
     ./sample_int8_api [-h or --help]
  2. Write network tensors to a file:
     ./sample_int8_api [--model=model_file] [--write_tensors] [--network_tensors_file=network_tensors.txt] [-v or --verbose]
  3. Run INT8 inference with user-provided dynamic ranges:
     ./sample_int8_api [--model=model_file] [--ranges=per_tensor_dynamic_range_file] [--image=image_file] [--reference=reference_file] [--data=/path/to/data/dir] [--useDLACore=] [-v or --verbose]

sampleINT8API needs the following files to build the network and run inference:

  • <network>.onnx - The model file containing the network and trained weights
  • reference_labels.txt - Reference labels file, i.e., the ground-truth ImageNet 1000-class mappings
  • per_tensor_dynamic_range.txt - Custom per-tensor dynamic range file; alternatively, the ranges can be overridden by iterating through the network layers
  • image_to_infer.ppm - PPM image to run inference on

By default, the sample expects these files to be in data/samples/int8_api/ or data/int8_api/. The list of default directories can be changed by adding one or more paths with --data=/new/path as a command line argument.
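TensorRT samples typically resolve these files by trying each data directory in order and falling back to the bare filename. A minimal sketch of that lookup (a hypothetical helper named after the samples' common locateFile utility, not the sample's exact code) is:

```cpp
#include <fstream>
#include <string>
#include <vector>

// Return the first directory in `dirs` that contains `filename`, joined
// into a full path; fall back to the bare filename if none contains it.
std::string locateFile(const std::string& filename,
                       const std::vector<std::string>& dirs) {
    for (const auto& dir : dirs) {
        std::string path = dir;
        if (!path.empty() && path.back() != '/') path += '/';
        path += filename;
        if (std::ifstream(path).good()) return path;  // file exists and opens
    }
    return filename;  // resolve relative to the current directory
}
```

Passing --data=/new/path would, under this scheme, simply prepend another candidate directory to the search list.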