Version: 5.0.6-1+cuda10.0
Sample directory contents:

- Makefile
- README.md
- sampleINT8API.cpp
The sampleINT8API sample demonstrates how to perform INT8 inference using per-tensor dynamic ranges supplied by the user, without relying on the INT8 calibrator.
This sample can be run as follows:

1. Print help information:

   ```
   ./sample_int8_api [-h or --help]
   ```

2. Write network tensors to a file:

   ```
   ./sample_int8_api [--model=model_file] [--write_tensors] [--network_tensors_file=network_tensors.txt] [-v or --verbose]
   ```

3. Run INT8 inference with user-provided dynamic ranges:

   ```
   ./sample_int8_api [--model=model_file] [--ranges=per_tensor_dynamic_range_file] [--image=image_file] [--reference=reference_file] [--data=/path/to/data/dir] [--useDLACore=] [-v or --verbose]
   ```
sampleINT8API needs the following files to build the network and run inference:
- `<network>.onnx` - The model file which contains the network and trained weights
- `reference_labels.txt` - Labels reference file, i.e., ground truth ImageNet 1000-class mappings
- `per_tensor_dynamic_range.txt` - Custom per-tensor dynamic range file; alternatively, the user can override the ranges by iterating through the network layers
- `image_to_infer.ppm` - PPM image to run inference with

By default, the sample expects these files to be in `data/samples/int8_api/` or `data/int8_api/`. The list of default directories can be changed by adding one or more paths with `--data=/new/path` as a command line argument.
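The per-tensor dynamic range file pairs tensor names with range values. As a rough sketch (the exact file format here is an assumption, one `tensor_name : range` entry per line, not taken from the sample's documentation), such a file could be parsed like this:

```cpp
#include <istream>
#include <map>
#include <sstream>
#include <string>

// Sketch: parse a per-tensor dynamic range file into a name -> range map.
// Assumed format (hypothetical): one "tensor_name : range" entry per line.
std::map<std::string, float> parseRanges(std::istream& in) {
    std::map<std::string, float> ranges;
    std::string line;
    while (std::getline(in, line)) {
        auto pos = line.find(':');
        if (pos == std::string::npos) continue;  // skip malformed lines
        std::string name = line.substr(0, pos);
        // trim surrounding whitespace from the tensor name
        name.erase(0, name.find_first_not_of(" \t"));
        name.erase(name.find_last_not_of(" \t") + 1);
        ranges[name] = std::stof(line.substr(pos + 1));
    }
    return ranges;
}
```

Each parsed entry would then be applied to the matching network tensor before building the INT8 engine.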