plugin/
CMakeLists.txt
README.md
__init__.py
lenet5.py
mnist_uff_custom_plugin.py
requirements.txt
This sample demonstrates how to use plugins written in C++ with the TensorRT Python bindings and the UFF parser. More specifically, this sample implements a clip layer (as a CUDA kernel), wraps the implementation in a TensorRT plugin (with a corresponding plugin creator), and then generates a shared library module containing its code. The user then dynamically loads this library in Python, which registers the plugin in TensorRT's PluginRegistry and makes it available to the UFF parser.
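For reference, the clip operation that the CUDA kernel implements is a simple elementwise clamp. A minimal NumPy sketch of the same semantics (the bounds below are illustrative defaults, not necessarily the ones this sample uses):

```python
import numpy as np

def clip_reference(x, clip_min=0.0, clip_max=6.0):
    """Elementwise clamp of x into [clip_min, clip_max] -- the
    operation the sample's CUDA kernel performs on the GPU."""
    return np.minimum(np.maximum(x, clip_min), clip_max)

# Values below clip_min are raised to it; values above clip_max are lowered.
x = np.array([-2.0, 0.5, 3.0, 8.0])
clipped = clip_reference(x)
```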
Install the Python dependencies by running either python2 -m pip install -r requirements.txt or python3 -m pip install -r requirements.txt from the top level of this sample. To build the plugin and its corresponding Python bindings, run:
mkdir build && pushd build
cmake ..
Note that if any of the dependencies are not installed in their default locations, you can manually specify them to cmake.
For example:
cmake .. -DTRT_LIB=/home/dev/tensorrt -DTRT_INCLUDE=/home/dev/tensorrt/include -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc
cmake .. displays a complete list of configurable variables: if a variable is set to VARIABLE_NAME-NOTFOUND, you probably need to specify it manually (or correctly set the variable it is derived from).
make -j8
popd
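Once built, the shared library must be loaded into the Python process before the UFF model is parsed, so that the plugin can register itself with TensorRT. A minimal sketch of that dynamic-loading step using ctypes — the plugin path below is an assumption (the actual file name depends on this sample's CMake target), so the runnable part loads the C math library instead, purely to demonstrate the mechanism:

```python
import ctypes

# Hypothetical path to the plugin library produced by the build above;
# substitute the file name your build actually generates:
# ctypes.CDLL("./build/libclipplugin.so", mode=ctypes.RTLD_GLOBAL)

# Same mechanism, shown with a library that is always present on Linux.
# RTLD_GLOBAL makes the library's symbols visible process-wide, which is
# what allows a TensorRT plugin to register itself on load.
libm = ctypes.CDLL("libm.so.6", mode=ctypes.RTLD_GLOBAL)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
root = libm.sqrt(16.0)
```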
Run the MNIST training script with either python2 lenet5.py or python3 lenet5.py. Then execute the sample with python2 mnist_uff_custom_plugin.py or python3 mnist_uff_custom_plugin.py.