For specific details on how to run the demos on Windows 10 see [here](./docs/windows.md)
## References
...
...
8. Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "YOLOv4: Optimal Speed and Accuracy of Object Detection." arXiv preprint arXiv:2004.10934 (2020).
9. Bochkovskiy, Alexey, "Yolo v4, v3 and v2 for Windows and Linux" (https://github.com/AlexeyAB/darknet)
10. Wang, Chien-Yao, Alexey Bochkovskiy, and Hong-Yuan Mark Liao. "Scaled-YOLOv4: Scaling Cross Stage Partial Network." arXiv preprint arXiv:2011.08036 (2020).
11. Zhuang, Juntang, et al. "ShelfNet for fast semantic segmentation." Proceedings of the IEEE International Conference on Computer Vision Workshops. 2019.
12. Zhou, Xingyi, Vladlen Koltun, and Philipp Krähenbühl. "Tracking objects as points." European Conference on Computer Vision. Springer, Cham, 2020.
## Object Detection and Tracking
To run the 3D object detection & tracking demo, follow these steps (example with CenterTrack based on DLA34):
```
rm dla34_ctrack_fp32.rt   # be sure to delete (or move) old TensorRT files
./test_dla34_ctrack       # run the CenterTrack test (slow; builds the .rt file)
./demoTracker dla34_ctrack_fp32.rt ../demo/yolo_test.mp4 NULL c
```
```
The demoTracker program takes the same parameters as the demo program:
* ```<calibration-file>``` is the camera calibration file (OpenCV format). The file must contain the entry "camera_matrix" with the sub-entries "rows", "cols" and "data". If you do not want to pass a calibration file, pass "NULL" instead.
* ```<2D/3D-flag>``` if set to 0 the demo runs in 2D mode; if set to 1 it runs in 3D mode (default: 1, 3D mode).
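For reference, an OpenCV-format calibration file with the required entries looks like the sketch below (the numeric values are placeholders, not a real calibration):

```yaml
%YAML:1.0
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 700., 0., 640.,
           0., 700., 360.,
           0., 0., 1. ]
```

Files in this layout are what OpenCV's `cv::FileStorage` produces when writing a calibration matrix.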
Currently tkDNN supports only ShelfNet as a semantic segmentation network.
## Export weights from Shelfnet
To get the weights needed to run the ShelfNet tests, use [this](https://git.hipert.unimore.it/mverucchi/shelfnet) fork of a PyTorch implementation of ShelfNet.
N.B. The gif and the videos were obtained with Mapillary Vistas weights, which we cannot publicly share due to license restrictions. However, you can train ShelfNet on Mapillary using [this](https://git.hipert.unimore.it/mverucchi/shelfnet) fork of the original repo.
### FP16 inference
...
...
```
rm yolo3_fp16.rt   # be sure to delete (or move) old TensorRT files
./test_yolo3       # run the yolo test (slow)
./demo yolo3_fp16.rt ../demo/yolo_test.mp4 y
```
N.B. Using FP16 inference will introduce small errors in the results (in the first or second decimal place).
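To get a feel for the magnitude of such errors, compare a value stored at FP32 and FP16 precision (a generic NumPy sketch, not tkDNN code):

```python
import numpy as np

x32 = np.float32(3.14159265)  # FP32 keeps ~7 significant decimal digits
x16 = np.float16(x32)         # FP16 keeps only ~3 significant decimal digits

print(float(x32))  # 3.1415927410125732
print(float(x16))  # 3.140625 -- already off in the third decimal place
```

Accumulated over the many multiply-accumulate operations of a deep network, this rounding can shift the final outputs in the first or second decimal.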
### INT8 inference
...
...
```
rm yolo3_fp32.rt   # be sure to delete (or move) old TensorRT files
./test_yolo3 # build RT file
./test_rtinference yolo3_fp32.rt 4 # test with a batch size of 4
```
### Run the demo on Windows
This example uses yolo4_tiny.\
To run the object detection demo, first create the .rt file by running:
```
.\test_yolo4tiny.exe
```
Once the .rt file has been successfully created, run the demo using the following command:
```
.\demo.exe yolo4tiny_fp32.rt ..\demo\yolo_test.mp4 y
```
For general info on the other demo parameters, check the Run the demo section above.
To run test_all_tests.sh on Windows, use Git Bash or MSYS2.
### FP16 inference on Windows
This feature is untested on Windows. To run the object detection demo with FP16 inference, follow these steps (example with yolo4_tiny):
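A sketch of those steps, by analogy with the INT8 commands below and the Linux FP16 workflow (the `TKDNN_MODE=FP16` setting mirrors the Linux instructions and has not been verified on Windows):

```
set TKDNN_MODE=FP16
del /f yolo4tiny_fp16.rt   # be sure to delete (or move) old TensorRT files
.\test_yolo4tiny.exe       # run the yolo4_tiny test (slow)
.\demo.exe yolo4tiny_fp16.rt ..\demo\yolo_test.mp4 y
```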
### INT8 inference on Windows
To run the object detection demo with INT8 (example with yolo4_tiny):
```
set TKDNN_MODE=INT8
set TKDNN_CALIB_LABEL_PATH=..\demo\COCO_val2017\all_labels.txt
set TKDNN_CALIB_IMG_PATH=..\demo\COCO_val2017\all_images.txt
del /f yolo4tiny_int8.rt   # be sure to delete (or move) old TensorRT files
.\test_yolo4tiny.exe       # run the yolo4_tiny test (slow)
.\demo.exe yolo4tiny_int8.rt ..\demo\yolo_test.mp4 y
```
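The two calibration files referenced above are plain-text lists with one path per line; a hypothetical example of the image list (the paths below are placeholders, not files shipped with the repo):

```
..\demo\COCO_val2017\images\000000000139.jpg
..\demo\COCO_val2017\images\000000000285.jpg
...
```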
### Known issues with tkDNN on Windows
* Mobilenet and Centernet demos work properly only when built with MSVC 16.7 in Release mode; when built in Debug mode, one might encounter OpenCV assert errors for these networks.
* All Darknet models work properly with the demo when using MSVC versions 16.7-16.9.
* It is recommended to use an NVIDIA driver of version 465 or newer; CUDA unknown errors have been observed when using older drivers on Pascal (SM 61) devices.