# tkDNN
tkDNN is a Deep Neural Network library built with cuDNN and TensorRT primitives, specifically designed to work on NVIDIA Jetson boards. It has been tested on the TK1 (branch cudnn2), TX1, TX2, AGX Xavier and several discrete GPUs.
The main goal of this project is to exploit NVIDIA boards as much as possible to obtain the best inference performance. It does not support training.

Paper accepted at IRC 2020, soon to be published:
M. Verucchi, L. Bartoli, F. Bagni, F. Gatti, P. Burgio and M. Bertogna, "Real-Time clustering and LiDAR-camera fusion on embedded platforms for self-driving cars", in proceedings of IEEE Robotic Computing (2020)

## Index
- [tkDNN](#tkdnn)
  - [Index](#index)
  - [Dependencies](#dependencies)
  - [About OpenCV](#about-opencv)
  - [How to compile this repo](#how-to-compile-this-repo)
  - [Workflow](#workflow)
  - [How to export weights](#how-to-export-weights)
    - [1) Export weights from darknet](#1-export-weights-from-darknet)
    - [2) Export weights for DLA34 and ResNet101](#2-export-weights-for-dla34-and-resnet101)
    - [3) Export weights for CenterNet](#3-export-weights-for-centernet)
    - [4) Export weights for MobileNetSSD](#4-export-weights-for-mobilenetssd)
  - [Darknet Parser](#darknet-parser)
  - [Run the demo](#run-the-demo)
    - [FP16 inference](#fp16-inference)
    - [INT8 inference](#int8-inference)
    - [BatchSize bigger than 1](#batchsize-bigger-than-1)
    - [Test batch Inference](#test-batch-inference)
  - [mAP demo](#map-demo)
  - [Existing tests and supported networks](#existing-tests-and-supported-networks)
  - [References](#references)

## Dependencies
This branch works on every NVIDIA GPU that supports the following dependencies:

* CUDA 10.0
* cuDNN 7.603
* TensorRT 6.0.1
* OpenCV 3.4
* yaml-cpp 0.5.2 (sudo apt install libyaml-cpp-dev)
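
To quickly check what is installed, a sketch (assuming a Debian/Ubuntu system with the NVIDIA packages installed through apt; package and module names may differ on your setup):

```
nvcc --version                     # CUDA toolkit version
dpkg -l | grep -E 'cudnn|nvinfer'  # cuDNN and TensorRT packages
pkg-config --modversion opencv     # OpenCV version (the module may be named opencv4)
```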

## About OpenCV
To compile and install OpenCV4 with contrib, use the script ```install_OpenCV4.sh```. It will download and compile OpenCV in the Downloads folder.
```
bash scripts/install_OpenCV4.sh
```
When using OpenCV not compiled with contrib, comment out the definition of OPENCV_CUDACONTRIB in include/tkDNN/DetectionNN.h. When it is commented out, the network preprocessing is computed on the CPU, otherwise on the GPU; in the latter case some milliseconds are saved in the end-to-end latency.
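
One way to toggle it from the shell, as a sketch (assuming the macro is defined on a single line of that header):

```
# comment out the GPU-preprocessing macro (run from the repo root)
sed -i 's|^#define OPENCV_CUDACONTRIB|//#define OPENCV_CUDACONTRIB|' include/tkDNN/DetectionNN.h
```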

## How to compile this repo
Build with CMake. On Ubuntu 18.04 a newer version of CMake is needed (3.15 or above).
```
git clone https://github.com/ceccocats/tkDNN
cd tkDNN
mkdir build
cd build
cmake .. 
make
```

## Workflow
Steps needed to run inference with tkDNN on a custom neural network:
* Build and train a NN model with your favorite framework.
* Export the weights and biases of each layer and save them in binary files (one per layer).
* Export the output of each layer and save it in a binary file (one per layer).
* Create a new test that defines the network layer by layer, using the extracted weights, and use the saved outputs to check the results.
* Run inference.

## How to export weights

Weights are essential to run inference with any network. For each test, a folder organized as follows is needed (inside the build folder):
```
    test_nn
        |---- layers/ (folder containing a binary file for each layer with the corresponding weights and biases)
        |---- debug/  (folder containing a binary file for each layer with the corresponding outputs)
```
Once the weights have been exported, place the layers and debug folders inside the corresponding test folder.
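
For example, a sketch of placing the exported folders for a hypothetical test called test_nn (paths are placeholders):

```
mkdir -p build/test_nn
cp -r /path/to/exported/layers /path/to/exported/debug build/test_nn/
```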

### 1) Export weights from darknet
To export weights for NNs defined in the darknet framework, use [this](https://git.hipert.unimore.it/fgatti/darknet.git) fork of darknet and follow these steps to obtain correct layers and debug folders, ready for tkDNN.

```
git clone https://git.hipert.unimore.it/fgatti/darknet.git
cd darknet
make
mkdir layers debug
./darknet export <path-to-cfg-file> <path-to-weights> layers
```
N.B. Compile for CPU (leave GPU=0 in the Makefile) if you also want the debug outputs.
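
If the Makefile was previously switched to a GPU build, a quick way to revert it, as a sketch (assuming the stock darknet Makefile variables):

```
# force a CPU build so the debug outputs are exported too
sed -i 's/^GPU=1/GPU=0/' Makefile
make clean && make
```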

### 2) Export weights for DLA34 and ResNet101
To get the weights and outputs needed to run the dla34 and resnet101 tests, use the Python script and the Anaconda environment included in the repository.

Create the Anaconda environment and activate it:
```
conda env create -f file_name.yml
source activate env_name
python <script name>
```
### 3) Export weights for CenterNet
To get the weights needed to run the CenterNet tests, use [this](https://github.com/sapienzadavide/CenterNet.git) fork of the original CenterNet.
```
git clone https://github.com/sapienzadavide/CenterNet.git
```
* Follow the instructions in the README.md and INSTALL.md of that repository, then export the weights with:

```
python demo.py --input_res 512 --arch resdcn_101 ctdet --demo /path/to/image/or/folder/or/video/or/webcam --load_model ../models/ctdet_coco_resdcn101.pth --exp_wo --exp_wo_dim 512
python demo.py --input_res 512 --arch dla_34 ctdet --demo /path/to/image/or/folder/or/video/or/webcam --load_model ../models/ctdet_coco_dla_2x.pth --exp_wo --exp_wo_dim 512
```
### 4) Export weights for MobileNetSSD
To get the weights needed to run the MobileNet tests, use [this](https://github.com/mive93/pytorch-ssd) fork of a PyTorch implementation of the SSD network.

```
git clone https://github.com/mive93/pytorch-ssd
cd pytorch-ssd
conda env create -f env_mobv2ssd.yml
python run_ssd_live_demo.py mb2-ssd-lite <pth-model-file> <labels-file>
```

## Darknet Parser
tkDNN implements an easy parser for darknet cfg files; a network can be converted with *tk::dnn::darknetParser*:
```
// example of parsing yolo4
tk::dnn::Network *net = tk::dnn::darknetParser("yolov4.cfg", "yolov4/layers", "coco.names");
net->print();
```
All darknet models are now parsed directly from the cfg file; you still need to export the weights with the tools described in the previous section.
<details>
  <summary>Supported layers</summary>

  convolutional, maxpool, avgpool, shortcut, upsample, route, reorg, region, yolo
</details>
<details>
  <summary>Supported activations</summary>

  relu, leaky, mish
</details>

## Run the demo

To run an object detection demo follow these steps (example with YOLOv3):
```
rm yolo3_fp32.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_fp32.rt ../demo/yolo_test.mp4 y
```
In general the demo program takes six parameters:
```
./demo <network-rt-file> <path-to-video> <kind-of-network> <number-of-classes> <n-batches> <show-flag>
```
where
* ```<network-rt-file>``` is the rt file generated by a test
* ```<path-to-video>``` is the path to a video file or a camera input
* ```<kind-of-network>``` is the type of network. Three types are currently supported: ```y``` (YOLO family), ```c``` (CenterNet family) and ```m``` (MobileNet-SSD family)
* ```<number-of-classes>``` is the number of classes the network is trained on
* ```<n-batches>``` is the number of batches to use in inference (N.B. you should first export TKDNN_BATCHSIZE to the required number of batches and recreate the rt file for the network)
* ```<show-flag>``` if set to 0 the demo does not show the visualization but saves the video to result.mp4 (only when n-batches == 1)

N.B. By default, FP32 inference is used.
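
For example, a full invocation might look like this (a sketch, assuming the yolo3 engine built above, 80 COCO classes, batch size 1 and no on-screen visualization):

```
./demo yolo3_fp32.rt ../demo/yolo_test.mp4 y 80 1 0
```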

![demo](https://user-images.githubusercontent.com/11562617/72547657-540e7800-388d-11ea-83c6-49dfea2a0607.gif)

### FP16 inference

To run an object detection demo with FP16 inference follow these steps (example with YOLOv3):
```
export TKDNN_MODE=FP16  # set the half floating point optimization
rm yolo3_fp16.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_fp16.rt ../demo/yolo_test.mp4 y
```
N.B. FP16 inference introduces small errors in the results (in the first or second decimal place).

### INT8 inference

To run an object detection demo with INT8 inference, three environment variables need to be set:
  * ```export TKDNN_MODE=INT8```: set the 8-bit integer optimization
  * ```export TKDNN_CALIB_IMG_PATH=/path/to/calibration/image_list.txt```: image_list.txt contains the absolute path to a calibration image on each line
  * ```export TKDNN_CALIB_LABEL_PATH=/path/to/calibration/label_list.txt```: label_list.txt contains the absolute path to a calibration label on each line
  
You should provide image_list.txt and label_list.txt yourself, using training images. However, if you want to quickly test INT8 inference you can run (from the repo root folder)
```
bash scripts/download_validation.sh COCO
```
to automatically download the COCO2017 validation set (inside the demo folder) and create the needed files. Use BDD instead of COCO to download the BDD validation set.
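
If you prefer to build the lists from your own calibration set, a minimal sketch (assuming hypothetical calib/images and calib/labels folders):

```
# one absolute path per line, as the calibrator expects
find "$(pwd)/calib/images" -name '*.jpg' | sort > image_list.txt
find "$(pwd)/calib/labels" -name '*.txt' | sort > label_list.txt
```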

Then a complete example using yolo3 and the COCO dataset would be:
```
export TKDNN_MODE=INT8
export TKDNN_CALIB_LABEL_PATH=../demo/COCO_val2017/all_labels.txt
export TKDNN_CALIB_IMG_PATH=../demo/COCO_val2017/all_images.txt
rm yolo3_int8.rt        # be sure to delete (or move) old TensorRT files
./test_yolo3            # run the yolo test (it is slow)
./demo yolo3_int8.rt ../demo/yolo_test.mp4 y
```
N.B. 
 * Using INT8 inference will lead to some errors in the results. 
 * The test will be slower: this is due to the INT8 calibration, which may take some time to complete. 
 * INT8 calibration requires TensorRT version greater than or equal to 6.0
 * Only 100 images are used to create the calibration table by default (set in the code).

### BatchSize bigger than 1
```
export TKDNN_BATCHSIZE=2
# build tensorRT files
```
This will create a TensorRT file with the desired **max** batch size.
The test will still run with a batch size of 1, but the created TensorRT engine can manage the desired batch size.

### Test batch Inference
This tests the network with random inputs and checks that the output of each batch is the same.
```
./test_rtinference <network-rt-file> <number-of-batches>
# <number-of-batches> should be less than or equal to the max batch size of the <network-rt-file>

# example
export TKDNN_BATCHSIZE=4           # set max batch size
rm yolo3_fp32.rt                   # be sure to delete(or move) old tensorRT files
./test_yolo3                       # build RT file
./test_rtinference yolo3_fp32.rt 4 # test with a batch size of 4
```

## mAP demo

To compute mAP, precision, recall and F1 score, run the map_demo.

A validation set is needed. 
To download COCO_val2017 (80 classes) run (from the root folder):
```
bash scripts/download_validation.sh COCO
```
To download Berkeley_val (10 classes) run (from the root folder):
```
bash scripts/download_validation.sh BDD
```

To compute the mAP, the following parameters are needed:
```
./map_demo <network rt> <network type [y|c|m]> <labels file path> <config file path>
```
where 
* ```<network rt>```: rt file of the chosen network on which to compute the mAP.
* ```<network type [y|c|m]>```: type of network. Right now only y (YOLO), c (CenterNet) and m (MobileNet) are allowed.
* ```<labels file path>```: path to a text file containing the paths of all the ground-truth labels. All the ground-truth labels must reside in a folder called 'labels'. The folder containing 'labels' must also contain a folder 'images' with all the ground-truth images, having the same names as the labels. For example, for a label path/to/labels/000001.txt there should be a corresponding image path/to/images/000001.jpg (see the sketch after this list).
* ```<config file path>```: path to a yaml file with the parameters needed for the mAP computation, similar to demo/config.yaml
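
Given that layout, the labels file can be generated with a one-liner, as a sketch (assuming a hypothetical validation set under val/):

```
# collect the absolute paths of all ground-truth label files
find "$(pwd)/val/labels" -name '*.txt' | sort > all_labels.txt
```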

Example:

```
cd build
./map_demo dla34_cnet_FP32.rt c ../demo/COCO_val2017/all_labels.txt ../demo/config.yaml
```

This demo also creates a json file named ```net_name_COCO_res.json``` containing all the computed detections. The detections are in COCO format, the correct format to submit the results to the [CodaLab COCO detection challenge](https://competitions.codalab.org/competitions/20794#participate).

## Existing tests and supported networks

| Test Name         | Network                                       | Dataset                                                       | N Classes | Input size    | Weights                                                                   |
| :---------------- | :-------------------------------------------- | :-----------------------------------------------------------: | :-------: | :-----------: | :------------------------------------------------------------------------ |
| yolo              | YOLO v2<sup>1</sup>                           | [COCO 2014](http://cocodataset.org/)                          | 80        | 608x608       | [weights](https://cloud.hipert.unimore.it/s/nf4PJ3k8bxBETwL/download)     |
| yolo_224          | YOLO v2<sup>1</sup>                           | [COCO 2014](http://cocodataset.org/)                          | 80        | 224x224       | weights                                                                   |
| yolo_berkeley     | YOLO v2<sup>1</sup>                           | [BDD100K](https://bair.berkeley.edu/blog/2018/05/30/bdd/)     | 10        | 416x736       | weights                                                                   |
| yolo_relu         | YOLO v2 (with ReLU, not Leaky)<sup>1</sup>    | [COCO 2014](http://cocodataset.org/)                          | 80        | 416x416       | weights                                                                   |
| yolo_tiny         | YOLO v2 tiny<sup>1</sup>                      | [COCO 2014](http://cocodataset.org/)                          | 80        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/m3orfJr8pGrN5mQ/download)     |
| yolo_voc          | YOLO v2<sup>1</sup>                           | [VOC](http://host.robots.ox.ac.uk/pascal/VOC/)                | 21        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/DJC5Fi2pEjfNDP9/download)     |
| yolo3             | YOLO v3<sup>2</sup>                           | [COCO 2014](http://cocodataset.org/)                          | 80        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/jPXmHyptpLoNdNR/download)     |
| yolo3_512         | YOLO v3<sup>2</sup>                           | [COCO 2017](http://cocodataset.org/)                          | 80        | 512x512       | [weights](https://cloud.hipert.unimore.it/s/RGecMeGLD4cXEWL/download)     |
| yolo3_berkeley    | YOLO v3<sup>2</sup>                           | [BDD100K](https://bair.berkeley.edu/blog/2018/05/30/bdd/)     | 10        | 320x544       | [weights](https://cloud.hipert.unimore.it/s/o5cHa4AjTKS64oD/download)     |
| yolo3_coco4       | YOLO v3<sup>2</sup>                           | [COCO 2014](http://cocodataset.org/)                          | 4         | 416x416       | [weights](https://cloud.hipert.unimore.it/s/o27NDzSAartbyc4/download)     |
| yolo3_flir        | YOLO v3<sup>2</sup>                           | [FREE FLIR](https://www.flir.com/oem/adas/adas-dataset-form/) | 3         | 320x544       | [weights](https://cloud.hipert.unimore.it/s/62DECncmF6bMMiH/download)     |
| yolo3_tiny        | YOLO v3 tiny<sup>2</sup>                      | [COCO 2014](http://cocodataset.org/)                          | 80        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/LMcSHtWaLeps8yN/download)     |
| yolo3_tiny512     | YOLO v3 tiny<sup>2</sup>                      | [COCO 2017](http://cocodataset.org/)                          | 80        | 512x512       | [weights](https://cloud.hipert.unimore.it/s/8Zt6bHwHADqP4JC/download)     |
| dla34             | Deep Layer Aggregation (DLA) 34<sup>3</sup>   | [COCO 2014](http://cocodataset.org/)                          | 80        | 224x224       | weights                                                                   |
| dla34_cnet        | CenterNet (DLA34 backend)<sup>4</sup>         | [COCO 2017](http://cocodataset.org/)                          | 80        | 512x512       | [weights](https://cloud.hipert.unimore.it/s/KRZBbCQsKAtQwpZ/download)     |
| mobilenetv2ssd    | MobileNet v2 SSD Lite<sup>5</sup>             | [VOC](http://host.robots.ox.ac.uk/pascal/VOC/)                | 21        | 300x300       | [weights](https://cloud.hipert.unimore.it/s/x4ZfxBKN23zAJQp/download)     |
| mobilenetv2ssd512 | MobileNet v2 SSD Lite<sup>5</sup>             | [COCO 2017](http://cocodataset.org/)                          | 81        | 512x512       | [weights](https://cloud.hipert.unimore.it/s/pdCw2dYyHMJrcEM/download)     |
| resnet101         | ResNet 101<sup>6</sup>                        | [COCO 2014](http://cocodataset.org/)                          | 80        | 224x224       | weights                                                                   |
| resnet101_cnet    | CenterNet (ResNet101 backend)<sup>4</sup>     | [COCO 2017](http://cocodataset.org/)                          | 80        | 512x512       | [weights](https://cloud.hipert.unimore.it/s/5BTjHMWBcJk8g3i/download)     |
| csresnext50-panet-spp | Cross Stage Partial Network<sup>7</sup>   | [COCO 2014](http://cocodataset.org/)                          | 80        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/Kcs4xBozwY4wFx8/download)     |
| yolo4             | YOLOv4<sup>8</sup>                            | [COCO 2017](http://cocodataset.org/)                          | 80        | 416x416       | [weights](https://cloud.hipert.unimore.it/s/d97CFzYqCPCp5Hg/download)     |


## References

1. Redmon, Joseph, and Ali Farhadi. "YOLO9000: better, faster, stronger." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
2. Redmon, Joseph, and Ali Farhadi. "Yolov3: An incremental improvement." arXiv preprint arXiv:1804.02767 (2018).
3. Yu, Fisher, et al. "Deep layer aggregation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
4. Zhou, Xingyi, Dequan Wang, and Philipp Krähenbühl. "Objects as points." arXiv preprint arXiv:1904.07850 (2019).
5. Sandler, Mark, et al. "Mobilenetv2: Inverted residuals and linear bottlenecks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
6. He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
7. Wang, Chien-Yao, et al. "CSPNet: A New Backbone that can Enhance Learning Capability of CNN." arXiv preprint arXiv:1911.11929 (2019).
8. Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "YOLOv4: Optimal Speed and Accuracy of Object Detection." arXiv preprint arXiv:2004.10934 (2020).