
YOLOv4: A step-by-step guide for Custom Data Preparation with Code
You Only Look Once, or YOLO, is a state-of-the-art object detection algorithm. It is used today in real-time detection and monitoring systems such as CCTV cameras and autonomous vehicles, and it is known for being both accurate and fast. YOLOv4 is one of the latest versions in the YOLO family. This tutorial will go over how to prepare data in YOLOv4 format from scratch and how to train the model on it.
The first version of YOLO was released in 2015 by Joseph Redmon et al. You can find the original paper at You Only Look Once: Unified, Real-Time Object Detection.
You can use my Colab notebook and my preprocessed train and test data, or follow along with this tutorial.
YOLOv4: Data Preparation
The most crucial step in any deep learning task is data preparation. There is a general rule: garbage in == garbage out. It is also the most time-consuming step, since we want to ensure good images and correct annotations.
Data Collection
You can find many free, open datasets in YOLO format online, but for this tutorial we will create one from scratch. We will build a custom traffic dataset with 5 classes (Car, Person, Number Plate, Bicycle, and Motorcycle). Open Images Dataset V6 is a free resource for gathering images, and OIDv4_ToolKit is the toolkit we use to download them. Click on the OIDv4 toolkit link above and download it from the GitHub repo. After downloading, open your command line and move into the directory using the cd command.
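For example, one way to get the toolkit is to clone it with git (the widely used EscVM/OIDv4_ToolKit repo is one source; use whichever repo the link above points to, and note that a ZIP download extracts to a folder named OIDv4_ToolKit-master instead):
git clone https://github.com/EscVM/OIDv4_ToolKit.git
cd OIDv4_ToolKit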
Type the following command to install the required Python packages:
pip install -r requirements.txt
Then to download the required dataset, type the following in your command line window.
Train
python main.py downloader --classes Car Bicycle Person Vehicle_registration_number Motorcycle --type_csv train --multiclasses 1 --limit 50
Validation
python main.py downloader --classes Car Bicycle Person Vehicle_registration_number Motorcycle --type_csv validation --multiclasses 1 --limit 5
If a class name contains spaces, like “Vehicle registration number”, you have to write it with underscores: Vehicle_registration_number. When you run these commands for the first time, the toolkit will ask to download some .csv files; enter Y to download them. After the .csv files, the toolkit will download the images and their bounding boxes.
Once all of this is done, you can find the downloaded dataset at
OIDv4_ToolKit-master/OID/Dataset/train
OIDv4_ToolKit-master/OID/Dataset/validation
If you open the train and validation folders, you will see a “Labels” folder inside each of them. It contains bounding boxes for the objects in the images we downloaded. But we cannot use these annotations as they are, because each file lists only one object type per image: if an image contains both “number plates” and “cars”, the annotations will have bounding boxes for only the cars or only the number plates. We need annotations for every object in the image. This can be better understood with the following diagram.

What we have vs How we need it
YOLO trains better when it sees every object in an image annotated, so we need to re-annotate the data in the new format. To start, remove the Labels folder from the “train” and “validation” folders.
Data Annotation
To annotate the dataset, we will use LabelImg (the installation procedure is explained in its GitHub repo). It is a free, open-source image annotator that can save annotations in YOLOv4 format.
Open LabelImg and open the folder containing the images. Press “w” to draw bounding boxes around objects and label them. After that, save the file, making sure it is in .txt format and is saved in the same folder as the images.
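For reference, the YOLO-format .txt files that LabelImg writes contain one line per object: the class index, then the box centre x and y, then the box width and height, all normalised to values between 0 and 1. A label file for an image with a single car (class index 0) might look like this (the numbers here are made up for illustration):
0 0.512 0.463 0.280 0.210
LabelImg generates these lines for you, so you should not need to edit them by hand.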
Source: Open Images Dataset V6
Once you are done with the annotations, cut the file called “classes.txt” from the folder and save it somewhere safe, because we will need it afterward. Rename the folder containing the training images to “obj” and the one containing the validation images to “test”. Zip them separately and upload them to your Google Drive.
Training
We will use Google Colab to train our model (if you have not used Google Colab before, check our blog post on it). Click on this link to open the notebook.
Click on Copy to Drive to save a copy to your own Drive.

yolov4: Copy to Drive
To connect to a GPU, go to Edit > Notebook settings > Hardware accelerator > GPU.

yolov4: Notebook Settings in Google Colab
Clone the Darknet GitHub repo:
# clone darknet repo
!git clone https://github.com/AlexeyAB/darknet
Enable GPU, cuDNN, and OpenCV support in the Makefile:
# change makefile to have GPU and OPENCV enabled
%cd darknet
!sed -i 's/OPENCV=0/OPENCV=1/' Makefile
!sed -i 's/GPU=0/GPU=1/' Makefile
!sed -i 's/CUDNN=0/CUDNN=1/' Makefile
!sed -i 's/CUDNN_HALF=0/CUDNN_HALF=1/' Makefile
Verify that CUDA is available:
# verify CUDA
!/usr/local/cuda/bin/nvcc --version
You should get output similar to the following (the exact CUDA version depends on the Colab runtime):
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
Run make to build Darknet:
!make
Download the pre-trained YOLOv4 weights file:
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.weights
Run this helper function, which we will use to display images in the notebook:
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

# read an image, scale it up 3x, and display it inline
def imShow(path):
  image = cv2.imread(path)
  height, width = image.shape[:2]
  resized_image = cv2.resize(image, (3*width, 3*height), interpolation=cv2.INTER_CUBIC)

  fig = plt.gcf()
  fig.set_size_inches(18, 10)
  plt.axis("off")
  plt.imshow(cv2.cvtColor(resized_image, cv2.COLOR_BGR2RGB))
  plt.show()
If the following test command runs and the predictions for data/person.jpg are shown, everything is good to go and we can start preparing to train our model.
!./darknet detector test cfg/coco.data cfg/yolov4.cfg yolov4.weights data/person.jpg
imShow('predictions.jpg')
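Next we pull in our data from Google Drive. If you have not mounted your Drive in this notebook yet, the standard Colab call is:
from google.colab import drive
drive.mount('/content/drive')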
At the moment, the path to our Drive is
/content/drive/My Drive/
We can shorten this to /mydrive by creating a symbolic link:
!ln -s /content/drive/My\ Drive/ /mydrive
!ls /mydrive
Now copy obj.zip and test.zip into the Colab virtual machine (this assumes you uploaded them to a folder called data in your Drive):
# %cd ..
!cp -r '/mydrive/data/obj.zip' /content
!cp -r '/mydrive/data/test.zip' /content
And unzip them into /content/darknet/data:
!unzip '/content/obj.zip' -d '/content/darknet/data'
!unzip '/content/test.zip' -d '/content/darknet/data'
Go to the OIDv4_ToolKit-master folder on your computer and upload generate_train.py and generate_test.py into /content/darknet/
Then run the following commands
!python generate_train.py
!python generate_test.py
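These scripts simply collect the image paths into the list files that Darknet reads: data/train.txt for training images and data/test.txt for validation images. If you do not have the scripts handy, a minimal sketch with the same effect looks like this (assuming it is run from /content/darknet and the images live in data/obj and data/test):
import os

# write the path of every image in the given folder to a Darknet list file
def generate_list(image_dir, out_file):
    with open(out_file, 'w') as f:
        for name in sorted(os.listdir(image_dir)):
            if name.lower().endswith(('.jpg', '.jpeg', '.png')):
                f.write(os.path.join(image_dir, name) + '\n')

generate_list('data/obj', 'data/train.txt')
generate_list('data/test', 'data/test.txt')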
Verify that you have train.txt and test.txt in your /content/darknet/data folder.
!ls data/

yolov4: verify test & train
Download the pre-trained weights for the convolutional layers; training will start from these instead of from scratch.
!wget https://github.com/AlexeyAB/darknet/releases/download/darknet_yolo_v3_optimal/yolov4.conv.137
We need to make two files in /content/darknet/data, namely obj.data and obj.names.
obj.data should have the following contents

yolov4: Obj.data
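In text form, with our five classes it should look something like this (the backup line points to the Drive folder we create in the next step; adjust it if your path differs):
classes = 5
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = /mydrive/backup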
After that, go to your Drive and create a folder called backup. Darknet saves the model weights every 100 iterations for the first 1000 iterations, and every 1000 iterations after that. If training stops for some reason, you can restart it from the last saved weights file.
obj.names should have the same contents as the classes.txt file that you saved during the data preparation phase, in the same order. This is how mine looks:

yolov4: obj.names
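In text form, obj.names is simply one class name per line. Assuming your classes.txt kept the order of the download command, it would be something like:
Car
Bicycle
Person
Vehicle registration number
Motorcycle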
Finally, you need to change the contents of the config file.
Go to /content/darknet/cfg/ and open yolov4-custom.cfg
Make the following changes:
batch=64
subdivisions=16
max_batches = 10000 (num_classes * 2000; if you have 3 or fewer classes, use max_batches = 6000)
width = 416 (has to be a multiple of 32; increasing the width and height improves accuracy but slows down training)
height = 416 (has to be a multiple of 32)
steps = 8000,9000 (80% and 90% of max_batches)
Then scroll down the file and find classes and filters. They appear in three different places (once for each [yolo] layer and the [convolutional] layer just before it), so change all of them:
classes = 5
filters = 30 ((num_classes + 5) * 3)
Save the file after making all these changes.
And that’s it! Your model is ready to train. Run this to train:
# %%capture
!./darknet detector train data/obj.data cfg/yolov4-custom.cfg yolov4.conv.137 -dont_show -map
If your notebook starts to crash, just uncomment the %%capture line to suppress the training output.
To resume training from where you last saved your weights, use this:
!./darknet detector train data/obj.data cfg/yolov4-custom.cfg /mydrive/backup/yolov4-custom_last.weights -dont_show
Loss
As soon as you start the training, text like this will start to appear. It shows the current iteration and the current loss of the model. The loss drops quickly at the beginning, but it slows down as the iterations increase.

yolov4: loss
You can also observe the loss using the graph below (chart.png), which Darknet saves in the darknet directory by default.

yolov4: loss graph
Let the model run for a couple of hours until the graph curve starts to flatten out.
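Since we trained with the -map flag, the chart also plots mean average precision. You can also evaluate mAP on the validation set manually at any point using Darknet's map command (adjust the weights filename to whichever checkpoint you want to test):
!./darknet detector map data/obj.data cfg/yolov4-custom.cfg /mydrive/backup/yolov4-custom_last.weights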
YOLOv4: Make Predictions
To make predictions, first run this to switch yolov4-custom.cfg to test settings (batch and subdivisions set to 1):
%cd cfg
!sed -i 's/batch=64/batch=1/' yolov4-custom.cfg
!sed -i 's/subdivisions=16/subdivisions=1/' yolov4-custom.cfg
%cd ..
Finally, run this command to run predictions:
!./darknet detector test data/obj.data cfg/yolov4-custom.cfg /mydrive/backup/yolov4-custom_best.weights '/content/test_image.jpg' -thresh 0.6
imShow('predictions.jpg')
Seems pretty accurate!!!

yolov4: heavy bike

yolov4: person & cycle

yolov4: car