AI Wind Turbine Damage Predictor (WTDP): Wind Turbine Visual Inspection on Drone Imagery using Deep Learning.

Mozahid Sufiyan
12 min read · May 9, 2020


Real-Time Damage and Defect Detection with Drone Streaming Technology

1. Introduction

The production of electricity from energy captured by wind turbines stems from the aerodynamic force created by the rotor blades. As wind flows across the blades, air pressure on one side of each blade decreases. The difference in air pressure across the two sides of the blade creates both lift and drag. Because the lift force is stronger than the drag, the rotor shaft is forced to spin. The generator contained within the turbine is connected to the rotor shaft.

Fig. 1. Example of a Manual Blade Inspection

There are numerous defect types that can appear on a blade, from cracks and pitting to full delamination of sections. Most of these defects are caused either by an acute event, such as a lightning strike, or by gradual wear; blade erosion, for example, is caused by small physical impacts accumulating over time. Leading-edge blade erosion caused by water droplet impingement degrades blade aerodynamics and limits power output. If left uncorrected, the structural integrity of the blade can be compromised.

2. Background

Blade inspections have seen rapid advancement due to autonomous drones and burst photography. By some estimates, 85 percent of blades currently installed have defects, so wind farm operators need a way to prioritize repairs and estimate their cost. As can be seen in Figure 2 below, automated blade inspection based on drone photography can reduce human error and lead to more consistent results.

Fig. 2. Drone Images Classified into Blade Damage Detection

3. Literature Review

Wind turbine inspection presents many challenges, especially offshore: a very harsh environment, high costs and losses due to the long duration of inspections and turbine downtime, and the involvement of highly specialized technicians who take on risk while climbing up and down each blade. Visual Working developed a remote inspection technique that revolutionizes this highly specialized task: using custom-made UAVs (drones), Visual Working can inspect a wind turbine (onshore or offshore) in a very short time, acquiring super-high-definition photographs of the entire blade surface. The collected images and videos are stored in a database and processed through imaging software that allows quick detection and reporting of early-stage damage. Trained and specialized teams of Visual Working technicians can inspect entire wind farms in a fraction of the time required by standard inspection methods, in total safety, dramatically cutting costs and turbine downtime.

Fig. 3. Drone Images Classified into Blade Lightning Strike Damage Detection

4. Solution Approach (Deep Learning)

We take a new approach to wind turbine inspection and damage detection: damage detection is framed as predicting bounding boxes and associated class probabilities, which are then summarized in the program's output report. A single neural network predicts bounding boxes and class probabilities directly from full images in one evaluation. Our base YOLO model processes images in real time at 45 frames per second. Compared to state-of-the-art detection systems, YOLO makes more localization errors but is less likely to predict false positives on the background. Finally, YOLO learns very general representations of objects; it outperforms other detection methods, including DPM and R-CNN, when generalizing from natural images to real-time streams.

How to use OpenCV’s ‘dnn’ module with NVIDIA GPUs, CUDA, and cuDNN

In this tutorial, you will learn how to use OpenCV's "Deep Neural Network" (dnn) module with NVIDIA GPUs, CUDA, and cuDNN. OpenCV 4.2 now supports NVIDIA GPUs for inference using the dnn module.

We will use the dnn module to (1) load a pre-trained network from disk, (2) make predictions on an input image, and then (3) display the results, allowing you to build a wind turbine computer vision/deep learning pipeline for your particular project.

In today's tutorial, I show you how to compile and install OpenCV to take advantage of your NVIDIA GPU for deep neural network inference.
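To make this concrete, here is a minimal Python sketch of that pipeline, assuming OpenCV 4.2 has been compiled with CUDA and cuDNN support (covered in the steps below). The cfg/weights/image file names are placeholders; later you would swap in your custom wind turbine model.

import cv2

# Load a pre-trained Darknet network from disk (placeholder file names).
net = cv2.dnn.readNetFromDarknet("cfg/yolov3.cfg", "yolov3.weights")

# Ask OpenCV's dnn module to run inference on the NVIDIA GPU.
# These flags require OpenCV >= 4.2 built with CUDA and cuDNN enabled.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# Make a prediction on a single input image.
image = cv2.imread("blade.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
print([o.shape for o in outputs])  # one detection tensor per YOLO output layer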

Assumptions when compiling OpenCV for NVIDIA GPU support

In order to compile and install OpenCV’s “deep neural network” module with NVIDIA GPU support, I will be making the following assumptions:

  1. You have an NVIDIA GPU. If you do not have an NVIDIA GPU, you cannot compile OpenCV’s “dnn” module with NVIDIA GPU support.
  2. You are using Ubuntu 18.04 (or another Debian-based distribution). When it comes to deep learning, I strongly recommend Unix-based machines over Windows systems.

Step #1: Install NVIDIA CUDA drivers, CUDA Toolkit, and cuDNN

This tutorial makes the assumption that you already have:

  • An NVIDIA GPU
  • The CUDA drivers for that particular GPU installed
  • CUDA Toolkit and cuDNN configured and installed

If you have an NVIDIA GPU on your system but have yet to install the CUDA drivers, CUDA Toolkit, and cuDNN, you can do so from your terminal. Start by adding the NVIDIA graphics drivers PPA:

$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update

Go ahead and install your NVIDIA graphics driver:

$ sudo apt-get install nvidia-driver-418

Then issue the reboot command, wait for your system to restart, and verify the driver installation with nvidia-smi:

$ sudo reboot now
$ nvidia-smi

Let’s go ahead and download CUDA 10.0

$ cd ~
$ mkdir installers
$ cd installers/
$ wget https://developer.nvidia.com/compute/cuda/10.0/Prod/local_installers/cuda_10.0.130_410.48_linux
$ mv cuda_10.0.130_410.48_linux cuda_10.0.130_410.48_linux.run
$ chmod +x cuda_10.0.130_410.48_linux.run
$ sudo ./cuda_10.0.130_410.48_linux.run --override

You may safely ignore the error message after the above command. Now let’s update our bash profile

$ sudo nano ~/.bashrc
# NVIDIA CUDA Toolkit(set below path on bashrc file)
export PATH=/usr/local/cuda-10.0/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64

See Figure 4 below for an illustration.

Next, download cuDNN v7.6.4 for CUDA 10.0 from the NVIDIA Developer website (a free developer account is required).

You may then need to copy the archive (for example with scp) from your home machine to the installers folder on your remote deep learning box:

$ cd ~/installers
$ tar -zxf cudnn-10.0-linux-x64-v7.6.4.38.tgz   # check against your downloaded filename
$ cd cuda
$ sudo cp -P lib64/* /usr/local/cuda/lib64/
$ sudo cp -P include/* /usr/local/cuda/include/
$ cd ~

Step #2: Install OpenCV and “dnn” GPU dependencies

First, update your system and install the build tools and image/video I/O libraries that OpenCV's dnn GPU build depends on:

$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install build-essential cmake unzip pkg-config
$ sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
$ sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
$ sudo apt-get install libv4l-dev libxvidcore-dev libx264-dev
$ sudo apt-get install libgtk-3-dev
$ sudo apt-get install libatlas-base-dev gfortran
$ sudo apt-get install python3-dev

Step #3: Download OpenCV source code

There is no “pip-installable” version of OpenCV that comes with NVIDIA GPU support — instead, we’ll need to compile OpenCV from scratch with the proper NVIDIA GPU configurations set.

The first step in doing so is to download the source code for OpenCV v4.2:

$ cd ~
$ wget -O opencv.zip https://github.com/opencv/opencv/archive/4.2.0.zip
$ wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.2.0.zip
$ unzip opencv.zip
$ unzip opencv_contrib.zip
$ mv opencv-4.2.0 opencv
$ mv opencv_contrib-4.2.0 opencv_contrib

Step #4: Configure Python virtual environment

$ wget https://bootstrap.pypa.io/get-pip.py
$ sudo python3 get-pip.py
$ sudo pip install virtualenv virtualenvwrapper
$ sudo rm -rf ~/get-pip.py ~/.cache/pip

You then need to open up your ~/.bashrc file and update it to automatically load virtualenv/virtualenvwrapper whenever you open up a terminal:

$ sudo nano ~/.bashrc
# virtualenv and virtualenvwrapper
export WORKON_HOME=$HOME/.virtualenvs
export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
source /usr/local/bin/virtualenvwrapper.sh

Figure 4: How to install TensorFlow 2.0 on Ubuntu with an NVIDIA CUDA GPU.

Reload your shell profile and then create the Python virtual environment we will use for OpenCV:

$ source ~/.bashrc
$ mkvirtualenv opencv_cuda -p python3

If you ever close your terminal or deactivate your Python virtual environment, you can access it again via the workon command:

$ workon opencv_cuda
$ pip install numpy
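After OpenCV has been compiled with the proper NVIDIA GPU configurations and installed into the opencv_cuda environment, a quick sanity check from Python confirms that the build can actually see your GPU:

import cv2

# Print the OpenCV version and the number of CUDA-capable devices visible
# to the build. A count of 0 means the build has no NVIDIA GPU support and
# dnn inference will silently fall back to the CPU.
print(cv2.__version__)
print(cv2.cuda.getCudaEnabledDeviceCount())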

Step #5: YOLO: Detection Using A Pre-Trained Model

Download and build Darknet from the repository below:

$ git clone https://github.com/pjreddie/darknet
$ cd darknet
$ make

To build with GPU support, set GPU=1 (and optionally CUDNN=1) at the top of the Makefile before running make.

You already have the config file for YOLO in the cfg/ subdirectory. You will have to download the pre-trained weights file (237 MB), or just run:

$ wget https://pjreddie.com/media/files/yolov3.weights

Step #6: Download Pre-trained Convolutional Weights

For training, we use convolutional weights that are pre-trained on ImageNet, taken from the darknet53 model. You can download the weights for the convolutional layers (76 MB) with:

$ wget https://pjreddie.com/media/files/darknet53.conv.74

Save these pre-trained weight files in Darknet's cfg/ subdirectory.

Step #7: LabelImg

LabelImg is a graphical image annotation tool. It is written in Python and uses Qt for its graphical interface. Annotations are saved as XML files in PASCAL VOC format, the format used by ImageNet; it also supports the YOLO format. Collect images from your source and start labeling them as shown below:

  1. https://tzutalin.github.io/labelImg/ #for Windows
  2. https://github.com/tzutalin/labelImg #for Linux/Ubuntu

Figure 5: How to label images in YOLO format with predefined classes.

The LabelImg software ships a data/predefined_classes.txt file; this is where you define your damage classes. Make sure you select the YOLO format and save each image's annotations in .txt format after you have marked the damage, as shown above.

#Predefine your Classes
Erosion
Lightning_Strike
VG_Panel
Debonding

Step #8: How to train (to detect wind turbine damages):

Training Yolo v3:

  1. For training with cfg/yolov3-custom.cfg, download the pre-trained weights file darknet53.conv.74 (see Step #6)
  2. Create file yolo-obj.cfg with the same content as yolov3-custom.cfg and:
  • change line batch to batch=64
  • change line subdivisions to subdivisions=32
  • change line max_batches to classes*2000 (but not less than the number of training images, and not less than 8000), e.g. max_batches=8000 if you train for 4 classes
  • change line steps to 80% and 90% of max_batches, e.g. steps=6400,7200
  • set network size width=416 height=416, or any value that is a multiple of 32
  • change line classes=80 to your number of objects in each of the 3 [yolo] layers
  • change filters=255 to filters=(classes + 5)x3 in the 3 [convolutional] layers before each [yolo] layer; keep in mind that it only has to be the last [convolutional] before each [yolo] layer.

So if classes=1, then filters should be 18; if classes=4, write filters=27.

[convolutional]
filters=27
[yolo]
classes=4
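To double-check the arithmetic above, here is a small illustrative Python helper (not part of Darknet) that computes the cfg values for a given number of classes:

# Illustrative only: compute the yolov3 cfg values described above.
def yolo_cfg_values(num_classes, min_batches=8000):
    # (also keep max_batches >= the number of training images)
    max_batches = max(num_classes * 2000, min_batches)
    steps = (int(max_batches * 0.8), int(max_batches * 0.9))  # 80% and 90% of max_batches
    filters = (num_classes + 5) * 3  # [convolutional] before each [yolo] layer
    return {"classes": num_classes, "max_batches": max_batches,
            "steps": steps, "filters": filters}

print(yolo_cfg_values(4))
# {'classes': 4, 'max_batches': 8000, 'steps': (6400, 7200), 'filters': 27}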

Create file obj.data in the directory darknet/data/, containing (where classes = number of objects):

classes= 4
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/

Create file obj.names in the directory darknet/data/, with object names, each on a new line:

Erosion
Lightning_Strike
VG_Panel
Debonding
  1. Put the image files (.jpg) of your objects in the directory darknet/data/obj/
  2. Label each object in the images of your dataset. Use the visual GUI software (see the LabelImg step above) for marking bounding boxes of objects and generating annotation files for YOLOv3.

It will create a .txt file for each .jpg image file, in the same directory and with the same name but with a .txt extension, and put into that file the object number and object coordinates for this image, one line per object:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number from 0 to (classes-1)
  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image; they can range from (0.0 to 1.0]
  • for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
  • attention: <x_center> <y_center> are the center of the rectangle (not the top-left corner)

For example, for img1.jpg, an img1.txt will be created containing:

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
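LabelImg generates these lines for you, but as an illustration of the formulas above, here is a hypothetical Python helper (not part of LabelImg or Darknet) that converts an absolute pixel box into a YOLO annotation line:

def to_yolo_line(class_id, x_min, y_min, x_max, y_max, image_width, image_height):
    # Convert an absolute pixel box to: <object-class> <x_center> <y_center> <width> <height>
    x_center = (x_min + x_max) / 2.0 / image_width
    y_center = (y_min + y_max) / 2.0 / image_height
    width = (x_max - x_min) / image_width
    height = (y_max - y_min) / image_height
    return f"{class_id} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: an Erosion box (class 0) in a 1920x1080 drone frame.
print(to_yolo_line(0, 800, 400, 1100, 650, 1920, 1080))
# -> 0 0.494792 0.486111 0.156250 0.231481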
  1. Create file train.txt in the directory darknet/data/, with the filenames of your images, one per line, with paths relative to darknet, for example:
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
  1. Download the pre-trained weights for the convolutional layers and put them in the directory darknet/cfg/
  • for yolov3-custom.cfg: darknet53.conv.74
  1. Create darknet/data/train.txt by using the command line: python generate_train.py (a minimal sketch of this script appears after this list)
  2. To start training on Linux, use the command: ./darknet detector train data/obj.data yolov3-custom.cfg darknet53.conv.74
  • (the file yolo-obj_last.weights will be saved to darknet/backup/ every 100 iterations)
  • (a file yolo-obj_xxxx.weights will be saved to darknet/backup/ every 1000 iterations)
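The generate_train.py script referenced above is not part of the Darknet repository. A minimal sketch of what it could look like, assuming your labeled .jpg images live in data/obj/ and the script is run from the darknet directory:

import os

# Write data/train.txt with one image path per line, relative to the darknet
# directory, matching the "train =" entry in obj.data.
image_dir = "data/obj"
with open("data/train.txt", "w") as train_file:
    for file_name in sorted(os.listdir(image_dir)):
        if file_name.lower().endswith(".jpg"):
            train_file.write(os.path.join(image_dir, file_name) + "\n")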

To train with mAP (mean average precision) calculated every 4 epochs, set valid=valid.txt (or train.txt) in the obj.data file and run: ./darknet detector train data/obj.data yolov3-custom.cfg darknet53.conv.74 -map

Step #9: How to test (detect wind turbine damage on a real-time drone stream):

A few practical notes before running detection:

  • After every 100 iterations you can stop and later resume training from that point. For example, after 2000 iterations you can stop training and later restart it using: ./darknet detector train data/obj.data yolo-obj.cfg backup/yolo-obj_2000.weights
  • (in the original repository https://github.com/pjreddie/darknet the weights file is saved only once every 10 000 iterations if(iterations > 1000))
  • If during training you see nan values in the avg (loss) field, training is going wrong; but if nan appears only in some other lines, training is going well.

Note: If you changed width= or height= in your cfg-file, then new width and height must be divisible by 32.

Note: After training, use this command for detection: ./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights

Note: if an Out of memory error occurs, then in the .cfg file you should increase subdivisions to 16, 32 or 64.

When should I stop training:

Usually 2000 iterations per class (object) are sufficient, but use no fewer iterations than the number of training images and no fewer than 6000 iterations in total. For a more precise indication of when you should stop training, use the following guide:

  1. During training, you will see varying indicators of error, and you should stop when 0.XXXXXXX avg no longer decreases:

Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000, count: 8
Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000, count: 8

9002: 0.211667, 0.60730 avg, 0.001000 rate, 3.868000 seconds, 576128 images
Loaded: 0.000000 seconds

  • 9002 — iteration number (number of batch)
  • 0.60730 avg — average loss (error) — the lower, the better

When you see that the average loss 0.xxxxxx avg no longer decreases over many iterations, you should stop training. The final average loss can range from 0.05 (for a small model and an easy dataset) to 3.0 (for a big model and a difficult dataset).

Or if you train with flag -map then you will see mAP indicator Last accuracy mAP@0.5 = 18.50% in the console - this indicator is better than Loss, so train while mAP increases.

  1. Once training is stopped, you should take some of the last .weights files from darknet/backup and choose the best of them:

For example, you stopped training after 9000 iterations, but the best result may come from one of the earlier weights files (7000, 8000, 9000). This can happen due to overfitting. Overfitting is the case when the model can detect objects in images from the training dataset but cannot detect objects in any other images. You should take the weights from the Early Stopping Point:

To get weights from the Early Stopping Point:

2.1. First, in your obj.data file you must specify the path to the validation dataset, valid = data/valid.txt (the format of valid.txt is the same as train.txt). If you have no validation images, just copy data/train.txt to data/valid.txt.

2.2. If training is stopped after 9000 iterations, validate some of the previous weights files and compare the last output lines for each (7000, 8000, 9000).

Choose the weights file with the highest mAP (mean average precision) or IoU (intersection over union).

For example, if yolo-obj_8000.weights gives the highest mAP, then use those weights for detection.

Or just train with -map flag:

./darknet detector train data/obj.data yolov3-custom.cfg darknet53.conv.74 -map

You will then see an mAP chart (red line) in the loss-chart window. mAP will be calculated every 4 epochs using the valid=valid.txt file specified in the obj.data file (1 epoch = images_in_train_txt / batch iterations).

(to change the max x-axis value, change the max_batches= parameter to 2000*classes, e.g. max_batches=6000 for 3 classes)

Example of custom object detection: ./darknet detector test data/obj.data yolov3-custom.cfg yolo-obj_8000.weights

  • IoU (intersection over union) - the average intersection over union of objects and detections for a certain threshold = 0.24
  • mAP (mean average precision) - the mean of the average precisions for each class, where average precision is the average value of 11 points on the PR curve for each possible detection threshold for the same class (Precision-Recall in terms of PascalVOC, where Precision=TP/(TP+FP) and Recall=TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

mAP is the default precision metric in the PascalVOC competition and is the same as the AP50 metric in the MS COCO competition. In terms of Wikipedia, the indicators Precision and Recall have a slightly different meaning than in the PascalVOC competition, but IoU always has the same meaning.
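To make the IoU definition concrete, here is a small illustrative Python function (independent of Darknet) that computes the intersection over union of two boxes given as (x_min, y_min, x_max, y_max):

def iou(box_a, box_b):
    # Intersection rectangle of the two boxes.
    inter_x_min = max(box_a[0], box_b[0])
    inter_y_min = max(box_a[1], box_b[1])
    inter_x_max = min(box_a[2], box_b[2])
    inter_y_max = min(box_a[3], box_b[3])
    inter_area = max(0.0, inter_x_max - inter_x_min) * max(0.0, inter_y_max - inter_y_min)
    # Union = sum of both areas minus the intersection.
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter_area / (area_a + area_b - inter_area)

# Example: a detection overlapping a ground-truth box.
print(iou((100, 100, 200, 200), (150, 150, 250, 250)))  # ~0.1429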

Wind Turbine Damage Detection:

Example of custom object detection: ./darknet detector test data/obj.data yolov3-custom.cfg yolo-obj_8000.weights crack.jpg
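The same trained weights can also be driven from Python for the real-time drone-stream scenario of Step #9, using the OpenCV dnn CUDA backend configured earlier. The sketch below is illustrative only: the stream URL, file names, class list, and thresholds are placeholders to replace with your own.

import cv2
import numpy as np

CLASSES = ["Erosion", "Lightning_Strike", "VG_Panel", "Debonding"]
CONF_THRESHOLD, NMS_THRESHOLD = 0.5, 0.4

# Load the trained custom detector and run it on the GPU.
net = cv2.dnn.readNetFromDarknet("yolov3-custom.cfg", "yolo-obj_8000.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

# Placeholder stream source: an RTSP/RTMP URL from the drone, or 0 for a webcam.
cap = cv2.VideoCapture("rtsp://drone-stream-url")
while True:
    grabbed, frame = cap.read()
    if not grabbed:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    boxes, confidences, class_ids = [], [], []
    for output in net.forward(net.getUnconnectedOutLayersNames()):
        for detection in output:
            scores = detection[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > CONF_THRESHOLD:
                cx, cy, bw, bh = detection[0:4] * np.array([w, h, w, h])
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
                class_ids.append(class_id)
    if boxes:
        # Non-maximum suppression keeps only the strongest overlapping boxes.
        for i in np.array(cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)).flatten():
            x, y, bw, bh = boxes[i]
            label = f"{CLASSES[class_ids[i]]}: {confidences[i]:.2f}"
            cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
            cv2.putText(frame, label, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("WTDP - damage detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()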


Written by Mozahid Sufiyan

I have been employed in the Oil and Gas Industry for the past five years and am currently a Chartered Member of the Institution of Mechanical Engineers.
