
A Smartphone Application for Real-Time Detection of Defects in Buildings

Summary

Condition assessment and health monitoring (CAHM) of built assets requires effective and continuous monitoring of any changes to the material and/or geometric properties of the assets, in order to detect early signs of defects or damage and act in time. Most traditional CAHM techniques, however, depend on manual labour, even though the inspection environment can in some cases be unsafe, and manual inspection can lead to low efficiency or misjudgement of the severity of a defect. In recent years, computer vision techniques have been proposed as an automated alternative to traditional CAHM techniques, offering methods for extracting and analysing feature-related information from asset images and videos. Such methods have proven to be robust and effective solutions, complementary to current time-consuming and unreliable manual observational practices.

This work is concerned with the development of a deep learning-based smartphone App which allows real-time detection of four types of defects in buildings, namely: cracks, mould, stain, and paint deterioration. Since smartphones are widely available and equipped with high-resolution cameras, this application can offer a practical, low-cost solution for condition assessment of built assets. The results obtained are promising and support the feasibility and effectiveness of the approach in identifying and classifying various types of building defects.
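The App's internal code is not published here; as an illustration only, the sketch below shows how a detector's raw outputs (bounding boxes, class indices, and confidence scores) might be filtered into the on-screen predictions with their corresponding accuracies. The class labels, the 0.5 confidence threshold, and the function itself are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical post-processing of raw detector output. The four labels match
# the defect types described above; the 0.5 threshold is an assumption.
LABELS = ["crack", "mould", "stain", "paint deterioration"]

def filter_detections(boxes, class_ids, scores, threshold=0.5):
    """Keep detections whose confidence meets `threshold`.

    boxes     -- list of [ymin, xmin, ymax, xmax] in normalised coordinates
    class_ids -- list of integer indices into LABELS
    scores    -- list of confidence scores in [0, 1]
    Returns a list of (label, score, box) tuples, highest score first.
    """
    kept = [
        (LABELS[c], s, b)
        for b, c, s in zip(boxes, class_ids, scores)
        if s >= threshold
    ]
    return sorted(kept, key=lambda d: d[1], reverse=True)

# Example: of three raw detections, the low-confidence one is dropped.
detections = filter_detections(
    boxes=[[0.1, 0.1, 0.4, 0.4], [0.5, 0.5, 0.9, 0.9], [0.2, 0.6, 0.3, 0.8]],
    class_ids=[0, 2, 1],
    scores=[0.92, 0.35, 0.71],
)
```

In a real-time App, a step like this would typically run on every camera frame before the surviving boxes and scores are drawn on screen.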

Training Defect Dataset

Figure 1.
Dataset used in this study. A sample of the images collected to create the dataset, showing mould (first row), cracks (second row), stains (third row), and paint deterioration (fourth row).

Defect Detection Results

result images

Figure 2.
Predicted defects obtained from running the defect detector on the desktop. (a), (b), and (c) represent the detection of mould with the corresponding accuracy; (d), (e), and (f) represent the detection of paint deterioration with the corresponding accuracy; (g), (h), and (i) represent the detection of stain with the corresponding accuracy; (j), (k), and (l) represent the detection of cracks with the corresponding accuracy; finally, (m), (n), and (o) show the detection of multiple defects with the corresponding accuracy.

Mobile result images

Figure 3.
Screenshots of predicted defects obtained from running the smartphone App, each with the corresponding accuracy and inference time: (a) detection of cracks, (b) detection of paint deterioration, (c) detection of mould, and (d) detection of stain.

Publications

Perez, H. and Tah, J.H.M. (2021). Deep learning smartphone application for real-time detection of defects in buildings. Structural Control and Health Monitoring, e2751. https://doi.org/10.1002/stc.2751