A Study on Radar Signal Processing and Object Segmentation for Drone System Applications



Doctoral Dissertation

A Study on Radar Signal Processing and Object Segmentation for Drone System Applications

Department of Electronics and Computer Engineering
Graduate School of Chonnam National University

NGUYEN Huy Toan

February 2020

TABLE OF CONTENTS

Contents
LIST OF FIGURES
LIST OF TABLES
GLOSSARY
Abstract

Chapter 1. INTRODUCTION
  Drone system overview
    1.1 Drone system hardware configuration
    1.2 Drone system architecture
  Drone applications in this study
  Objectives of the study
  Contribution of the thesis
  Outline

Chapter 2. IMPULSE RADAR SIGNAL PROCESSING
  Motivations
  The proposed radar system
    2.1 Hardware configuration
    2.2 Software algorithms
  Experimental setup
  Experimental results
    4.1 Distance estimation result
    4.2 Distance maintenance result
  Conclusion

Chapter 3. FMCW RADAR SIGNAL PROCESSING
  Motivation and Related Works
  Data Collection Method
  Methodology
    3.1 Preprocessing Data
    3.2 Background Modeling based on Robust PCA
    3.3 Moving Objects Localization
  Experimental setup
  Experimental results
    5.1 Performance across different approaches
    5.2 Performance across different updating methods
    5.3 Impact of the sliding window size
    5.4 Impact of the number of iterations
  Conclusion

Chapter 4. OBJECT SEGMENTATION BASED ON DEEP LEARNING
  Motivation and Related Works
    1.1 Motivation
    1.2 Related works
  Proposed method
    2.1 Data preprocessing
    2.2 The Proposed Network Architecture
      2.2.1 Modified U-net network
      2.2.2 High-level feature network
    2.3 Training process
    2.4 Data post-processing
  Experiment and results
    3.1 Datasets
    3.2 Experimental setup
    3.3 Experimental results on CFD dataset
    3.4 Experimental results on AigleRN dataset
    3.5 Experimental results on cross dataset
  Conclusion

Chapter 5. DRONE SYSTEM APPLICATIONS
  Wind turbine inspection using drone system
    1.1 Motivation and related works
    1.2 Experimental setup and data record method
    1.3 Experimental results
    1.4 Conclusion
  Plant growth stage recognition using drone system
    2.1 Motivation and related works
    2.2 Method
    2.3 Experiments
    2.4 Conclusion

Chapter 6. CONCLUSION AND FUTURE WORKS
  Conclusion
  Future works

References
Acknowledgments
Abstract in Korean (국문초록)

LIST OF FIGURES

Figure 1.1 The drone system applications: (a) Monitoring applications; (b) Firefighting application; (c) Rescue application; (d) Agriculture application
Figure 1.2 The prototype of the drone system: (a) Using digital camera and IR-UWB radar; (b) Using RPi camera and FMCW radar
Figure 1.3 The proposed system architecture
Figure 1.4 Drone system applications: (a) Wind turbine inspection; (b) Plant growth stage recognition
Figure 2.1 Radar module hardware configuration
Figure 2.2 Radar module prototype
Figure 2.3 Distance measurement algorithm flow chart
Figure 2.4 Radar data normalization result
Figure 2.5 Shape of the logarithm function
Figure 2.6 Smooth calibration function using polynomial regression
Figure 2.7 Testing of the IR-UWB radar sensor
Figure 2.8 Reference distance and computed output
Figure 2.9 Distance maintenance results
Figure 3.1 120 GHz radar front-end block diagram [19]
Figure 3.2 FMCW radar sensor connection: (a) Real connection; (b) Specific connection diagram
Figure 3.3 Raw data signal: (a) Raw data frame; (b) Raw data matrix in the distance scale
Figure 3.4 Calibration experimental setup
Figure 3.5 Time-based sliding window
Figure 3.6 Block diagram for detecting moving objects
Figure 3.7 AMPD algorithm [26]
Figure 3.8 Experimental scenarios: (a) Indoor environment; (b) Outdoor environment
Figure 3.9 Original data with one moving object
Figure 3.10 Detection performance across different methods
Figure 3.11 Noise-removed signals and target position for one moving object in Figure 3.9: (a) RPCA via IALM [15]; (b) RPCA via GD [17]; (c) Online RPCA [16]; (d) Proposed method
Figure 3.12 Target detection results for multiple moving objects: (a) Two moving objects; (b) Three moving objects; (c) Four moving objects; (d) Five moving objects (from top to bottom: original data, RPCA via IALM [15], RPCA via GD [17])
Figure 3.13 Target detection results for multiple moving objects: (a) Two moving objects; (b) Three moving objects; (c) Four moving objects; (d) Five moving objects (from top to bottom: original data, Online RPCA [16], and proposed method results)
Figure 3.14 Detection performance across different update methods
Figure 3.15 Impact of the sliding window size
Figure 3.16 Impact of the number of iterations
Figure 4.1 Overview of crack identification
Figure 4.2 Illustration of data pre-processing steps: (a) Original image; (b) ground truth; (c) grey-scale image; (d) normalized image; (e) histogram-equalized image; (f) preprocessed image
Figure 4.3 The schematic architecture of the proposed network
Figure 4.4 Crack prediction results by our proposed method (from top to bottom: original images, ground truth, probability map, binary output)
Figure 4.5 Crack prediction results on the CFD dataset (from top to bottom: original image, ground truth, MFCD [46], CNN [56], and our results)
Figure 4.6 Results on the AigleRN dataset (from left to right: original images, ground truth images, FFA, MPS, MFCD, CNN, the proposed method)
Figure 4.7 Detection results on the AigleRN dataset (from top to bottom: original images, ground truth images, FFA, MPS, MFCD, CNN, and our results)
Figure 4.8 Detection results on cross-data generation: (a), (b), (c), (d) Original images and ground truth of the CFD dataset and the AigleRN dataset; (e) Training/Testing: CFD/CFD; (f) Training/Testing: AigleRN/AigleRN; (g) Training/Testing: AigleRN/CFD; (h) Training/Testing: CFD/AigleRN
Figure 5.1 Wind power energy in South Korea [72]
Figure 5.2 Proposed network architecture
Figure 5.3 Wind turbine inspection using the drone system: (a) Drone system working state; (b) The prototype of the drone system
Figure 5.4 Illustration of prediction steps: (a) Input image; (b) Network threshold output; (c) Contour detection; (d) Final abnormal appearance results
Figure 5.5 Real inspection flight on garlic fields
Figure 5.6 Scaling garlic size using a ruler
Figure 5.7 Illustration of image processing to extract the garlic information: (a) Garlic contour detection; (b) Final garlic size results
Figure 5.8 Example results of plant recognition

LIST OF TABLES

Table 2.1 Numerical results for the distance maintenance algorithm
Table 3.1 Setup parameters
Table 3.2 Processing speed across different methods
Table 4.1 Comparison of different methods on the same data sets (CFD dataset and AigleRN dataset)
Table 4.2 Comparison of major deep learning approaches for crack detection and segmentation
Table 4.3 Detection results with five pixels of tolerance margin on the CFD dataset
Table 4.4 Detection results with two pixels of tolerance margin on the CFD dataset
Table 4.5 Detection results with five pixels of tolerance margin on the AigleRN dataset
Table 4.6 Detection results with two pixels of tolerance margin on the AigleRN dataset
Table 4.7 Detection results on cross-data generation with five pixels of tolerance margin
Table 4.8 Detection results on cross-data generation with two pixels of tolerance margin
Table 5.1 Comparison between our results and the original U-net network
Table 5.2 Performance comparison
Table 5.3 Computational cost
Table 5.4 Pixel-wise performance on the test dataset
Table 5.5 Object-wise performance on the test dataset

GLOSSARY

AEE    Average Euclidean Error
AMPD   Automatic Multiscale-based Peak Detection
CFAR   Constant False Alarm Rate
CFD    Crack Forest Dataset
CLAHE  Contrast Limited Adaptive Histogram Equalization
CNNs   Convolutional Neural Networks
CPU    Central Processing Unit
DCNN   Deep Convolutional Neural Networks
DLL    Delay-Locked Loop
DNN    Deep Neural Network
FFA    Free-Form Anisotropy
FCN    Fully Convolutional Network
FFT    Fast Fourier Transform
FMCW   Frequency-Modulated Continuous-Wave
FN     False Negative
FP     False Positive
GMM    Gaussian Mixture Model
GPS    Global Positioning System
GUI    Graphical User Interface
IALM   Inexact Augmented Lagrange Multipliers
IoT    Internet of Things
IR-UWB Impulse Radio Ultra-Wideband
ISM    Industry-Science-Medical
LBP    Local Binary Pattern

2.4 Conclusion

In this chapter, the proposed drone system is applied to garlic crop growth stage estimation. The system uses a Raspberry Pi camera to capture input images of the garlic field and sends them to a laptop computer for processing. A CNN-based system is adopted on the ground station laptop to extract information about the garlic crops. In addition, a number of image processing methods are applied to the CNN output image to extract the garlic size. Based on the bounding box size, the user is able to estimate the growth stage of the garlic.
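The excerpt does not include the image-processing code itself; the snippet below is only a rough sketch of the contour and bounding-box step described above. It assumes the CNN output has already been thresholded into a binary mask and that a pixel-to-centimetre scale is available (for example from the ruler calibration shown in Figure 5.6); the function name, parameters, and OpenCV-based approach are illustrative assumptions, not the thesis implementation.

```python
import cv2
import numpy as np

def garlic_sizes_from_mask(mask: np.ndarray, cm_per_pixel: float, min_area_px: int = 50):
    """Estimate per-plant bounding-box sizes (in cm) from a binary CNN output mask.

    mask:          uint8 array, 255 where the network predicts garlic, 0 elsewhere.
    cm_per_pixel:  scale factor obtained beforehand (e.g. from a ruler in the scene).
    min_area_px:   contours smaller than this are treated as noise and skipped.
    """
    # Light morphological clean-up of the thresholded network output.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    clean = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Contour detection on the cleaned mask (OpenCV 4 return convention);
    # each external contour is treated as one candidate plant.
    contours, _ = cv2.findContours(clean, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    sizes = []
    for cnt in contours:
        if cv2.contourArea(cnt) < min_area_px:
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        sizes.append({
            "bbox_px": (x, y, w, h),
            "width_cm": w * cm_per_pixel,
            "height_cm": h * cm_per_pixel,
        })
    return sizes
```

Calling this on each processed field image would return one width/height estimate per detected plant, which is the quantity the user compares against the expected size for each growth stage.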
Chapter 6. CONCLUSION AND FUTURE WORKS

Conclusion

Drone systems have been developed for various applications, especially for autonomous inspection purposes. Each application requires different kinds of sensors and signal processing. In this thesis, we investigate radar signal processing algorithms and an object segmentation approach for a drone system. Two kinds of radar sensors, namely IR-UWB and FMCW radars, with different signal processing algorithms are proposed to estimate, in real time, the distance from the radar to obstacles for safe flight. A novel object segmentation architecture is proposed to extract valuable information from input RGB images. The processing algorithms are given in Chapters 2 to 4.

In Chapter 2, the hardware configuration and software algorithm for an innovative impulse radar are proposed. The impulse radar sensor is lightweight and has low power consumption. A real-time radar signal processing algorithm based on a logarithm compensation method and filters applied to the recorded data is presented. Experimental results show that the proposed impulse radar is able to work at real-time speed with high accuracy.
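No source for the Chapter 2 algorithm appears in this excerpt; the following is a minimal sketch, under stated assumptions, of the kind of processing the paragraph above describes: normalize one recorded frame, apply a logarithmic compensation so that weak far-range echoes become comparable to strong near-range ones, smooth the result, and map the strongest bin to a distance. The bin spacing, threshold, and function name are hypothetical and not taken from the thesis.

```python
import numpy as np

def estimate_distance(frame: np.ndarray,
                      bin_spacing_m: float,
                      noise_threshold: float = 0.2,
                      smooth_len: int = 5):
    """Return the estimated range (metres) of the strongest echo in one radar frame.

    frame:         1-D array of received amplitudes, one value per range bin.
    bin_spacing_m: distance represented by one range bin (assumed known from the sensor).
    """
    # Normalize the raw frame to [0, 1].
    mag = np.abs(frame).astype(float)
    mag = (mag - mag.min()) / (np.ptp(mag) + 1e-12)

    # Logarithm compensation: compress the dynamic range so distant, weak
    # echoes are not swamped by strong near-range reflections.
    comp = np.log1p(mag * 100.0) / np.log1p(100.0)

    # Simple moving-average filter to suppress spiky noise.
    kernel = np.ones(smooth_len) / smooth_len
    smooth = np.convolve(comp, kernel, mode="same")

    # Pick the strongest bin; reject it if it does not clear the noise floor.
    peak = int(np.argmax(smooth))
    if smooth[peak] < noise_threshold:
        return None  # no reliable target in this frame
    return peak * bin_spacing_m
```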
On the other hand, a novel algorithm for the FMCW radar based on Robust Principal Component Analysis (RPCA) for moving-target detection is given in Chapter 3. We introduce a new update method for RPCA via gradient descent (RPCA-GD) to reduce the processing time. The proposed scheme shows impressive results in both processing time and accuracy compared to other RPCA-based approaches when using real signals in various experimental scenarios.

For the object segmentation problem, we propose a new CNN-based network architecture operating on normal gray-scale images. The proposed architecture contains a modified U-net network and a high-level feature network. A critical contribution of our work is the combination of these networks through a fusion layer. We implement and thoroughly evaluate our proposed system on two open datasets: the Crack Forest Dataset (CFD) and the AigleRN dataset. Experimental results show that our system outperforms eight state-of-the-art methods on the two open datasets.
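The network definition itself is not part of this excerpt; the toy PyTorch sketch below only illustrates the two-branch-plus-fusion idea just described, with a shallow U-net-like branch, a dilated-convolution stand-in for the high-level feature network, and a 1x1-convolution fusion layer. Channel counts, depths, and layer choices are assumptions; the actual architecture in Chapter 4 differs.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU, as in U-net-style encoders.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class TwoBranchCrackNet(nn.Module):
    """Toy two-branch segmentation net: a small U-net-like branch plus a
    high-level feature branch, merged by a 1x1-convolution fusion layer."""

    def __init__(self):
        super().__init__()
        # Branch 1: shallow encoder-decoder (stand-in for the modified U-net).
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)   # takes the skip connection (16 + 16 channels)
        # Branch 2: dilated convolutions as a stand-in for the high-level feature network.
        self.high = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, 16, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Fusion layer: concatenate both branches, then 1x1 conv to one crack-probability map.
        self.fuse = nn.Conv2d(16 + 16, 1, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                                      # (N, 16, H, W)
        e2 = self.enc2(self.pool(e1))                          # (N, 32, H/2, W/2)
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))    # back to (N, 16, H, W)
        h = self.high(x)                                       # (N, 16, H, W)
        return torch.sigmoid(self.fuse(torch.cat([d1, h], dim=1)))
```

A forward pass such as `TwoBranchCrackNet()(torch.randn(1, 1, 256, 256))` returns a per-pixel crack probability map of the same spatial size, which is then thresholded in post-processing.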
In Chapter 5, we introduce applications of the proposed drone system to the wind turbine inspection problem and the recognition of plant growth stages. Our drone system is applied to the inspection of wind turbines at the YongGwang wind turbine farm. Furthermore, monitoring of the plant growth stage in agricultural fields is conducted at the Gwangju Institute of Science and Technology and Chonnam National University. The experimental results show that our system is safe and achieves high accuracy at real-time speed.

Future works

Our drone system is able to avoid collisions with obstacles and to extract useful information for the user automatically. However, there are a number of future works to improve our drone system. First, we would like to optimize the radar signal-processing algorithm for various clutter conditions during flight. Second, the object segmentation system can be modified for multiple-object segmentation to extend it to other applications such as traffic monitoring or weed and crop classification. In detail, for the wind turbine application, we need to put more effort into improving drone localization around the wind turbine. Accurate localization increases the consistency of the inspection and allows the drone to get closer to the WT and capture more precise images. We expect to modify the control algorithm to achieve good tracking of the drone position for a 100% automated inspection service. In addition, the drone system can be extended to various applications in agriculture, such as irrigation and fertilization equipment monitoring, livestock monitoring, weed management, soil monitoring, and construction site inspection.

References

[1] M. Ghavami, L.B. Michael, R. Kohno, "Ultra-Wideband Signals and Systems in Communication Engineering," John Wiley & Sons, Ltd: Newark, NJ, USA, 2006.
[2] X. Zhou, A. Yang, W. Yu, "Moving object detection by detecting contiguous outliers in the low-rank representation," IEEE Trans. Pattern Anal. Machine Intell., Vol. 35, No. 3, pp. 597–610, Mar. 2013.
[3] O. Oreifej, X. Li, M. Shah, "Simultaneous video stabilization and moving object detection in turbulence," IEEE Trans. Pattern Anal. Machine Intell., Vol. 35, No. 2, pp. 450–462, Feb. 2013.
[4] C. Li, Z. Peng, T.Y. Huang, T. Fan, F.K. Wang, T.S. Horng, J.M. Muñoz-Ferreras, R. Gómez-García, L. Ran, J. Lin, "A review on recent progress of portable short-range noncontact microwave radar systems," IEEE Trans. Microwave Theory Tech., Vol. 65, No. 5, pp. 1692–1706, May 2017.
[5] W. Wang, Y. Takeda, Y. Yeh, B. Floyd, "A 20GHz VCO and frequency doubler for W-band FMCW radar applications," in Proceedings of the 14th Topical Meeting on Silicon Monolithic Integrated Circuits in RF Systems (SiRF), pp. 104–106, Jan. 2014.
[6] M.T. Dao, D.H. Shin, Y.T. Im, S.O. Park, "A two sweeping VCO source for heterodyne FMCW radar," IEEE Trans. Instrum. Meas., Vol. 62, No. 1, pp. 230–239, Jan. 2013.
[7] V. Winkler, R. Feger, L. Maurer, "79GHz automotive short range radar sensor based on single-chip SiGe-transceivers," in Proceedings of the European Radar Conference (EuRAD), pp. 459–462, Oct. 2008.
[8] H.J. Ng, A. Fischer, R. Feger, R. Stuhlberger, L. Maurer, A. Stelzer, "A DLL-supported, low phase noise fractional-N PLL with a wideband VCO and a highly linear frequency ramp generator for FMCW radars," IEEE Trans. Circuits Syst. I, Vol. 60, No. 12, pp. 3289–3302, Dec. 2013.
[9] E. Hyun, J.H. Lee, "Method to improve range and velocity error using de-interleaving and frequency interpolation for automotive FMCW radars," International Journal of Signal Processing, Image Processing and Pattern Recognition, No. 2, pp. 11–22, June 2009.
[10] M. Song, J. Lim, D.J. Shin, "The velocity and range detection using the 2D-FFT scheme for automotive radars," in Proceedings of the 4th IEEE International Conference on Network Infrastructure and Digital Content (IC-NIDC), pp. 507–510, Sep. 2014.
[11] M. Kronauge, H. Rohling, "Fast two-dimensional CFAR procedure," IEEE Trans. Aerosp. Electron. Syst., Vol. 49, No. 3, pp. 1817–1823, July 2013.
[12] J. Li, W. Che, T. Shen, W. Feng, X. Li, K. Deng, "An improved waveform for multi-target detection in FMCW vehicle radar," in Proceedings of the 7th International Conference on Mechatronics, Control and Materials (ICMCM 2016), pp. 203–206, Oct. 2016.
[13] Y. Zhao, Y. Su, "Vehicles detection in complex urban scenes using Gaussian mixture model with FMCW radar," IEEE Sensors J., Vol. 17, No. 18, pp. 5948–5953, Sept. 2017.
[14] T. Bouwmans, A. Sobral, S. Javed, S.K. Jung, E.-H. Zahzah, "Decomposition into low-rank plus additive matrices for background/foreground separation: A review for a comparative evaluation with a large-scale dataset," Journal of Computer Science Review, Vol. 23, pp. 1–71, Feb. 2017.
[15] Z. Lin, M. Chen, L. Wu, Y. Ma, "The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices," arXiv:1009.5055, 2010.
[16] J. Feng, H. Xu, S. Yan, "Online robust PCA via stochastic optimization," in Proceedings of the 26th Annual Conference on Neural Information Processing Systems, pp. 404–412, Dec. 2013.
[17] X. Yi, D. Park, Y. Chen, C. Caramanis, "Fast algorithms for robust PCA via gradient descent," arXiv:1605.07784v2, 2016.
[18] W. Debski, W. Winkler, Y. Sun, M. Marinkovic, J. Borngräber, J.C. Scheytt, "120 GHz radar mixed-signal transceiver," in Proceedings of the 7th European Microwave Integrated Circuits Conference (EuMIC), pp. 191–194, Oct. 2012.
[19] Silicon Radar, http://siliconradar.com/
[20] E. Ozturk, D. Genschow, U. Yodprasit, B. Yilmaz, D. Kissinger, W. Debski, W. Winkler, "Measuring target range and velocity: Developments in chip, antenna, and packaging technologies for 60-GHz and 122-GHz industrial radars," IEEE Microwave Magazine, Vol. 18, No. 7, pp. 26–39, Dec. 2017.
[21] M.A. Richards, "Fundamentals of Radar Signal Processing," 2nd ed., McGraw-Hill, New York, USA, 2005.
[22] E.J. Candès, X. Li, Y. Ma, J. Wright, "Robust principal component analysis?," Journal of the ACM, Vol. 58, No. 3, May 2011.
[23] H.A. Meziani, F. Soltani, "Performance analysis of some CFAR detectors in homogeneous and non-homogeneous Pearson-distributed clutter," Journal of Signal Processing, Vol. 86, No. 8, pp. 2115–2122, Aug. 2006.
[24] T. Wagner, R. Feger, A. Stelzer, "Cluster CLEAN: An application of CLEAN to LFMCW radar systems," in Proceedings of the 11th European Radar Conference (EuRAD), pp. 173–176, Oct. 2014.
[25] I.-S. Choi, D.-K. Seo, J.-K. Bang, H.-T. Kim, E.J. Rothwell, "Radar target recognition using one-dimensional evolutionary programming-based CLEAN," Journal of Electromagnetic Waves and Applications, Vol. 17, No. 5, pp. 763–784, 2003.
[26] F. Scholkmann, J. Boss, M. Wolf, "An efficient algorithm for automatic peak detection in noisy periodic and quasi-periodic signals," Algorithms, Vol. 5, No. 4, pp. 588–603, 2012.
[27] D. Sabushimike, S.Y. Na, J.Y. Kim, N.N. Bui, K.S. Seo, G.G. Kim, "Low-rank matrix recovery approach for clutter rejection in real-time IR-UWB radar-based moving target detection," Sensors, Vol. 16, No. 9, 2016.
[28] A. Mohan, S. Poobal, "Crack detection using image processing: A critical review and analysis," Alexandria Engineering Journal, Vol. 57, No. 2, pp. 787–798, June 2018.
[29] O. Ronneberger, P. Fischer, T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 234–241, 2015.
[30] P. Liskowski, K. Krawiec, "Segmenting retinal blood vessels with deep neural networks," IEEE Trans. Med. Imag., Vol. 35, No. 11, pp. 2369–2380, Nov. 2016.
[31] V. Badrinarayanan, A. Kendall, R. Cipolla, "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation," IEEE Trans. Pattern Anal. Machine Intell., Vol. 39, No. 12, pp. 2481–2495, Dec. 2017.
[32] I. Abdel-Qader, O. Abudayyeh, M.E. Kelly, "Analysis of Edge-Detection Techniques for Crack Identification in Bridges," Journal of Computing in Civil Engineering, Vol. 17, No. 4, pp. 255–263, Oct. 2003.
[33] Y. Hu, C. Zhao, H. Wang, "Automatic pavement crack detection using texture and shape descriptors," IETE Technical Review, Vol. 27, No. 5, pp. 398–405, 2010.
[34] H. Oliveira, P.L. Correia, "Automatic road crack segmentation using entropy and image dynamic thresholding," in Proceedings of the 17th European Signal Processing Conference, pp. 622–626, Aug. 2009.
[35] T. Yamaguchi, S. Hashimoto, "Fast crack detection method for large-size concrete surface images using percolation-based image processing," Journal of Machine Vision and Applications, Vol. 21, No. 5, pp. 797–809, Aug. 2010.
[36] Y. Hu, C. Zhao, "A local binary pattern based methods for pavement crack detection," Journal of Pattern Recognition Research, Vol. 5, No. 1, pp. 140–147, 2010.
[37] R.S. Lim, H.M. La, W. Sheng, Z. Shan, "Developing a crack inspection robot for bridge maintenance," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 6288–6293, May 2011.
[38] R.S. Lim, H.M. La, W. Sheng, "A robotic crack inspection and mapping system for bridge deck maintenance," IEEE Transactions on Automation Science and Engineering (T-ASE), Vol. 11, No. 2, pp. 367–378, Apr. 2014.
[39] T.S. Nguyen, S. Begot, F. Duculty, M. Avila, "Free-form anisotropy: A new method for crack detection on pavement surface images," in Proceedings of the 18th IEEE International Conference on Image Processing (ICIP), pp. 1069–1072, Sept. 2011.
[40] M. Avila, S. Begot, F. Duculty, T.S. Nguyen, "2D image based road pavement crack detection by calculating minimal paths and dynamic programming," in Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 783–787, Jan. 2014.
[41] R. Amhaz, S. Chambon, J. Idier, V. Baltazart, "A new minimal path selection algorithm for automatic crack detection on pavement images," in Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 788–792, Oct. 2014.
[42] R. Amhaz, S. Chambon, J. Idier, V. Baltazart, "Automatic Crack Detection on Two-Dimensional Pavement Images: An Algorithm Based on Minimal Path Selection," IEEE Trans. Intell. Transport. Syst., Vol. 17, No. 10, pp. 2718–2729, Oct. 2016.
[43] V. Kaul, A. Yezzi, Y.C. Tsai, "Detecting curves with unknown endpoints and arbitrary topology using minimal paths," IEEE Trans. Pattern Anal. Machine Intell., Vol. 34, No. 10, pp. 1952–1965, Oct. 2012.
[44] Q. Zou, Y. Cao, Q. Li, Q. Mao, S. Wang, "CrackTree: Automatic crack detection from pavement images," Pattern Recognition Letters, Vol. 33, No. 3, pp. 227–238, Feb. 2012.
[45] H. Li, D. Song, Y. Liu, B. Li, "Automatic Pavement Crack Detection by Multi-Scale Image Fusion," IEEE Trans. Intell. Transport. Syst., Vol. 20, No. 6, pp. 2025–2036, June 2019.
[46] H. Oliveira, P.L. Correia, "Automatic road crack detection and characterization," IEEE Trans. Intell. Transport. Syst., Vol. 14, No. 1, pp. 155–168, Aug. 2012.
[47] H. Oliveira, P.L. Correia, "CrackIT — An image processing toolbox for crack detection and characterization," in Proceedings of the IEEE International Conference on Image Processing (ICIP), Paris, pp. 798–802, Oct. 2014.
[48] A. Cord, S. Chambon, "Automatic road defect detection by textural pattern recognition based on AdaBoost," Computer-Aided Civil and Infrastructure Engineering, Vol. 27, No. 4, pp. 244–259, Mar. 2012.
[49] K. Fernandes, L. Ciobanu, "Pavement pathologies classification using graph-based features," in Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 793–797, Oct. 2014.
[50] D. Ai, G. Jiang, L. Siew Kei, C. Li, "Automatic Pixel-level Pavement Crack Detection Using Information of Multi-Scale Neighborhoods," IEEE Access, Vol. 6, pp. 24452–24463, April 2018.
[51] Y. Shi, L. Cui, Z. Qi, F. Meng, Z. Chen, "Automatic road crack detection using random structured forests," IEEE Trans. Intell. Transport. Syst., Vol. 17, No. 12, pp. 3434–3445, May 2016.
[52] A. Krizhevsky, I. Sutskever, G.E. Hinton, "ImageNet classification with deep convolutional neural networks," in Proceedings of the 25th International Conference on Neural Information Processing Systems (NIPS'12), Vol. 1, pp. 1097–1105, Dec. 2012.
[53] L. Zhang, F. Yang, Y. Daniel Zhang, Y.J. Zhu, "Road crack detection using deep convolutional neural network," in Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 3708–3712, Aug. 2016.
[54] Y. Cha, W. Choi, O. Büyüköztürk, "Deep learning-based crack damage detection using convolutional neural networks," Computer-Aided Civil and Infrastructure Engineering, Vol. 32, No. 5, pp. 361–378, May 2017.
[55] K. Gopalakrishnan, S.K. Khaitan, A. Choudhary, A. Agrawal, "Deep convolutional neural networks with transfer learning for computer vision-based data-driven pavement distress detection," Journal of Construction and Building Materials, Vol. 157, pp. 322–330, Dec. 2017.
[56] Z. Fan, Y. Wu, J. Lu, W. Li, "Automatic Pavement Crack Detection Based on Structured Prediction with the Convolutional Neural Network," arXiv, [online] Available: https://arxiv.org/abs/1802.02208v1
[57] Z. Tong, J. Gao, A. Sha, L. Hu, S. Li, "Convolutional neural network for asphalt pavement surface texture analysis," Computer-Aided Civil and Infrastructure Engineering, Vol. 33, No. 12, pp. 1056–1072, Dec. 2018.
[58] H. Maeda, Y. Sekimoto, T. Seto, T. Kashiyama, H. Omata, "Road Damage Detection and Classification Using Deep Neural Networks with Smartphone Images," Computer-Aided Civil and Infrastructure Engineering, Vol. 33, No. 12, pp. 1127–1141, Dec. 2018.
[59] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, A.C. Berg, "SSD: Single shot multibox detector," in Proceedings of the European Conference on Computer Vision, pp. 21–37, 2016.
[60] A.G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861.
[61] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, I. Fischer, Z. Wojna, Y. Song, S. Guadarrama, K. Murphy, "Speed/accuracy trade-offs for modern convolutional object detectors," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3296–3297, 2017.
[62] A. Zhang, K.C.P. Wang, B. Li, E. Yang, X. Dai, Y. Peng, Y. Fei, Y. Liu, J.Q. Li, C. Chen, "Automated pixel-level pavement crack detection on 3D asphalt surfaces using a deep-learning network," Computer-Aided Civil and Infrastructure Engineering, Vol. 32, No. 10, pp. 805–819, Oct. 2017.
[63] A. Zhang, K.C.P. Wang, Y. Fei, Y. Liu, C. Chen, G. Yang, J.Q. Li, E. Yang, S. Qiu, "Automated Pixel-Level Pavement Crack Detection on 3D Asphalt Surfaces with a Recurrent Neural Network," Computer-Aided Civil and Infrastructure Engineering, Vol. 34, No. 3, pp. 213–229, Mar. 2019.
[64] X. Yang, H. Li, Y. Yu, X. Luo, T. Huang, X. Yang, "Automatic pixel-level crack detection and measurement using fully convolutional network," Computer-Aided Civil and Infrastructure Engineering, Vol. 33, No. 12, pp. 1090–1109, Dec. 2018.
[65] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition," https://arxiv.org/abs/1409.1556
[66] K. Zuiderveld, "Contrast Limited Adaptive Histogram Equalization," Graphics Gems IV, Academic Press Professional, pp. 474–485, 1994.
[67] V. Nair, G.E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proceedings of the 27th International Conference on Machine Learning (ICML'10), pp. 807–814, June 2010.
[68] S. Ioffe, C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," in Proceedings of the 32nd International Conference on Machine Learning (ICML'15), Vol. 37, pp. 448–456, July 2015.
[69] N. Srivastava, G.E. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, Vol. 15, No. 1, pp. 1929–1958, June 2014.
[70] P. Tchakoua, R. Wamkeue, M. Ouhrouche, F. Slaoui-Hasnaoui, T.A. Tameghe, and G. Ekemb, "Wind turbine condition monitoring: State-of-the-art review, new trends, and future challenges," Energies, Vol. 7, No. 4, pp. 2595–2630, 2014.
[71] M.H. Alsharif, J. Kim, and J.H. Kim, "Opportunities and challenges of solar and wind energy in South Korea: A review," Sustainability, Vol. 10, pp. 1–23, 2018.
[72] https://www.thewindpower.net/country_en_23_south-korea.php
[73] D. Chan and J. Mo, "Life cycle reliability and maintenance analyses of wind turbines," Energy Procedia, Vol. 110, pp. 328–333, Mar. 2017.
[74] D. Li, S.-C.M. Ho, G. Song, L. Ren, and H. Li, "A review of damage detection methods for wind turbine blades," Smart Materials and Structures, Vol. 24, No. 3, p. 033001, Feb. 2015.
[75] H. Zhang and J. Jackman, "A feasibility study of wind turbine blade surface crack detection using an optical inspection method," in Proceedings of the International Conference on Renewable Energy Research and Applications (ICRERA), pp. 847–852, Oct. 2013.
[76] H. Zhang and J. Jackman, "Feasibility of automatic detection of surface cracks in wind turbine blades," Wind Engineering, Vol. 38, No. 6, pp. 575–586, Dec. 2014.
[77] L. Emmi, M. Gonzalez-de-Soto, G. Pajares, P. Gonzalez-de-Santos, "New trends in robotics for agriculture: Integration and assessment of a real fleet of robots," The Scientific World Journal, Vol. 2014, Article ID 404059, 21 pages, 2014.
[78] P. Lottes, J. Behley, N. Chebrolu, A. Milioto, C. Stachniss, "Joint stem detection and crop-weed classification for plant-specific treatment in precision farming," in Proceedings of the International Conference on Intelligent Robots and Systems (IROS), pp. 8233–8238, 2018.
[79] T. Mueller-Sim, M. Jenkins, J. Abel, G. Kantor, "The Robotanist: A ground-based agricultural robot for high-throughput crop phenotyping," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3634–3639, July 2017.
[80] W.S. Lee, V. Alchanatis, C. Yang, M. Hirafuji, D. Moshou, C. Li, "Sensing technologies for precision specialty crop production," Computers and Electronics in Agriculture, Vol. 74, No. 1, pp. –33, Oct. 2010.
[81] J. Primicerio, S.F.D. Gennaro, E. Fiorillo, L. Genesio, E. Lugato, A. Matese, F.P. Vaccari, "A flexible unmanned aerial vehicle for precision agriculture," Precision Agriculture, Vol. 13, No. 4, pp. 517–523, Aug. 2012.
[82] S. Varela, Y. Assefa, P.V.V. Prasad, N. Peralta, T. Griffin, A. Sharda, A. Ferguson, I. Ciampitti, "Spatio-temporal evaluation of plant height in corn via unmanned aerial systems," Journal of Applied Remote Sensing, Vol. 11, No. 3, pp. 1–12, Aug. 2017.
[83] P. Lottes, R. Khanna, J. Pfeifer, R. Siegwart, C. Stachniss, "UAV-based crop and weed classification for smart farming," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 3024–3031, May 2017.
[84] D. Anthony, S. Elbaum, A. Lorenz, C. Detweiler, "On crop height estimation with UAVs," in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, pp. 4805–4812, Sep. 2014.
[85] A.C. Birdal, U. Avdan, and T. Turk, "Estimating tree heights with images from an unmanned aerial vehicle," Journal of Geomatics, Natural Hazards and Risk, Vol. 8, No. 2, pp. 1144–1156, Feb. 2017.

Acknowledgments

First and foremost, I would like to express my deep gratitude to my advisor, Professor Kim Jin Young, for his continuous support of my Ph.D. study and related research. His guidance, motivation, immense knowledge, and patience helped me throughout the research and the writing of this thesis. I also would like to thank Professor Na Seung You for his encouragement, guidance, and support from my initial steps in the research and study of my Ph.D. course. Thank you both very much for providing me with research directions and technical feedback during my study period at CNU. Besides my advisors, I would like to thank my thesis committee, Prof. Hong Sung Hoon, Prof. Pham The Bao, and Prof. Seo Kuyng Sik, for their time, insightful comments, feedback, and encouragement on my thesis from various perspectives. I also would like to thank several professors from the Electronics and Computer Engineering Department, Prof. Kim Dong Kook, Prof. Kim Soo Hong, Prof. Hong Song Hun, Prof. Lee Gue Sang, Prof. Choi Teak Chue, Prof. Lee Chin Woo, Prof. Park Song Mo, Prof. Won Yong Won, and Prof. Ng Chi Tim from the Department of Basic Sciences, for their classes and their knowledge; I really appreciate it. I am also thankful to the BK21 Scholarship Program and ITRC for supporting me with a scholarship during my study. In addition, I would like to thank Wave3D and Momed Solution for giving me the opportunity to work as an intern during my Ph.D. course.

Special thanks to my fellow lab mates, Dr. Min So Hee, Dr. Shin Do Sung, Dr. Bui Ngoc Nam, Dr. Trinh Tan Dat, Mr. Tran Thuong Khanh, Ms. Park Min Kuyng, Ms. Liu QinTong, Ms. Ma Xinjie, Mr. Donatien Sabushimike, Mr. Yu Gwang Huyn, Mr. Lee Ju Hwan, Mr. Hong Seok Jin, Mr. Hwang Song Min, Mr. Zaigham Zaheer, Mr. Shahid, Mr. Vo Hoang Trong, and Mr. Dang Thanh Vu from the ICDSP lab, for their interesting discussions, for the tough times we worked through together, and for all the fun we have had in the last few years. My sincere thanks to my Vietnamese friends here at Chonnam National University, "Thay Tam", "anh Duc, chi Van", "Cong", "nha Hang Ca", "Ngoc Mit", etc., for being supportive of me all the time. You all have provided us a second family here in Korea.
Last but not least, I would like to thank my little family, Le Thien Kim and Hope, for being with me, going through tough times together, and enjoying happiness together. Without you, I would not have come this far in effort and in success. I am very grateful to my big family, my parents, my younger sister, my younger brother-in-law, and my nephew for supporting me spiritually throughout the writing of this dissertation and my life in general.

Abstract in Korean (국문초록)

A Study on Radar Signal Processing and Object Segmentation for Drone System Applications

NGUYEN, Huy Toan
Department of Electronics and Computer Engineering, Graduate School, Chonnam National University
(Advisor: Prof. Kim Jin Young)

Over the past decades, drone systems have been used in various fields such as inspection, surveying, mapping, cartography, safety, agriculture, mining, search and rescue, and unmanned cargo systems. To be accepted as a reliable system, a drone must be adapted to and integrated with sensor components and software systems. On the other hand, processing the signals acquired by a drone system is still a complex and difficult problem because of high noise, uncertainty, and computational cost. This thesis studies sensor signal processing algorithms and an object segmentation method for drone system applications. Distance estimation is needed to avoid collisions between the drone and obstacles during flight, and two kinds of radar sensors are considered: an impulse radio ultra-wideband (IR-UWB) radar sensor and a frequency-modulated continuous-wave (FMCW) radar sensor. On the ground side, object segmentation is adopted to extract productive information from input images.

This study proposes an impulse radar with a new hardware configuration and software algorithm. The impulse radar sensor hardware must be lightweight, low-power, and easy to use. A real-time radar signal processing algorithm based on a logarithm compensation method and filters applied to the raw input data is proposed. The proposed impulse radar can operate at real-time speed with high accuracy.

This thesis also proposes a new FMCW radar signal processing algorithm based on robust principal component analysis (RPCA) for moving-target detection. Based on experiments, calibration and measurement are first applied to the input signal. Then RPCA via gradient descent is adopted to model the low-rank background and noise. A new update method for RPCA reduces the processing time. Finally, moving objects in the foreground are localized using automatic multiscale-based peak detection. All processing steps proceed with a sliding-window scheme, and the proposed method shows impressive results in terms of fast processing speed and high accuracy compared with other RPCA-based methods when using real signals from various experimental environments.

Furthermore, this study addresses the object segmentation problem of pavement crack detection and pixel-level segmentation using a deep neural network on gray-scale images. A new deep network architecture consisting of a modified U-net and a high-level feature network is proposed. A more important contribution is the combination of these networks through a fusion layer. This combination remarkably boosts the performance of the system. The proposed system is implemented and thoroughly evaluated on two open datasets (the Crack Forest Dataset and the AigleRN dataset). Experimental results show that the proposed system outperforms eight state-of-the-art methods on the two open datasets.

Finally, to demonstrate the effectiveness of the proposed radar signal processing and image segmentation algorithms, the drone system is applied to inspecting wind turbines at a wind turbine farm and to monitoring the growth stage of plants on farms. The experiments were conducted at the YongGwang wind turbine farm and at the Gwangju Institute of Science and Technology and Chonnam National University, respectively. The experimental results show that our system is safe and achieves high accuracy at real-time speed.
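This excerpt contains no implementation of the FMCW chain summarized in the abstract; the sketch below is a simplified illustration of the sliding-window background/foreground idea only. It substitutes a plain truncated SVD for the thesis's RPCA via gradient descent with its custom update, and a threshold-and-cluster step for the AMPD peak detector; the window shape, rank, and threshold are assumptions.

```python
import numpy as np

def detect_moving_targets(window: np.ndarray, rank: int = 2, k_sigma: float = 4.0):
    """Separate a sliding window of FMCW range profiles into background + residual
    and report the range bins of likely moving targets in the newest frame.

    window: (n_frames, n_range_bins) matrix of magnitude range profiles.
    rank:   assumed dimension of the (mostly static) clutter subspace.
    """
    # Low-rank background estimate via truncated SVD (stand-in for RPCA-GD).
    u, s, vt = np.linalg.svd(window, full_matrices=False)
    background = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

    # The sparse residual should contain the moving reflectors.
    residual = np.abs(window - background)

    # Crude peak picking on the newest frame (stand-in for AMPD):
    # keep bins that rise k_sigma deviations above the residual's noise floor.
    latest = residual[-1]
    thresh = latest.mean() + k_sigma * latest.std()
    candidate_bins = np.flatnonzero(latest > thresh)

    # Collapse runs of adjacent bins into single detections.
    targets = []
    for b in candidate_bins:
        if targets and b - targets[-1][-1] <= 1:
            targets[-1].append(b)
        else:
            targets.append([b])
    return [int(np.mean(group)) for group in targets]
```

The thesis's update scheme is reported to reduce processing time relative to other RPCA solvers as the window slides; this sketch simply recomputes the decomposition from scratch for each window.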
