Browsing by Author "Melnychenko, Oleksandr"
Now showing 1 - 5 of 5
Document: Apple Detection With Occlusions Using Modified YOLOv5-v1 (IEEE Inc., 2023-12-21)
Melnychenko, Oleksandr; Savenko, Oleg; Radiuk, Pavlo
In our research, we created a novel YOLOv5-v1 architecture to identify apples in images with occlusions. We specifically engineered new layers for the BottleneckCSP-v4 module, which replaces the original BottleneckCSP module within the backbone structure of the YOLOv5 network. Integrating the SENet module into our improved backbone network helps to discern features of medium- and large-sized fruits more accurately under varying conditions. We also adjusted the initial anchor box sizes of the original network to avoid misidentifying small objects in the image background. Based on the test dataset, our experimental results show that our advanced network model can effectively identify fruits captured by an unmanned aerial vehicle camera. The classification metrics, namely recall, precision, mAP, and F1-score, reached 92.13%, 84.59%, 87.94%, and 89.02%, respectively.

Document: Dynamic Trajectory Adaptation for Efficient UAV Inspections of Wind Energy Units (IEEE Inc., 2024-11-26)
Svystun, Serhii; Melnychenko, Oleksandr; Radiuk, Pavlo; Savenko, Oleg; Sachenko, Anatoliy; Lysyi, Andrii
The research presents an automated method for determining the trajectory of an unmanned aerial vehicle (UAV) for wind turbine inspection. The proposed method enables efficient data collection from multiple wind installations using UAV optical sensors, considering the spatial positioning of the blades and other components of the wind energy installation. It includes component segmentation of the wind energy unit (WEU), determination of the blade pitch angle, and generation of optimal flight trajectories that respect safe distances and optimal viewing angles.
The results of computational experiments have demonstrated the advantage of the proposed method in monitoring WEUs, achieving a 78% reduction in inspection time, a 17% decrease in total trajectory length, and a 6% increase in average blade surface coverage compared to traditional methods. Furthermore, the method reduces the average deviation from the optimal trajectory by 68%, indicating its high accuracy and ability to compensate for external influences.

Document: Intelligent integrated system for fruit detection using multi-UAV imaging and deep learning (Multidisciplinary Digital Publishing Institute, 2024-03-16)
Melnychenko, Oleksandr; Scislo, Lukasz; Savenko, Oleg; Sachenko, Anatoliy; Radiuk, Pavlo
In the context of Industry 4.0, one of the most significant challenges is enhancing efficiency in sectors like agriculture by using intelligent sensors and advanced computing. Specifically, fruit detection and counting in orchards is a complex task that is crucial for efficient orchard management and harvest preparation. Traditional techniques often fail to provide the timely and precise data these tasks require. With the agricultural sector increasingly relying on technological advancements, the integration of innovative solutions is essential. This study presents a novel approach that combines artificial intelligence (AI), deep learning (DL), and unmanned aerial vehicles (UAVs). The proposed approach demonstrates superior real-time capabilities in fruit detection and counting, utilizing a combination of AI techniques and multi-UAV systems. The core innovation of this approach is its ability to simultaneously capture and synchronize video frames from multiple UAV cameras, converting them into a cohesive data structure and, ultimately, a continuous image. This integration is further enhanced by image quality optimization techniques, ensuring high-resolution and accurate detection of targeted objects during UAV operations.
Its effectiveness is demonstrated by experiments, achieving a high mean average precision of 86.8% in fruit detection and counting, which surpasses existing technologies. Additionally, it maintains low average error rates, with a false positive rate of 14.7% and a false negative rate of 18.3%, even under challenging weather conditions such as cloudiness. Overall, the practical implications of this multi-UAV imaging and DL-based approach are vast, particularly for real-time fruit recognition in orchards, marking a significant stride forward in digital agriculture that aligns with the objectives of Industry 4.0.

Document: Precision Slicing for Enhanced Defect Detection in High-Resolution Wind Turbine Blade Imagery (CEUR-WS.org, 2024-07-29)
Svystun, Serhii; Melnychenko, Oleksandr; Radiuk, Pavlo; Savenko, Oleg; Sachenko, Anatoliy
The analysis of high-resolution aerial imagery captured by unmanned aerial vehicles (UAVs) presents significant analytical challenges, primarily due to the minuscule size of observable objects and the variability in object scale caused by UAV altitude and positioning. These factors often diminish data fidelity and complicate the detection of smaller objects, which are critical in applications such as infrastructure monitoring. Traditional image processing techniques, which typically segment images into smaller, randomly cropped sections before analysis, fail to sufficiently address these challenges. In this work, we propose a novel defect detection framework for identifying minor to medium-sized damage on wind turbine blades (WTBs), a critical component in renewable energy production. The proposed framework, termed "slice-aided inference," enhances existing methodologies by incorporating both traditional patch division and a more advanced technique known as slice-aided hyper-inference.
These techniques are rigorously assessed with several advanced deep learning models, emphasizing their efficiency in identifying surface defects. The empirical testing conducted as part of this study demonstrates significant enhancements in detection capabilities, leveraging a dataset of high-resolution UAV images to highlight the practical applications and effectiveness of the proposed framework in real-world scenarios.

Document: Thermal and RGB Images Work Better Together in Wind Turbine Damage Detection (Research Institute for Intelligent Computer Systems, 2024-12-05)
Svystun, Serhii; Melnychenko, Oleksandr; Radiuk, Pavlo; Savenko, Oleg; Sachenko, Anatoliy; Lysyi, Andrii
The inspection of wind turbine blades (WTBs) is crucial for ensuring their structural integrity and operational efficiency. Traditional inspection methods can be dangerous and inefficient, prompting the use of unmanned aerial vehicles (UAVs) that access hard-to-reach areas and capture high-resolution imagery. In this study, we address the challenge of enhancing defect detection on WTBs by integrating thermal and RGB images obtained from UAVs. We propose a multispectral image composition method that combines thermal and RGB imagery through spatial coordinate transformation, key point detection, binary descriptor creation, and weighted image overlay. Using a benchmark dataset of WTB images annotated for defects, we evaluated several state-of-the-art object detection models. Our results show that composite images significantly improve defect detection efficiency. Specifically, the YOLOv8 model's accuracy increased from 91% to 95%, precision from 89% to 94%, recall from 85% to 92%, and F1-score from 87% to 93%. The number of false positives decreased from 6 to 3, and the number of missed defects decreased from 5 to 2. These findings demonstrate that integrating thermal and RGB imagery enhances defect detection on WTBs, contributing to improved maintenance and reliability.
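As a quick sanity check on the last entry, the reported F1-scores follow from the stated precision and recall via the standard harmonic-mean formula; a minimal Python sketch (the function and variable names are illustrative, not taken from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# RGB-only YOLOv8 figures reported in the abstract
baseline = f1_score(0.89, 0.85)   # ≈ 0.870

# Figures after switching to the thermal/RGB composite input
composite = f1_score(0.94, 0.92)  # ≈ 0.930

print(f"baseline F1:  {baseline:.2%}")
print(f"composite F1: {composite:.2%}")
```

Both values round to the 87% and 93% quoted in the abstract, so the precision, recall, and F1 figures are internally consistent.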