Implementación de tecnologías de borde y técnicas de Deep Learning para vigilancia en áreas remotas con canales de comunicación limitados
| dc.contributor.advisor | Parra Peña, Jhon Freddy | |
| dc.contributor.author | Bohada Vargas, Sergio David | |
| dc.contributor.other | Garcia Barreto, Germán Alberto (Catalogador) | |
| dc.date.accessioned | 2025-04-04T23:25:36Z | |
| dc.date.available | 2025-04-04T23:25:36Z | |
| dc.date.created | 2025-02-13 | |
| dc.description | La necesidad de salvaguardar a las personas, los espacios y las ciudades ha sido un factor determinante para el desarrollo de sistemas de videovigilancia basados en modelos de aprendizaje profundo cada vez más robustos e inteligentes. Sin embargo, en contextos como el colombiano, caracterizado por limitaciones significativas en su infraestructura de telecomunicaciones a nivel nacional, esta tarea enfrenta diversos desafíos. En respuesta a esta problemática, el presente trabajo aborda la investigación, prueba y evaluación práctica de tecnologías y técnicas óptimas para la implementación de modelos de aprendizaje profundo en dispositivos de borde. Para ello, se establecieron y aplicaron lineamientos derivados de la literatura actual, los cuales fueron evaluados en un entorno práctico. Se llevó a cabo el entrenamiento de distintos modelos de aprendizaje profundo utilizando un conjunto de imágenes extraídas de datos diseñados específicamente para el dominio del problema. Posteriormente, se realizaron pruebas de los modelos mediante el diseño y desarrollo de un sistema de borde en un entorno práctico, cuyos resultados fueron contrastados con los de un sistema en producción actualmente activo. | |
| dc.description.abstract | The need to safeguard people, spaces, and cities has been a key driver for the development of video surveillance systems based on increasingly robust and intelligent deep learning models. However, in contexts like Colombia, characterized by significant limitations in its national telecommunications infrastructure, this task faces various challenges. In response to this issue, the present work focuses on the research, testing, and practical evaluation of optimal technologies and techniques for implementing deep learning models on edge devices. To this end, guidelines derived from the current literature were established and applied, which were then assessed in a practical setting. The training of different deep learning models was carried out using a dataset of images specifically designed for the problem domain. Subsequently, the models were tested through the design and development of an edge system in a practical environment, and the results were compared with those from a currently active production system. | |
| dc.format.mimetype | | |
| dc.identifier.uri | http://hdl.handle.net/11349/94674 | |
| dc.language.iso | spa | |
| dc.publisher | Universidad Distrital Francisco José de Caldas | |
| dc.relation.references | [1] Ultralytics, ‘‘Yolovx models - documentation.’’ https://docs.ultralytics.com/es/models/yolov, 2024. Accessed: November 1, 2024. | |
| dc.relation.references | [2] A. Wang, H. Chen, L. Liu, K. Chen, Z. Lin, J. Han, and G. Ding, ‘‘Yolov10: Real-time end-to-end object detection,’’ arXiv preprint arXiv:2405.14458, 2024. | |
| dc.relation.references | [3] C.-Y. Wang and H.-Y. M. Liao, ‘‘YOLOv9: Learning what you want to learn using programmable gradient information,’’ 2024. | |
| dc.relation.references | [4] S. Aharon, Louis-Dupont, Ofri Masad, K. Yurkova, Lotem Fridman, Lkdci, E. Khvedchenya, R. Rubin, N. Bagrov, B. Tymchenko, T. Keren, A. Zhilko, and Eran-Deci, ‘‘Super-gradients,’’ 2021. | |
| dc.relation.references | [5] NVIDIA Corporation, ‘‘Jetson Orin Nano Developer Kit User Guide,’’ 2024. Accessed: November 1, 2024. | |
| dc.relation.references | [6] H. Guo, B. Tian, Z. Yang, B. Chen, Q. Zhou, S. Liu, K. Nahrstedt, and C. Danilov, ‘‘Deepstream: Bandwidth efficient multi-camera video streaming for deep learning analytics,’’ arXiv preprint arXiv:2306.15129, 2023. | |
| dc.relation.references | [7] Y. Zhao, Y. Yin, and G. Gui, ‘‘Lightweight deep learning-based intelligent edge surveillance techniques,’’ International Journal of Mechanical Engineering, vol. 7, no. 5, pp. 311--314, 2022. | |
| dc.relation.references | [8] R. Chavan and S. Patil, ‘‘Multi-target detection and tracking in cctv using deep learning techniques,’’ International Journal of Mechanical Engineering, vol. 7, pp. 311--314, May 2022. | |
| dc.relation.references | [9] D. G. Lema, R. Usamentiaga, and D. F. García, ‘‘Quantitative comparison and performance evaluation of deep learning-based object detection models on edge computing devices,’’ Carbohydrate Polymers, vol. 95, p. 102127, 2024. | |
| dc.relation.references | [10] J. Chen, K. Li, Q. Deng, K. Li, and P. S. Yu, ‘‘Distributed deep learning model for intelligent video surveillance systems with edge computing,’’ IEEE Transactions on Parallel and Distributed Systems, 2023. Available at https://ieeexplore.ieee.org/. | |
| dc.relation.references | [11] H. Lokhande and S. Ganorkar, ‘‘Optimizing real-time object detection on edge devices: A transfer learning approach,’’ International Journal of Intelligent Systems and Applications in Engineering, vol. 12, no. 21s, pp. 3896--3903, 2024. | |
| dc.relation.references | [12] AMD, ‘‘What is a SOM?’’ https://www.amd.com/es/products/system-on-modules/what-is-a-som.html, 2023. Accessed: November 1, 2024. | |
| dc.relation.references | [13] Z. Zou, K. Chen, Z. Shi, Y. Guo, and J. Ye, ‘‘Object detection in 20 years: A survey,’’ IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022. Available at https://arxiv.org/abs/1905.05055. | |
| dc.relation.references | [14] L. Huang, C. Chen, J. Yun, Y. Sun, J. Tian, Z. Hao, H. Yu, and H. Ma, ‘‘Multi-scale feature fusion convolutional neural network for indoor small target detection,’’ Front. Neurorobot., vol. 16, p. 881021, 2022. Published: 19 May 2022. | |
| dc.relation.references | [15] S. C. Magalhães, F. N. dos Santos, P. Machado, A. P. Moreira, and J. Dias, ‘‘Benchmarking edge computing devices for grape bunches and trunks detection using accelerated object detection single shot multibox deep learning models,’’ Engineering Applications of Artificial Intelligence, vol. 117, p. 105604, 2023. Published by Elsevier under the CC BY license. | |
| dc.relation.references | [16] W. Rahmaniar and A. Hernawan, ‘‘Real-time human detection using deep learning on embedded platforms: A review,’’ Journal of Robotics and Control (JRC), vol. 2, pp. 462--467, November 2021. | |
| dc.relation.references | [17] S. Mittal, ‘‘A survey on optimized implementation of deep learning models on the nvidia jetson platform,’’ Journal of Systems Architecture, 2019. Accessed from ResearchGate. | |
| dc.relation.references | [18] H. Feng, G. Mu, S. Zhong, P. Zhang, and T. Yuan, ‘‘Benchmark analysis of yolo performance on edge intelligence devices,’’ Cryptography, vol. 6, no. 2, p. 16, 2022. | |
| dc.relation.references | [19] H.-S. Chang, C.-Y. Wang, R. R. Wang, G. Chou, and H.-Y. M. Liao, ‘‘YOLOR-based multi-task learning,’’ arXiv preprint arXiv:2309.16921, 2023. | |
| dc.relation.references | [20] V. N. Vaibhav Patel, Mahendra Kanojia, ‘‘Exploring the potential of ResNet50 and YOLOv8 in improving breast cancer diagnosis: A deep learning perspective,’’ International Journal of Computer Information Systems and Industrial Management Applications, vol. 16, pp. 416--431, 2024. | |
| dc.relation.references | [21] NVIDIA, ‘‘NVIDIA TensorRT SDK,’’ 2024. Accessed: 2024-12-04. | |
| dc.relation.references | [22] NVIDIA Corporation, ‘‘DeepStream SDK.’’ https://developer.nvidia.com/deepstream-sdk, 2024. Último acceso: 1 de noviembre de 2024. | |
| dc.relation.references | [23] PyTorch Team, ‘‘PyTorch,’’ 2024. Accessed: 2024-12-04. | |
| dc.relation.references | [24] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, ‘‘The PASCAL Visual Object Classes (VOC) Challenge,’’ International Journal of Computer Vision, vol. 88, no. 2, pp. 303--338, 2010. | |
| dc.rights.acceso | Abierto (Texto Completo) | |
| dc.rights.accessrights | OpenAccess | |
| dc.subject | Videovigilancia | |
| dc.subject | Aprendizaje Profundo | |
| dc.subject | Dispositivos de borde | |
| dc.subject.keyword | Video Surveillance | |
| dc.subject.keyword | Deep Learning | |
| dc.subject.keyword | Edge devices | |
| dc.subject.lemb | Ingeniería de Sistemas -- Tesis y disertaciones académicas | |
| dc.subject.lemb | Videovigilancia | |
| dc.subject.lemb | Equipo y accesorios en telecomunicaciones | |
| dc.subject.lemb | Innovación tecnológica | |
| dc.subject.lemb | Evaluación tecnológica | |
| dc.subject.lemb | Procesamiento electrónico de datos | |
| dc.title | Implementación de tecnologías de borde y técnicas de Deep Learning para vigilancia en áreas remotas con canales de comunicación limitados | |
| dc.title.titleenglish | Implementation of edge technologies and deep learning techniques for surveillance in remote areas with limited communication channels | |
| dc.type | bachelorThesis | |
| dc.type.coar | http://purl.org/coar/resource_type/c_8042 | |
| dc.type.degree | Monografía | |
| dc.type.driver | info:eu-repo/semantics/bachelorThesis |
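The abstract describes fine-tuning deep learning detectors on a domain-specific image set and deploying them on edge devices for surveillance over constrained links. As a minimal, hypothetical sketch of that kind of pipeline (not the thesis code), the snippet below uses the Ultralytics YOLO API and the TensorRT export path cited in the references; the checkpoint name, dataset file, epochs, and image size are illustrative assumptions.

```python
# Hypothetical sketch of an edge deployment pipeline (not the thesis code):
# fine-tune a small YOLO detector on a custom dataset and export it as a
# TensorRT engine for a Jetson-class edge device, using the Ultralytics API
# and NVIDIA TensorRT cited in the references.
from ultralytics import YOLO

# Start from a lightweight pretrained checkpoint suited to edge inference
# (illustrative choice).
model = YOLO("yolov8n.pt")

# Fine-tune on a YOLO-format dataset described by a data YAML file
# (hypothetical path); epochs and image size are illustrative values.
model.train(data="surveillance.yaml", epochs=100, imgsz=640)

# Export to a TensorRT engine with FP16 precision to reduce latency and
# memory footprint on the device GPU.
engine_path = model.export(format="engine", half=True)
print(f"TensorRT engine written to {engine_path}")
```

An engine exported this way could then be served on the device by an inference runtime such as the DeepStream SDK referenced above, which is the deployment style the cited works point to.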
Files
Original bundle
- Name: Monografía_Sergio_Bohada_RIUD_Firmada.pdf
- Size: 20.02 MB
- Format: Adobe Portable Document Format
- Name: Licencia de uso y publicacion Sergio Bohada v2.pdf
- Size: 210.67 KB
- Format: Adobe Portable Document Format
License bundle
- Name: license.txt
- Size: 7 KB
- Format: Item-specific license agreed upon to submission