Mecanismo de clasificación de paisajes de optimización basado en muestreo multiescala y aprendizaje automático

dc.contributor.advisorMelgarejo Rey, Miguel Alberto
dc.contributor.authorRodríguez Hernández, Angie Patricia
dc.date.accessioned2025-03-21T15:49:26Z
dc.date.available2025-03-21T15:49:26Z
dc.date.created2024-11-18
dc.descriptionEste trabajo presenta un enfoque para clasificar la modalidad en paisajes de optimización, combinando muestreo multiescala con técnicas de aprendizaje automático. Se seleccionó un conjunto de funciones de optimización, que fueron etiquetadas según la definición de modalidad propuesta por Kanemitsu et al. Para minimizar el sesgo en la muestra, se desarrolló un algoritmo de muestreo multiescala, el cual se basa en el comportamiento de una caminata aleatoria guiada por una ley de potencias, complementada con mecanismos de explotación a escala fina, para explorar y a la vez explotar los paisajes de optimización. Las muestras obtenidas se representan como una imagen que se utiliza como entrada para una red neuronal convolucional, responsable de clasificar la modalidad del paisaje. Los resultados experimentales demuestran que el enfoque propuesto presenta un rendimiento competitivo en la clasificación de paisajes previamente no observados. Además, los resultados sugieren que la estrategia multiescala proporciona información más fiable que el muestreo aleatorio, que es la técnica estándar en el análisis de paisajes de optimización. Es importante resaltar que de este trabajo nace la afirmación de que el problema de entender los problemas de optimización podría verse como un problema de reconocimiento de patrones.
dc.description.abstractThis work presents an approach to classify the modality in optimization landscapes, combining multiscale sampling with machine learning techniques. A set of optimization functions was selected and labeled according to the modality definition proposed by Kanemitsu et al. To minimize sample bias, a multiscale sampling algorithm was developed, based on the behavior of a random walk guided by a power law, complemented with fine-scale exploitation mechanisms, to both explore and exploit the optimization landscapes. The obtained samples are represented as an image, which is used as input to a convolutional neural network responsible for classifying the landscape modality. Experimental results show that the proposed approach achieves competitive performance in classifying previously unseen landscapes. Furthermore, the results suggest that the multiscale strategy provides more reliable information compared to random sampling, which is the standard technique in optimization landscape analysis. It is important to highlight that this work leads to the assertion that the problem of understanding optimization problems could be viewed as a pattern recognition problem.
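The abstract describes the pipeline only at a high level. As a minimal illustrative sketch (not the thesis implementation), the following Python fragment shows one way the three stages could fit together: a heavy-tailed, power-law-guided random walk, periodic fine-scale exploitation (here via SciPy's Nelder-Mead, one of the local methods cited in the references), and rasterization of the samples into the grayscale image a CNN classifier would consume. All function names, parameters (step scale, alpha, image size), and the Rastrigin test function are assumptions made for illustration.

```python
# Illustrative sketch only: power-law-guided walk + fine-scale exploitation,
# loosely following the multiscale sampling idea in the abstract. Names and
# parameters are assumptions, not the thesis implementation.
import numpy as np
from scipy.optimize import minimize

def multiscale_walk(f, bounds, n_steps=2000, alpha=1.5, exploit_every=50, rng=None):
    """Sample f over `bounds` with heavy-tailed (Levy-flight-like) step
    lengths plus periodic local refinement."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, float).T          # bounds: one (lo, hi) pair per dimension
    x = rng.uniform(lo, hi)
    xs, ys = [x], [f(x)]
    for t in range(1, n_steps):
        # Pareto-distributed step length: most moves are short (fine scale),
        # but occasional long jumps explore the landscape at coarse scale.
        step = (rng.pareto(alpha) + 1.0) * 1e-3 * (hi - lo)
        direction = rng.normal(size=x.size)
        direction /= np.linalg.norm(direction)
        x = np.clip(x + step * direction, lo, hi)
        if t % exploit_every == 0:
            # Fine-scale exploitation: a short local search from the current point.
            res = minimize(f, x, method="Nelder-Mead",
                           options={"maxiter": 20, "xatol": 1e-6})
            x = np.clip(res.x, lo, hi)
        xs.append(x)
        ys.append(f(x))
    return np.array(xs), np.array(ys)

def samples_to_image(xs, ys, bounds, size=64):
    """Rasterize 2-D samples into a grayscale image: each pixel keeps the
    best (minimum) fitness observed in its cell, normalized to [0, 1]."""
    lo, hi = np.asarray(bounds, float).T
    img = np.full((size, size), np.nan)
    ij = np.clip(((xs - lo) / (hi - lo) * (size - 1)).astype(int), 0, size - 1)
    for (i, j), y in zip(ij, ys):
        img[j, i] = y if np.isnan(img[j, i]) else min(img[j, i], y)
    img = np.nan_to_num(img, nan=np.nanmax(img))   # unvisited cells -> worst value
    span = img.max() - img.min()
    return (img - img.min()) / span if span > 0 else img

# Example: sample the multimodal Rastrigin function and build the image
# that a convolutional classifier would take as input.
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
xs, ys = multiscale_walk(rastrigin, [(-5.12, 5.12)] * 2, rng=0)
image = samples_to_image(xs, ys, [(-5.12, 5.12)] * 2)   # shape (64, 64)
```

The Pareto exponent alpha controls the balance between short fine-scale moves and rare long jumps, which is what makes the sampling multiscale rather than purely local or purely random.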
dc.format.mimetypepdf
dc.identifier.urihttp://hdl.handle.net/11349/94020
dc.language.isospa
dc.publisherUniversidad Distrital Francisco José de Caldas
dc.relation.referencesRyan Dieter Lang and Andries Petrus Engelbrecht. An Exploratory Landscape Analysis-Based Benchmark Suite. Algorithms, 14(3):78, 2 2021.
dc.relation.referencesHideo Kanemitsu, Hideaki Imai, and Masaaki Miyakoshi. Definitions and Properties of (Local) Minima and Multimodal Functions using Level Set for Continuous Optimization Problems. IEICE Proceeding Series, 2:94–97, 3 2014.
dc.relation.referencesWikimedia Commons. Typical CNN architecture, 12 2015.
dc.relation.referencesHadj Ahmed Bouarara. A survey of computational intelligence algorithms and their applications. Handbook of Research on Soft Computing and Nature-Inspired Algorithms, pages 133–176, 3 2017.
dc.relation.referencesMichel Gendreau and Jean-Yves Potvin. Handbook of Metaheuristics, volume 272 of International Series in Operations Research & Management Science. Springer International Publishing, Cham, 2019.
dc.relation.referencesIan Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
dc.relation.referencesM. Hossin and M. N. Sulaiman. A Review on Evaluation Metrics for Data Classification Evaluations. International Journal of Data Mining & Knowledge Management Process, 5(2):1–11, 3 2015.
dc.relation.referencesJacob Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 1960.
dc.relation.referencesZhilu Zhang and Mert R. Sabuncu. Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels. Advances in Neural Information Processing Systems, 2018-December:8778–8788, 5 2018.
dc.relation.referencesJon T. Selvik and Eirik B. Abrahamsen. On the meaning of accuracy and precision in a risk analysis context. Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, 231(2):91–100, 1 2017. http://dx.doi.org/10.1177/1748006X16686897
dc.relation.referencesminimize(method='Powell'). SciPy v1.14.1 Manual.
dc.relation.referencesminimize(method='Nelder-Mead'). SciPy v1.14.1 Manual.
dc.relation.referencesM. S. Bazaraa, John J. Jarvis, and Hanif D. Sherali. Linear programming and network flows. 2011.
dc.relation.referencesDavid H. Wolpert and William G. Macready. No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1(1):67–82, 1997.
dc.relation.referencesLuis R. Izquierdo, José Manuel Galán Ordax, José I. Santos, and Ricardo Del Olmo Martínez. Modelado de sistemas complejos mediante simulación basada en agentes y mediante dinámica de sistemas. Empiria. Revista de metodología de ciencias sociales, 0(16):85, 10 2008.
dc.relation.referencesJohn R. Rice. The Algorithm Selection Problem. Advances in Computers, 15(C):65–118, 1 1976.
dc.relation.referencesRyan Dieter Lang. Landscape analysis-based automated algorithm selection. PhD thesis, Stellenbosch University, 2024.
dc.relation.referencesThomas Back, David B. Fogel, and Zbigniew Michalewicz. Handbook of Evolutionary Computation. CRC Press, 1 1997.
dc.relation.referencesKate A. Smith-Miles. Cross-disciplinary perspectives on meta-learning for algorithm selection. ACM Computing Surveys, 41(1), 12 2008.
dc.relation.referencesDavid B. Fogel. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. Wiley-IEEE Press, 2006.
dc.relation.referencesKatherine Mary Malan. A Survey of Advances in Landscape Analysis for Optimisation. Algorithms, 14(2):40, 1 2021.
dc.relation.referencesG. M. Ostrovsky, Ye M. Mikhailova, and T. A. Berezhinsky. Optimization of large-scale complex systems. International Journal of Systems Science, 17(8):1121–1132, 2017.
dc.relation.referencesOlaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and Günter Rudolph. Exploratory landscape analysis. In Genetic and Evolutionary Computation Conference, GECCO'11, pages 829–836, New York, New York, USA, 2011. ACM Press.
dc.relation.referencesPascal Kerschke and Heike Trautmann. Automated Algorithm Selection on Continuous Black-Box Problems by Combining Exploratory Landscape Analysis and Machine Learning. Evolutionary Computation, 27(1):99–127, 3 2019.
dc.relation.referencesMario A. Muñoz, Michael Kirley, and Saman K. Halgamuge. Exploratory landscape analysis of continuous space optimization problems using information content. IEEE Transactions on Evolutionary Computation, 19(1):74–87, 2 2015.
dc.relation.referencesKent C.B. Steer, Andrew Wirth, and Saman K. Halgamuge. Information theoretic classification of problems for metaheuristics. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), volume 5361 LNAI, pages 319–328, 2008.
dc.relation.referencesV. K. Vassilev, T. C. Fogarty, and J. F. Miller. Information characteristics and the structure of landscapes. Evolutionary computation, 8(1):31–60, 2000.
dc.relation.referencesMario Andrés Muñoz, Michael Kirley, and Kate Smith-Miles. Analyzing randomness effects on the reliability of exploratory landscape analysis. Natural Computing, 21(2):131–154, 6 2022.
dc.relation.referencesBas van Stein, Fu Xing Long, Moritz Frenzel, Peter Krause, Markus Gitterle, and Thomas Bäck. DoE2Vec: Deep-learning Based Features for Exploratory Landscape Analysis. GECCO 2023 Companion - Proceedings of the 2023 Genetic and Evolutionary Computation Conference Companion, pages 515–518, 3 2023.
dc.relation.referencesGareth James, Daniela Witten, Trevor Hastie, Robert Tibshirani, and Jonathan Taylor. An Introduction to Statistical Learning. 2023.
dc.relation.referencesIrene Moser and Marius Gheorghita. Combining search space diagnostics and optimisation. 2012 IEEE Congress on Evolutionary Computation, CEC 2012, 2012.
dc.relation.referencesAnja Jankovic and Carola Doerr. Adaptive landscape analysis. GECCO 2019 Companion - Proceedings of the 2019 Genetic and Evolutionary Computation Conference Companion, pages 2032–2035, 7 2019.
dc.relation.referencesYe Tian, Shichen Peng, Xingyi Zhang, Tobias Rodemann, Kay Chen Tan, and Yaochu Jin. A Recommender System for Metaheuristic Algorithms for Continuous Optimization Based on Deep Recurrent Neural Networks. IEEE Transactions on Artificial Intelligence, 1(1):5–18, 8 2020.
dc.relation.referencesWilliam Gilpin. Chaos as an interpretable benchmark for forecasting and data-driven modelling. 10 2021.
dc.relation.referencesRaphael Patrick Prager, Moritz Vinzent Seiler, Heike Trautmann, and Pascal Kerschke. Automated Algorithm Selection in Single-Objective Continuous Optimization: A Comparative Study of Deep Learning and Landscape Analysis Methods. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 13398 LNCS:3–17, 2022.
dc.relation.referencesWenfei Liu, Jingcheng Wei, and Qingmin Meng. Comparisions on KNN, SVM, BP and the CNN for Handwritten Digit Recognition. Proceedings of 2020 IEEE International Conference on Advances in Electrical Engineering and Computer Applications, AEECA 2020, pages 587–590, 8 2020.
dc.relation.referencesAbdurrahim Yilmaz, Ali Anil Demircali, Sena Kocaman, and Huseyin Uvet. Comparison of Deep Learning and Traditional Machine Learning Techniques for Classification of Pap Smear Images. 9 2020.
dc.relation.referencesPin Wang, En Fan, and Peng Wang. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognition Letters, 141:61–67, 1 2021.
dc.relation.referencesRagav Venkatesan and Baoxin Li. Convolutional Neural Networks in Visual Computing: A Concise Guide. CRC Press, 10 2017.
dc.relation.referencesStephen Boyd and Lieven Vandenberghe. Convex Optimization. 2004.
dc.relation.referencesVincent Hénaux, Adrien Goëffon, and Frédéric Saubion. Evolving Fitness Landscapes with Complementary Fitness Functions. Artificial Evolution, pages 110–120, 10 2019.
dc.relation.referencesPeter F. Stadler. Fitness landscapes. Biological Evolution and Statistical Physics, pages 183–204, 10 2002.
dc.relation.referencesMario A. Muñoz, Michael Kirley, and Saman K. Halgamuge. A meta-learning prediction model of algorithm performance for continuous optimization problems. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 7491 LNCS(PART 1):226–235, 2012.
dc.relation.referencesG. H. Weiss. Aspects and applications of the random walk. Journal of Statistical Physics, 79(1):497–500, 4 1995.
dc.relation.referencesDouglas C. Montgomery and George C. Runger. Applied Statistics and Probability for Engineers. 2011.
dc.relation.referencesMomin Jamil and Xin-She Yang. A Literature Survey of Benchmark Functions For Global Optimization Problems. International Journal of Mathematical Modelling and Numerical Optimisation, 4(2):150–194, 8 2013.
dc.relation.referencesFeng Zou, Debao Chen, Hui Liu, Siyu Cao, Xuying Ji, and Yan Zhang. A survey of fitness landscape analysis for optimization. Neurocomputing, 503:129–139, 9 2022.
dc.relation.referencesJeffrey Horn and David E. Goldberg. Genetic Algorithm Difficulty and the Modality of Fitness Landscapes. Foundations of Genetic Algorithms, 3:243–269, 1 1995.
dc.relation.referencesErik Pitzer, Michael Affenzeller, and Andreas Beham. A closer look down the basins of attraction. 2010 UK Workshop on Computational Intelligence, UKCI 2010, 2010.
dc.relation.referencesChristian Hanster and Pascal Kerschke. Flaccogui: Exploratory landscape analysis for everyone. In GECCO 2017 - Proceedings of the Genetic and Evolutionary Computation Conference Companion, pages 1215–1222. Association for Computing Machinery, Inc, 7 2017.
dc.relation.referencesPascal Kerschke, Mike Preuss, Carlos Hernández, Oliver Schütze, Jian Qiao Sun, Christian Grimme, Günter Rudolph, Bernd Bischl, and Heike Trautmann. Cell mapping techniques for exploratory landscape analysis. Advances in Intelligent Systems and Computing, 288:115–131, 2014.
dc.relation.referencesKatherine M. Malan and Andries P. Engelbrecht. Quantifying ruggedness of continuous landscapes using entropy. In IEEE Congress on Evolutionary Computation, CEC 2009, pages 1440–1447, 2009.
dc.relation.referencesAlroomi Website - Unconstrained.
dc.relation.referencesIndex AMPGO 0.1.0 documentation.
dc.relation.referencesMichael F. Shlesinger, George M. Zaslavsky, and Uriel Frisch. Lévy Flights and Related Topics in Physics. Lecture Notes in Physics, volume 450, 1995.
dc.relation.referencesJean Philippe Bouchaud and Antoine Georges. Anomalous diffusion in disordered media: Statistical mechanisms, models and physical applications. Physics Reports, 195(4-5):127–293, 11 1990.
dc.relation.referencesMasato S. Abe. Functional advantages of Lévy walks emerging near a critical point. Proceedings of the National Academy of Sciences of the United States of America, 117(39):24336–24344, 9 2020.
dc.relation.referencesRosario N. Mantegna and H. Eugene Stanley. Stochastic Process with Ultraslow Convergence to a Gaussian: The Truncated Lévy Flight. Physical Review Letters, 73(22):2946–2949, 1994.
dc.relation.referencesG. M. Viswanathan, Sergey V. Buldyrev, Shlomo Havlin, M. G. E. Da Luz, E. P. Raposo, and H. Eugene Stanley. Optimizing the success of random searches. Nature, 401(6756):911–914, 10 1999.
dc.relation.referencesRalf Metzler and Joseph Klafter. The restaurant at the end of the random walk: Recent developments in the description of anomalous transport by fractional dynamics. Journal of Physics A, 2004.
dc.relation.referencesAlbert-László Barabási and Réka Albert. Emergence of Scaling in Random Networks. Science, 286(5439):509–512, 1999.
dc.relation.referencesJ. A. Nelder and R. Mead. A Simplex Method for Function Minimization. The Computer Journal, 7(4):308–313, 1 1965.
dc.relation.referencesM. J. D. Powell. An efficient method for finding the minimum of a function of several variables without calculating derivatives. The Computer Journal, 7(2):155–162, 1 1964.
dc.relation.referencesS. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by Simulated Annealing. Science, 220(4598):671–680, 1983.
dc.relation.referencesZewen Li, Fan Liu, Wenjie Yang, Shouheng Peng, and Jun Zhou. A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Transactions on Neural Networks and Learning Systems, 33(12):6999–7019, 12 2022.
dc.relation.referencesD. H. Hubel and T. N. Wiesel. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. The Journal of Physiology, 160(1):106, 1 1962.
dc.relation.referencesDouglas M. Hawkins. The Problem of Overfitting. Journal of Chemical Information and Computer Sciences, 44(1):1–12, 1 2004.
dc.relation.referencesKunihiko Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics, 36(4):193–202, 4 1980.
dc.relation.referencesSiddharth Sharma, Simone Sharma, and Anidhya Athaiya. Activation Functions in Neural Networks. International Journal of Engineering Applied Sciences and Technology, 04(12):310–316, 5 2020.
dc.relation.referencesDiederik P. Kingma and Jimmy Lei Ba. Adam: A Method for Stochastic Optimization. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings, 12 2014.
dc.relation.referencesHouda Bichri, Adil Chergui, and Mustapha Hain. Image Classification with Transfer Learning Using a Custom Dataset: Comparative Study. Procedia Computer Science, 220:48–54, 1 2023.
dc.relation.referencesAndrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv, 4 2017.
dc.relation.referencesMark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang Chieh Chen. MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 4510–4520, 1 2018.
dc.relation.referencesDavide Chicco and Giuseppe Jurman. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics, 21(1):1–13, 1 2020.
dc.relation.referencesC. Ferri, J. Hernández-Orallo, and R. Modroiu. An experimental comparison of performance measures for classification. Pattern Recognition Letters, 30(1):27–38, 1 2009.
dc.relation.referencesSylvain Arlot and Alain Celisse. A survey of cross-validation procedures for model selection. Statistics Surveys, 4:40–79, 1 2010. https://doi.org/10.1214/09-SS054
dc.relation.referencesRafael C. Gonzalez and Richard E. Woods. Digital Image Processing, Global Edition. Pearson Education, pages 19–44, 2018.
dc.relation.referencesKarl Pearson. III. Contributions to the mathematical theory of evolution. Philosophical Transactions of the Royal Society of London. (A.), 185:71–110, 12 1894.
dc.relation.referencesNguyen Xuan Vinh, Julien Epps, and James Bailey. Information theoretic measures for clusterings comparison: Is a correction for chance necessary? ACM International Conference Proceeding Series, 382, 2009.
dc.relation.referencesZhou Wang, Alan Conrad Bovik, Hamid Rahim Sheikh, and Eero P. Simoncelli. Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4):600–612, 4 2004.
dc.relation.referencesChris Thornton, Frank Hutter, Holger H. Hoos, and Kevin Leyton-Brown. Auto-WEKA: Combined Selection and Hyperparameter Optimization of Classification Algorithms. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 847–855, 8 2012.
dc.relation.referencesFrank Hutter, Holger Hoos, and Kevin Leyton-Brown. An Efficient Approach for Assessing Hyperparameter Importance, 1 2014.
dc.relation.referencesJames Bergstra and Yoshua Bengio. Random Search for Hyper-Parameter Optimization. Journal of Machine Learning Research, 13:281–305, 2012.
dc.relation.referencesDaniel Horn and Bernd Bischl. Multi-objective parameter configuration of machine learning algorithms using model-based optimization. 2016 IEEE Symposium Series on Computational Intelligence, SSCI 2016, 2 2017.
dc.relation.referencesRenan Netto, Sheiny Fabre, Tiago Augusto Fontana, Vinicius Livramento, Laercio L. Pilla, Laleh Behjat, and Jose Luis Guntzel. Algorithm Selection Framework for Legalization Using Deep Convolutional Neural Networks and Transfer Learning. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 41(5):1481–1494, 5 2022.
dc.relation.referencesItai Dagan, Roman Vainshtein, Gilad Katz, and Lior Rokach. Automated algorithm selection using meta-learning and pre-trained deep convolution neural networks. Information Fusion, 105:102210, 5 2024.
dc.relation.referencesSiyi Xu, Wenwen Liu, Chengpei Wu, and Junli Li. CNN-HT: A Two-Stage Algorithm Selection Framework. Entropy, 26(3):262, 3 2024.
dc.relation.referencesManuel Fernández-Delgado, Eva Cernadas, Senén Barro, and Dinani Amorim. Do we Need Hundreds of Classifiers to Solve Real World Classification Problems? Journal of Machine Learning Research, 15:3133–3181, 2014.
dc.relation.referencesPreetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. Deep Double Descent: Where Bigger Models and More Data Hurt. 8th International Conference on Learning Representations, ICLR 2020, 12 2019.
dc.relation.referencesYaxin Li, Jing Liang, Kunjie Yu, Ke Chen, Yinan Guo, Caitong Yue, and Leiyu Zhang. Adaptive local landscape feature vector for problem classification and algorithm selection. Applied Soft Computing, 131:109751, 12 2022.
dc.relation.referencesMoritz Vinzent Seiler, Pascal Kerschke, and Heike Trautmann. Deep-ELA: Deep Exploratory Landscape Analysis with Self-Supervised Pretrained Transformers for Single- and Multi-Objective Continuous Optimization Problems. 1 2024.
dc.relation.referencesYaxin Li, Jing Liang, Kunjie Yu, Caitong Yue, and Yingjie Zhang. Keenness for characterizing continuous optimization problems and predicting differential evolution algorithm performance. Complex and Intelligent Systems, 9(5):5251–5266, 10 2023.
dc.relation.referencesKangjing Li, Saber Elsayed, Ruhul Sarker, and Daryl Essam. Multiple landscape measure-based approach for dynamic optimization problems. Swarm and Evolutionary Computation, 88:101578, 7 2024.
dc.relation.referencesGašper Petelin, Gjorgjina Cenikj, and Tome Eftimov. TinyTLA: Topological landscape analysis for optimization problem classification in a limited sample setting. Swarm and Evolutionary Computation, 84:101448, 2 2024.
dc.relation.referencesVojtech Uher and Pavel Kromer. Impact of Different Discrete Sampling Strategies on Fitness Landscape Analysis Based on Histograms. ACM International Conference Proceeding Series, 9, 12 2023.
dc.relation.referencesJakob Bossek. smoof: Single- and Multi-Objective Optimization Test Functions. The R Journal, 9, 2017.
dc.relation.referencesGitHub - ciren/benchmarks: A collection of n-dimensional functions.
dc.relation.referencesZhen Zhang, Kuo Yang, Jinwu Qian, and Lunwei Zhang. Real-time surface EMG pattern recognition for hand gestures based on an artificial neural network. Sensors (Switzerland), 19(14), 7 2019.
dc.relation.referencesYi Zhao, Tongfeng Weng, and Defeng Huang. Lévy walk in complex networks: An efficient way of mobility. Physica A: Statistical Mechanics and its Applications, 396:212–223, 2 2014.
dc.relation.referencesXin She Yang and Suash Deb. Multiobjective cuckoo search for design optimization. Computers & Operations Research, 40(6):1616–1624, 6 2013.
dc.relation.referencesHemanth. levy(n,m,beta), 2019.
dc.relation.referencesJorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer Series in Operations Research and Financial Engineering. Springer New York, 2006.
dc.relation.referencesSaket Gupta, Narendra Kumar, Laxmi Srivastava, Hasmat Malik, Alberto Pliego Marugán, and Fausto Pedro García Márquez. A Hybrid Jaya-Powell's Pattern Search Algorithm for Multi-Objective Optimal Power Flow Incorporating Distributed Generation. Energies, 14(10):2831, 5 2021.
dc.relation.referencesM. D. Buhmann. Michael J. D. Powell's work in approximation theory and optimisation. Journal of Approximation Theory, 238:3–25, 2 2019.
dc.relation.referencesMarco A. Luersen and Rodolphe Le Riche. Globalized Nelder-Mead method for engineering optimization. Computers & Structures, 82(23-26):2251–2260, 9 2004.
dc.relation.referencesNitish Shirish Keskar, Jorge Nocedal, Ping Tak Peter Tang, Dheevatsa Mudigere, and Mikhail Smelyanskiy. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings, 9 2016.
dc.relation.referencesMariana Medina Muñoz. Método para la síntesis de paisajes de optimización bidimensionales basado en redes adversarias generativas, 2024.
dc.relation.referencesJuan Pablo López Guevara. Método de síntesis de funciones de prueba para optimizadores basado en autocodificadores variacionales y regresión simbólica, 2024.
dc.relation.referencesMiguel Melgarejo, Mariana Medina, Juan Lopez, and Angie Rodriguez. Optimization test function synthesis with generative adversarial networks and adaptive neuro-fuzzy systems. Information Sciences, 686:121371, 1 2024.
dc.rights.accesoAbierto (Texto Completo)
dc.rights.accessrightsOpenAccess
dc.subjectAnálisis de paisajes de optimización
dc.subjectMuestreo multiescala
dc.subjectAprendizaje automático
dc.subjectRedes neuronales convolucionales
dc.subjectOptimización
dc.subject.keywordOptimization landscape analysis
dc.subject.keywordMulti-scale sampling
dc.subject.keywordMachine learning
dc.subject.keywordConvolutional neural networks
dc.subject.keywordOptimization
dc.subject.lembMaestría en Ciencias de la Información y las Comunicaciones -- Tesis y Disertaciones Académicas
dc.subject.lembArquitectura del paisaje -- Clasificación
dc.subject.lembArquitectura del paisaje -- Muestreo
dc.subject.lembAutoaprendizaje -- Técnicas
dc.titleMecanismo de clasificación de paisajes de optimización basado en muestreo multiescala y aprendizaje automático
dc.title.titleenglishOptimization landscape classification mechanism based on multi-scale sampling and machine learning
dc.typemasterThesis
dc.type.coarhttp://purl.org/coar/resource_type/c_bdcc
dc.type.degreeInvestigación-Innovación
dc.type.driverinfo:eu-repo/semantics/masterThesis

Files

Original bundle (showing 3 of 3):
- Name: RodríguezHernándezAngiePatricia2024.pdf. Size: 4.05 MB. Format: Adobe Portable Document Format.
- Name: RodríguezHernándezAngiePatricia2024Anexos.zip. Size: 930.81 KB.
- Name: Licencia de uso y publicacion.pdf. Size: 213.76 KB. Format: Adobe Portable Document Format.

License bundle (showing 1 of 1):
- Name: license.txt. Size: 7 KB. Format: Item-specific license agreed upon to submission.