Regularization with Lipschitz Bounds

Nov 27, 2025
Manuela Chacon-Chamorro
Abstract
Deep neural networks are widely used models in machine learning and underpin most current AI systems. These models, however, are prone to learning particularities of the training set that degrade their performance on unseen data. To mitigate this problem, I present an adaptive regularization approach based on Lipschitz bounds (LBA) that controls the model's sensitivity to input perturbations. In experiments on image and tabular data, the proposed method consistently reduces the training–validation gap while maintaining competitive performance, and it also shows solid robustness against adversarial attacks. The talk will cover: (i) the theoretical foundations of overfitting, (ii) a description of the LBA method, (iii) the experimental results and an introduction to adversarial attacks, and (iv) possible lines of future research.
Date
Nov 27, 2025 12:00 AM
Event
Workshop en Ciencias de la Computación
Location

Universidad Nacional de Colombia

Campus La Nubia, Manizales

✨ Summary

In this talk, I presented my master’s research, which focuses on addressing overfitting in deep neural networks through an adaptive regularization strategy based on Lipschitz Bounds Adaptation (LBA). This work is grounded in our research paper, which provides the theoretical foundation and experimental validation of the proposed method (paper on LBA).

The motivation behind this work stems from the tendency of deep neural networks to overfit training data, capturing spurious patterns that negatively affect generalization. The LBA approach mitigates this issue by adaptively controlling the sensitivity of the model to input perturbations via Lipschitz bounds. Through experiments on image and tabular datasets, the method consistently reduces the training–validation gap while preserving competitive performance, and it also exhibits solid robustness against adversarial attacks.
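As a rough illustration of the general idea (not the authors' exact LBA algorithm, whose details are in the paper), the sketch below shows one common way to bound and penalize a feed-forward network's sensitivity: for 1-Lipschitz activations such as ReLU, the product of the layers' spectral norms upper-bounds the network's Lipschitz constant, and that bound can be added to the training loss as a regularization term. The function names and the penalty weight `lam` are illustrative assumptions.

```python
import numpy as np

def lipschitz_upper_bound(weights):
    # For a feed-forward net with 1-Lipschitz activations (e.g. ReLU),
    # the product of the layers' spectral norms (largest singular values)
    # upper-bounds the network's global Lipschitz constant.
    bound = 1.0
    for W in weights:
        bound *= np.linalg.norm(W, ord=2)
    return bound

def regularized_loss(task_loss, weights, lam=0.01):
    # Illustrative penalty: log of the bound, so it decomposes into a
    # sum of per-layer log spectral norms. lam is a hypothetical weight.
    return task_loss + lam * np.log(lipschitz_upper_bound(weights))

# Toy two-layer network with random weights.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
print(regularized_loss(0.5, weights))
```

Shrinking this bound during training limits how much a small input perturbation can change the output, which is the mechanism behind both the reduced overfitting and the adversarial robustness discussed in the talk.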

During the talk, I discussed the theoretical foundations of overfitting, detailed the proposed LBA method, and presented experimental results alongside an introduction to adversarial attacks, concluding with potential directions for future research. The support received throughout the talk was excellent, and returning to the university to present my master’s work was a genuinely meaningful and rewarding experience.

Some slides
Presenting the talk