Online ADMM for Large-Scale Machine Learning and Optimization
Ph.D. Research, University of Southern California, ISE, 2025
- Designed a novel online adaptive ADMM framework using exact hypergradients to update penalty parameters (ρ); a minimal sketch follows this list.
- Enabled fast convergence on large-scale constrained machine learning and optimization problems.
- Implemented scalable JAX/NumPy pipelines supporting scalar-, vector-, and block-wise ρ updates.
- Demonstrated robustness on ill-conditioned quadratic programs and SVMs.
- Formulated a rollout-based Lyapunov loss to stabilize hypergradient descent (second sketch below).
- Achieved faster convergence than classical residual-balancing heuristics.
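
A minimal sketch of the hypergradient-based penalty update, written for a toy nonnegative QP with a single scalar ρ. All names here (admm_step, rollout_loss, adaptive_admm, the rollout length K) are illustrative assumptions, not the framework's actual API; the loss differentiated below is simply the squared primal and dual residuals after a K-step unroll.

```python
import jax
import jax.numpy as jnp

def admm_step(state, rho, Q, q):
    """One scaled-form ADMM iteration for min 0.5 x'Qx + q'x s.t. x = z, z >= 0."""
    x, z, u = state
    n = Q.shape[0]
    # x-update: solve (Q + rho*I) x = rho*(z - u) - q
    x = jnp.linalg.solve(Q + rho * jnp.eye(n), rho * (z - u) - q)
    z_new = jnp.maximum(x + u, 0.0)   # z-update: project onto the nonnegative orthant
    u = u + x - z_new                 # scaled dual update
    return (x, z_new, u), z_new - z   # dz drives the dual residual s = -rho*dz

def rollout_loss(log_rho, state, Q, q, K=5):
    """Unroll K ADMM steps and return ||r||^2 + ||s||^2 at the end.
    jax.grad of this w.r.t. log_rho is the exact hypergradient."""
    rho = jnp.exp(log_rho)            # parameterize rho > 0 via its log
    dz = jnp.zeros_like(state[1])
    for _ in range(K):
        state, dz = admm_step(state, rho, Q, q)
    x, z, _ = state
    return jnp.sum((x - z) ** 2) + rho ** 2 * jnp.sum(dz ** 2)

def adaptive_admm(Q, q, n_outer=50, K=5, lr=0.1):
    """Interleave hypergradient steps on log(rho) with actual ADMM progress."""
    n = Q.shape[0]
    state = (jnp.zeros(n), jnp.zeros(n), jnp.zeros(n))
    log_rho = jnp.array(0.0)          # rho starts at 1
    loss_grad = jax.value_and_grad(rollout_loss)
    for _ in range(n_outer):
        _, g = loss_grad(log_rho, state, Q, q, K)
        log_rho = log_rho - lr * g    # online hypergradient descent on the penalty
        state, _ = admm_step(state, jnp.exp(log_rho), Q, q)
    return state, jnp.exp(log_rho)

# Usage on a deliberately ill-conditioned random QP:
A = jax.random.normal(jax.random.PRNGKey(0), (40, 20))
Q = A.T @ A + 1e-6 * jnp.eye(20)      # nearly singular Hessian
q = jax.random.normal(jax.random.PRNGKey(1), (20,))
(x, z, u), rho = adaptive_admm(Q, q)
print("final rho:", rho, "primal residual:", jnp.linalg.norm(x - z))
```

Vector- or block-wise ρ updates follow the same pattern with log_rho promoted to an array and the x-update solve adjusted accordingly.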

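The rollout-based Lyapunov loss can be sketched the same way. The candidate below, a residual-based V with a hinge penalty on steps that fail to contract by a factor gamma, is an assumed instantiation for illustration only (it reuses admm_step from the previous sketch); the project's exact loss may differ.

```python
import jax
import jax.numpy as jnp

def lyapunov_loss(log_rho, state, Q, q, K=10, gamma=0.9):
    """Roll out K ADMM steps and hinge-penalize every step where the
    residual-based Lyapunov candidate V fails to contract by factor gamma."""
    rho = jnp.exp(log_rho)

    def V(s, dz):                     # assumed candidate: ||r||^2 + ||s||^2
        x, z, _ = s
        return jnp.sum((x - z) ** 2) + rho ** 2 * jnp.sum(dz ** 2)

    loss, v_prev = 0.0, None
    for _ in range(K):
        state, dz = admm_step(state, rho, Q, q)   # from the previous sketch
        v = V(state, dz)
        if v_prev is not None:        # static Python branch, safe under jax.grad
            loss = loss + jnp.maximum(v - gamma * v_prev, 0.0)
        v_prev = v
    return loss

# Drop-in replacement for rollout_loss inside adaptive_admm:
# _, g = jax.value_and_grad(lyapunov_loss)(log_rho, state, Q, q)
```

Penalizing non-contraction along the whole rollout, rather than only the terminal residual, discourages penalty choices that look good at step K but oscillate in between, which is one way such a loss can stabilize hypergradient descent.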