References
Main Papers
[Lee2022]
Lee, J., Zaheer, M., Sra, S., & Jadbabaie, A. (2022). Online Hyperparameter Meta-Learning with Hypergradient Distillation. ICLR 2022. arXiv:2110.02508
[Luketina2016]
Luketina, J., Berglund, M., Greff, K., & Raiko, T. (2016). Scalable Gradient-Based Tuning of Continuous Regularization Hyperparameters. ICML 2016. arXiv:1511.06727
[Liu2018]
Liu, H., Simonyan, K., & Yang, Y. (2018). DARTS: Differentiable Architecture Search. ICLR 2019. arXiv:1806.09055
[Anonymous2025]
Anonymous. (2025). Generalized Greedy Gradient Hyperparameter Optimization. Under review at ICLR 2025.
Citation
If you use GradHpO in your research, please cite:
Eynullayev, A., Rubtsov, D., & Karpeev, G. (2026). GradHpO: Gradient-Based Hyperparameter Optimization. MIPT Intelligent Systems.
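For convenience, a BibTeX entry is sketched below; the entry key, entry type, and note field are assumptions derived from the citation above, not an official citation format.

```bibtex
% Sketch of a BibTeX entry; the key, @misc type, and note field are assumptions.
@misc{gradhpo2026,
  title  = {GradHpO: Gradient-Based Hyperparameter Optimization},
  author = {Eynullayev, A. and Rubtsov, D. and Karpeev, G.},
  year   = {2026},
  note   = {MIPT Intelligent Systems}
}
```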