Usage
You can use our library to tune almost all (see below) hyperparameters in your own code. The `HyperOptimizer` interface is very similar to `Optimizer` from PyTorch. It supports the key functionalities:

- `step` performs an optimization step over the parameters (or hyperparameters, see below)
- `zero_grad` zeroes the parameter gradients (same as `optimizer.zero_grad()`)
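For orientation, here is a minimal skeleton of what such an interface looks like; the class body below is purely illustrative and is not the library's source.

```python
import torch

# Illustrative skeleton only -- not the library's actual implementation.
class HyperOptimizerSketch:
    """Mirrors torch.optim.Optimizer, except that `step` also drives
    hyperparameter updates and therefore receives the loss."""

    def step(self, loss: torch.Tensor) -> None:
        # Update the model parameters; once enough inner steps have
        # accumulated, also compute hypergradients and update the
        # hyperparameters (see the workflow below).
        ...

    def zero_grad(self) -> None:
        # Zero the model parameter gradients, exactly like
        # optimizer.zero_grad() in plain PyTorch.
        ...
```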
We provide demo experiments with each implemented method in this notebook. They work as follows:

1. Get the next batch from the train dataloader.
2. Run the forward pass and the backward pass on the computed loss.
3. `hyper_optimizer.step(loss)` performs a model parameter step and, if enough inner steps have accumulated, a hyperparameter step (it computes the hypergradients, performs the optimization step, and zeroes the hypergradients).
4. `hyper_optimizer.zero_grad()` zeroes the model parameter gradients (same as `optimizer.zero_grad()`).
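The sketch below follows this workflow. The import path and the `HyperOptimizer` constructor arguments (`hyperparams=...`) are assumptions made for illustration; only the `step(loss)` and `zero_grad()` calls come from the description above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# from <package> import HyperOptimizer  # import path depends on the package

# Toy data and model so the loop is concrete.
train_dataloader = DataLoader(
    TensorDataset(torch.randn(64, 10), torch.randn(64, 1)), batch_size=16
)
model = nn.Linear(10, 1)
criterion = nn.MSELoss()

# A continuous loss hyperparameter: an L2-regularization coefficient.
l2_coef = torch.tensor(1e-3, requires_grad=True)

# Assumed constructor signature -- only step(loss)/zero_grad() are documented above.
hyper_optimizer = HyperOptimizer(model.parameters(), hyperparams=[l2_coef])

for x, y in train_dataloader:
    # 1. Forward pass on the next batch, with the tunable L2 term in the loss.
    loss = criterion(model(x), y)
    loss = loss + l2_coef * sum(p.pow(2).sum() for p in model.parameters())
    # 2. Backward pass on the computed loss.
    loss.backward()
    # 3. Model parameter step; a hyperparameter step happens once enough
    #    inner steps have accumulated.
    hyper_optimizer.step(loss)
    # 4. Zero the model parameter gradients.
    hyper_optimizer.zero_grad()
```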
`Optimizer` vs. `HyperOptimizer`: the `step` method
Gradient-based hyperparameter optimization involves hyper-optimization steps during the optimization of the model parameters. Thus, we combine the `Optimizer` method `step` with `inner_steps`, defined by each method. For example, T1T2 does NOT use any inner steps, so optimization over parameters and hyperparameters proceeds step by step. The Neumann method, however, performs some inner optimization steps over the model parameters before it takes a hyperstep. See more details here.
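To make the difference concrete, the schematic below (not the library's code) shows how a hyperstep can be gated on an `inner_steps` counter: a value of 1 means a hyperstep after every parameter step (T1T2-like behavior), while a larger value accumulates several parameter steps first (Neumann-like behavior).

```python
# Schematic only: names and structure are illustrative, not the library's code.
class GatedHyperStep:
    def __init__(self, inner_steps: int):
        # 1 -> hyperstep after every parameter step (T1T2-like);
        # >1 -> accumulate several parameter steps first (Neumann-like).
        self.inner_steps = inner_steps
        self._counter = 0

    def step(self, loss):
        self._parameter_step(loss)          # always update model parameters
        self._counter += 1
        if self._counter >= self.inner_steps:
            # Enough inner steps accumulated: compute hypergradients,
            # update the hyperparameters, then zero the hypergradients.
            self._hyperstep(loss)
            self._counter = 0

    def _parameter_step(self, loss):
        ...  # placeholder

    def _hyperstep(self, loss):
        ...  # placeholder
```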
Supported hyperparameter types
The `HyperOptimizer` logic is well suited to almost all CONTINUOUS hyperparameter types (continuity is required for gradient-based methods):

- Model hyperparameters (e.g., gate coefficients)
- Loss hyperparameters (e.g., L1/L2-regularization coefficients)
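The L2 coefficient in the training-loop sketch above is an example of a loss hyperparameter. For a model hyperparameter, a gate coefficient might look like the hypothetical module below, where the gate is kept out of `parameters()` so that a hyperoptimizer, rather than the regular optimizer, can own it.

```python
import torch
import torch.nn as nn

class GatedBlock(nn.Module):
    """Hypothetical module with a continuous gate coefficient treated
    as a hyperparameter rather than a regular model parameter."""

    def __init__(self, dim: int):
        super().__init__()
        self.branch_a = nn.Linear(dim, dim)
        self.branch_b = nn.Linear(dim, dim)
        # Plain tensor (not nn.Parameter), so it is excluded from
        # self.parameters() and can be handed to the hyperoptimizer.
        self.gate = torch.tensor(0.5, requires_grad=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gate * self.branch_a(x) + (1 - self.gate) * self.branch_b(x)
```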
However, it currently does not support learning-rate tuning (or supports it, but it has not been sufficiently tested). We plan to improve this functionality in future releases, so stay tuned!