Variational Trainer
Beta_Scheduler
Abstract class for a beta scheduler. Beta is a scale parameter between the Distance loss and the Data loss: the higher the beta value, the more important the Distance loss becomes. It is recommended to start with a small value (< 0.1) and increase it during training.
Source code in src/methods/bayes/variational/trainer.py
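To make the role of beta concrete, here is a minimal sketch of how the two loss terms are typically combined (an illustration only; the exact combination used by the trainer is defined in the source file above):

```python
# Illustrative only: how beta weights the two loss terms. The actual
# combination is defined in src/methods/bayes/variational/trainer.py.
def total_loss(fit_loss: float, dist_loss: float, beta: float) -> float:
    # A small beta (< 0.1) keeps the data term dominant early in training;
    # raising beta over time shifts weight toward the distance term.
    return fit_loss + beta * dist_loss
```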
__init__(beta, ref=None, *args, **kwargs)
Initializes the beta scheduler.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| beta | float | Initial beta value. | required |
| ref | Beta_Scheduler \| VarTrainerParams | Reference to the trainer parameters containing the beta attribute, or another Beta_Scheduler. | None |
Source code in src/methods/bayes/variational/trainer.py
step(loss)
Performs one scheduler step. Beta is the scale parameter between the Distance loss and the Data loss: the higher this value, the more important the Distance loss becomes; see the class description above.
Source code in src/methods/bayes/variational/trainer.py
Beta_Scheduler_Plato
Bases: Beta_Scheduler
Plateau beta scheduler. Beta is a scale parameter between the Distance loss and the Data loss: the higher this value, the more important the Distance loss becomes. It is recommended to start with a small value (< 0.1) and increase it during training. The scheduler increases beta when the target loss stops improving for more than patience steps. Use ref to specify the trainer parameters containing the beta attribute; otherwise you should assign the scheduled beta manually.
Source code in src/methods/bayes/variational/trainer.py
alpha = alpha
instance-attribute
Factor by which the beta value is multiplied when it changes.
eps = eps
instance-attribute
Minimum change of beta. If the delta is smaller than this, no change is applied.
is_min = is_min
instance-attribute
Whether this is a minimization problem. If so, the method considers the loss to have stopped improving when it starts increasing.
max_beta = max_beta
instance-attribute
Beta is clipped from above at this value. For this to work properly, ref should reference the trainer parameters.
min_beta = min_beta
instance-attribute
Beta is clipped from below at this value. For this to work properly, ref should reference the trainer parameters.
patience = patience
instance-attribute
Number of steps without loss improvement that the scheduler tolerates before changing the beta value.
threshold = threshold
instance-attribute
The algorithm considers the loss to have stopped improving when the next loss exceeds the minimum loss (for minimization) by more than threshold.
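Taken together, these attributes describe a plateau rule in the spirit of learning-rate schedulers such as ReduceLROnPlateau. The sketch below is an assumption about how such a rule typically behaves, not the repository's exact implementation:

```python
# Hypothetical plateau bookkeeping (assumption; the real update lives in
# src/methods/bayes/variational/trainer.py). After `patience` steps without an
# improvement larger than `threshold`, beta is multiplied by `alpha`, the
# change is skipped if it is smaller than `eps`, and the result is clipped
# to [min_beta, max_beta].
def plateau_update(beta, best_loss, bad_steps, loss, *,
                   alpha, patience, threshold, eps, min_beta, max_beta, is_min=True):
    improved = (best_loss - loss > threshold) if is_min else (loss - best_loss > threshold)
    if improved:
        return beta, loss, 0
    bad_steps += 1
    if bad_steps > patience:
        new_beta = beta * alpha
        if abs(new_beta - beta) >= eps:
            beta = min(max(new_beta, min_beta), max_beta)
        bad_steps = 0
    return beta, best_loss, bad_steps
```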
__init__(beta=0.01, alpha=0.1, patience=10, is_min=True, threshold=0.01, eps=1e-08, max_beta=1.0, min_beta=1e-09, ref=None)
Initializes the plateau beta scheduler.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| beta | float | Initial beta value. | 0.01 |
| alpha | float | Factor by which the beta value is multiplied when it changes. | 0.1 |
| patience | int | Number of steps without loss improvement that the scheduler tolerates before changing the beta value. | 10 |
| is_min | bool | Whether this is a minimization problem. If so, the method considers the loss to have stopped improving when it starts increasing. | True |
| threshold | float | The algorithm considers the loss to have stopped improving when the next loss exceeds the minimum loss (for minimization) by more than threshold. | 0.01 |
| eps | float | Minimum change of beta. If the delta is smaller than this, no change is applied. | 1e-08 |
| max_beta | float | Beta is clipped from above at this value. For this to work properly, ref should reference the trainer parameters. | 1.0 |
| min_beta | float | Beta is clipped from below at this value. For this to work properly, ref should reference the trainer parameters. | 1e-09 |
| ref | Beta_Scheduler \| VarTrainerParams | Reference to the trainer parameters containing the beta attribute, or another Beta_Scheduler. | None |
Source code in src/methods/bayes/variational/trainer.py
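A minimal usage sketch, assuming the module path mirrors the source file shown above; the epoch losses are made-up numbers:

```python
# Assumed import path, based on the source file location shown above.
from src.methods.bayes.variational.trainer import Beta_Scheduler_Plato

scheduler = Beta_Scheduler_Plato(beta=0.01, alpha=0.1, patience=10, ref=None)

# Hypothetical per-epoch losses. With ref=None the scheduled beta has to be
# copied into the trainer parameters manually (see the class description);
# passing a VarTrainerParams instance as ref lets the scheduler update it in place.
for epoch_loss in [0.90, 0.85, 0.84, 0.84, 0.84]:
    scheduler.step(epoch_loss)
```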
CallbackLoss
Abstract class for additional losses that should be calculated at each train step.
Source code in src/methods/bayes/variational/trainer.py
__call__()
step(*args, **kwargs)
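A sketch of the intended interface: a custom callback accumulates a value in step() and reports it through __call__(). The subclass below (RunningMeanLoss) and its internals are hypothetical:

```python
from src.methods.bayes.variational.trainer import CallbackLoss  # assumed import path

class RunningMeanLoss(CallbackLoss):
    """Hypothetical callback that tracks the mean of a scalar loss over train steps."""
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def step(self, loss_value):
        # Accumulate the per-step value.
        self.total += float(loss_value)
        self.count += 1

    def __call__(self):
        # Report the mean over all steps seen so far.
        return self.total / max(self.count, 1)
```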
CallbackLossAccuracy
Bases: CallbackLoss
Accuracy callback loss for classification problems, intended to be added to the callbacks.
Source code in src/methods/bayes/variational/trainer.py
__call__()
Returns the mean accuracy over all train steps.
Returns:

| Name | Type | Description |
|---|---|---|
| float | float | Mean accuracy. |
step(output, label)
Calculates the accuracy for the current train step.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| output | tensor | Predicted logits for each class. | required |
| label | tensor | Validation labels for each object. | required |
Source code in src/methods/bayes/variational/trainer.py
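For classification, the per-step accuracy is typically the fraction of argmax predictions that match the labels. A small illustration of that computation (an assumption about what step() computes), using PyTorch tensors:

```python
import torch

# Hypothetical logits for 4 objects over 3 classes, and their labels.
output = torch.tensor([[2.0, 0.1, 0.3],
                       [0.2, 1.5, 0.1],
                       [0.1, 0.2, 3.0],
                       [1.0, 0.9, 0.8]])
label = torch.tensor([0, 1, 2, 1])

# Fraction of argmax predictions matching the labels -> 0.75 here.
accuracy = (output.argmax(dim=-1) == label).float().mean().item()
```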
VarBayesTrainer
Bases: BaseBayesTrainer[VarBayesNet]
Trainer used for all variational methods; all its parameters are stored in the params (VarTrainerParams) attribute.
Source code in src/methods/bayes/variational/trainer.py
__init__(params, report_chain, train_dataset, eval_dataset, post_train_step_func=None)
Initializes the variational trainer.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| params | TrainerParams | Training parameters used to fine-tune training. | required |
| report_chain | Optional[ReportChain] | All callbacks that should be reported after each epoch. | required |
| train_dataset | Iterable | Dataset on which the model should be trained. | required |
| eval_dataset | Iterable | Dataset on which the model is evaluated after each training epoch. | required |
| post_train_step_func | Union[None, list[Callable[[BaseBayesTrainer, TrainResult], None]]] | Functions that should be executed after each train step. | None |
Source code in src/methods/bayes/variational/trainer.py
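A construction sketch based on the signature above. `params`, `train_loader`, and `eval_loader` are placeholders, and report_chain is set to None to disable reporting:

```python
from src.methods.bayes.variational.trainer import VarBayesTrainer  # assumed import path

# `params` is a VarTrainerParams instance and the datasets are iterables of
# (objects, labels) batches; all three are placeholders in this sketch.
trainer = VarBayesTrainer(
    params=params,
    report_chain=None,          # Optional[ReportChain]
    train_dataset=train_loader,
    eval_dataset=eval_loader,
    post_train_step_func=None,  # or a list of callables run after each train step
)
```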
__post_train_step(train_result)
Functions that should be executed after each train step.
eval(model, eval_dataset)
Evaluates the model on a dataset using the stored train parameters.

Args:
    model (VarBayesModuleNet): Variational bayesian model that should be evaluated.
    eval_dataset: Dataset on which the model should be evaluated.

Returns:
    VarBayesTrainer.EvalResult: Evaluation result stored in the VarBayesTrainer.EvalResult format.
Source code in src/methods/bayes/variational/trainer.py
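A call sketch; `trainer`, `model`, and `eval_loader` are the placeholders from the construction example above:

```python
# Returns a VarBayesTrainer.EvalResult for the given model and dataset.
result = trainer.eval(model, eval_loader)
```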
eval_thresholds(model, thresholds)
Similar to eval(), but evaluates for a list of prune thresholds.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | VarBayesModuleNet | Variational bayesian model that should be evaluated. | required |
| thresholds | List[float] | List of prune thresholds at which the model should be evaluated. | required |

Returns:

| Type | Description |
|---|---|
| List[EvalResult] | List[VarBayesTrainer.EvalResult]: Evaluation results stored in the VarBayesTrainer.EvalResult format. |
Source code in src/methods/bayes/variational/trainer.py
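A call sketch with illustrative threshold values (per the prune_threshold description in VarTrainerParams, lower thresholds prune more parameters):

```python
# Evaluate the same model at several prune thresholds; the values here are
# illustrative. Returns a list of VarBayesTrainer.EvalResult, one per threshold.
results = trainer.eval_thresholds(model, thresholds=[-3.0, -2.2, -1.5])
```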
train(model)
Trains the provided model with the train parameters stored in params.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| model | VarBayesModuleNet | Any variational bayesian model that should be trained. | required |

Returns:

| Name | Type | Description |
|---|---|---|
| VarBayesModuleNetDistribution | VarBayesModuleNetDistribution | Distribution of variational nets that can be used to sample models or to obtain the MAP (most probable) estimate from the result of training. |
Source code in src/methods/bayes/variational/trainer.py
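A usage sketch; `model` is a placeholder VarBayesModuleNet and `trainer` comes from the construction example above:

```python
# Training uses the parameters stored in trainer.params.
distribution = trainer.train(model)
# `distribution` is a VarBayesModuleNetDistribution: it can be used to sample
# nets or to obtain the MAP (most probable) net, as described above.
```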
train_step(model, objects, labels)
Performs a train step on a specific batch.
Source code in src/methods/bayes/variational/trainer.py
VarTrainerParams
dataclass
Bases: TrainerParams
Class for VarBayesTrainer parameters
Source code in src/methods/bayes/variational/trainer.py
beta: float = 0.02
class-attribute
instance-attribute
Beta is the scale factor between the distance loss and the data loss. The higher the beta value, the more important the distance loss becomes. It is recommended to start with a small value (< 0.1) and increase it during training.
callback_losses: Optional[dict[CallbackLoss]] = None
class-attribute
instance-attribute
All additional losses that should be added as callbacks.
dist_loss: VarDistLoss
instance-attribute
Loss for the distributions of the bayesian model. This loss determines the method you choose to use; select it carefully, as not all losses and distributions are compatible.
fit_loss: Callable
instance-attribute
Loss for the data of the non-bayesian model. Any usual loss that is appropriate for the model and task can be used.
num_samples: int
instance-attribute
Number of samples used to estimate the losses. Increasing it lowers variance and improves learning at the cost of computation time.
prune_threshold: float = -2.2
class-attribute
instance-attribute
Threshold by which parameters are pruned. The lower it is, the more parameters are pruned. Can be any real number.
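A construction sketch for the dataclass. Fields inherited from TrainerParams are omitted here, `nn.CrossEntropyLoss()` stands in for any suitable fit loss, and `my_var_dist_loss` is a placeholder VarDistLoss instance:

```python
import torch.nn as nn
from src.methods.bayes.variational.trainer import (  # assumed import path
    CallbackLossAccuracy,
    VarTrainerParams,
)

# Sketch only: fields inherited from TrainerParams are not shown, and the
# distance loss must be compatible with the chosen parameter distribution.
params = VarTrainerParams(
    fit_loss=nn.CrossEntropyLoss(),                      # ordinary data loss for the task
    dist_loss=my_var_dist_loss,                          # placeholder VarDistLoss instance
    num_samples=5,                                       # more samples: lower variance, more compute
    beta=0.02,
    prune_threshold=-2.2,
    callback_losses={"accuracy": CallbackLossAccuracy()},
)
```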