SToG.benchmark

Benchmarking utilities for feature selection methods.

Functions

compare_with_l1_sklearn(datasets)

Compare with an L1-regularized logistic regression baseline from scikit-learn.

Classes

ComprehensiveBenchmark([device])

Comprehensive benchmark for all feature selection methods.

class SToG.benchmark.ComprehensiveBenchmark(device='cpu')[source]

Bases: object

Comprehensive benchmark for all feature selection methods.

__init__(device='cpu')[source]

Initialize the benchmark.

Parameters:

device – Device to run on ('cpu' or 'cuda')
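
A minimal construction sketch (assuming the package is importable as SToG):

    from SToG.benchmark import ComprehensiveBenchmark

    # Runs on CPU by default; pass device='cuda' to use a GPU instead.
    bench = ComprehensiveBenchmark(device='cpu')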

run_single_experiment(dataset_info, method_name, lambda_reg, random_state=42)[source]

Run a single experiment.

Parameters:
  • dataset_info – Dictionary with dataset information

  • method_name – Name of the method to test

  • lambda_reg – Regularization strength

  • random_state – Random seed

Returns:

Dictionary with results
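
A hedged sketch of a single run; the dataset_info keys ('name', 'X', 'y') and the method name 'stg' are illustrative assumptions, not part of the documented signature:

    from sklearn.datasets import make_classification
    from SToG.benchmark import ComprehensiveBenchmark

    # Build a small synthetic classification problem to serve as dataset_info.
    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=5, random_state=0)
    dataset_info = {'name': 'synthetic', 'X': X, 'y': y}  # assumed key names

    bench = ComprehensiveBenchmark(device='cpu')
    result = bench.run_single_experiment(dataset_info,
                                         method_name='stg',  # placeholder method name
                                         lambda_reg=0.1,
                                         random_state=42)
    print(result)  # dictionary with the results of this single run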

evaluate_method(dataset_info, method_name, lambda_values=None, n_runs=5)[source]

Evaluate a method across multiple lambda values, with several runs per value.

Parameters:
  • dataset_info – Dictionary with dataset information

  • method_name – Name of the method to test

  • lambda_values – List of lambda values to try

  • n_runs – Number of runs per lambda value

Returns:

Dictionary with best results
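
A sketch of a lambda sweep, using the same assumed dataset_info layout and placeholder method name as above:

    from sklearn.datasets import make_classification
    from SToG.benchmark import ComprehensiveBenchmark

    X, y = make_classification(n_samples=200, n_features=50, random_state=0)
    dataset_info = {'name': 'synthetic', 'X': X, 'y': y}  # assumed key names

    bench = ComprehensiveBenchmark()
    # Try three regularization strengths, with n_runs repeats per value.
    best = bench.evaluate_method(dataset_info,
                                 method_name='stg',  # placeholder method name
                                 lambda_values=[0.01, 0.1, 1.0],
                                 n_runs=5)
    print(best)  # dictionary with the best results across the sweep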

run_benchmark(datasets=None)[source]

Run the complete benchmark.

Parameters:

datasets – List of dataset info dictionaries (uses default if None)

print_summary()[source]

Print summary table of benchmark results.
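
A short end-to-end sketch; per the datasets parameter description, passing nothing falls back to the default datasets:

    from SToG.benchmark import ComprehensiveBenchmark

    bench = ComprehensiveBenchmark(device='cpu')
    bench.run_benchmark()   # uses the default datasets when none are given
    bench.print_summary()   # prints a summary table of the collected results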

SToG.benchmark.compare_with_l1_sklearn(datasets)[source]

Compare with an L1-regularized logistic regression baseline from scikit-learn.

Parameters:

datasets – List of dataset info dictionaries

Returns:

Dictionary with sklearn results
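
A sketch of calling the baseline comparison; the dataset_info keys are again illustrative assumptions:

    from sklearn.datasets import make_classification
    from SToG.benchmark import compare_with_l1_sklearn

    X, y = make_classification(n_samples=200, n_features=50, random_state=0)
    datasets = [{'name': 'synthetic', 'X': X, 'y': y}]  # assumed key names

    sklearn_results = compare_with_l1_sklearn(datasets)
    print(sklearn_results)  # dictionary with the sklearn baseline results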