torchelie.hyper
Hyperparameter-related utilities. For now this module provides hyperparameter search tools based on random search, but it may evolve later.
    hp_sampler = HyperparamSearch(
        lr=ExpSampler(1e-6, 0.1),
        momentum=DecaySampler(0.5, 0.999),
        wd=ExpSampler(0.1, 1e-6),
        model=ChoiceSampler(['resnet', 'vgg'])
    )

    for run_nb in range(10):
        hps = hp_sampler.sample()
        results = train(**hps)
        hp_sampler.log_result(hps, results)
After the JSON file summing up the hyperparameter search has been created, it can be investigated with the viewer:

    python3 -m torchelie.hyper hpsearch.json

Then point your browser to http://localhost:8080.
class torchelie.hyper.ChoiceSampler(choices)
    Sampler over a discrete set of values.

    Parameters:
        choices (list) – list of values

    sample()
        Sample a value.
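For intuition, a minimal sketch of how such a sampler can be implemented (hypothetical code, not the library's actual implementation):

```python
import random

class ChoiceSampler:
    """Sketch: sample uniformly from a fixed, discrete set of values."""
    def __init__(self, choices):
        self.choices = choices

    def sample(self):
        # Each value is equally likely on every draw
        return random.choice(self.choices)

sampler = ChoiceSampler(['resnet', 'vgg'])
print(sampler.sample() in ['resnet', 'vgg'])  # True
```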
class torchelie.hyper.DecaySampler(low, high)
    Sample a decay value. Use it for momentum, beta1/beta2, or any
    exponential decay value.

    Parameters:
        low (float) – lower bound
        high (float) – higher bound

    sample()
        Sample a value.
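A decay value such as 0.9, 0.99 or 0.999 matters on the scale of 1 - value, so a plausible sketch (an assumption about the sampling scheme, not the library's code) draws log-uniformly in that space:

```python
import math
import random

class DecaySampler:
    """Sketch: sample decay values log-uniformly in (1 - value) space,
    so 0.9, 0.99 and 0.999 get comparable probability mass."""
    def __init__(self, low, high):
        self.low, self.high = low, high

    def sample(self):
        # Uniform draw in log10(1 - x); random.uniform accepts bounds
        # in either order, so low < high works directly.
        u = random.uniform(math.log10(1 - self.low),
                           math.log10(1 - self.high))
        return 1 - 10 ** u

v = DecaySampler(0.5, 0.999).sample()
print(0.5 <= v <= 0.999)  # True
```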
class torchelie.hyper.ExpSampler(low, high)
    Exponential sampler (uniform sampler over a log scale). Use it to
    sample the learning rate or the weight decay.

    Parameters:
        low (float) – lower bound
        high (float) – higher bound

    sample()
        Sample a value.
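Likewise, an order-of-magnitude quantity such as the learning rate is best drawn uniformly in log space. A minimal sketch (hypothetical; it assumes bounds may be given in either order, as in the `ExpSampler(0.1, 1e-6)` example above):

```python
import math
import random

class ExpSampler:
    """Sketch: log-uniform sampling between two positive bounds."""
    def __init__(self, low, high):
        # Accept bounds in either order, e.g. ExpSampler(0.1, 1e-6)
        self.low, self.high = sorted((low, high))

    def sample(self):
        # Uniform in log10 space: each decade is equally likely
        u = random.uniform(math.log10(self.low), math.log10(self.high))
        return 10 ** u

v = ExpSampler(1e-6, 0.1).sample()
print(1e-6 <= v <= 0.1)  # True
```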
class torchelie.hyper.HyperparamSampler(**hyperparams)
    Sample hyperparameters. It aggregates multiple samplers to produce a
    set of hyperparameters.

    Example:

        HyperparamSampler(
            lr=ExpSampler(1e-6, 0.1),
            momentum=DecaySampler(0.5, 0.999),
            wd=ExpSampler(0.1, 1e-6),
            model=ChoiceSampler(['resnet', 'vgg'])
        )

    Parameters:
        hyperparams (kwargs) – hyperparameter samplers. Names are arbitrary.

    sample()
        Sample hyperparameters.

        Returns:
            a dict containing sampled values for the hyperparameters.
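Aggregation itself is straightforward; here is a sketch of the idea (the `ConstSampler` helper is purely for illustration and not part of the library):

```python
class ConstSampler:
    """Trivial sampler used only for this demonstration."""
    def __init__(self, value):
        self.value = value

    def sample(self):
        return self.value

class HyperparamSampler:
    """Sketch: call .sample() on every named sampler, collect a dict."""
    def __init__(self, **hyperparams):
        self.hyperparams = hyperparams

    def sample(self):
        # One fresh draw per hyperparameter, keyed by the kwarg name
        return {name: s.sample() for name, s in self.hyperparams.items()}

hps = HyperparamSampler(lr=ConstSampler(0.01), model=ConstSampler('vgg')).sample()
print(hps)  # {'lr': 0.01, 'model': 'vgg'}
```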
class torchelie.hyper.HyperparamSearch(**hyperparams)
    Perform hyperparameter search. Right now it just uses random search.
    Params and results are logged to hpsearch.csv in the current
    directory. It would be cool to implement something like a Gaussian
    Process search or an RL algorithm.

    First, call sample() to get a set of hyperparameters. Then, evaluate
    them on your task and get a dict of results. Call log_result() with
    the hyperparameters and the results dict, and start again. Stop
    whenever you want.

    Parameters:
        hyperparameters (kwargs) – named samplers (like for HyperparamSampler).

    log_result(hps, result)
        Log hyperparameters and results.

    sample(algorithm='random', target=None)
        Sample a set of hyperparameters.
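The logging step can be pictured as appending one row per run. This sketch writes a CSV with one column per hyperparameter and per result metric; the exact file layout torchelie uses is an assumption here:

```python
import csv
import os

def log_result(hps, result, path='hpsearch.csv'):
    """Sketch: append hyperparameters and results as one CSV row."""
    row = {**hps, **result}
    write_header = not os.path.exists(path)
    with open(path, 'a', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=list(row.keys()))
        if write_header:
            writer.writeheader()
        writer.writerow(row)
```

Each call appends one run, so a viewer can later read the file back and sort runs by any metric.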