AAS 390 Chapter Notes - Chapter 1: Deep Learning, Hyperparameter Optimization, Bayesian Optimization


Document Summary

In practical Bayesian optimization, we must often search over structures with differing numbers of parameters. For instance, we may wish to search over neural network architectures with an unknown number of layers. To relate performance data gathered for different architectures, we define a new kernel for conditional parameter spaces that explicitly includes information about which parameters are relevant in a given structure. We show that this kernel improves model quality. Bayesian optimization (BO) is an efficient approach for solving black-box optimization problems of the form arg min_{x ∈ X} f(x) (see [1] for a detailed overview), where f is expensive to evaluate. It employs a prior distribution p(f) over functions that is updated as new information on f becomes available. Conditional parameter spaces are not well handled by standard GPs, and the efficiency and effectiveness of Bayesian optimization suffer as a result.
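The idea of a kernel that "includes information about which parameters are relevant in a given structure" can be sketched as follows. This is a minimal illustrative example, not the paper's exact kernel: it compares two configurations only over the parameters that are active in both, and charges a fixed penalty (the `4.0` constant here is an arbitrary assumed value) for each dimension that is relevant in one structure but not the other.

```python
import numpy as np

def conditional_kernel(x, x_active, y, y_active, length_scale=1.0):
    """Illustrative RBF-style kernel for a conditional parameter space.

    x, y      : parameter vectors, padded to a common length
    *_active  : boolean masks marking which parameters are relevant
                ("active") under each configuration's structure
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    both = np.asarray(x_active) & np.asarray(y_active)
    mismatch = np.asarray(x_active) ^ np.asarray(y_active)
    # Squared distance over parameters relevant in both structures.
    sq = np.sum(((x[both] - y[both]) / length_scale) ** 2)
    # Each dimension relevant in only one structure adds a fixed
    # penalty, so architectures of different depth stay comparable.
    sq += 4.0 * np.count_nonzero(mismatch)
    return np.exp(-0.5 * sq)

# Two "architectures": a 1-layer net (second layer width irrelevant)
# versus a 2-layer net where both widths are set.
k_same = conditional_kernel([32, 0], [True, False], [32, 0], [True, False])
k_diff = conditional_kernel([32, 0], [True, False], [32, 64], [True, True])
```

Here `k_same` is 1 (identical configurations), while `k_diff` is smaller because the two architectures disagree on which parameters exist, which is exactly the structural information a GP over a flat, fixed-length space would ignore.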

