Between integrals and optima: new methods for scalable machine learning
The success of machine learning is due in part to the effectiveness of scalable computational methods, like stochastic gradient descent or Monte Carlo methods, that undergird learning algorithms. This thesis contributes four new scalable methods for distinct problems that arise in machine l...
Main Author: | Maddison, C
---|---
Other Authors: | Doucet, A
Format: | Thesis
Language: | English
Published: | 2020
Subjects: |
Similar Items
- Efficient and scalable methods for deep reinforcement learning
  by: Farquhar, G
  Published: (2020)
- Scalable Real-Time Attributes Responsive Extreme Learning Machine
  by: Hongbo Wang, et al.
  Published: (2020-08-01)
- Advances in kernel methods: towards general-purpose and scalable models
  by: Samo, YLK
  Published: (2017)
- Leveraging domain knowledge for self-supervision in scalable robot learning
  by: Barnes, D
  Published: (2020)
- Fast and Scalable Private Genotype Imputation Using Machine Learning and Partially Homomorphic Encryption
  by: Esha Sarkar, et al.
  Published: (2021-01-01)