Data-dependent compression of random features for large-scale kernel approximation
Kernel methods offer the flexibility to learn complex relationships in modern, large data sets while enjoying strong theoretical guarantees on quality. Unfortunately, these methods typically require cubic running time in the data set size, a prohibitive cost in the large-data setting. Random feature...
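The paper's own compression scheme is not reproduced in this record; as a minimal illustration of the general random-features idea the abstract refers to, the sketch below uses classical random Fourier features (Rahimi and Recht) to approximate a Gaussian kernel with an explicit feature map, so downstream linear methods avoid forming and inverting the full Gram matrix. The function name and parameters here are illustrative, not from the paper.

```python
import numpy as np

def random_fourier_features(X, num_features=2000, lengthscale=1.0, seed=0):
    """Approximate the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))
    with an explicit map z(x) such that k(x, y) ~ z(x) @ z(y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies are drawn from the kernel's spectral density (a Gaussian here).
    W = rng.normal(scale=1.0 / lengthscale, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# The exact n x n Gram matrix costs O(n^2) memory and O(n^3) time to invert;
# with an n x J feature matrix Z, linear solvers run in O(n J^2) instead.
X = np.random.default_rng(1).normal(size=(500, 5))
Z = random_fourier_features(X)
K_approx = Z @ Z.T
K_exact = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
err = np.max(np.abs(K_approx - K_exact))
```

The approximation error shrinks as `num_features` grows; the paper's contribution, per the abstract, is a data-dependent way to compress such feature sets further.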
Main Authors: Agrawal, Raj; Campbell, Trevor David; Huggins, Jonathan H.; Broderick, Tamara A.
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Article
Language: English
Published: 2020
Online Access: https://hdl.handle.net/1721.1/128772
Similar Items
- Truncated random measures
  by: Campbell, Trevor, et al.
  Published: (2021)
- PASS-GLM: Polynomial approximate sufficient statistics for scalable Bayesian GLM inference
  by: Huggins, Jonathan H., et al.
  Published: (2020)
- Kernel-based hypothesis tests: large-scale approximations and Bayesian perspectives
  by: Zhang, Q
  Published: (2019)
- Scalable Gaussian process inference with finite-data mean and variance guarantees
  by: Huggins, Jonathan H., et al.
  Published: (2020)
- Coresets for scalable Bayesian logistic regression
  by: Huggins, Jonathan H., et al.
  Published: (2021)