On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks
© 2017 Neural information processing systems foundation. All rights reserved. Empirical risk minimization (ERM) is ubiquitous in machine learning and underlies most supervised learning methods. While there is a large body of work on algorithms for various ERM problems, the exact computational complexity of ERM is still not understood. We address this issue for multiple popular ERM problems including kernel SVMs, kernel ridge regression, and training the final layer of a neural network. In particular, we give conditional hardness results for these problems based on complexity-theoretic assumptions such as the Strong Exponential Time Hypothesis. Under these assumptions, we show that there are no algorithms that solve the aforementioned ERM problems to high accuracy in sub-quadratic time. We also give similar hardness results for computing the gradient of the empirical loss, which is the main computational burden in many non-convex learning tasks.
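As a rough illustration of the quadratic barrier discussed in the abstract, the sketch below shows the standard closed-form solver for one of the problems studied, kernel ridge regression ERM. It is not taken from the paper; the Gaussian kernel, the NumPy usage, and the helper name `kernel_ridge_regression` are illustrative assumptions. Forming the n × n kernel matrix alone already costs on the order of n² d time, which is the regime the paper's SETH-based lower bounds indicate cannot be beaten by high-accuracy algorithms.

```python
# Minimal sketch (not from the paper) of kernel ridge regression ERM with a
# Gaussian (RBF) kernel. Building the n x n Gram matrix K already takes
# Theta(n^2 d) time, so even this direct approach is quadratic in n.
import numpy as np

def kernel_ridge_regression(X, y, sigma=1.0, lam=1e-3):
    """Solve min_alpha (1/n) ||K alpha - y||^2 + lam * alpha^T K alpha
    via the closed form alpha = (K + lam * n * I)^{-1} y."""
    n = X.shape[0]
    # Pairwise squared distances, then the Gaussian kernel matrix.
    sq_norms = np.sum(X ** 2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T
    K = np.exp(-sq_dists / (2 * sigma ** 2))  # n x n, quadratic in n
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha

# Illustrative usage on random data.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = rng.standard_normal(200)
    print(kernel_ridge_regression(X, y).shape)  # (200,)
```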
Main Authors: | Indyk, Piotr, Schmidt, Ludwig, Backurs, Arturs |
---|---|
Format: | Article |
Language: | English |
Published: | 2021 |
Online Access: | https://hdl.handle.net/1721.1/137640 |
Institution: | Massachusetts Institute of Technology |
Citation: | Indyk, Piotr, Schmidt, Ludwig and Backurs, Arturs. 2017. "On the fine-grained complexity of empirical risk minimization: Kernel methods and neural networks." |
Publisher: | Neural Information Processing Systems (NIPS) |
Item Type: | Conference Paper |
Publisher URL: | https://papers.nips.cc/paper/7018-on-the-fine-grained-complexity-of-empirical-risk-minimization-kernel-methods-and-neural-networks |
Terms of Use: | Article is made available in accordance with the publisher's policy and may be subject to US copyright law. Please refer to the publisher's site for terms of use. |