Pseudo-derandomizing learning and approximation
We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [Eran Gat and Shafi Goldwasser, 2011]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning...

Main Authors: | Oliveira, I; Santhanam, R |
---|---|
Format: | Conference item |
Published in: | Schloss Dagstuhl, 2018 |

author | Oliveira, I; Santhanam, R |
---|---|
collection | OXFORD |
description | We continue the study of pseudo-deterministic algorithms initiated by Gat and Goldwasser [Eran Gat and Shafi Goldwasser, 2011]. A pseudo-deterministic algorithm is a probabilistic algorithm which produces a fixed output with high probability. We explore pseudo-determinism in the settings of learning and approximation. Our goal is to simulate known randomized algorithms in these settings by pseudo-deterministic algorithms in a generic fashion - a goal we succinctly term pseudo-derandomization. Learning. In the setting of learning with membership queries, we first show that randomized learning algorithms can be derandomized (resp. pseudo-derandomized) under the standard hardness assumption that E (resp. BPE) requires large Boolean circuits. Thus, despite the fact that learning is an algorithmic task that requires interaction with an oracle, standard hardness assumptions suffice to (pseudo-)derandomize it. We also unconditionally pseudo-derandomize any quasi-polynomial time learning algorithm for polynomial size circuits on infinitely many input lengths in sub-exponential time. Next, we establish a generic connection between learning and derandomization in the reverse direction, by showing that deterministic (resp. pseudo-deterministic) learning algorithms for a concept class C imply hitting sets against C that are computable deterministically (resp. pseudo-deterministically). In particular, this suggests a new approach to constructing hitting set generators against AC^0[p] circuits by giving a deterministic learning algorithm for AC^0[p]. Approximation. Turning to approximation, we unconditionally pseudo-derandomize any poly-time randomized approximation scheme for integer-valued functions infinitely often in sub-exponential time over any samplable distribution on inputs. As a corollary, we get that the (0,1)-Permanent has a fully pseudo-deterministic approximation scheme running in sub-exponential time infinitely often over any samplable distribution on inputs. Finally, we investigate the notion of approximate canonization of Boolean circuits. We use a connection between pseudo-deterministic learning and approximate canonization to show that if BPE does not have sub-exponential size circuits infinitely often, then there is a pseudo-deterministic approximate canonizer for AC^0[p] computable in quasi-polynomial time. (A formal sketch of the notions of pseudo-determinism, hitting sets, and the permanent follows this record.) |
format | Conference item |
id | oxford-uuid:c7f33f22-d7aa-41ae-aa7d-a4f37c1a1f18 |
institution | University of Oxford |
publishDate | 2018 |
publisher | Schloss Dagstuhl |
title | Pseudo-derandomizing learning and approximation |
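
For readers skimming this record, the abstract's central objects can be pinned down precisely. The sketch below gives the standard formulations of pseudo-determinism (following Gat and Goldwasser), of a hitting set against a circuit class, and of the (0,1)-Permanent; the success probability 2/3 and the density threshold 1/2 are the conventional constants, not parameters taken from the paper itself.

```latex
% Standard formulations of the abstract's central notions; a minimal
% sketch, not the paper's own statements.

% Pseudo-deterministic computation of f by a probabilistic algorithm A:
% on every input x, one canonical answer f(x) appears with probability
% at least 2/3 over the random string r (amplifiable by majority vote).
\[
  \forall x \;:\; \Pr_{r}\bigl[\,A(x, r) = f(x)\,\bigr] \;\ge\; \tfrac{2}{3}.
\]

% Hitting set H_n against a concept class C_n: every circuit in the class
% that accepts at least half of its inputs must accept some element of H_n.
\[
  \forall C \in \mathcal{C}_n \;:\;
  \Pr_{x \in \{0,1\}^n}\bigl[C(x) = 1\bigr] \ge \tfrac{1}{2}
  \;\Longrightarrow\; \exists\, h \in H_n \text{ such that } C(h) = 1.
\]

% The (0,1)-Permanent, the counting problem whose randomized approximation
% scheme the paper pseudo-derandomizes: for A in {0,1}^{n x n},
\[
  \operatorname{per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} A_{i,\sigma(i)}.
\]
```

In this vocabulary, the paper's reverse connection says that a deterministic (resp. pseudo-deterministic) learner for a class C yields a deterministic (resp. pseudo-deterministic) construction of such sets H_n against C.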