Better Hardness via Algorithms, and New Forms of Hardness versus Randomness
Main Author: | Chen, Lijie |
---|---|
Other Authors: | Williams, Ryan |
Format: | Thesis (Ph.D.) |
Published: | Massachusetts Institute of Technology, 2023 |
Online Access: | https://hdl.handle.net/1721.1/147560 |
Abstract:

One central theme of complexity theory is the rich interplay between hardness (the existence of functions that are hard to compute) and pseudorandomness (the ability to convert randomized algorithms into equivalent deterministic ones). In one direction, from the classic works of Nisan-Wigderson and Impagliazzo-Wigderson, we know that certain hardness hypotheses (circuit lower bounds) imply that all randomized algorithms can be derandomized with only polynomial overhead. In the other direction, a decade ago, Williams proved that certain circuit lower bounds follow from non-trivial derandomization.
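For orientation, the two classical connections referenced above are often rendered as follows. This is a standard, slightly simplified sketch of the background results, not a statement taken from the thesis itself:

```latex
% Hardness-to-randomness (Impagliazzo--Wigderson, 1997):
% an exponential-size circuit lower bound for some function in
% E = DTIME(2^{O(n)}) suffices to derandomize all of BPP with
% polynomial overhead.
\[
  \exists f \in \mathsf{E}\ \text{requiring circuits of size}\ 2^{\Omega(n)}
  \;\Longrightarrow\; \mathsf{P} = \mathsf{BPP}.
\]

% Randomness-to-hardness (Williams' algorithmic method, ca. 2011):
% for typical circuit classes C, even a barely non-trivial
% derandomization -- e.g., deciding satisfiability of C-circuits in
% time 2^n / n^{omega(1)} instead of brute-force 2^n -- already
% implies a circuit lower bound. Instantiated with C = ACC^0, this
% gave the unconditional separation of NEXP from ACC^0.
\[
  \mathcal{C}\text{-}\mathsf{SAT} \in \mathsf{TIME}\!\left[2^{n}/n^{\omega(1)}\right]
  \;\Longrightarrow\; \mathsf{NEXP} \not\subseteq \mathcal{C}.
\]
```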
In this thesis we establish many new connections between hardness and pseudorandomness, strengthening and refining the classic works mentioned above.
• New circuit lower bounds from non-trivial derandomization. Following Williams’ algorithmic method, we prove several new circuit lower bounds using various non-trivial derandomization algorithms, including almost-everywhere and strongly average-case lower bounds against ACC0 circuits, and a new construction of rigid matrices (the standard notion of rigidity is recalled in the sketch after this list).
• Superfast and non-black-box derandomization from plausible hardness assumptions. Under plausible hardness hypotheses, we obtain almost-optimal worst-case derandomization of both randomized algorithms and constant-round Arthur-Merlin protocols. We also propose a new framework for non-black-box derandomization and demonstrate its usefulness by showing that (1) it connects derandomization to a new type of hardness assumption against uniform algorithms, and (2) under plausible assumptions, it yields derandomization of both randomized algorithms and constant-round doubly efficient proof systems with almost no overhead, such that no polynomial-time adversary can find a mistake.
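The "rigid matrices" in the first bullet refer to Valiant's notion of matrix rigidity. As a refresher, here is the standard definition; this is background material rather than the thesis's own formulation:

```latex
% Matrix rigidity (Valiant, 1977): the rigidity of a matrix A over a
% field, at rank parameter r, is the minimum number of entries of A
% that must be changed to bring its rank down to at most r.
\[
  \mathcal{R}_A(r) \;=\;
  \min\left\{\, \lVert A - B \rVert_0 \;:\; \operatorname{rank}(B) \le r \,\right\},
\]
% where ||A - B||_0 counts the positions at which A and B differ.
% A "construction of rigid matrices" exhibits explicit matrices A for
% which R_A(r) is provably large at useful ranks r.
```

Valiant showed that sufficiently rigid matrices imply lower bounds against linear-size, logarithmic-depth linear circuits, which is why explicit constructions of rigid matrices are a long-standing target.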