An Equivalence Class for Orthogonal Vectors

Bibliographic Details
Main Authors: Chen, Lijie; Williams, Richard Ryan
Other Authors: Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science
Format: Book
Language: English
Published: Society for Industrial and Applied Mathematics, 2021
Online Access: https://hdl.handle.net/1721.1/130333
Description
Summary: The Orthogonal Vectors problem (OV) asks: given n vectors in {0, 1}^{O(log n)}, are two of them orthogonal? OV is easily solved in O(n^2 log n) time, and it is a central problem in fine-grained complexity: dozens of conditional lower bounds are based on the popular hypothesis that OV cannot be solved in (say) n^{1.99} time. However, unlike the APSP problem, few other problems are known to be non-trivially equivalent to OV.

We show OV is truly-subquadratic equivalent to several fundamental problems, all of which (a priori) look harder than OV. A partial list is given below:

1. (Min-IP/Max-IP) Find a red-blue pair of vectors with minimum (respectively, maximum) inner product, among n vectors in {0, 1}^{O(log n)}.
2. (Exact-IP) Find a red-blue pair of vectors with inner product equal to a given target integer, among n vectors in {0, 1}^{O(log n)}.
3. (Apx-Min-IP/Apx-Max-IP) Find a red-blue pair of vectors that is a 100-approximation to the minimum (resp. maximum) inner product, among n vectors in {0, 1}^{O(log n)}.
4. (Approximate Bichrom. ℓ_p-Closest-Pair) Compute a (1 + Ω(1))-approximation to the ℓ_p-closest red-blue pair (for a constant p ∈ [1, 2]), among n points in R^d, d ≤ n^{o(1)}.
5. (Approximate ℓ_p-Furthest-Pair) Compute a (1 + Ω(1))-approximation to the ℓ_p-furthest pair (for a constant p ∈ [1, 2]), among n points in R^d, d ≤ n^{o(1)}.

Therefore, quick constant-factor approximations to maximum inner product imply quick exact solutions to maximum inner product, in the O(log n)-dimensional setting. Another consequence is that the ability to find vectors with zero inner product suffices for finding vectors with maximum inner product.

Our equivalence results are robust enough that they continue to hold in the data structure setting. In particular, we show that there is a poly(n)-space, n^{1−ε}-query-time data structure for Partial Match with vectors from {0, 1}^{O(log n)} if and only if such a data structure exists for (1 + Ω(1))-Approximate Nearest Neighbor Search in Euclidean space.

To establish the equivalences, we introduce two general frameworks for reductions to OV: one based on Σ_2 communication protocols, and another based on locality-sensitive hashing families. In addition, we obtain an n^{2−1/O(log c)}-time algorithm for Apx-Min-IP with n vectors from {0, 1}^{c log n}, matching state-of-the-art algorithms for OV and Apx-Max-IP. As an application, we obtain a faster algorithm for approximating "almost solvable" MAX-SAT instances.
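
For readers unfamiliar with OV, the quadratic baseline mentioned in the summary is simply a pairwise scan. The sketch below (Python, with illustrative function and variable names that are not taken from the paper) checks every pair of 0/1 vectors for a zero inner product in O(n^2 · d) time; it is a minimal reference implementation of the problem statement, not any algorithm from the paper.

    from itertools import combinations
    from typing import List, Optional, Tuple

    def has_orthogonal_pair(vectors: List[List[int]]) -> Optional[Tuple[int, int]]:
        """Naive O(n^2 * d) scan: return the indices of an orthogonal pair of
        0/1 vectors, or None if no two vectors are orthogonal."""
        for (i, u), (j, v) in combinations(enumerate(vectors), 2):
            # Two 0/1 vectors are orthogonal iff no coordinate is 1 in both.
            if all(a * b == 0 for a, b in zip(u, v)):
                return (i, j)
        return None

    # Example: the first and third vectors share no common 1-coordinate.
    print(has_orthogonal_pair([[1, 0, 1], [1, 1, 0], [0, 1, 0]]))  # -> (0, 2)

The fine-grained question studied in the paper is whether this n^2 barrier can be beaten by a polynomial factor when the dimension is O(log n); the listed problems are shown to stand or fall together with OV in that regime.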