Summary: | The rapid development of autonomous vehicles (AVs) holds vast potential for
transportation systems through improved safety, efficiency, and access to
mobility. However, how these impacts progress as AVs are adopted is not
well understood. Numerous technical challenges arise from the goal of analyzing
the partial adoption of autonomy: partial control and observation,
multi-vehicle interactions, and the sheer variety of scenarios represented by
real-world networks. To shed light on near-term AV impacts, this article
studies the suitability of deep reinforcement learning (RL) for overcoming
these challenges in a low AV-adoption regime. A modular learning framework is
presented, which leverages deep RL to address complex traffic dynamics. Modules
are composed to capture common traffic phenomena (stop-and-go traffic jams,
lane changing, intersections). Learned control laws are found to improve upon
human driving performance, in terms of system-level velocity, by up to 57% with
only 4-7% adoption of AVs. Furthermore, in single-lane traffic, a small neural
network control law with only local observation is found to eliminate
stop-and-go traffic, surpassing all known model-based controllers to achieve
near-optimal performance, and to generalize to out-of-distribution traffic
densities.
|