Can the adaptive metropolis algorithm collapse without the covariance lower bound?


Bibliographic Details
Main Author: Vihola, M
Format: Journal article
Language: English
Published: 2011
Description
Summary: The Adaptive Metropolis (AM) algorithm is based on the symmetric random-walk Metropolis algorithm. The proposal distribution has the following time-dependent covariance matrix at step n+1: S_n = Cov(X_1, ..., X_n) + εI, that is, the sample covariance matrix of the history of the chain plus a (small) constant ε > 0 multiple of the identity matrix I. The lower bound on the eigenvalues of S_n induced by the factor εI is theoretically convenient but practically cumbersome, as a good value for the parameter ε may not always be easy to choose. This article considers variants of the AM algorithm that do not explicitly bound the eigenvalues of S_n away from zero. The behaviour of S_n is studied in detail, indicating that the eigenvalues of S_n do not tend to collapse to zero in general. In dimension one, it is shown that S_n is bounded away from zero if the logarithmic target density is uniformly continuous. For a modification of the AM algorithm including an additional fixed component in the proposal distribution, the eigenvalues of S_n are shown to stay away from zero under a practically non-restrictive condition. This result implies a strong law of large numbers for super-exponentially decaying target distributions with regular contours.
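
The adaptation rule described in the summary is easy to illustrate in code. Below is a minimal Python sketch, not the paper's implementation: a random-walk Metropolis chain whose proposal covariance is a recursively updated sample covariance of the chain history. Setting eps > 0 gives the classical AM proposal S_n = Cov(X_1, ..., X_n) + εI, eps = 0 corresponds to the unbounded variants studied in the article, and fixed_mix > 0 mixes in a fixed proposal component in the spirit of the modified algorithm. The names log_target, eps, fixed_mix and fixed_cov are illustrative assumptions, not notation from the article.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter, eps=1e-6,
                        fixed_mix=0.0, fixed_cov=None, seed=0):
    """Illustrative Adaptive Metropolis sampler (sketch, not the paper's code).

    eps > 0  : classical AM proposal covariance Cov(chain) + eps * I.
    eps = 0  : no explicit lower bound on the eigenvalues.
    fixed_mix: probability of proposing from a fixed Gaussian component,
               mimicking the modified AM algorithm with a fixed component.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    d = x.size
    lp = log_target(x)

    mean = x.copy()              # running mean of the chain history
    cov = np.eye(d)              # running covariance estimate; the identity
                                 # initialisation acts as a vanishing regularisation
    scale = 2.38**2 / d          # standard AM scaling factor
    if fixed_cov is None:
        fixed_cov = 0.01 * np.eye(d)

    chain = np.empty((n_iter, d))
    for n in range(1, n_iter + 1):
        # Proposal covariance: scaled sample covariance plus optional eps*I floor.
        S = scale * (cov + eps * np.eye(d))
        if fixed_mix > 0.0 and rng.random() < fixed_mix:
            prop_cov = fixed_cov          # fixed component of the mixture proposal
        else:
            prop_cov = S
        y = rng.multivariate_normal(x, prop_cov)

        # Symmetric random-walk Metropolis accept/reject step.
        lp_y = log_target(y)
        if np.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        chain[n - 1] = x

        # Recursive (Welford-type) update of the running mean and covariance.
        delta = x - mean
        mean = mean + delta / (n + 1)
        cov = cov + (np.outer(delta, x - mean) - cov) / (n + 1)

    return chain

# Example usage: sample a correlated 2-D Gaussian target without the eps*I bound.
sigma = np.array([[1.0, 0.9], [0.9, 1.0]])
log_target = lambda x: -0.5 * x @ np.linalg.solve(sigma, x)
samples = adaptive_metropolis(log_target, x0=np.zeros(2), n_iter=5000, eps=0.0)
```

The per-iteration mixture draw is valid here because both proposal components are symmetric Gaussians centred at the current state, so the acceptance ratio reduces to the ratio of target densities.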