On optimality of Bayesian testimation in the normal means problem
adaptivity; complexity penalty; maximum a posteriori rule; minimax estimation; sequence estimation; sparsity; thresholding; false discovery rate; inflation criterion; variable selection; regression; shrinkage; model; risk; Statistics & Probability
We consider the problem of recovering a high-dimensional vector μ observed in white noise, where the unknown vector μ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of ℓ0-type penalties. The penalties are associated with various choices of the prior distribution π_n(·) on the number of nonzero entries of μ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of π_n(·). Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors π_n(·) for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences, and consider several examples of such priors.
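The abstract's core object, an ℓ0-penalized estimator in the normal means model, can be illustrated with a short sketch. This is not the paper's own algorithm, only the generic computation it builds on: minimizing ||y − μ||² + pen(k) over μ with k nonzero entries, where for each k the best μ keeps the k observations largest in absolute value, so a single scan over k solves the problem. The function name and the example penalty pen(k) = 2 log(n)·k (a universal-threshold-style choice) are illustrative assumptions, not taken from the source.

```python
import numpy as np

def l0_penalized_estimate(y, pen):
    """Minimize ||y - mu||^2 + pen(k) over vectors mu with exactly
    k nonzero entries, scanning k = 0, ..., n.

    For fixed k the optimal mu copies the k entries of y largest in
    absolute value and zeroes the rest, so the residual sum of squares
    is the sum of the n - k smallest squared observations.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    order = np.argsort(-np.abs(y))            # indices by decreasing |y_i|
    sq = y[order] ** 2
    # tail[k] = residual sum of squares when the top-k entries are kept
    tail = np.concatenate(([sq.sum()], sq.sum() - np.cumsum(sq)))
    crit = tail + np.array([pen(k) for k in range(n + 1)])
    k_hat = int(np.argmin(crit))              # penalized model size
    mu_hat = np.zeros(n)
    keep = order[:k_hat]
    mu_hat[keep] = y[keep]                    # hard-threshold-style fit
    return mu_hat

# Example: two large signals among small noise-level entries.
y = np.array([5.0, -4.0, 0.1, -0.2, 0.05])
mu_hat = l0_penalized_estimate(y, pen=lambda k: 2 * np.log(len(y)) * k)
```

With a penalty linear in k, the procedure reduces to hard thresholding at a fixed level; the priors π_n(·) studied in the paper correspond to more general, possibly nonlinear, penalties pen(k).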
Annals of Statistics
"On optimality of Bayesian testimation in the normal means problem" (2007). Faculty Bibliography 2000s. 6793.