As data have become "Big", shrinkage estimators of various forms have become standard tools in statistical analysis. Common justifications for them include penalized maximum likelihood, empirical Bayes posterior means, and full Bayes posterior modes. None of these, however, directly addresses the question of why one might want a shrunken estimate in the first place. In this talk we outline a general approach to shrinkage that arises from balancing veracity (getting close to the truth) against simplicity (getting close to zero, typically). While yielding "simple" shrunken estimates, the approach does not require any assumption that the truth is actually full of zeros, an assumption that is often unreasonable. Several well-known shrinkage estimators will be derived as special cases, illustrating close connections between them.
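
The veracity/simplicity trade-off described above can be sketched with a toy one-dimensional example. This is an illustration under assumed quadratic and absolute-value penalties, not the talk's actual derivation; the function names and penalty weights are hypothetical.

```python
# Toy sketch (assumed setup, not the talk's framework): a shrunken estimate
# as the minimizer of a loss balancing veracity (closeness to the data y)
# against simplicity (closeness to zero), weighted by lam.

def ridge_shrink(y, lam):
    """argmin_t 0.5*(t - y)**2 + lam*t**2 -> proportional shrinkage toward 0."""
    return y / (1.0 + 2.0 * lam)

def soft_threshold(y, lam):
    """argmin_t 0.5*(t - y)**2 + lam*abs(t) -> lasso-style soft thresholding."""
    if y > lam:
        return y - lam
    if y < -lam:
        return y + lam
    return 0.0

if __name__ == "__main__":
    y = 3.0
    print(ridge_shrink(y, 0.5))      # shrunk toward zero, never exactly zero
    print(soft_threshold(y, 0.5))    # shrunk by lam
    print(soft_threshold(0.3, 0.5))  # small observations set exactly to zero
```

The two penalties yield two familiar special cases: a quadratic simplicity penalty shrinks every estimate proportionally, while an absolute-value penalty sets small estimates exactly to zero, illustrating how different shrinkage estimators emerge from one balancing principle.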