The lasso procedure is a shrinkage estimation and variable selection method. This paper shows that there always exists an interval of tuning-parameter values for which the mean squared prediction error of the lasso estimator is smaller than that of the ordinary least squares estimator. For an estimator satisfying a condition such as unbiasedness, the paper defines a corresponding generalized lasso estimator, whose mean squared prediction error is shown to be smaller than that of the original estimator for tuning-parameter values in some interval. This implies that no unbiased estimator is admissible. Simulation results for five models support the theoretical results.
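A minimal numerical sketch of the lasso-versus-OLS comparison described above, under assumed settings (a sparse true coefficient vector, Gaussian design and noise, and an illustrative penalty value `lam = 0.1`; the coordinate-descent solver and all parameter choices here are not from the paper):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent lasso: minimize (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_scale = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]          # partial residual excluding coord j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_scale[j]  # soft-threshold
    return b

rng = np.random.default_rng(0)
n, p = 100, 10
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                          # sparse true coefficients
X = rng.normal(size=(n, p))
y = X @ beta + rng.normal(size=n)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]         # OLS (lam = 0 limit)
b_lasso = lasso_cd(X, y, lam=0.1)                    # shrunken/selected estimate

# Out-of-sample mean squared prediction error on fresh data
X_test = rng.normal(size=(2000, p))
y_test = X_test @ beta + rng.normal(size=2000)
mspe = lambda b: np.mean((y_test - X_test @ b) ** 2)
print(f"MSPE(OLS)   = {mspe(b_ols):.4f}")
print(f"MSPE(lasso) = {mspe(b_lasso):.4f}")
```

Sweeping `lam` over a grid and plotting the test MSPE would trace out the interval of tuning-parameter values where the lasso beats OLS, mirroring the paper's theoretical claim.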