Forecast Evaluation of Small Nested Model Sets / Kirstin Hubrich, Kenneth D. West.
Item type | Home library | Collection | Call number | Status
---|---|---|---|---
Working Paper | Biblioteca Digital | Colección NBER | nber w14601 | Not For Loan
December 2008.
We propose two new procedures for comparing the mean squared prediction error (MSPE) of a benchmark model to the MSPEs of a small set of alternative models that nest the benchmark. Our procedures compare the benchmark to all the alternative models simultaneously rather than sequentially, and do not require reestimation of models as part of a bootstrap procedure. Both procedures adjust MSPE differences in accordance with Clark and West (2007); one procedure then examines the maximum t-statistic, while the other computes a chi-squared statistic. Our simulations examine the proposed procedures and two existing procedures that do not adjust the MSPE differences: a chi-squared statistic and White's (2000) reality check. In these simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure based on the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.
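The mechanics of the two proposed statistics are straightforward to sketch. Below is a minimal Python illustration, not the paper's implementation: the function names are hypothetical, the standard errors are simple i.i.d. ones for brevity (serially correlated forecast errors would call for a HAC variance estimator), and critical values for the maximum t-statistic, which must account for dependence across the alternative models, are not computed here.

```python
import numpy as np

def clark_west_adjusted_diffs(y, f_bench, f_alts):
    """Clark-West (2007) adjusted MSPE loss differentials.

    y       : (P,) array of realized values
    f_bench : (P,) forecasts from the nested benchmark model
    f_alts  : (P, m) forecasts from m alternative models that nest the benchmark
    Returns a (P, m) array; a positive mean favors the alternative model.
    """
    e2_bench = (y - f_bench) ** 2                 # benchmark squared errors
    e2_alts = (y[:, None] - f_alts) ** 2          # alternatives' squared errors
    noise = (f_bench[:, None] - f_alts) ** 2      # Clark-West noise adjustment
    return e2_bench[:, None] - (e2_alts - noise)

def max_t_and_chi2(d):
    """Maximum t-statistic and chi-squared statistic from adjusted diffs d (P, m)."""
    P, m = d.shape
    dbar = d.mean(axis=0)
    se = d.std(axis=0, ddof=1) / np.sqrt(P)       # i.i.d. standard errors (illustrative)
    max_t = (dbar / se).max()                     # procedure 1: largest t-statistic
    V = np.atleast_2d(np.cov(d, rowvar=False))    # sample covariance of the differentials
    chi2 = P * dbar @ np.linalg.solve(V, dbar)    # procedure 2: Wald-type statistic
    return max_t, chi2
```

Under standard assumptions the chi-squared statistic would be compared to a chi-squared distribution with m degrees of freedom; the appeal of both procedures, per the abstract, is that they treat the m alternatives jointly rather than one at a time, with no model reestimation inside a bootstrap loop.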
Hardcopy version available to institutional subscribers
System requirements: Adobe [Acrobat] Reader required for PDF files.
Mode of access: World Wide Web.
Print version record