RDP 2018-08: Econometric Perspectives on Economic Measurement

1. Introduction
July 2018
All measures of macroeconomic change rely on microeconomic data. Key examples are consumer price inflation, housing price inflation, output growth, productivity growth, and purchasing power parities. Each comes from micro data on market transactions, and each is a cornerstone of evidence-based policy.
Yet it remains unclear how the measures should handle changes in the quality composition of the items being transacted. For instance, how should a consumer price index adjust for the improving quality of mobile phones? What even defines quality? Measurement scholars find these questions difficult. With technological advances delivering large quality improvements, sensible solutions are important. Jaimovich, Rebelo and Wong (2015) show that quality compositions can swing with business cycles as well.
When it is not the types of items being transacted that are changing – just their market shares – the primary tools for handling quality change are index functions. The literature contains hundreds of options and three main approaches for distinguishing among them (a standard example of an index function is sketched after this list):
- The ‘test’ approach (also called ‘axiomatic’ or ‘instrumental’) distinguishes functions by their ability to satisfy certain desirable mathematical properties. (Balk (2008) provides a review.)
- The ‘economic’ approach distinguishes functions by how closely they measure the changing cost of attaining a given economic objective, such as an amount of output or living standard. (Diewert (1981) provides a review.)
- The ‘stochastic’ approach distinguishes functions by how well they estimate parameters in econometric descriptions of the measurement task. (See Selvanathan and Rao (1994) and Clements, Izan and Selvanathan (2006) for reviews.) Currently the literature identifies only some functions as having stochastic justifications.
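To fix ideas, here is one standard bilateral index function, the Törnqvist price index. It is offered purely as an illustration of the kind of formula the three approaches evaluate, not as the choice this paper recommends, and the notation (prices \(p_n^t\), quantities \(q_n^t\) and expenditure shares \(s_n^t\) for items \(n = 1, \ldots, N\) in periods 0 and 1) is introduced here only for that purpose:

\[
P_T^{01} \;=\; \prod_{n=1}^{N} \left( \frac{p_n^1}{p_n^0} \right)^{\frac{1}{2}\left(s_n^0 + s_n^1\right)},
\qquad
s_n^t \;=\; \frac{p_n^t q_n^t}{\sum_{m=1}^{N} p_m^t q_m^t}.
\]

Different approaches can favour different formulas of this kind; the point here is only to show what an index function looks like before turning to how the approaches rank them.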
When the types of items being transacted are changing, standard index functions become undefined and alternative tools are needed. Here, extensions of the econometric methods behind the stochastic approach have been influential.
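One well-known example of such an extension, sketched here only for orientation and not necessarily the specific method this paper develops, is the time-dummy hedonic regression, which prices items through their observed characteristics so that comparisons remain defined even when the set of items changes:

\[
\ln p_{it} \;=\; \delta_t \;+\; x_i'\beta \;+\; \varepsilon_{it},
\]

where \(x_i\) is a vector of observed characteristics of item \(i\) (for a mobile phone: memory, screen size, and so on). The estimated price index between periods 0 and 1 is then \(\exp\!\left(\hat{\delta}_1 - \hat{\delta}_0\right)\).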
Still, this paper shows that the relevance of econometrics to economic measurement runs much deeper. It turns out that practically all price index functions have origins nested in the same econometric model. Through the model we can view the functions as comparing averages of quality-adjusted prices at different places or times. The options are distinguished by their type of average, their definition of quality, and their stance on what I label ‘equal interest’. Following normal practice, each price index implies a quantity index. What look like exceptions to the paradigm are minor.
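As a stylised illustration of that reading – not a statement of the paper's model – suppose each item \(n\) carries a quality adjustment \(a_n\) (the \(a_n\) and the unweighted form below are assumptions made only for this sketch). A Jevons-type comparison of quality-adjusted prices between periods 0 and 1 is then

\[
P^{01} \;=\; \exp\!\left( \frac{1}{N}\sum_{n=1}^{N} \ln\frac{p_n^1}{a_n} \;-\; \frac{1}{N}\sum_{n=1}^{N} \ln\frac{p_n^0}{a_n} \right),
\]

and other index functions arise by changing the type of average (for example, weighting the sums by expenditure shares) or the definition of the quality adjustments \(a_n\). With a matched set of items the adjustments cancel, but they matter once the item sets differ across the two periods.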
This result changes the stochastic approach in useful ways. First, by covering practically all bilateral and multilateral price index functions, it is more comprehensive, so the approach becomes a more complete tool for choosing among the different function types. Second, the approach now distinguishes between the types on conceptual grounds, where previous versions of the stochastic approach relied on modelling assumptions. The overall outcome is not a recommendation to use any specific index functions, but a logically consistent framework for differentiating and choosing among them.
In turn, the changes to the stochastic approach offer new avenues to understand and tackle measurement problems. The paper highlights three examples, by challenging: the use of a bias correction from Goldberger (1968); the widespread reliance on so-called unit values; and some common views on adjusting for quality change when the types of transacted items are changing. Sensible alternatives are sometimes immediate. With time, the deeper connections to the econometrics literature could yield others.
The new framework and the results that flow from it are the paper's main contributions. Before establishing those results, though, some groundwork is necessary. In particular, the next section demonstrates that econometric estimators in measurement applications are often inconsistent for the parameters of interest that are defined (or ‘identified’) by the corresponding model assumptions. Strangely, it is not the estimators that need to change, but the standard model set-up that defines the parameters of interest. These ideas overlap with others already in the literature; by connecting and re-specifying them, the paper aims to shift the consensus on appropriate model specification.
For the wider macro community, a side-goal of the paper is to simplify issues of measurement. Macro researchers could use the new framework to appreciate the many compromises built into macro data. The next section is therefore also intended to provide sufficient background for macro researchers without a specialist understanding of measurement.