RDP 2022-04: The Unit-effect Normalisation in Set-identified Structural Vector Autoregressions

2. Framework
October 2022
This section describes the SVAR model, outlines the concepts of identifying restrictions and identified sets, and describes the robust Bayesian approach to inference.
2.1 SVAR and orthogonal reduced form
Let $y_t = (y_{1t},\ldots,y_{nt})'$ be an $n \times 1$ vector of random variables following the SVAR($p$) process:

$$A_0 y_t = A_+ x_t + \varepsilon_t,$$

where $A_0$ is an invertible $n \times n$ matrix with positive diagonal elements (which is a normalisation on the signs of the structural shocks), $A_+ = (A_1,\ldots,A_p)$ and $x_t = (y'_{t-1},\ldots,y'_{t-p})'$. Conditional on past information, the vector of structural shocks $\varepsilon_t$ is normally distributed with mean zero and identity variance-covariance matrix. The 'orthogonal reduced form' of the model is:

$$y_t = B x_t + \Sigma_{tr} Q \varepsilon_t,$$

where $B = A_0^{-1} A_+$ is the matrix of reduced-form coefficients, $\Sigma_{tr}$ is the lower-triangular Cholesky factor of the variance-covariance matrix $\Sigma = \mathrm{E}(u_t u_t')$ of the reduced-form VAR innovations, with $u_t = y_t - B x_t$, and $Q$ is an $n \times n$ orthonormal matrix (i.e. $QQ' = I_n$).
The reduced-form parameters are denoted by $\phi = (\mathrm{vec}(B)', \mathrm{vech}(\Sigma)')' \in \Phi$ and the space of $n \times n$ orthonormal matrices by $\mathcal{O}(n)$.
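Orthonormal matrices play a central role in what follows, including in numerical algorithms that sample $Q$ uniformly (under the Haar measure) over $\mathcal{O}(n)$. A standard way to generate such draws is via the QR decomposition of a Gaussian matrix; the following is a minimal numpy sketch (the function name is mine):

```python
import numpy as np

def draw_orthonormal(n, rng):
    """Draw Q uniformly (Haar measure) over the space O(n) of n x n
    orthonormal matrices: QR-decompose a matrix of standard normal
    variates and fix the signs using the diagonal of R so the
    resulting distribution is exactly uniform."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))  # flip column signs where R_jj < 0

rng = np.random.default_rng(0)
Q = draw_orthonormal(3, rng)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: Q Q' = I_n
```

Without the sign correction, the QR routine's sign conventions would bias the draws away from the uniform distribution.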
Impulse responses to standard deviation shocks can be obtained from the coefficients of the vector moving average representation of the VAR:

$$\frac{\partial y_{t+h}}{\partial \varepsilon'_t} = C_h \Sigma_{tr} Q,$$

where $C_h$ is defined recursively by $C_h = \sum_{l=1}^{\min\{h,p\}} B_l C_{h-l}$ for $h \geq 1$, with $C_0 = I_n$ and $B_l$ the reduced-form coefficient matrix on the $l$th lag of $y_t$. The $(i,j)$th element of the matrix $C_h \Sigma_{tr} Q$ is the horizon-$h$ impulse response of the $i$th variable to the $j$th structural shock, denoted by $\eta_{i,j,h}(\phi,Q) = c'_{i,h}(\phi) q_j$, where $c'_{i,h}(\phi) = e'_{i,n} C_h \Sigma_{tr}$ is the $i$th row of $C_h \Sigma_{tr}$, $q_j = Q e_{j,n}$ is the $j$th column of $Q$, and $e_{i,n}$ is the $i$th column of $I_n$.
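The recursion for the $C_h$ matrices and their mapping into structural impulse responses can be sketched in a few lines of numpy (an illustration only; the function and variable names are mine):

```python
import numpy as np

def vma_coefficients(B_lags, H):
    """C_0 = I_n and C_h = sum_{l=1}^{min(h,p)} B_l C_{h-l} for h >= 1,
    where B_lags = [B_1, ..., B_p] holds the reduced-form autoregressive
    coefficient matrices."""
    n, p = B_lags[0].shape[0], len(B_lags)
    C = [np.eye(n)]
    for h in range(1, H + 1):
        C.append(sum(B_lags[l - 1] @ C[h - l] for l in range(1, min(h, p) + 1)))
    return C

def structural_irfs(C, Sigma_tr, Q):
    """Horizon-h responses to standard deviation shocks: C_h Sigma_tr Q.
    Element (i, j) of the hth matrix is eta_{i,j,h}."""
    return [Ch @ Sigma_tr @ Q for Ch in C]

# VAR(1) check: with B_1 = 0.5 I, the recursion gives C_h = 0.5^h I.
C = vma_coefficients([0.5 * np.eye(2)], H=3)
print(np.allclose(C[3], 0.125 * np.eye(2)))  # True
```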
The horizon-$h$ impulse response of the $i$th variable to a shock in the first variable that raises the first variable by one unit on impact is

$$\tilde{\eta}_{i,1,h}(\phi,Q) = \frac{\eta_{i,1,h}(\phi,Q)}{\eta_{1,1,0}(\phi,Q)},$$

which is well defined whenever $\eta_{1,1,0}(\phi,Q) \neq 0$. I refer to $\tilde{\eta}_{i,1,h}$ as an 'impulse response to a unit shock' or a 'unit impulse response' and to $\eta_{1,1,0}$ as the 'normalising impulse response'. I will sometimes suppress the dependence of the impulse responses on $(\phi,Q)$. The assumption that the normalising impulse response is the impact response of the first variable to the first shock is made to ease notation, but the discussion below extends straightforwardly to more general settings.[6]
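The unit-effect normalisation is a simple rescaling of the standard deviation impulse responses; a short numpy sketch (function name and toy numbers are mine) makes the division, and the failure mode when the normalising response is zero, explicit:

```python
import numpy as np

def unit_irf(irfs, i, h):
    """eta_tilde_{i,1,h} = eta_{i,1,h} / eta_{1,1,0}: the horizon-h response
    of variable i to a shock that raises variable 1 by one unit on impact.
    irfs[h] holds the n x n matrix of horizon-h responses C_h Sigma_tr Q."""
    normaliser = irfs[0][0, 0]  # eta_{1,1,0}, the normalising impulse response
    if np.isclose(normaliser, 0.0):
        raise ValueError("normalising impulse response is zero; unit IRF undefined")
    return irfs[h][i, 0] / normaliser

# Toy example: two horizons of 2x2 impulse responses (hypothetical numbers).
irfs = [np.array([[0.5, 0.1], [0.2, 0.4]]),
        np.array([[0.25, 0.05], [0.15, 0.2]])]
print(unit_irf(irfs, 1, 1))  # 0.15 / 0.5 = 0.3
```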
2.2 Identifying restrictions and identified sets
Imposing identifying restrictions on functions of the structural parameters is equivalent to imposing restrictions on $Q$ given $\phi$; for example, consider a sign restriction on an impulse response such that $\eta_{i,j,h}(\phi,Q) = c'_{i,h}(\phi) q_j \geq 0$. This is a linear inequality restriction on $q_j$, where the coefficients in the restriction are a function of $\phi$. More generally, let $S(\phi,Q) \geq \mathbf{0}_{s \times 1}$ represent a collection of $s$ sign restrictions (including the sign normalisation $\mathrm{diag}(A_0) \geq \mathbf{0}_{n \times 1}$). Similarly, represent a collection of $f$ zero restrictions by $F(\phi,Q) = \mathbf{0}_{f \times 1}$. For example, these sign and zero restrictions could include restrictions on impulse responses or elements of $A_0$.[7]
Let $f_i$ represent the number of zero restrictions constraining the $i$th column of $Q$, with $\sum_{i=1}^{n} f_i = f$. Assume that the variables are ordered such that $f_i$ is weakly decreasing and that $f_i \leq n - i$ for $i = 1,\ldots,n$ with strict inequality for at least one $i$; this is a sufficient condition for the model to be set identified under zero restrictions (Rubio-Ramírez et al 2010; Bacchiocchi and Kitagawa 2021). This ordering convention is also useful when using numerical algorithms to iteratively construct columns of $Q$ satisfying the identifying restrictions (as in Giacomini and Kitagawa (2021)).
Given a collection of sign and zero restrictions, the identified set for $Q$ is

$$\mathcal{Q}(\phi | S, F) = \left\{ Q \in \mathcal{O}(n) : S(\phi,Q) \geq \mathbf{0}_{s \times 1},\; F(\phi,Q) = \mathbf{0}_{f \times 1} \right\}.$$

$\mathcal{Q}(\phi | S, F)$ collects observationally equivalent parameter values, which are parameter values corresponding to the same value of the likelihood function (Rothenberg 1971). Note that the identified set may be empty. The identified set for a particular impulse response is the set of values of that impulse response as $Q$ varies over its identified set; that is,

$$IS_{\eta}(\phi | S, F) = \left\{ \eta_{i,j,h}(\phi,Q) : Q \in \mathcal{Q}(\phi | S, F) \right\}.$$
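For intuition, the identified set for a scalar impulse response under sign restrictions can be approximated numerically by drawing $Q$ uniformly over $\mathcal{O}(n)$, discarding draws that violate the restrictions, and recording the range of the impulse response over accepted draws. The following is a rough sketch, not the paper's algorithm; the particular restrictions and toy numbers are mine:

```python
import numpy as np

def draw_q(n, rng):
    """Uniform (Haar) draw from O(n) via the QR decomposition."""
    Z = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

def identified_set_bounds(eta, satisfies, n, draws, rng):
    """Approximate [min, max] of eta(Q) over Q satisfying the restrictions,
    holding the reduced-form parameters fixed. Returns None when no draw
    is accepted (the identified set may be empty)."""
    vals = [eta(Q) for Q in (draw_q(n, rng) for _ in range(draws)) if satisfies(Q)]
    return (min(vals), max(vals)) if vals else None

# Toy bivariate example with Sigma_tr fixed: restrict the impact responses
# of both variables to the first shock to be non-negative.
Sigma_tr = np.array([[1.0, 0.0], [0.5, 1.0]])
eta = lambda Q: (Sigma_tr @ Q)[0, 0]              # impact response of variable 1
ok = lambda Q: bool(np.all((Sigma_tr @ Q)[:, 0] >= 0))  # sign restrictions
lo, hi = identified_set_bounds(eta, ok, 2, 5000, np.random.default_rng(1))
```

In this toy example the impact response of the first variable is the (1,1) element of $\Sigma_{tr}Q$, which under the restrictions can range over $[0, 1]$, so `(lo, hi)` should be close to those bounds.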
2.3 Robust Bayesian inference in set-identified SVARs
The standard approach to conducting Bayesian inference in set-identified SVARs involves specifying a prior for the reduced-form parameters $\phi$ and a uniform prior for the orthonormal matrix $Q$ (Uhlig 2005; Rubio-Ramírez et al 2010; Arias et al 2018). To draw from the resulting posterior in practice, one samples $\phi$ from its posterior and $Q$ from a uniform distribution over $\mathcal{O}(n)$, and discards draws that violate the sign restrictions. Assume there is a scalar parameter of interest $\eta = \eta(\phi,Q)$ that is a function of the structural parameters (e.g. a particular impulse response). Draws of $\eta$ are obtained by transforming the draws of $\phi$ and $Q$, and the posterior is summarised using quantities such as the posterior mean and quantiles.
Let $\pi_{\phi}$ be a prior for $\phi \in \Phi$, where $\Phi$ is the space of reduced-form parameters such that $\mathcal{Q}(\phi | S, F)$ is non-empty. A joint prior for the full set of parameters $(\phi, Q)$ can be decomposed as $\pi_{\phi,Q} = \pi_{Q|\phi}\,\pi_{\phi}$, where $\pi_{Q|\phi}$ is the conditional prior for $Q$ given $\phi$ (which assigns zero prior density outside of $\mathcal{Q}(\phi | S, F)$). After observing the data $Y$, the posterior is $\pi_{\phi,Q|Y} = \pi_{Q|\phi}\,\pi_{\phi|Y}$, where $\pi_{\phi|Y}$ is the posterior for $\phi$. The prior for $\phi$ is therefore updated via the likelihood, whereas the conditional prior for $Q$ given $\phi$ is not, because $Q$ does not appear in the likelihood. This raises the concern that posterior inferences may be sensitive to changes in $\pi_{Q|\phi}$. It is therefore important for researchers to assess or eliminate this sensitivity.[8]
To this end, I adopt the 'robust' (multiple-prior) Bayesian approach to inference in set-identified models proposed by Giacomini and Kitagawa (2021). In the context of a SVAR, this approach eliminates the source of posterior sensitivity arising from the fact that $\pi_{Q|\phi}$ is never updated. The key feature of the approach is that it replaces $\pi_{Q|\phi}$ with the class of all conditional priors that are consistent with the identifying restrictions:

$$\Pi_{Q|\phi} = \left\{ \pi_{Q|\phi} : \pi_{Q|\phi}\left( \mathcal{Q}(\phi | S, F) \right) = 1 \right\}.$$

Combining the class of priors $\Pi_{Q|\phi}$ with the posterior $\pi_{\phi|Y}$ generates a class of posteriors for $(\phi, Q)$:

$$\Pi_{\phi,Q|Y} = \left\{ \pi_{\phi,Q|Y} = \pi_{Q|\phi}\,\pi_{\phi|Y} : \pi_{Q|\phi} \in \Pi_{Q|\phi} \right\}.$$
The class of posteriors for $(\phi, Q)$ induces a class of posteriors for $\eta$, denoted $\Pi_{\eta|Y}$. Giacomini and Kitagawa (2021) suggest summarising $\Pi_{\eta|Y}$ by reporting the 'set of posterior means', which is an interval that contains all posterior means corresponding to the posteriors in $\Pi_{\eta|Y}$:

$$\left[ \mathrm{E}_{\phi|Y}\left[ \ell(\phi) \right],\; \mathrm{E}_{\phi|Y}\left[ u(\phi) \right] \right],$$

where $\ell(\phi) = \inf\left\{ \eta(\phi,Q) : Q \in \mathcal{Q}(\phi | S, F) \right\}$ is the lower bound of the identified set for $\eta$ and $u(\phi) = \sup\left\{ \eta(\phi,Q) : Q \in \mathcal{Q}(\phi | S, F) \right\}$ is the upper bound. Similarly, one can construct a 'set of posterior $\tau$-quantiles' as an interval with end points equal to the $\tau$th quantiles of $\ell(\phi)$ and $u(\phi)$. Giacomini and Kitagawa (2021) also suggest reporting a robust credible region, which is an interval estimate for $\eta$ that is assigned at least a given posterior probability under all posteriors in $\Pi_{\eta|Y}$. Additionally, the class of posteriors generates a set of posterior probabilities assigned to any given hypothesis (e.g. the output response to a monetary policy shock is negative at some horizon); this set can be summarised by the posterior lower and upper probabilities, which are, respectively, the smallest and largest posterior probabilities assigned to the hypothesis over all posteriors in $\Pi_{\eta|Y}$. Appendix C describes how I compute these quantities in the context of the empirical application in Section 5.
Footnotes
For example, when estimating the effects of news shocks, it may be natural to normalise a longer-horizon impulse response (Stock and Watson 2018). [6]
See Stock and Watson (2016) or Kilian and Lütkepohl (2017) for overviews of identification in SVARs. See Giacomini and Kitagawa (2021) for more information about the form of and under different types of restrictions. [7]
Inoue and Kilian (2022) and Kilian (2022) argue that posterior sensitivity to the choice of prior is typically not quantitatively important in SVAR applications. However, the evidence that they cite is based on comparing prior and posterior distributions. As discussed in Poirier (1998) and Giacomini, Kitagawa and Read (2021b, 2022a), this comparison is not informative about posterior sensitivity when models are set identified; instead, the relevant measure of posterior sensitivity is the extent to which the posterior changes when the unrevisable component of the prior changes. While the standard Bayesian approach to inference assumes a uniform prior for Q on the basis that this is ‘uninformative’, Baumeister and Hamilton (2015) show that the implicit prior over individual impulse responses is not necessarily uniform (i.e. it may be informative). Arias, Rubio-Ramírez and Waggoner (2022) show that the uniform prior for Q implies a conditional (joint) prior over the vector of impulse responses that is uniform. Giacomini et al (2022a) argue that a uniform prior does not necessarily reflect the absence of prior information about the parameters, which may be better represented by ‘ambiguity’. [8]