RDP 2023-07: Identification and Inference under Narrative Restrictions

Appendix B: Proofs
October 2023
Proof of Proposition 4.1. The unconditional likelihood $p\left(Y^T, D_N = 1 \mid \phi, Q\right)$, where $D_N = 1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\}$ indicates that the NR hold, can be written as

$$p\left(Y^T, D_N = 1 \mid \phi, Q\right) = f\left(Y^T \mid \phi\right) 1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\}.$$

The likelihood for the reduced-form parameters, $f\left(Y^T \mid \phi\right)$, point identifies $\phi$, so $p\left(Y^T, D_N = 1 \mid \phi, Q\right) = p\left(Y^T, D_N = 1 \mid \tilde{\phi}, \tilde{Q}\right)$ for almost every $Y^T$ holds only at $\tilde{\phi} = \phi$. Hence, we set $\tilde{\phi} = \phi$ and consider $\tilde{Q}$ satisfying

$$f\left(Y^T \mid \phi\right) 1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\} = f\left(Y^T \mid \phi\right) 1\left\{ N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \right\},$$

which holds if and only if $1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\} = 1\left\{ N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \right\}$ holds almost surely under $f\left(Y^T \mid \phi\right)$. In terms of the reduced-form residuals entering the NR, the latter condition is equivalent to the two indicators agreeing up to a null set under the distribution of $\left(u_{t_1}, \ldots, u_{t_K}\right)$ implied by $\phi$. Hence, the set defined in the proposition collects observationally equivalent values of $Q$ at $\phi$ in terms of the unconditional likelihood.
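To make the equivalence condition concrete, consider a minimal illustration (our example, with $\Sigma_{tr}$ the lower-triangular Cholesky factor of $\Sigma$ and $q_1$ the first column of $Q$, rather than notation restated from the proposition). With a single shock-sign restriction $\varepsilon_{1,k} = q_1' \Sigma_{tr}^{-1} u_k \geq 0$ in period $k$, $Q$ and $\tilde{Q}$ are observationally equivalent at $\phi$ if and only if

$$1\left\{ q_1' \Sigma_{tr}^{-1} u_k \geq 0 \right\} = 1\left\{ \tilde{q}_1' \Sigma_{tr}^{-1} u_k \geq 0 \right\} \quad \text{almost surely under } f\left(Y^T \mid \phi\right),$$

that is, if and only if the half-spaces $\left\{ u : q_1' \Sigma_{tr}^{-1} u \geq 0 \right\}$ and $\left\{ u : \tilde{q}_1' \Sigma_{tr}^{-1} u \geq 0 \right\}$ coincide up to a null set. When $u_k$ has full support, this requires $\tilde{q}_1 = q_1$, while the remaining columns of $\tilde{Q}$ are left unrestricted by the NR.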
Next, for the case of the conditional likelihood, consider

$$\int 1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\} 1\left\{ N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \right\} f\left(Y^T \mid \phi\right) dY^T \leq \left[ \Pr\left( N\left(\phi, Q; Y^T\right) \geq 0 \mid \phi, Q \right) \Pr\left( N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \mid \phi, \tilde{Q} \right) \right]^{1/2},$$

where the inequality follows from the Cauchy–Schwarz inequality. The inequality is satisfied with equality if and only if $1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\} = 1\left\{ N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \right\}$ holds almost surely under $f\left(Y^T \mid \phi\right)$. Hence, by repeating the argument for the unconditional likelihood case, we conclude that the set defined in the proposition consists of observationally equivalent values of $Q$ at $\phi$ in terms of the conditional likelihood.
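The equality case in the Cauchy–Schwarz step can be unpacked as follows (a standard argument, spelled out here; that the NR hold with positive probability is an assumption of the setting). Equality holds if and only if the two indicators are linearly dependent as elements of $L^2\left( f\left(Y^T \mid \phi\right) \right)$, that is,

$$1\left\{ N\left(\phi, Q; Y^T\right) \geq 0 \right\} = c \cdot 1\left\{ N\left(\phi, \tilde{Q}; Y^T\right) \geq 0 \right\} \quad \text{almost surely, for some constant } c > 0.$$

Since both functions take only the values 0 and 1, and each NR event has positive probability under $f\left(Y^T \mid \phi\right)$, it must be that $c = 1$, so the indicators coincide almost surely.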
Proof of Theorem 6.1. Since the true value $\left(\phi_0, Q_0\right)$ satisfies the imposed NR and the other sign restrictions (if any are imposed), $\eta_0 \in CIS_\eta\left(\phi_0 \mid Y^T\right)$ holds for any $Y^T$. Hence, for all $T$,

$$\Pr\left( \eta_0 \in \hat{C}_T \right) \geq \Pr\left( CIS_\eta\left(\phi_0 \mid Y^T\right) \subseteq \hat{C}_T \right).$$

To prove the claim, it suffices to focus on the asymptotic behaviour of the coverage probability for the conditional identified set shown on the right-hand side.
Under Assumptions 2 and 3, the asymptotically correct coverage for the conditional identified set can be obtained by applying Proposition 2 in GK21.
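To spell out the first inequality: because $\eta_0 \in CIS_\eta\left(\phi_0 \mid Y^T\right)$ for every realisation of $Y^T$, the events satisfy

$$\left\{ CIS_\eta\left(\phi_0 \mid Y^T\right) \subseteq \hat{C}_T \right\} \subseteq \left\{ \eta_0 \in \hat{C}_T \right\},$$

and taking probabilities under the distribution of $Y^T$ given $\left(\phi_0, Q_0\right)$ yields the displayed inequality. Any region $\hat{C}_T$ that asymptotically covers the conditional identified set with probability at least $1 - \alpha$ therefore also covers $\eta_0$ with at least that probability.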
B.1 Primitive Conditions for Assumption 3.
In what follows, we present sufficient conditions for convexity, continuity and differentiability (the latter two in $\phi$) of the conditional impulse response identified set under the assumption that there is a fixed number of shock-sign restrictions constraining the first structural shock only (possibly in multiple periods).[20]
Proposition B.1. (Convexity) Let the parameter of interest be the impulse response $\eta_{i,h} = e_{i,n}' C_h\left(\phi\right) \Sigma_{tr} q_1 \equiv c_{i,h}'\left(\phi\right) q_1$. Assume that there are shock-sign restrictions on $\varepsilon_{1,t}$ for $t = t_1, \ldots, t_K$, so

$$N\left(\phi, Q; Y^T\right) = \left( \left( \Sigma_{tr}^{-1} u_{t_1} \right)' q_1, \ldots, \left( \Sigma_{tr}^{-1} u_{t_K} \right)' q_1 \right)' \geq 0_{K \times 1}. \quad (B1)$$

Then the set of values of $\eta_{i,h}$ satisfying the shock-sign restrictions and sign normalisation is convex for all $i$ and $h$ if there exists a unit-length vector $q \in \mathbb{R}^n$ satisfying

$$\left( \Sigma_{tr}^{-1} u_{t_k} \right)' q > 0 \text{ for } k = 1, \ldots, K, \quad \text{and} \quad \left( \Sigma_{tr}^{-1} e_{1,n} \right)' q \geq 0. \quad (B2)$$
Proof. If there exists a unit-length vector $q$ satisfying the inequalities in Equation (B2), it must lie within the intersection of the $K$ half-spaces defined by the inequalities $\left( \Sigma_{tr}^{-1} u_{t_k} \right)' q \geq 0$ for $k = 1, \ldots, K$, the half-space defined by the sign normalisation, $\left( \Sigma_{tr}^{-1} e_{1,n} \right)' q \geq 0$, and the unit sphere in $\mathbb{R}^n$. The intersection of these $K + 1$ half-spaces and the unit sphere is a path-connected set. Since $\eta_{i,h}$ is a continuous function of $q_1$, the set of values of $\eta_{i,h}$ satisfying the restrictions is an interval and is thus convex, because the image of a path-connected domain under a continuous real-valued function is always an interval.
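A low-dimensional example (ours, not part of the proposition) shows why the existence condition matters. Take $n = 2$, a single restriction with coefficient vector $a = \Sigma_{tr}^{-1} u_{t_1}$ and sign normalisation vector $b = \Sigma_{tr}^{-1} e_{1,2}$. If some unit-length $q$ satisfies $a'q > 0$ and $b'q \geq 0$, then

$$\left\{ q \in S^1 : a'q \geq 0, \ b'q \geq 0 \right\}$$

is an arc of the circle, so its image under the continuous map $q \mapsto c_{i,h}'\left(\phi\right) q$ is an interval. If instead $b = -a$, no such $q$ exists and the feasible set degenerates to the two antipodal points orthogonal to $a$, whose image is generally a pair of distinct values rather than an interval.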
Proposition B.2. (Continuity) Let the parameter of interest and restrictions be as in Proposition B.1, and assume that the conditions in that proposition are satisfied. If there exists a unit-length vector $q$ such that, at $\phi = \phi_0$,

$$\left( \left( \Sigma_{tr}^{-1} u_{t_1} \right)' q, \ldots, \left( \Sigma_{tr}^{-1} u_{t_K} \right)' q, \left( \Sigma_{tr}^{-1} e_{1,n} \right)' q \right)' > 0_{(K+1) \times 1}, \quad (B3)$$

then the lower and upper bounds of the conditional identified set, $\ell\left(\phi, Y^T\right)$ and $u\left(\phi, Y^T\right)$, are continuous at $\phi = \phi_0$ for all $i$ and $h$.[21]
Proof. $Y^T$ enters the NR through the reduced-form VAR innovations, $u_t$. After noting that the reduced-form VAR innovations are (implicitly) continuous in $\phi$, continuity of $\ell\left(\phi, Y^T\right)$ and $u\left(\phi, Y^T\right)$ follows by the same logic as in the proof of Proposition B.2 of Giacomini and Kitagawa (2021b). We omit the details for brevity.
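For concreteness (standard VAR(p) notation, assumed here): for fixed data, the innovations evaluated at $\phi$ are

$$u_t\left(\phi\right) = y_t - c - B_1 y_{t-1} - \cdots - B_p y_{t-p},$$

which is linear, and hence continuous and differentiable, in the VAR coefficients. Since the Cholesky factor $\Sigma_{tr}$ is a smooth function of $\Sigma$ on the set of positive definite matrices, the restriction coefficients $\Sigma_{tr}^{-1} u_{t_k}\left(\phi\right)$ inherit continuity (and, as used in Proposition B.3 below, differentiability) in $\phi$.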
Proposition B.3. (Differentiability) Let the parameter of interest and restrictions be as in Proposition B.1, and assume that the conditions in the proposition are satisfied. Denote the unit sphere in $\mathbb{R}^n$ by $S^{n-1} = \left\{ q \in \mathbb{R}^n : \left\| q \right\| = 1 \right\}$. If, at $\phi = \phi_0$, the set of solutions to the optimisation problem

$$\ell\left(\phi, Y^T\right) = \min_{q \in S^{n-1}} c_{i,h}'\left(\phi\right) q \quad \text{s.t.} \quad \left( \Sigma_{tr}^{-1} u_{t_k} \right)' q \geq 0 \text{ for } k = 1, \ldots, K, \quad \left( \Sigma_{tr}^{-1} e_{1,n} \right)' q \geq 0 \quad (B4)$$

is a singleton, the optimised value is non-zero, and the number of binding inequality restrictions at the optimum is at most $n - 1$, then $\ell\left(\phi, Y^T\right)$ is almost surely differentiable at $\phi = \phi_0$.
Proof. One-to-one differentiable reparameterisation of the optimisation problem in Equation (B4) using $x = \Sigma_{tr} q$ yields the optimisation problem in Equation (2.5) of Gafarov et al (2018), with a set of inequality restrictions that are a function of the data through the reduced-form VAR innovations entering the NR. Noting that $u_t$ is (implicitly) differentiable in $\phi$, differentiability of $\ell\left(\phi, Y^T\right)$ at $\phi = \phi_0$ follows from their Theorem 2 under the assumptions that, at $\phi_0$, the set of solutions to the optimisation problem is a singleton, the optimised value is non-zero, and the number of binding sign restrictions at the optimum is at most $n - 1$. Differentiability of $u\left(\phi, Y^T\right)$ follows similarly. Note that Theorem 2 of Gafarov et al (2018), when applied to the current context, additionally requires that the coefficient vectors of the binding restrictions are linearly independent, but this occurs almost surely under the probability law for $Y^T$.
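As a sketch of the reparameterisation (the algebra is ours, under the notation above): substituting $q = \Sigma_{tr}^{-1} x$ into Equation (B4) and using $\Sigma = \Sigma_{tr} \Sigma_{tr}'$ gives

$$\ell\left(\phi, Y^T\right) = \min_{x \in \mathbb{R}^n} e_{i,n}' C_h\left(\phi\right) x \quad \text{s.t.} \quad x' \Sigma^{-1} x = 1, \quad u_{t_k}' \Sigma^{-1} x \geq 0 \text{ for } k = 1, \ldots, K, \quad e_{1,n}' \Sigma^{-1} x \geq 0,$$

since $\left\| q \right\| = 1 \Leftrightarrow x' \Sigma^{-1} x = 1$ and $\left( \Sigma_{tr}^{-1} u \right)' \Sigma_{tr}^{-1} x = u' \Sigma^{-1} x$. This is the form of constrained program analysed by Gafarov et al (2018), with the data entering through $u_{t_1}, \ldots, u_{t_K}$.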
Footnotes
Giacomini and Kitagawa (2021b) present similar conditions for SVARs identified using traditional sign and/or zero restrictions. It would be straightforward to extend the conditions here to additionally allow for sign and zero restrictions on the first column of $Q$. [20]
For an $m \times 1$ vector $x$, $x > 0_{m \times 1}$ means that $x_i > 0$ for all $i = 1, \ldots, m$. [21]