
on Econometrics 
By:  Zhongjun Qu (Boston University) 
Abstract:  This paper builds upon the composite likelihood concept of Lindsay (1988) to develop a framework for parameter identification, estimation, inference and forecasting in DSGE models allowing for stochastic singularity. The framework consists of the following four components. First, it provides a necessary and sufficient condition for parameter identification, where the identifying information is provided by the first- and second-order properties of the nonsingular submodels. Second, it provides an MCMC-based procedure for parameter estimation. Third, it delivers confidence sets for the structural parameters and the impulse responses that allow for model misspecification. Fourth, it generates forecasts for all the observed endogenous variables, irrespective of the number of shocks in the model. The framework encompasses the conventional likelihood analysis as a special case when the model is nonsingular. Importantly, it enables the researcher to start with a basic model and then gradually incorporate more shocks and other features, while confronting all the models with the data to assess their implications. The methodology is illustrated using both small- and medium-scale DSGE models, with the number of shocks ranging from one to seven. 
Keywords:  business cycle, dynamic stochastic general equilibrium models, identification, impulse response, MCMC, stochastic singularity 
JEL:  C13 C32 C51 E1 
Date:  2015–06 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015003&r=ecm 
By:  Zhongjun Qu (Boston University); Denis Tkachenko (National University of Singapore) 
Abstract:  This paper presents a framework for analyzing global identification in log-linearized DSGE models that encompasses both determinacy and indeterminacy. First, it considers a frequency domain expression for the Kullback-Leibler distance between two DSGE models, and shows that global identification fails if and only if the minimized distance equals zero. This result has three features. (1) It can be applied across DSGE models with different structures. (2) It permits checking whether a subset of frequencies can deliver identification. (3) It delivers parameter values that yield observational equivalence if there is identification failure. Next, the paper proposes a measure for the empirical closeness between two DSGE models for a further understanding of the strength of identification. The measure gauges the feasibility of distinguishing one model from another based on a finite number of observations generated by the two models. It is shown to be equal to the highest possible power in a Gaussian model under a local asymptotic framework. The above theory is illustrated using two small-scale and one medium-scale DSGE models. The results document that certain parameters can be identified under indeterminacy but not determinacy, that different monetary policy rules can be (nearly) observationally equivalent, and that identification properties can differ substantially between small- and medium-scale models. For implementation, two procedures are developed and made available, both of which can be used to obtain and thus to cross-validate the findings reported in the empirical applications. Although the paper focuses on DSGE models, the results are also applicable to other vector linear processes with well-defined spectra, such as the (factor-augmented) vector autoregression. 
Keywords:  Dynamic stochastic general equilibrium models, frequency domain, global identification, multiple equilibria, spectral density 
JEL:  C10 C30 C52 E1 E3 
Date:  2015–08 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015002&r=ecm 
By:  Majid M. AlSadoon 
Abstract:  The methodology of multivariate Granger noncausality testing at various horizons is extended to allow for inference on the directions of causality, which correspond to particular subspaces. This paper presents empirical manifestations of these subspaces and provides useful interpretations for them. It then proposes methods for estimating these subspaces and finding their dimensions using simple vector autoregression modelling that is easy to implement. The methodology is illustrated by an application to empirical monetary policy. 
Keywords:  Granger causality, VAR model, rank testing, Okun's law, policy tradeoffs 
JEL:  C12 C13 C15 C32 C53 E3 E4 E52 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:bge:wpaper:850&r=ecm 
By:  Zhongjun Qu (Boston University); Fan Zhuo (Boston University) 
Abstract:  Markov regime-switching models are widely considered in economics and finance. Although there has been persistent interest (see e.g., Hansen, 1992, Garcia, 1998, and Cho and White, 2007), the asymptotic distributions of likelihood-ratio-based tests have remained unknown. This paper considers such tests and establishes their asymptotic distributions in the context of nonlinear models allowing for multiple switching parameters. The analysis simultaneously addresses three difficulties: (i) some nuisance parameters are unidentified under the null hypothesis, (ii) the null hypothesis yields a local optimum, and (iii) conditional regime probabilities follow stochastic processes that can only be represented recursively. Addressing these issues permits substantial power gains in empirically relevant situations. Besides obtaining the tests' asymptotic distributions, this paper also obtains four sets of results that can be of independent interest: (1) a characterization of conditional regime probabilities and their high-order derivatives with respect to the model's parameters, (2) a high-order approximation to the log likelihood ratio permitting multiple switching parameters, (3) a refinement to the asymptotic distribution, and (4) a unified algorithm for simulating the critical values. For models that are linear under the null hypothesis, the elements needed for the algorithm can all be computed analytically. The above results also shed light on why some bootstrap procedures can be inconsistent and why standard information criteria, such as the Bayesian information criterion (BIC), can be sensitive to the hypothesis and the model's structure. When applied to the US quarterly real GDP growth rates, the methods suggest fairly strong evidence favoring the regime-switching specification, which holds consistently over a range of sample periods. 
Keywords:  Hypothesis testing, likelihood ratio, Markov switching, nonlinearity 
JEL:  C12 C22 E32 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2015004&r=ecm 
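The recursively represented conditional regime probabilities mentioned in the abstract are the output of the Hamilton filter. A minimal two-regime sketch follows; the model (switching mean and variance), parameter names, and the flat initialization are illustrative choices, not taken from the paper:

```python
import math

def hamilton_filter(y, mu, sigma, P):
    """Filtered regime probabilities for a two-state Markov-switching
    model y_t ~ N(mu[s_t], sigma[s_t]^2), with transition matrix
    P[i][j] = Pr(s_t = j | s_{t-1} = i).  Returns the filtered
    probabilities of regime 1 and the log-likelihood."""
    def phi(x, m, s):  # normal density
        return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    pr = [0.5, 0.5]    # flat initial distribution (illustrative choice)
    loglik, filtered = 0.0, []
    for x in y:
        # one-step-ahead prediction of the regime distribution
        pred = [sum(pr[i] * P[i][j] for i in range(2)) for j in range(2)]
        # joint density of observation and regime, then Bayes update
        joint = [pred[j] * phi(x, mu[j], sigma[j]) for j in range(2)]
        dens = sum(joint)
        loglik += math.log(dens)
        pr = [joint[j] / dens for j in range(2)]
        filtered.append(pr[1])
    return filtered, loglik
```

The recursion is the reason the likelihood has no closed form under switching: each filtered probability depends on the entire history of the data.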
By:  Joshua C.C. Chan 
Abstract:  We introduce a class of large Bayesian vector autoregressions (BVARs) that allows for non-Gaussian, heteroscedastic and serially dependent innovations. To make estimation computationally tractable, we exploit a certain Kronecker structure of the likelihood implied by this class of models. We propose a unified approach for estimating these models using Markov chain Monte Carlo (MCMC) methods. In an application that involves 20 macroeconomic variables, we find that these BVARs with more flexible covariance structures outperform the standard variant with independent, homoscedastic Gaussian innovations in both in-sample model fit and out-of-sample forecast performance. 
Keywords:  stochastic volatility, non-Gaussian, ARMA, forecasting 
JEL:  C11 C51 C53 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201541&r=ecm 
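The computational tractability gained from a Kronecker-structured likelihood rests on the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets one operate with the small factor matrices instead of forming the full covariance matrix. A small numerical check (the matrix names and sizes below are illustrative, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 5                                  # illustrative dimensions
A = rng.standard_normal((n, n))              # e.g. a cross-equation factor
B = rng.standard_normal((T, T))              # e.g. a serial-dependence factor
X = rng.standard_normal((T, n))              # T x n data-shaped matrix

# Naive route: form the nT x nT Kronecker product explicitly.
big = np.kron(A, B) @ X.flatten(order="F")   # column-major vec

# Structured route: never materialize the big matrix.
small = (B @ X @ A.T).flatten(order="F")

assert np.allclose(big, small)
```

The structured route costs O(T²n + Tn²) flops versus O(T²n²) for the naive multiplication, which is what makes systems with 20 variables feasible.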
By:  Samantha Leorato (University of Rome Tor Vergata); Franco Peracchi (Georgetown University, University of Rome Tor Vergata and EIEF) 
Abstract:  We study the sampling properties of two alternative approaches to estimating the conditional distribution of a continuous outcome Y given a vector X of regressors. One approach – distribution regression – is based on direct estimation of the conditional distribution function; the other approach – quantile regression – is instead based on direct estimation of the conditional quantile function. Indirect estimates of the conditional quantile function and the conditional distribution function may then be obtained by inverting the direct estimates obtained from either approach or, to guarantee monotonicity, their rearranged versions. We provide a systematic comparison of the asymptotic and finite-sample performance of monotonic estimators obtained from the two approaches, considering both cases when the underlying linear-in-parameters models are correctly specified and several types of model misspecification of considerable practical relevance. 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:eie:wpaper:1511&r=ecm 
By:  Tsubasa Ito (Graduate School of Economics, The University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo) 
Abstract:  For estimation of a large precision matrix, this paper suggests a new shrinkage estimator, called the linear ridge estimator. This estimator is motivated from a Bayesian perspective by a spike-and-slab prior distribution for the precision matrix, and takes the form of a convex combination of the ridge estimator and a scalar multiple of the identity matrix. The optimal parameters in the linear ridge estimator are derived by minimizing a Frobenius loss function and estimated in closed form based on random matrix theory. Finally, the performance of the linear ridge estimator is numerically investigated and compared with some existing estimators. 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:tky:fseres:2015cf995&r=ecm 
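The estimator's form, a convex combination of a ridge-type precision estimate and a scaled identity, can be sketched as follows. The weights `lam`, `alpha` and `nu` here are arbitrary placeholders; the paper derives their optimal values from a Frobenius loss and random matrix theory:

```python
import numpy as np

def linear_ridge_precision(X, lam=0.1, alpha=0.5, nu=1.0):
    """Sketch of a 'linear ridge' precision estimator: a convex
    combination of a ridge-regularized inverse covariance and a scalar
    multiple of the identity.  lam, alpha, nu are illustrative
    placeholders, not the paper's optimal closed-form values."""
    n, p = X.shape
    S = X.T @ X / n                            # sample covariance
    ridge = np.linalg.inv(S + lam * np.eye(p)) # ridge-type precision estimate
    return alpha * ridge + (1 - alpha) * nu * np.eye(p)
```

Both components are symmetric positive definite, so the combination is a valid precision matrix for any weights in (0, 1).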
By:  Polanski, Arnold (University of East Anglia); Stoja, Evarist (University of Bristol) 
Abstract:  Tail interdependence is defined as the situation where extreme outcomes for some variables are informative about such outcomes for other variables. We extend the concept of multi-information to quantify tail interdependence at different levels of extremity, decompose it into systemic and residual parts, and measure the contribution of a constituent to the interdependence of a system. Further, we devise statistical procedures to test: a) tail independence; b) whether an empirical interdependence structure is generated by a theoretical model; and c) symmetry of the interdependence structure in the tails. The application of this approach to multidimensional financial data confirms some known and uncovers new stylized facts on extreme returns. 
Keywords:  Coexceedance; Kullback-Leibler divergence; multi-information; relative entropy; risk contribution; risk interdependence. 
JEL:  C12 C14 C52 
Date:  2015–11–06 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0563&r=ecm 
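One ingredient of this approach, the multi-information of binary tail-exceedance indicators, can be computed with a simple plug-in estimator; the decomposition into systemic and residual parts and the formal tests are not reproduced here, and the quantile level `q` is an illustrative choice:

```python
import numpy as np
from collections import Counter

def tail_multiinformation(data, q=0.1):
    """Plug-in multi-information of binary tail indicators:
    I = sum_i H(B_i) - H(B_1, ..., B_d), where B_i flags whether
    variable i falls below its empirical q-quantile.  I >= 0, and
    I = 0 iff the empirical indicators are independent."""
    thresh = np.quantile(data, q, axis=0)
    B = (data <= thresh).astype(int)           # 1 = tail coexceedance event
    n, d = B.shape
    def H(counts):                             # entropy from raw counts
        p = np.array(list(counts)) / n
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    marginal = sum(H(Counter(B[:, j]).values()) for j in range(d))
    joint = H(Counter(map(tuple, B)).values())
    return marginal - joint
```

For perfectly dependent tails the measure equals the entropy of a single indicator; for independent data it is close to zero.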
By:  Aue, Alexander; Horvath, Lajos; Pellatt, Daniel 
Abstract:  Heteroskedasticity is a common feature of financial time series and is typically addressed in the model building process through the use of ARCH and GARCH processes. More recently, multivariate variants of these processes have been the focus of research, with attention given to methods seeking an efficient and economical estimation of a large number of model parameters. Due to the need to estimate many parameters, however, these models may not be suitable for modeling the now prevalent high-frequency volatility data. One potentially useful way to bypass these issues is to take a functional approach. In this paper, theory is developed for a new functional version of the generalized autoregressive conditionally heteroskedastic process, termed fGARCH. The main results are concerned with the structure of the fGARCH(1,1) process, providing criteria for the existence of strictly stationary solutions both in the space of square-integrable functions and in the space of continuous functions. An estimation procedure is introduced and its consistency verified. A small empirical study highlights potential applications to intraday volatility estimation. 
Keywords:  Econometrics; Financial time series; Functional data; GARCH processes; Stationary solutions 
JEL:  C1 C13 C4 
Date:  2015–08–20 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67702&r=ecm 
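The fGARCH(1,1) process generalizes the scalar GARCH(1,1) recursion, σ²ₜ = ω + α ε²ₜ₋₁ + β σ²ₜ₋₁, to curve-valued data. The scalar special case is sketched below with illustrative parameter values; the initialization at the unconditional variance requires α + β < 1, the scalar analogue of the stationarity criteria discussed in the paper:

```python
def garch11_variances(eps, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional variances of a scalar GARCH(1,1) given a shock path
    eps.  Initialized at the unconditional variance
    omega / (1 - alpha - beta), which exists when alpha + beta < 1.
    Parameter values are illustrative placeholders."""
    sig2 = [omega / (1 - alpha - beta)]
    for e in eps[:-1]:
        # core recursion: today's variance from yesterday's shock and variance
        sig2.append(omega + alpha * e ** 2 + beta * sig2[-1])
    return sig2
```

In the functional version, ω becomes a curve and α, β become integral operators acting on squared shock and variance curves.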
By:  Joshua C.C. Chan 
Abstract:  We propose an easy technique to test for time-variation in coefficients and volatilities. Specifically, by using a non-centered parameterization for state space models, we develop a method to directly calculate the relevant Bayes factor using the Savage-Dickey density ratio—thus avoiding the computation of the marginal likelihood altogether. The proposed methodology is illustrated via two empirical applications. In the first application we test for time-variation in the volatility of inflation in the G7 countries. The second application investigates whether there is substantial time-variation in the NAIRU in the US. 
Keywords:  Bayesian model comparison, state space, inflation uncertainty, NAIRU 
JEL:  C11 C32 E31 E52 
Date:  2015–11 
URL:  http://d.repec.org/n?u=RePEc:een:camaaa:201542&r=ecm 
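The Savage-Dickey density ratio evaluates the Bayes factor for a point restriction as the ratio of the posterior to the prior density at the restricted value. In a conjugate Gaussian toy model (not the paper's state space setting) the identity can be verified against the marginal-likelihood route it is designed to avoid:

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def savage_dickey_bf(y, tau2):
    """Bayes factor for H0: theta = 0 in the toy model y_i ~ N(theta, 1),
    theta ~ N(0, tau2), via the Savage-Dickey ratio: posterior density
    at 0 divided by prior density at 0."""
    n, ybar = len(y), sum(y) / len(y)
    v = 1.0 / (n + 1.0 / tau2)       # posterior variance
    m = v * n * ybar                 # posterior mean
    return normal_pdf(0.0, m, v) / normal_pdf(0.0, 0.0, tau2)

def marginal_likelihood_bf(y, tau2):
    """The same Bayes factor via marginal likelihoods of the sufficient
    statistic ybar: N(ybar; 0, 1/n) under H0 vs N(ybar; 0, tau2 + 1/n)."""
    n, ybar = len(y), sum(y) / len(y)
    return normal_pdf(ybar, 0.0, 1.0 / n) / normal_pdf(ybar, 0.0, tau2 + 1.0 / n)
```

The point of the paper's construction is that the first route needs only the posterior draws near the restriction, whereas the second requires a marginal likelihood that is hard to compute in state space models.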
By:  Saruta Benjanuvatra; Peter Burridge 
Abstract:  We investigate QML estimation of a parametric form for the spatial weight matrix, W, appearing in the mixed regressive, spatial autoregressive (MRSAR) model and extend the identifiability, consistency, and asymptotic normality results given by Lee (2004, 2007) to the case when W depends on an unknown parameter that is to be estimated from a single cross-section. Numerical experiments illustrate that the QML estimator works quite well in moderate-sized samples, yielding well-behaved parameter estimates and t-statistics with approximately correct size in most cases. These findings should open the door to a much more flexible approach to the construction of spatial regression models. Finally, the QML estimator using two types of submodels for the spatial weights is applied to the cross-sectional dataset used in Ertur and Koch (2007), to illustrate the utility of the approach. 
Keywords:  Spatial autoregressive model, estimated spatial weight matrix, quasi-maximum likelihood estimator, growth spillovers. 
JEL:  C13 C15 C21 R15 
Date:  2015–09 
URL:  http://d.repec.org/n?u=RePEc:yor:yorken:15/24&r=ecm 
By:  Ferman, Bruno; Pinto, Cristine 
Abstract:  Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, inference in DID models when there are few treated groups is still an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and residuals are heteroskedastic. In particular, when there is variation in the number of observations per group, we show that inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) dataset to show that this problem is relevant even in datasets with a large number of observations per group. We then derive alternative inference methods that provide accurate hypothesis testing with few treated groups and many control groups in the presence of heteroskedasticity (including the case of only one treated group). The main assumption is that we know how the heteroskedasticity is generated, which is the case when it arises from variation in the number of observations per group. Finally, we also show that an inference method for the Synthetic Control Estimator proposed by Abadie et al. (2010) can correct for the heteroskedasticity problem, and we derive conditions under which this inference method provides accurate hypothesis testing. 
Keywords:  differences-in-differences; inference; heteroskedasticity; clustering; few clusters; bootstrap 
JEL:  C12 C21 C33 
Date:  2015–11–06 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67665&r=ecm 
By:  Peter Farkas; Laszlo Matyas 
Abstract:  This paper introduces a nonparametric, nonasymptotic method for statistical testing based on boundary crossing events. The method is presented by showing its use for unit root testing. Two versions of the test are discussed. The first is designed for time series data as well as for cross-sectionally independent panel data. The second also takes cross-sectional dependence into account. Through Monte Carlo studies we show that the proposed tests are more powerful than existing unit root tests when the error term has a t-distribution and the sample size is small. The paper also discusses two empirical applications. The first one analyzes the possibility of mean reversion in the excess returns for the S&P500. Here, the unobserved mean is identified using Shiller’s CAPE ratio. Our test supports mean reversion, which can be interpreted as evidence against the strong efficient market hypothesis. The second application cannot confirm the PPP hypothesis in exchange-rate data of OECD countries. 
Date:  2015–11–03 
URL:  http://d.repec.org/n?u=RePEc:ceu:econwp:2015_5&r=ecm 
By:  Vékás, Péter 
Abstract:  Conditional Value-at-Risk (equivalent to the Expected Shortfall, Tail Value-at-Risk and Tail Conditional Expectation in the case of continuous probability distributions) is an increasingly popular risk measure in the fields of actuarial science, banking and finance, and arguably a more suitable alternative to the currently widespread Value-at-Risk. In my paper, I present a brief literature survey and propose a statistical test of the location of the CVaR, which practising actuaries may apply to test whether CVaR-based capital levels are in line with observed data. Finally, I conclude with numerical experiments and some questions for future research. 
Keywords:  risk measures, Conditional Value-at-Risk, hypothesis testing, actuarial science 
JEL:  C01 
Date:  2015–10–21 
URL:  http://d.repec.org/n?u=RePEc:cvh:coecwp:2015/19&r=ecm 
By:  Cimadomo, Jacopo; D'Agostino, Antonello 
Abstract:  In this paper, we propose a time-varying parameter VAR model with stochastic volatility which allows for estimation on data sampled at different frequencies. Our contribution is twofold. First, we extend the methodology developed by Cogley and Sargent (2005), and Primiceri (2005), to a mixed-frequency setting. In particular, our approach allows for the inclusion of two different categories of variables (high-frequency and low-frequency) in the same time-varying model. Second, we use this model to study the macroeconomic effects of government spending shocks in Italy over the 1988Q4–2013Q3 period. Italy, like most other euro area economies, is characterised by short quarterly time series for fiscal variables, whereas annual data are generally available for a longer sample before 1999. Our results show that the proposed time-varying mixed-frequency model improves on the performance of a simple linear interpolation model in generating the true path of the missing observations. Moreover, our empirical analysis suggests that government spending shocks tend to have positive effects on output in Italy. The fiscal multiplier, which is maximized at the one-year horizon, follows a U-shape over the sample considered: it peaks at around 1.5 at the beginning of the sample, then stabilizes between 0.8 and 0.9 from the mid-1990s to the late 2000s, before rising again to above unity during the recent crisis. 
Keywords:  government spending multiplier, mixed-frequency data, time variation 
JEL:  C32 E62 H30 H50 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:ecb:ecbwps:20151856&r=ecm 
By:  Mare, Davide Salvatore; Moreira, Fernando; Rossi, Roberto 
Abstract:  In this work we develop advanced techniques for measuring bank insolvency risk. More specifically, we contribute to the existing body of research on the Z-Score. We develop bias reduction strategies for state-of-the-art Z-Score measures in the literature. We introduce novel estimators whose aim is to effectively capture nonstationary returns; for these estimators, as well as for existing ones in the literature, we discuss analytical confidence regions. We exploit moment-based error measures to assess the effectiveness of these estimators. We carry out an extensive empirical study that contrasts state-of-the-art estimators with our novel ones on over ten thousand banks. Finally, we contrast results obtained by using Z-Score estimators against business news on the banking sector obtained from Factiva. Our work has important implications for researchers and practitioners. First, accounting for the degree of nonstationarity in returns yields a more accurate quantification of the degree of solvency. Second, our measure allows researchers to factor in the degree of uncertainty in the estimation due to the availability of data while estimating the overall risk of bank insolvency. 
Keywords:  bank stability; prudential regulation; insolvency risk; financial distress; Z-Score 
JEL:  C20 C60 G21 
Date:  2015–11–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:67840&r=ecm 
By:  Nobuhiko Terui; Shohei Hasegawa; Greg M. Allenby 
Abstract:  We develop a structural model of horizontal and temporal variety seeking using a dynamic factor model that relates attribute satiation to brand preferences. The factor model employs a threshold specification that triggers preference changes when customer satiation exceeds an admissible level but leaves preferences unchanged otherwise. The factor model can be applied to the high-dimensional switching data often encountered when multiple brands are purchased across multiple time periods. The model is applied to two panel datasets, an experimental field study and a traditional scanner panel dataset, where we find large improvements in model fit that reflect distinct shifts in consumer preferences over time. The model can identify the product attributes responsible for satiation, and can be used to produce a dynamic joint space map that displays brand positions and temporal changes in consumer preferences. 
Date:  2015–10 
URL:  http://d.repec.org/n?u=RePEc:toh:tmarga:122&r=ecm 