nep-ecm New Economics Papers
on Econometrics
Issue of 2015‒11‒15
seventeen papers chosen by
Sune Karlsson
Örebro universitet

  1. A Composite Likelihood Framework for Analyzing Singular DSGE Models By Zhongjun Qu
  2. Global Identification in DSGE Models Allowing for Indeterminacy By Zhongjun Qu; Denis Tkachenko
  3. Testing Subspace Granger Causality By Majid M. Al-Sadoon
  4. Likelihood Ratio Based Tests for Markov Regime Switching By Zhongjun Qu; Fan Zhuo
  5. Large Bayesian VARs: A flexible Kronecker error covariance structure By Joshua C.C. Chan
  6. Comparing Distribution and Quantile Regression By Samantha Leorato; Franco Peracchi
  7. "Linear Ridge Estimator of High-Dimensional Precision Matrix Using Random Matrix Theory " By Tsubasa Ito; Tatsuya Kubokawa
  8. Extreme risk interdependence By Polanski, Arnold; Stoja, Evarist
  9. Functional generalized autoregressive conditional heteroskedasticity By Aue, Alexander; Horvath, Lajos; Pellatt, Daniel
  10. Specification tests for time-varying parameter models with stochastic volatility By Joshua C.C. Chan
  11. QML Estimation of the Spatial Weight Matrix in the MR-SAR Model By Saruta Benjanuvatra; Peter Burridge
  12. Inference in Differences-in-Differences with Few Treated Groups and Heteroskedasticity By Ferman, Bruno; Pinto, Cristine
  13. Testing for Unit Roots in Panel Data with Boundary Crossing Counts By Peter Farkas; Laszlo Matyas
  14. An asymptotic test for the Conditional Value-at-Risk By Vékás, Péter
  15. Combining time-variation and mixed-frequencies: an analysis of government spending multipliers in Italy By Cimadomo, Jacopo; D'Agostino, Antonello
  16. Nonstationary Z-score measures By Mare, Davide Salvatore; Moreira, Fernando; Rossi, Roberto
  17. A Threshold Model for Discontinuous Preference Change and Satiation By Nobuhiko Terui; Shohei Hasegawa; Greg M. Allenby

  1. By: Zhongjun Qu (Boston University)
    Abstract: This paper builds upon the composite likelihood concept of Lindsay (1988) to develop a framework for parameter identification, estimation, inference and forecasting in DSGE models allowing for stochastic singularity. The framework consists of the following four components. First, it provides a necessary and sufficient condition for parameter identification, where the identifying information is provided by the first and second order properties of the nonsingular submodels. Second, it provides an MCMC based procedure for parameter estimation. Third, it delivers confidence sets for the structural parameters and the impulse responses that allow for model misspecification. Fourth, it generates forecasts for all the observed endogenous variables, irrespective of the number of shocks in the model. The framework encompasses the conventional likelihood analysis as a special case when the model is nonsingular. Importantly, it enables the researcher to start with a basic model and then gradually incorporate more shocks and other features, meanwhile confronting all the models with the data to assess their implications. The methodology is illustrated using both small and medium scale DSGE models. These models have numbers of shocks ranging between one and seven.
    Keywords: business cycle, dynamic stochastic general equilibrium models, identification, impulse response, MCMC, stochastic singularity
    JEL: C13 C32 C51 E1
    Date: 2015–06
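    The composite likelihood idea can be conveyed in a stylised setting (the Gaussian location model, common mean, and all numbers below are invented for illustration and are not the paper's DSGE implementation): sum low-dimensional marginal log-likelihoods and maximise the sum as if it were a full likelihood.

```python
import numpy as np

def composite_loglik(y, mu, sigma2=1.0):
    """Composite log-likelihood in its simplest form: the sum of
    univariate Gaussian log-likelihoods over all series, ignoring
    their joint (possibly singular) covariance."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma2)
                  - 0.5 * (y - mu) ** 2 / sigma2)

rng = np.random.default_rng(8)
y = rng.normal(2.0, 1.0, size=(500, 3))   # three series sharing a common mean
grid = np.linspace(0.0, 4.0, 81)
cl = [composite_loglik(y, mu) for mu in grid]
mu_hat = grid[int(np.argmax(cl))]         # composite MLE of the common mean
```

Even though the series are treated as if independent, maximising the composite objective recovers the shared parameter.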
  2. By: Zhongjun Qu (Boston University); Denis Tkachenko (National University of Singapore)
    Abstract: This paper presents a framework for analyzing global identification in log linearized DSGE models that encompasses both determinacy and indeterminacy. First, it considers a frequency domain expression for the Kullback-Leibler distance between two DSGE models, and shows that global identification fails if and only if the minimized distance equals zero. This result has three features. (1) It can be applied across DSGE models with different structures. (2) It permits checking whether a subset of frequencies can deliver identification. (3) It delivers parameter values that yield observational equivalence if there is identification failure. Next, the paper proposes a measure for the empirical closeness between two DSGE models for a further understanding of the strength of identification. The measure gauges the feasibility of distinguishing one model from another based on a finite number of observations generated by the two models. It is shown to be equal to the highest possible power in a Gaussian model under a local asymptotic framework. The above theory is illustrated using two small scale and one medium scale DSGE models. The results document that certain parameters can be identified under indeterminacy but not determinacy, that different monetary policy rules can be (nearly) observationally equivalent, and that identification properties can differ substantially between small and medium scale models. For implementation, two procedures are developed and made available, both of which can be used to obtain and thus to cross validate the findings reported in the empirical applications. Although the paper focuses on DSGE models, the results are also applicable to other vector linear processes with well defined spectra, such as the (factor augmented) vector autoregression.
    Keywords: Dynamic stochastic general equilibrium models, frequency domain, global identification, multiple equilibria, spectral density
    JEL: C10 C30 C52 E1 E3
    Date: 2015–08
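    The identification logic rests on a Kullback-Leibler distance being zero if and only if two models are observationally equivalent. A scalar Gaussian sketch conveys this (the closed-form divergence below is the standard textbook formula, not the paper's frequency-domain construction):

```python
import numpy as np

def kl_gaussian(mu0, s0, mu1, s1):
    """KL divergence between N(mu0, s0^2) and N(mu1, s1^2): zero iff
    the two distributions -- here standing in for 'models' -- coincide."""
    return np.log(s1 / s0) + (s0 ** 2 + (mu0 - mu1) ** 2) / (2 * s1 ** 2) - 0.5

same = kl_gaussian(0.0, 1.0, 0.0, 1.0)    # identical models: distance 0
diff = kl_gaussian(0.0, 1.0, 0.5, 1.0)    # distinguishable models: > 0
```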
  3. By: Majid M. Al-Sadoon
    Abstract: The methodology of multivariate Granger non-causality testing at various horizons is extended to allow for inference on its directionality, as captured by the subspaces along which causality occurs. This paper presents empirical manifestations of these subspaces and provides useful interpretations for them. It then proposes methods for estimating these subspaces and finding their dimensions using simple vector autoregression modelling that is easy to implement. The methodology is illustrated by an application to empirical monetary policy.
    Keywords: Granger causality, VAR model, rank testing, Okun's law, policy trade-offs
    JEL: C12 C13 C15 C32 C53 E3 E4 E52
    Date: 2015–11
  4. By: Zhongjun Qu (Boston University); Fan Zhuo (Boston University)
    Abstract: Markov regime switching models are widely considered in economics and finance. Although there has been persistent interest (see, e.g., Hansen, 1992, Garcia, 1998, and Cho and White, 2007), the asymptotic distributions of likelihood ratio based tests have remained unknown. This paper considers such tests and establishes their asymptotic distributions in the context of nonlinear models allowing for multiple switching parameters. The analysis simultaneously addresses three difficulties: (i) some nuisance parameters are unidentified under the null hypothesis, (ii) the null hypothesis yields a local optimum, and (iii) conditional regime probabilities follow stochastic processes that can only be represented recursively. Addressing these issues permits substantial power gains in empirically relevant situations. Besides the tests' asymptotic distributions, this paper also derives four sets of results that can be of independent interest: (1) a characterization of conditional regime probabilities and their high order derivatives with respect to the model's parameters, (2) a high order approximation to the log likelihood ratio permitting multiple switching parameters, (3) a refinement to the asymptotic distribution, and (4) a unified algorithm for simulating the critical values. For models that are linear under the null hypothesis, the elements needed for the algorithm can all be computed analytically. The above results also shed light on why some bootstrap procedures can be inconsistent and why standard information criteria, such as the Bayesian information criterion (BIC), can be sensitive to the hypothesis and the model's structure. When applied to the US quarterly real GDP growth rates, the methods suggest fairly strong evidence favoring the regime switching specification, which holds consistently over a range of sample periods.
    Keywords: Hypothesis testing, likelihood ratio, Markov switching, nonlinearity
    JEL: C12 C22 E32
    Date: 2015–10
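    The recursively defined conditional regime probabilities referred to in the abstract are those produced by the Hamilton filter. A minimal two-regime Gaussian version is sketched below (parameter values and data are invented; this is the standard filter, not the paper's testing machinery):

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Hamilton filter for a two-regime Gaussian switching model.
    mu, sigma hold per-regime parameters; P[i, j] = Pr(S_t=j | S_{t-1}=i).
    Returns the final filtered regime probabilities and the log likelihood,
    both built by the recursion the abstract alludes to."""
    k = len(mu)
    xi = np.full(k, 1.0 / k)              # initial regime probabilities
    loglik = 0.0
    for yt in y:
        dens = np.exp(-0.5 * ((yt - mu) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
        pred = P.T @ xi                   # one-step-ahead regime probabilities
        joint = pred * dens
        lt = joint.sum()                  # likelihood contribution of y_t
        loglik += np.log(lt)
        xi = joint / lt                   # filtered probabilities Pr(S_t | y_1..t)
    return xi, loglik

rng = np.random.default_rng(9)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])
xi, ll = hamilton_filter(y, np.array([0.0, 3.0]), np.array([1.0, 1.0]), P)
```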
  5. By: Joshua C.C. Chan
    Abstract: We introduce a class of large Bayesian vector autoregressions (BVARs) that allows for non-Gaussian, heteroscedastic and serially dependent innovations. To make estimation computationally tractable, we exploit a certain Kronecker structure of the likelihood implied by this class of models. We propose a unified approach for estimating these models using Markov chain Monte Carlo (MCMC) methods. In an application that involves 20 macroeconomic variables, we find that these BVARs with more flexible covariance structures outperform the standard variant with independent, homoscedastic Gaussian innovations in both in-sample model-fit and out-of-sample forecast performance.
    Keywords: stochastic volatility, non-Gaussian, ARMA, forecasting
    JEL: C11 C51 C53
    Date: 2015–11
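    The computational payoff of a Kronecker error covariance can be seen from the standard identity (Sigma ⊗ Omega) vec(X) = vec(Omega X Sigma'), which avoids ever forming the large nT x nT matrix (the dimensions below are illustrative, not the paper's 20-variable application):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 4, 6                                   # n variables, T periods
A = rng.standard_normal((T, T)); Sigma = A @ A.T   # T x T, symmetric PD
B = rng.standard_normal((n, n)); Omega = B @ B.T   # n x n, symmetric PD
X = rng.standard_normal((n, T))

# Naive: build the full nT x nT matrix (O(n^2 T^2) memory).
big = np.kron(Sigma, Omega)
y_naive = big @ X.flatten(order="F")          # vec(X) stacks columns

# Kronecker identity: (Sigma ⊗ Omega) vec(X) = vec(Omega X Sigma').
y_fast = (Omega @ X @ Sigma.T).flatten(order="F")
```

The same identity underlies cheap likelihood evaluations whenever the covariance factors into a Kronecker product.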
  6. By: Samantha Leorato (University of Rome Tor Vergata); Franco Peracchi (Georgetown University, University of Rome Tor Vergata and EIEF)
    Abstract: We study the sampling properties of two alternative approaches to estimating the conditional distribution of a continuous outcome Y given a vector X of regressors. One approach – distribution regression – is based on direct estimation of the conditional distribution function; the other approach – quantile regression – is instead based on direct estimation of the conditional quantile function. Indirect estimates of the conditional quantile function and the conditional distribution function may then be obtained by inverting the direct estimates obtained from either approach or, to guarantee monotonicity, their rearranged versions. We provide a systematic comparison of the asymptotic and finite sample performance of monotonic estimators obtained from the two approaches, considering both cases when the underlying linear-in-parameter models are correctly specified and several types of model misspecification of considerable practical relevance.
    Date: 2015
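    The rearrangement step mentioned in the abstract is, operationally, just sorting: a possibly non-monotone estimated quantile curve is replaced by its sorted version, which restores monotonicity (the toy curve below is invented for illustration):

```python
import numpy as np

def rearrange(q_hat):
    """Monotone rearrangement of an estimated quantile curve:
    sorting the values restores monotonicity in the quantile index."""
    return np.sort(q_hat)

taus = np.linspace(0.05, 0.95, 19)
q_hat = taus + 0.1 * np.sin(20 * taus)   # toy estimate with quantile crossings
q_mono = rearrange(q_hat)                # monotone, hence a valid quantile curve
```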
  7. By: Tsubasa Ito (Graduate School of Economics, The University of Tokyo); Tatsuya Kubokawa (Faculty of Economics, The University of Tokyo)
    Abstract: For estimation of a large precision matrix, this paper suggests a new shrinkage estimator, called the linear ridge estimator. This estimator is motivated from a Bayesian perspective by a spike-and-slab prior distribution on the precision matrix, and takes the form of a convex combination of the ridge estimator and a scalar multiple of the identity matrix. The optimal parameters of the linear ridge estimator are derived by minimizing a Frobenius loss function and are estimated in closed form based on random matrix theory. Finally, the performance of the linear ridge estimator is numerically investigated and compared with some existing estimators.
    Date: 2015–11
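    A generic sketch of the estimator's form — a convex combination of a ridge-type inverse and a scaled identity. The tuning constants lam, alpha and c below are arbitrary placeholders, not the RMT-optimal values derived in the paper:

```python
import numpy as np

def linear_ridge_precision(X, lam=0.1, alpha=0.5, c=1.0):
    """Toy shrinkage estimator of a precision matrix: a convex
    combination of a ridge-type inverse and a scaled identity.
    lam, alpha, c are illustrative placeholders only."""
    n, p = X.shape
    S = X.T @ X / n                                  # sample covariance
    ridge_inv = np.linalg.inv(S + lam * np.eye(p))   # ridge-type inverse
    return alpha * ridge_inv + (1 - alpha) * c * np.eye(p)

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 20))   # p comparable to n: the 'large p' regime
Omega_hat = linear_ridge_precision(X)
```

By construction the estimate is symmetric positive definite even when the sample covariance is ill-conditioned.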
  8. By: Polanski, Arnold (University of East Anglia); Stoja, Evarist (University of Bristol)
    Abstract: Tail interdependence is defined as the situation where extreme outcomes for some variables are informative about such outcomes for other variables. We extend the concept of multi-information to quantify tail interdependence at different levels of extremity, decompose it into systemic and residual parts, and measure the contribution of a constituent to the interdependence of a system. Further, we devise statistical procedures to test: a) tail independence; b) whether an empirical interdependence structure is generated by a theoretical model; and c) symmetry of the interdependence structure in the tails. The application of this approach to multidimensional financial data confirms some known, and uncovers some new, stylized facts on extreme returns.
    Keywords: Co-exceedance; Kullback-Leibler divergence; multi-information; relative entropy; risk contribution; risk interdependence.
    JEL: C12 C14 C52
    Date: 2015–11–06
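    Multi-information (total correlation) of tail-exceedance indicators can be computed directly from the empirical pattern frequencies, and it is zero exactly under independence. A two-variable sketch (the simulated data and 95% threshold are invented; the decomposition and tests in the paper are not reproduced):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (zero cells skipped)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def multi_information(indicators):
    """Multi-information of binary exceedance indicators: the sum of
    marginal entropies minus the joint entropy over the 2^d patterns."""
    n, d = indicators.shape
    codes = indicators @ (2 ** np.arange(d))         # encode each pattern
    joint = np.bincount(codes, minlength=2 ** d) / n # joint pattern frequencies
    marg = [np.array([1 - m, m]) for m in indicators.mean(axis=0)]
    return sum(entropy(p) for p in marg) - entropy(joint)

rng = np.random.default_rng(2)
x = rng.standard_normal(100_000)
y = 0.8 * x + 0.6 * rng.standard_normal(100_000)     # correlated pair
qx, qy = np.quantile(x, 0.95), np.quantile(y, 0.95)
ind = np.column_stack([x > qx, y > qy]).astype(int)  # co-exceedance indicators
mi = multi_information(ind)                          # > 0 under tail dependence
```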
  9. By: Aue, Alexander; Horvath, Lajos; Pellatt, Daniel
    Abstract: Heteroskedasticity is a common feature of financial time series and is commonly addressed in the model building process through the use of ARCH and GARCH processes. More recently, multivariate variants of these processes have been the focus of research, with attention given to methods seeking an efficient and economic estimation of a large number of model parameters. Due to the need for estimation of many parameters, however, these models may not be suitable for modeling the now-prevalent high-frequency volatility data. One potentially useful way to bypass these issues is to take a functional approach. In this paper, theory is developed for a new functional version of the generalized autoregressive conditionally heteroskedastic process, termed fGARCH. The main results are concerned with the structure of the fGARCH(1,1) process, providing criteria for the existence of strictly stationary solutions both in the space of square-integrable functions and in the space of continuous functions. An estimation procedure is introduced and its consistency verified. A small empirical study highlights potential applications to intraday volatility estimation.
    Keywords: Econometrics; Financial time series; Functional data; GARCH processes; Stationary solutions
    JEL: C1 C13 C4
    Date: 2015–08–20
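    The scalar GARCH(1,1) recursion that fGARCH lifts to function space is sigma2_t = omega + alpha*y_{t-1}^2 + beta*sigma2_{t-1}; in the functional model the coefficients become operators acting on intraday curves. A scalar simulation (parameter values invented) showing the stationary solution that exists when alpha + beta < 1:

```python
import numpy as np

def simulate_garch11(T, omega=0.1, alpha=0.1, beta=0.85, seed=3):
    """Simulate a scalar GARCH(1,1) path, started at the unconditional
    variance omega / (1 - alpha - beta)."""
    rng = np.random.default_rng(seed)
    y = np.empty(T)
    sigma2 = omega / (1 - alpha - beta)
    for t in range(T):
        y[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * y[t] ** 2 + beta * sigma2
    return y

y = simulate_garch11(50_000)
# With alpha + beta = 0.95 < 1, the sample variance hovers around the
# unconditional variance 0.1 / 0.05 = 2.
```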
  10. By: Joshua C.C. Chan
    Abstract: We propose an easy technique to test for time-variation in coefficients and volatilities. Specifically, by using a noncentered parameterization for state space models, we develop a method to directly calculate the relevant Bayes factor using the Savage-Dickey density ratio—thus avoiding the computation of the marginal likelihood altogether. The proposed methodology is illustrated via two empirical applications. In the first application we test for time-variation in the volatility of inflation in the G7 countries. The second application investigates if there is substantial time-variation in the NAIRU in the US.
    Keywords: Bayesian model comparison, state space, inflation uncertainty, NAIRU
    JEL: C11 C32 E31 E52
    Date: 2015–11
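    The Savage-Dickey device computes a Bayes factor as the ratio of the posterior to the prior density evaluated at the restriction, with no marginal likelihood needed. A conjugate Gaussian sketch (the known-variance location model below is a textbook illustration, not the paper's noncentered state space setup):

```python
import numpy as np

def normal_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def savage_dickey_bf01(y, sigma2, tau2):
    """Bayes factor for H0: mu = 0 vs H1: mu ~ N(0, tau2), with
    y_i ~ N(mu, sigma2) and sigma2 known.  Conjugacy makes the posterior
    of mu Gaussian, so BF01 = p(mu = 0 | y) / p(mu = 0) is closed form."""
    n = len(y)
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * np.sum(y) / sigma2
    return normal_pdf(0.0, post_mean, post_var) / normal_pdf(0.0, 0.0, tau2)

y0 = np.zeros(200)                        # data exactly consistent with H0
y1 = np.ones(200)                         # data far from H0
bf_null = savage_dickey_bf01(y0, 1.0, 1.0)
bf_alt = savage_dickey_bf01(y1, 1.0, 1.0)
```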
  11. By: Saruta Benjanuvatra; Peter Burridge
    Abstract: We investigate QML estimation of a parametric form for the spatial weight matrix, W, appearing in the mixed regressive, spatial autoregressive (MR-SAR) model and extend the identifiability, consistency, and asymptotic normality results given by Lee (2004, 2007) to the case when W depends on an unknown parameter, y, that is to be estimated from a single cross-section. Numerical experiments illustrate that the QML estimator works quite well in moderate-sized samples, yielding well-behaved parameter estimates and t-statistics with approximately correct size in most cases. These findings should open the door to a much more flexible approach to the construction of spatial regression models. Finally, the QML estimator using two types of sub-models for the spatial weights is applied to the cross-sectional dataset used in Ertur and Koch (2007), to illustrate the utility of the approach.
    Keywords: Spatial autoregressive model, estimated spatial weight matrix, quasi-maximum likelihood estimator, growth spillovers.
    JEL: C13 C15 C21 R15
    Date: 2015–09
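    A sketch of the setting: a spatial weight matrix that depends on an unknown parameter, and the MR-SAR reduced form y = (I - rho W)^{-1}(X beta + eps). The inverse-distance parameterisation with decay exponent gamma, and all numerical values, are invented for illustration, not taken from the paper:

```python
import numpy as np

def inverse_distance_w(coords, gamma):
    """A parametric spatial weight matrix: row-normalised inverse
    distances with decay exponent gamma, a purely illustrative
    stand-in for the sub-models estimated in the paper."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.where(d > 0, 1.0 / np.maximum(d, 1e-12) ** gamma, 0.0)
    return w / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(6)
n = 30
coords = rng.uniform(size=(n, 2))              # random spatial locations
W = inverse_distance_w(coords, gamma=2.0)
rho, beta = 0.4, 1.5
X = rng.standard_normal(n)
# MR-SAR reduced form: y = (I - rho W)^{-1} (X beta + eps)
y = np.linalg.solve(np.eye(n) - rho * W, X * beta + rng.standard_normal(n))
```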
  12. By: Ferman, Bruno; Pinto, Cristine
    Abstract: Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, inference in DID models when there are few treated groups is still an open question. We show that usual inference methods used in DID models might not perform well when there are few treated groups and residuals are heteroskedastic. In particular, when there is variation in the number of observations per group, we show that inference methods designed for few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) dataset to show that this problem is relevant even in datasets with a large number of observations per group. We then derive alternative inference methods that provide accurate hypothesis testing in situations of few treated groups and many control groups in the presence of heteroskedasticity (including the case of only one treated group). The main assumption is that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. Finally, we also show that an inference method for the Synthetic Control Estimator proposed by Abadie et al. (2010) can correct for the heteroskedasticity problem, and derive conditions under which this inference method provides accurate hypothesis testing.
    Keywords: differences-in-differences; inference; heteroskedasticity; clustering; few clusters; bootstrap
    JEL: C12 C21 C33
    Date: 2015–11–06
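    The source of the heteroskedasticity is mechanical: the variance of a group x time average is sigma^2 / N_g, so unequal group sizes make the aggregated residuals heteroskedastic. A simulation sketch (group sizes and sigma^2 invented):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2 = 1.0                              # individual-level error variance
sizes = [20, 2000]                        # a small and a large group
# Simulate 10,000 group-level averages per size; their variance should
# be close to sigma2 / N_g, so the small group's averages are far noisier.
var_of_mean = {N: rng.normal(0.0, np.sqrt(sigma2 / N), size=10_000).var()
               for N in sizes}
```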
  13. By: Peter Farkas; Laszlo Matyas
    Abstract: This paper introduces a nonparametric, non-asymptotic method for statistical testing based on boundary crossing events. The method is presented by showing its use for unit root testing. Two versions of the test are discussed. The first is designed for time series data as well as for cross-sectionally independent panel data. The second takes cross-sectional dependence into account as well. Through Monte Carlo studies we show that the proposed tests are more powerful than existing unit root tests when the error term has a t-distribution and the sample size is small. The paper also discusses two empirical applications. The first one analyzes the possibility of mean reversion in the excess returns for the S&P500. Here, the unobserved mean is identified using Shiller's CAPE ratio. Our test supports mean reversion, which can be interpreted as evidence against the strong form of the efficient market hypothesis. The second application cannot confirm the PPP hypothesis in exchange-rate data of OECD countries.
    Date: 2015–11–03
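    The intuition behind boundary crossing counts: a stationary series crosses a boundary near its mean often, while a unit-root process wanders and crosses rarely. A minimal sketch with the boundary set at the sample mean (a simplification of the paper's test statistic):

```python
import numpy as np

def zero_crossings(x):
    """Count sign changes of a demeaned series -- a boundary-crossing
    count in its simplest form, with the boundary at the sample mean."""
    s = np.sign(x - x.mean())
    return int(np.sum(s[:-1] * s[1:] < 0))

rng = np.random.default_rng(5)
eps = rng.standard_normal(5_000)
stationary = eps                          # white noise: mean-reverting
random_walk = np.cumsum(eps)              # unit root: wanders away
```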
  14. By: Vékás, Péter
    Abstract: Conditional Value-at-Risk (equivalent to the Expected Shortfall, Tail Value-at-Risk and Tail Conditional Expectation in the case of continuous probability distributions) is an increasingly popular risk measure in the fields of actuarial science, banking and finance, and arguably a more suitable alternative to the currently widespread Value-at-Risk. In my paper, I present a brief literature survey, and propose a statistical test of the location of the CVaR, which may be applied by practising actuaries to test whether CVaR-based capital levels are in line with observed data. Finally, I conclude with numerical experiments and some questions for future research.
    Keywords: risk measures, Conditional Value-at-Risk, hypothesis testing, actuarial science
    JEL: C01
    Date: 2015–10–21
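    For reference, the plug-in CVaR estimator at level p is the average of losses at or beyond the empirical p-quantile (the toy loss vector below is invented; the paper's contribution is the asymptotic test, not this estimator):

```python
import numpy as np

def empirical_cvar(losses, level=0.95):
    """Plug-in Conditional Value-at-Risk (Expected Shortfall): the
    average of losses at or above the empirical VaR quantile."""
    var = np.quantile(losses, level)
    return losses[losses >= var].mean()

losses = np.arange(1, 101, dtype=float)   # toy losses 1..100
cvar = empirical_cvar(losses, 0.95)       # average of the worst tail
```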
  15. By: Cimadomo, Jacopo; D'Agostino, Antonello
    Abstract: In this paper, we propose a time-varying parameter VAR model with stochastic volatility which allows for estimation on data sampled at different frequencies. Our contribution is twofold. First, we extend the methodology developed by Cogley and Sargent (2005), and Primiceri (2005), to a mixed-frequency setting. In particular, our approach allows for the inclusion of two different categories of variables (high-frequency and low-frequency) into the same time varying model. Second, we use this model to study the macroeconomic effects of government spending shocks in Italy over the 1988Q4-2013Q3 period. Italy - as well as most other euro area economies - is characterised by short quarterly time series for fiscal variables, whereas annual data are generally available for a longer sample before 1999. Our results show that the proposed time-varying mixed-frequency model improves on the performance of a simple linear interpolation model in generating the true path of the missing observations. Second, our empirical analysis suggests that government spending shocks tend to have positive effects on output in Italy. The fiscal multiplier, which is maximized at the one year horizon, follows a U-shape over the sample considered: it peaks at around 1.5 at the beginning of the sample, it then stabilizes between 0.8 and 0.9 from the mid-1990s to the late 2000s, before rising again to above unity during the recent crisis.
    Keywords: government spending multiplier, mixed-frequency data, time variation
    JEL: C32 E62 H30 H50
    Date: 2015–10
  16. By: Mare, Davide Salvatore; Moreira, Fernando; Rossi, Roberto
    Abstract: In this work we develop advanced techniques for measuring bank insolvency risk. More specifically, we contribute to the existing body of research on the Z-Score. We develop bias reduction strategies for state-of-the-art Z-Score measures in the literature. We introduce novel estimators whose aim is to effectively capture nonstationary returns; for these estimators, as well as for existing ones in the literature, we discuss analytical confidence regions. We exploit moment-based error measures to assess the effectiveness of these estimators. We carry out an extensive empirical study that contrasts state-of-the-art estimators to our novel ones on over ten thousand banks. Finally, we contrast results obtained by using Z-score estimators against business news on the banking sector obtained from Factiva. Our work has important implications for researchers and practitioners. First, accounting for the degree of nonstationarity in returns yields a more accurate quantification of the degree of solvency. Second, our measure allows researchers to factor in the degree of uncertainty in the estimation due to the availability of data while estimating the overall risk of bank insolvency.
    Keywords: bank stability; prudential regulation; insolvency risk; financial distress; Z-Score
    JEL: C20 C60 G21
    Date: 2015–11–11
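    The classic (stationary) Z-score that the paper's estimators refine is (mean ROA + capital-asset ratio) / std(ROA), read as the number of return standard deviations that equity can absorb before insolvency. A sketch with invented figures:

```python
import numpy as np

def z_score(roa, car):
    """Classic bank Z-score: (mean ROA + capital-asset ratio) divided by
    the standard deviation of ROA.  The paper's nonstationary variants
    refine how these moments are estimated; that is not reproduced here."""
    roa = np.asarray(roa, dtype=float)
    return (roa.mean() + car) / roa.std(ddof=1)

stable = z_score([0.010, 0.012, 0.011, 0.009, 0.010], car=0.08)
volatile = z_score([0.030, -0.020, 0.050, -0.040, 0.020], car=0.08)
# More volatile returns imply a lower Z-score, i.e. higher insolvency risk.
```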
  17. By: Nobuhiko Terui; Shohei Hasegawa; Greg M. Allenby
    Abstract: We develop a structural model of horizontal and temporal variety seeking using a dynamic factor model that relates attribute satiation to brand preferences. The factor model employs a threshold specification that triggers preference changes when customer satiation exceeds an admissible level but does not change otherwise. The factor model can be applied to high dimensional switching data often encountered when multiple brands are purchased across multiple time periods. The model is applied to two panel datasets, an experimental field study and a traditional scanner panel dataset, where we find large improvements in model fit that reflect distinct shifts in consumer preferences over time. The model can identify the product attributes responsible for satiation, and can be used to produce a dynamic joint space map that displays brand positions and temporal changes in consumer preferences.
    Date: 2015–10

This nep-ecm issue is ©2015 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.