
on Econometrics 
By:  Cizek, P. (Tilburg University, Center For Economic Research); Sadikoglu, S. (Tilburg University, Center For Economic Research) 
Abstract:  Motivated by the weak small-sample performance of the censored regression quantile estimator proposed by Powell (1986a), two- and three-step estimation methods were introduced for estimation of the censored regression model under a conditional quantile restriction. While those stepwise estimators have been proven to be consistent and asymptotically normal, their finite-sample performance greatly depends on the specification of an initial estimator that selects the subsample to be used in subsequent steps. In this paper, an alternative semiparametric estimator is introduced that does not involve a selection procedure in the first step. The proposed estimator is based on the indirect inference principle and is shown to be consistent and asymptotically normal under appropriate regularity conditions. Its performance is demonstrated and compared to existing methods by means of Monte Carlo simulations. 
Keywords:  asymptotic normality; censored regression; indirect inference; quantile regression 
JEL:  C21 C24 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:b351916f03f74763b47c4f8cbc49f265&r=ecm 
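The Powell (1986a) objective that motivates the paper can be sketched in a few lines (our own illustration, not the authors' code): for left-censoring at zero, the censored regression quantile estimator minimizes the sum of check-function losses applied to residuals from the kinked fit max(0, x'beta).

```python
import numpy as np

# Illustrative sketch of the censored regression quantile objective of
# Powell (1986a); function names and the toy data below are our own.

def check_loss(u, tau):
    """Quantile check function rho_tau(u) = u * (tau - 1{u < 0})."""
    u = np.asarray(u, dtype=float)
    return u * (tau - (u < 0))

def powell_objective(beta, y, X, tau):
    """Sample objective of the censored regression quantile estimator."""
    fitted = np.maximum(0.0, X @ np.asarray(beta))  # censoring kink at zero
    return float(np.sum(check_loss(y - fitted, tau)))
```

The max(0, ·) kink is what makes this objective non-convex and hard to minimize in small samples, which is the computational difficulty that both the stepwise estimators and the indirect-inference alternative aim to sidestep.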
By:  Jiti Gao; Han Hong 
Abstract:  Estimation of unknown parameters and functions involved in complex nonlinear econometric models is a very important issue. Existing estimation methods include the generalised method of moments (GMM) of Hansen (1982) and others, the efficient method of moments (EMM) of Gallant and Tauchen (1997), the Markov chain Monte Carlo (MCMC) method of Chernozhukov and Hong (2003), and the nonparametric simulated maximum likelihood estimation (NSMLE) method of Creel and Kristensen (2011) and Kristensen and Shin (2012). Except for the NSMLE method, the existing methods do not provide closed-form solutions. This paper proposes non- and semiparametric closed-form approximations for the estimation and computation of posterior means involved in complex nonlinear econometric models. We first consider the case where samples can be independently drawn from both the likelihood function and the prior density. The samples and observations are then used to nonparametrically estimate posterior mean functions. The estimation method is also applied to estimate the posterior mean of the parameter of interest given a summary statistic. Both the asymptotic theory and the finite-sample study show that the nonparametric estimate of this posterior mean is superior to existing estimates, including the conventional sample mean. The paper then proposes some non- and semiparametric dimension-reduction methods to deal with the case where the dimensionality of either the regressors or the summary statistics is large. Meanwhile, the paper develops a nonparametric estimation method for the case where the samples are obtained using a resampling algorithm. The asymptotic theory shows that in each case the rate of convergence of the nonparametric estimate based on the resamples is faster than that of the conventional nonparametric estimation method by an order of the number of resamples. The proposed models and estimation methods are evaluated using simulated and empirical examples. Both the simulated and empirical examples show that the proposed nonparametric estimation based on resamples outperforms existing estimation methods. 
JEL:  C12 C14 C22 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201425&r=ecm 
By:  Jin, Xin; Maheu, John M 
Abstract:  This paper introduces several new Bayesian nonparametric models suitable for capturing the unknown conditional distribution of realized covariance (RCOV) matrices. Existing dynamic Wishart models are extended to countably infinite mixture models of Wishart and inverse-Wishart distributions. In addition to mixture models with constant weights, we propose models with time-varying weights to capture time dependence in the unknown distribution. Each of our models can be combined with returns to provide a coherent joint model of returns and RCOV. The extensive forecast results show the new models provide very significant improvements in density forecasts for RCOV and returns and competitive point forecasts of RCOV. 
Keywords:  multi-period density forecasts, inverse-Wishart distribution, beam sampling, hierarchical Dirichlet process, infinite hidden Markov model 
JEL:  C11 C14 C32 C58 G17 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60102&r=ecm 
By:  Giuseppe Arbia; Marco Bee; Giuseppe Espa; Flavio Santi 
Abstract:  Maximum likelihood estimation of spatial models based on weight matrices typically requires a sizeable computational capacity, even in relatively small samples. The unilateral approximation approach to spatial model estimation was suggested in Besag (1974) as a viable alternative to MLE for conditionally specified processes. In this paper we revisit the method, extend it to simultaneous spatial processes, and study the finite-sample properties of the resulting estimators by means of Monte Carlo simulations, using several conditional autoregressive models. According to the results, the performance of the unilateral estimators is very good, both in terms of statistical properties (accuracy and precision) and in terms of computing time. 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:trn:utwpem:2014/08&r=ecm 
By:  Can, S.U. (Tilburg University, Center For Economic Research); Einmahl, J.H.J. (Tilburg University, Center For Economic Research); Khmaladze, E.V.; Laeven, R.J.A. (Tilburg University, Center For Economic Research) 
Abstract:  Let (X_1, Y_1), …, (X_n, Y_n) be an i.i.d. sample from a bivariate distribution function that lies in the max-domain of attraction of an extreme value distribution. The asymptotic joint distribution of the standardized componentwise maxima max_{1≤i≤n} X_i and max_{1≤i≤n} Y_i is then characterized by the marginal extreme value indices and the tail copula R. We propose a procedure for constructing asymptotically distribution-free goodness-of-fit tests for the tail copula R. The procedure is based on a transformation of a suitable empirical process derived from a semiparametric estimator of R. The transformed empirical process converges weakly to a standard Wiener process, paving the way for a multitude of asymptotically distribution-free goodness-of-fit tests. We also extend our results to the m-variate (m > 2) case. In a simulation study we show that the limit theorems provide good approximations for finite samples and that tests based on the transformed empirical process have high power. 
Keywords:  Extreme value theory; tail dependence; goodnessoffit testing; martingale transformation 
JEL:  C12 C14 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:tiu:tiucen:0ec969ab46e64228b44d40d52aaf3aab&r=ecm 
By:  Fabio Busetti (Bank of Italy) 
Abstract:  Quantile aggregation (or 'Vincentization') is a simple and intuitive way of combining probability distributions, originally proposed by S. B. Vincent in 1912. In certain cases, such as under Gaussianity, the Vincentized distribution belongs to the same family as that of the individual distributions and can be obtained by averaging the individual parameters. This paper compares the properties of quantile aggregation with those of the forecast combination schemes normally adopted in the econometric forecasting literature, based on linear or logarithmic averages of the individual densities. In general we find that: (i) larger differences among the combination schemes occur when there are biases in the individual forecasts, in which case quantile aggregation seems preferable overall; (ii) the choice of the combination weights is important in determining the performance of the various methods. Monte Carlo simulation experiments indicate that the properties of quantile aggregation fall between those of the linear and the logarithmic pool, and that quantile averaging is particularly useful for combining forecast distributions with large differences in location. An empirical illustration is provided with density forecasts from time series and econometric models for Italian GDP. 
Keywords:  Fan charts, macroeconomic forecasts, model combination. 
JEL:  C53 E17 
Date:  2014–10 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_979_14&r=ecm 
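The Gaussian special case mentioned in the abstract is easy to verify numerically (a small illustration of ours, with made-up parameters): averaging the quantile functions of two normal densities reproduces the normal with averaged mean and averaged standard deviation, whereas a linear pool would instead mix the densities.

```python
import numpy as np
from scipy.stats import norm

# Quantile aggregation ("Vincentization") of two Gaussian forecast
# densities with equal weights; parameter values are our own toy choice.

mu1, s1 = 0.0, 1.0
mu2, s2 = 2.0, 2.0
p = np.linspace(0.01, 0.99, 99)

# Vincentized quantile function: equal-weight average of the quantiles.
q_vinc = 0.5 * norm.ppf(p, mu1, s1) + 0.5 * norm.ppf(p, mu2, s2)

# Closed form under Gaussianity: the quantile function of
# N((mu1 + mu2) / 2, ((s1 + s2) / 2)^2).
q_closed = norm.ppf(p, 0.5 * (mu1 + mu2), 0.5 * (s1 + s2))
```

The linear pool of the same two components can be bimodal when the locations differ, which is one reason quantile averaging behaves better when combining forecast distributions with large differences in location.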
By:  Claudio Heinrich (Aarhus University); Mark Podolskij (Aarhus University and CREATES) 
Abstract:  In this paper we present the asymptotic theory for spectral distributions of high-dimensional covariation matrices of Brownian diffusions. More specifically, we consider N-dimensional Itô integrals with time-varying matrix-valued integrands. We observe n equidistant high-frequency data points of the underlying Brownian diffusion and assume that N/n → c ∈ (0, ∞). We show that under a certain mixed spectral moment condition the spectral distribution of the empirical covariation matrix converges in distribution almost surely. Our proof relies on the method of moments and applications of graph theory. 
Keywords:  Diffusion processes, graphs, high frequency data, random matrices. 
JEL:  C10 C13 C14 
Date:  2014–12–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201454&r=ecm 
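The high-dimensional regime N/n → c has a familiar benchmark that a toy simulation makes concrete (our own check with i.i.d. Gaussian increments, not the paper's time-varying Itô setting): the eigenvalues of the empirical covariance matrix spread over the Marchenko-Pastur support instead of concentrating at 1.

```python
import numpy as np

# Empirical spectral distribution of a sample covariance matrix when the
# dimension-to-sample ratio c = N/n stays bounded away from zero.
# With i.i.d. standard normal data the eigenvalues fill the
# Marchenko-Pastur interval [(1 - sqrt(c))^2, (1 + sqrt(c))^2].

rng = np.random.default_rng(0)
N, n = 200, 800                       # c = N / n = 0.25
X = rng.standard_normal((n, N))
S = X.T @ X / n                       # empirical covariance matrix
eigs = np.linalg.eigvalsh(S)

c = N / n
lo, hi = (1 - np.sqrt(c)) ** 2, (1 + np.sqrt(c)) ** 2   # (0.25, 2.25)
```

Even though the population covariance is the identity, no eigenvalue is close to 1 in the sense of classical fixed-N asymptotics; this is the phenomenon whose diffusion analogue the paper analyzes via the method of moments.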
By:  Christiane Baumeister; James D. Hamilton 
Abstract:  This paper makes the following original contributions to the literature. (1) We develop a simpler analytical characterization and numerical algorithm for Bayesian inference in structural vector autoregressions that can be used for models that are over-identified, just-identified, or under-identified. (2) We analyze the asymptotic properties of Bayesian inference and show that in the under-identified case, the asymptotic posterior distribution of contemporaneous coefficients in an n-variable VAR is confined to the set of values that orthogonalize the population variance-covariance matrix of OLS residuals, with the height of the posterior proportional to the height of the prior at any point within that set. For example, in a bivariate VAR for supply and demand identified solely by sign restrictions, if the population correlation between the VAR residuals is positive, then even if one has available an infinite sample of data, any inference about the demand elasticity is coming exclusively from the prior distribution. (3) We provide analytical characterizations of the informative prior distributions for impulse-response functions that are implicit in the traditional sign-restriction approach to VARs, and note, as a special case of result (2), that the influence of these priors does not vanish asymptotically. (4) We illustrate how Bayesian inference with informative priors can be both a strict generalization and an unambiguous improvement over frequentist inference in just-identified models. (5) We propose that researchers need to explicitly acknowledge and defend the role of prior beliefs in influencing structural conclusions and illustrate how this could be done using a simple model of the U.S. labor market. 
JEL:  C11 C32 E24 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:20741&r=ecm 
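The set-identification point in result (2) can be sketched with a standard sign-restriction rotation loop (a minimal sketch of ours, with a hypothetical sign pattern, not the authors' algorithm): every accepted impact matrix reproduces the residual covariance exactly, so the data cannot distinguish among them.

```python
import numpy as np

# Draw Haar-distributed orthogonal matrices Q, form candidate impact
# matrices B = P Q with P the Cholesky factor of Sigma, and keep those
# satisfying an (illustrative, hypothetical) sign pattern.

rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])   # positive residual correlation
P = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(2000):
    A = rng.standard_normal((2, 2))
    Q, R = np.linalg.qr(A)
    Q = Q @ np.diag(np.sign(np.diag(R)))     # Haar measure on O(2)
    B = P @ Q                                # satisfies B @ B.T == Sigma
    # hypothetical restrictions: shock 1 moves both variables up,
    # shock 2 moves them in opposite directions
    if B[0, 0] > 0 and B[1, 0] > 0 and B[0, 1] > 0 and B[1, 1] < 0:
        accepted.append(B)
```

Since each accepted B orthogonalizes the same Sigma, the spread across accepted draws reflects the implicit prior over Q rather than sample information, which is exactly why the influence of that prior does not vanish asymptotically.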
By:  Stephen G. Hall; P. A. V. B. Swamy; George S. Tavlas 
Abstract:  The method of instrumental variables (IV) and the generalized method of moments (GMM), and their applications to the estimation of errors-in-variables and simultaneous equations models in econometrics, require data on a sufficient number of instrumental variables that are both exogenous and relevant. We argue that, in general, such instruments (weak or strong) cannot exist. 
Keywords:  instrumental variables; generalized method of moments; random coefficient models 
JEL:  C11 C13 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:14/19&r=ecm 
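For context, the standard IV setup the abstract refers to can be sketched as follows (textbook material with simulated data of our own; the paper's claim is precisely that such a valid z may not exist in general): the instrument must be relevant (correlated with the endogenous regressor) and exogenous (uncorrelated with the structural error).

```python
import numpy as np

# Two-stage least squares on simulated data with one endogenous regressor
# and one valid instrument; all variable names here are our own.

def tsls(y, X, Z):
    """2SLS: first regress X on Z, then regress y on the fitted values."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

rng = np.random.default_rng(3)
n = 100_000
z = rng.standard_normal((n, 1))               # instrument
u = rng.standard_normal((n, 1))               # structural error
x = z + u + rng.standard_normal((n, 1))       # endogenous: corr(x, u) > 0
y = 2.0 * x + u                               # true coefficient is 2

b_ols = np.linalg.lstsq(x, y, rcond=None)[0]  # inconsistent: plim 2 + 1/3
b_iv = tsls(y, x, z)                          # consistent for 2
```

The simulation works because z enters x but not u by construction; the paper's argument is that real-world variables cannot, in general, satisfy both requirements simultaneously.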
By:  Gregor Chliamovitch; Alexandre Dupuis; Bastien Chopard; Anton Golub 
Abstract:  We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, at least in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly nonstationary processes, for then it provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates. 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1411.7805&r=ecm 
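The "frequency sampling" baseline the paper compares against is simply transition counting (our own sketch with made-up names and data; the MaxEnt alternative instead maximizes entropy subject to moment constraints estimated from the same sample):

```python
import numpy as np

# Estimate a Markov transition matrix from a discrete state sequence by
# row-normalized transition counts.

def transition_matrix(seq, n_states):
    """Row-stochastic matrix of empirical transition frequencies."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1.0                 # leave unvisited states as zero rows
    return counts / rows

seq = [0, 1, 0, 1, 1, 0, 0, 1, 0]         # toy discretized series
P = transition_matrix(seq, 2)
```

With short samples many transition cells receive few or no counts, which is where the paper finds the MaxEnt reconstruction can reach the same accuracy from less history.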
By:  BAUWENS, Luc (Université catholique de Louvain, CORE, Belgium); GRIGORYEVA, Lyudmila (Laboratoire de Mathématiques de Besançon, Université de Franche-Comté, France); ORTEGA, Juan-Pablo (Laboratoire de Mathématiques de Besançon, Université de Franche-Comté, France) 
Abstract:  This paper presents a method capable of estimating richly parametrized versions of the dynamic conditional correlation (DCC) model that go beyond the standard scalar case. The algorithm is based on the maximization of a Gaussian quasi-likelihood using a Bregman-proximal trust-region method to handle the various nonlinear stationarity and positivity constraints that arise in this context. We consider the general matrix Hadamard DCC model with full rank, with rank equal to two, and, additionally, two different rank-one matrix specifications. In the latter case, the elements of the vectors that determine the rank-one parameter matrices are either arbitrary or parsimoniously defined using the Almon lag function. We use actual stock returns data in dimensions up to thirty in order to carry out performance comparisons according to several in- and out-of-sample criteria. Our empirical results show that the use of richly parametrized models adds value with respect to the conventional scalar case. 
Keywords:  multivariate volatility modeling, dynamic conditional correlations (DCC), non-scalar DCC models, constrained optimization, Bregman divergences, Bregman-proximal trust-region method 
JEL:  C13 C32 G17 
Date:  2014–06–11 
URL:  http://d.repec.org/n?u=RePEc:cor:louvco:2014012&r=ecm 
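For reference, the conventional scalar DCC recursion that the paper generalizes can be sketched as follows (function name and demo data are our own): Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1}, with the correlation matrix R_t obtained by normalizing Q_t to unit diagonal.

```python
import numpy as np

# Scalar DCC correlation path from standardized residuals E (T x N);
# this is the standard textbook recursion, not the paper's matrix
# (Hadamard) parametrization.

def scalar_dcc_path(E, a, b):
    """Conditional correlation matrices R_t for scalar parameters a, b."""
    T, N = E.shape
    S = np.corrcoef(E, rowvar=False)      # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, N, N))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)         # normalize Q_t to a correlation matrix
        Q = (1.0 - a - b) * S + a * np.outer(E[t], E[t]) + b * Q
    return R

E = np.random.default_rng(0).standard_normal((50, 3))
R = scalar_dcc_path(E, a=0.05, b=0.9)
```

Here the stationarity and positivity constraints reduce to a, b ≥ 0 and a + b < 1; replacing (a, b) with Hadamard parameter matrices is what creates the nonlinear constraint sets the Bregman-proximal trust-region method is designed to handle.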
By:  Roberto Casarin (Department of Economics, University of Venice Cà Foscari) 
Abstract:  This article discusses Windle and Carvalho's (2014) state-space model for observations and latent variables in the space of positive symmetric matrices. The present discussion focuses on the model specification and on the contribution to the positive-valued time series literature. I apply the proposed model to financial data with a view to shedding light on some modeling issues. 
Keywords:  Exponential Smoothing, Positive-Valued Processes, State-Space Models, Stochastic Volatility. 
JEL:  C11 C18 C22 C53 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:ven:wpaper:2014:23&r=ecm 
By:  Inoue, Atsushi; Kuo, ChunHung; Rossi, Barbara 
Abstract:  In this paper we propose empirical methods for detecting and identifying misspecifications in DSGE models. We introduce wedges in a DSGE model and identify potential misspecification via forecast error variance decomposition (FEVD) and marginal likelihood analyses. Our simulation results based on a small-scale DSGE model demonstrate that our method can correctly identify the source of misspecification. Our empirical results show that the medium-scale New Keynesian DSGE model that incorporates features in the recent empirical macro literature is still very much misspecified; our analysis highlights that the asset and labor markets may be the source of the misspecification. 
Keywords:  DSGE models; empirical macroeconomics; model misspecification 
JEL:  C32 E32 
Date:  2014–09 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10140&r=ecm 
By:  Bayram, Deniz; Dayé, Modeste 
Abstract:  The aim of this work is to review the paper by Hellerstein & Imbens (1982) focusing on the use of auxiliary data and a formal derivation of the asymptotic properties of the underlying Weighted Least Squares estimator. 
Keywords:  auxiliary data, asymptotic properties, WLS. 
JEL:  C13 C4 C5 
Date:  2014 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:60465&r=ecm 
By:  Ahmet Celikoglu; Ugur Tirnakli 
Abstract:  In a recent paper [\textit{M. Cristelli, A. Zaccaria and L. Pietronero, Phys. Rev. E 85, 066108 (2012)}], Cristelli \textit{et al.} analysed the relation between skewness and kurtosis for complex dynamical systems and identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with an exponent of $4/3$. The authors concluded that the observed relation is a universal fact in complex dynamical systems. Here, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic data sets and show that it is in fact not universal, arising only from the small number of data points in the data sets considered. The proposed relation is tested using two different non-Gaussian distributions, namely the $q$-Gaussian and L\'{e}vy distributions. We clearly show that this relation disappears for sufficiently large data sets provided that the second moment of the distribution is finite. We find that, contrary to the claims of Cristelli \textit{et al.} regarding a power-law scaling regime, the kurtosis saturates at a single value, which is of course different from the Gaussian case ($K=3$), as the number of data points is increased. On the other hand, if the second moment of the distribution is infinite, the kurtosis seems never to converge to a single value. The kurtosis value attained for finite-second-moment distributions, and the number of data points needed to reach it, depend on how far the original distribution deviates from the Gaussian. We also argue that using the kurtosis to decide which of two distributions deviates more from the Gaussian can yield incorrect results even for finite-second-moment distributions with small data sets, and is totally misleading for infinite-second-moment distributions, where the difference depends on $N$ for all finite $N$. 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1412.1293&r=ecm 
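The finite-second-moment claim is easy to check with a toy simulation of our own (using a Laplace distribution rather than the paper's $q$-Gaussian, chosen because its excess kurtosis is exactly 3 and all its moments are finite): the sample kurtosis settles at its population value as the sample grows, rather than following any skewness-kurtosis scaling law.

```python
import numpy as np
from scipy.stats import kurtosis

# scipy's kurtosis returns *excess* kurtosis (0 for a Gaussian);
# for the standard Laplace distribution the population value is exactly 3.

rng = np.random.default_rng(42)
k_small = kurtosis(rng.laplace(size=500))        # noisy in small samples
k_large = kurtosis(rng.laplace(size=2_000_000))  # saturates near 3
```

Repeating the small-sample draw shows substantial scatter in k_small across replications, which is the small-N artefact the paper argues was mistaken for a universal skewness-kurtosis relation.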
By:  Otsu, Taisuke; Pesendorfer, Martin; Takahashi, Yuya 
Abstract:  This paper proposes several statistical tests for finite state Markov games to examine the null hypothesis that the data are generated from a single equilibrium. We formulate tests of (i) the conditional choice and state transition probabilities, (ii) the steady-state distribution, and (iii) the conditional state distribution given an initial state. In a Monte Carlo study we find that the test based on the steady-state distribution performs well and has high power even with small numbers of markets and time periods. We apply the tests to the empirical study of Ryan (2012), which analyzes the dynamics of the U.S. Portland Cement industry, and assess whether his assumption of a single equilibrium is supported by the data. 
Keywords:  dynamic Markov game; hypothesis testing; multiplicity of equilibria 
JEL:  C12 C72 D44 
Date:  2014–08 
URL:  http://d.repec.org/n?u=RePEc:cpr:ceprdp:10111&r=ecm 
By:  Fani Lea Cymrot Bader; Sérgio Mikio Koyama; Marcos Hiroyuki Tsuchida 
Abstract:  This study proposes a new methodology, called Canonical FAVAR, that incorporates canonical correlation analysis into the estimation of two-step FAVAR models in order to obtain factors more suitable for forecasting. The canonical correlation technique identifies a small number of linear combinations of the principal components that are better correlated with the variables of interest and therefore have greater predictive ability. The Canonical FAVAR was applied to forecasting credit variables of the Brazilian financial system, and its predictive ability was compared to that of one- and two-step FAVAR models. These models were adjusted for five variables of the credit market, and the results were better than those obtained by the traditional FAVAR. 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:bcb:wpaper:369&r=ecm 
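The canonical-correlation step described above can be sketched as follows (a hypothetical illustration of ours, not the authors' implementation): given factor estimates F, such as principal components, and target variables Y, the first canonical correlation measures how well some linear combination of the factors tracks the targets.

```python
import numpy as np

# First canonical correlation between two blocks of variables, computed
# as the largest singular value of the product of orthonormal bases of
# the two centered column spaces. All names and data are our own.

def first_canonical_corr(F, Y):
    """Largest canonical correlation between the columns of F and of Y."""
    F = F - F.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Uf, _, _ = np.linalg.svd(F, full_matrices=False)
    Uy, _, _ = np.linalg.svd(Y, full_matrices=False)
    s = np.linalg.svd(Uf.T @ Uy, compute_uv=False)
    return float(min(s.max(), 1.0))

rng = np.random.default_rng(5)
F = rng.standard_normal((200, 3))            # stand-ins for estimated factors
Y = F @ np.array([[1.0], [2.0], [0.0]])      # target spanned by the factors
r = first_canonical_corr(F, Y)               # equals 1 up to rounding
```

Ranking the canonical pairs by their correlations and keeping only the leading ones is the sense in which the method selects a small number of factor combinations with greater predictive content.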
By:  Offer Lieberman (BarIlan University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  This paper extends recent findings of Lieberman and Phillips (2014) on stochastic unit root (SUR) models to a multivariate case, including a comprehensive asymptotic theory for estimation of the model's parameters. The extensions are useful because they lead to a generalization of the Black-Scholes formula for derivative pricing. In place of the standard assumption that the price process follows a geometric Brownian motion, we derive a new form of the Black-Scholes equation that allows for a multivariate time-varying coefficient element in the price equation. The corresponding formula for the value of a European-type call option is obtained and shown to extend the existing option price formula in a manner that embodies the effect of a stochastic departure from a unit root. An empirical application reveals that the new model is consistent with excess skewness and kurtosis in the price distribution relative to a lognormal distribution. 
Keywords:  Autoregression; Derivative; Diffusion; Options; Similarity; Stochastic unit root; Time-varying coefficients 
JEL:  C22 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1964&r=ecm 
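The idea of a stochastic departure from a unit root can be visualized with a stylized simulation (based on our reading of the SUR model class; the exponential form and the 0.05 localizing coefficient are our own assumptions, not the authors' exact specification): the autoregressive root fluctuates randomly around unity, next to an exact unit root driven by the same innovations.

```python
import numpy as np

# Exact unit root versus a root wandering stochastically around one.

rng = np.random.default_rng(7)
n = 1000
eps = rng.standard_normal(n)           # innovations
u = rng.standard_normal(n)             # shocks driving the random root

y_ur = np.cumsum(eps)                  # exact unit root: y_t = y_{t-1} + eps_t
y_stur = np.zeros(n)
for t in range(1, n):
    rho_t = np.exp(0.05 * u[t] / np.sqrt(n))   # coefficient near 1
    y_stur[t] = rho_t * y_stur[t - 1] + eps[t]
```

Because rho_t multiplies the level of the process, even small random fluctuations around one fatten the tails of the simulated price distribution relative to the pure random walk, consistent with the excess skewness and kurtosis the paper reports.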