
on Econometrics 
By:  Taisuke Otsu; Myung Hwan Seo 
Abstract:  Since Manski's (1975) seminal work, the maximum score method for discrete choice models has been applied to various econometric problems. Kim and Pollard (1990) established the cube root asymptotics for the maximum score estimator. Since then, however, econometricians have posed several open questions and conjectures in the course of generalizing the maximum score approach, such as (a) the asymptotic distribution of the conditional maximum score estimator for a panel data dynamic discrete choice model (Honoré and Kyriazidou, 2000), (b) the convergence rate of the modified maximum score estimator for the identified set of parameters of a binary choice model with an interval regressor (Manski and Tamer, 2002), and (c) the asymptotic distribution of the conventional maximum score estimator under dependent observations. To address these questions, this article extends the cube root asymptotics in four directions to allow for (i) criteria drifting with the sample size, typically due to a bandwidth sequence, (ii) partially identified parameters of interest, (iii) weakly dependent observations, and/or (iv) nuisance parameters of possibly increasing dimension. For the dependent empirical processes that characterize criteria inducing cube root phenomena, maximal inequalities are established to derive the convergence rates and limit laws of the M-estimators. This limit theory is applied not only to address the open questions listed above but also to develop a new econometric method, the random coefficient maximum score. 
Furthermore, our limit theory is applied to address other open questions in econometrics and statistics, such as (d) the convergence rate of the minimum volume predictive region (Polonik and Yao, 2000), (e) the asymptotic distribution of the least median of squares estimator under dependent observations, (f) the asymptotic distribution of the nonparametric monotone density estimator under dependent observations, and (g) the asymptotic distributions of the mode regression and related estimators containing bandwidths. 
Keywords:  Maximum score, Cube root asymptotics, Set inference 
JEL:  C13 
Date:  2014–01 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:571&r=ecm 
By:  Taisuke Otsu; Luke Taylor 
Abstract:  In this paper we develop a nonparametric estimator for the local average response of a censored dependent variable to endogenous regressors in a nonseparable model, where the unobservable error term is not restricted to be scalar and where the nonseparable function need not be monotone in the unobservables. We formalise the identification argument put forward in Altonji, Ichimura and Otsu (2012), construct the nonparametric estimator, characterise its asymptotic properties, and conduct a Monte Carlo investigation of its small sample properties. Identification is constructive and is achieved through a control function approach. We show that the estimator is consistent and asymptotically normally distributed. The Monte Carlo results are encouraging. 
JEL:  C24 C34 C14 
Date:  2014–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:575&r=ecm 
By:  Lorenzo Camponovo; Yukitoshi Matsushita; Taisuke Otsu 
Abstract:  We propose a nonparametric likelihood inference method for the integrated volatility under high frequency financial data. The nonparametric likelihood statistic, which contains conventional statistics such as empirical likelihood and Pearson's chi-square as special cases, is not asymptotically pivotal under the so-called infill asymptotics, where the number of high frequency observations in a fixed time interval increases to infinity. We show that multiplying by a correction term recovers the chi-square limiting distribution. Furthermore, we establish a Bartlett correction for our modified nonparametric likelihood statistic under both the constant and the general non-constant volatility cases. In contrast to the existing literature, the empirical likelihood statistic is not Bartlett correctable under the infill asymptotics. However, by choosing adequate tuning constants for the power divergence family, we show that a second-order refinement to the order n^2 can be achieved. 
Keywords:  Nonparametric likelihood, Volatility, High frequency data 
JEL:  C14 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:581&r=ecm 
By:  Clifford Lam; Pedro Souza 
Abstract:  This paper proposes a model for estimating the underlying cross-sectional dependence structure of a large panel of time series. Technical difficulties have meant that such a structure is usually assumed before further analysis. We propose to estimate it by penalizing the elements of the spatial weight matrices using the adaptive LASSO proposed by Zou (2006). Non-asymptotic oracle inequalities and the asymptotic sign consistency of the estimators are proved when the dimension of the time series can be larger than the sample size, and both tend to infinity jointly. Asymptotic normality of the LASSO/adaptive LASSO estimator for the model regression parameter is also presented. All the proofs involve non-standard analysis of LASSO/adaptive LASSO estimators, since our model, although resembling a standard regression, always has the response vector as one of the covariates. A block coordinate descent algorithm is introduced, with simulations and a real data analysis carried out to demonstrate the performance of our estimators. 
Keywords:  spatial econometrics, adaptive LASSO, sign consistency, asymptotic normality, non-asymptotic oracle inequalities, spatial weight matrices 
JEL:  C33 C4 C52 
Date:  2014–11 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:578&r=ecm 
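As a concrete illustration of the adaptive LASSO penalization referenced in the abstract above, the following sketch implements Zou's (2006) estimator for a generic regression via coordinate descent with soft-thresholding. It deliberately omits the paper's spatial specifics (the spatial weight matrices and the response-as-covariate structure); the function and variable names are ours.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator, the building block of LASSO coordinate descent
    return np.sign(z) * max(abs(z) - t, 0.0)

def adaptive_lasso(X, y, lam, gamma=1.0, n_iter=200):
    """Adaptive LASSO (Zou, 2006) via coordinate descent.
    Penalty weights are |b_ols|^(-gamma), so coefficients with small
    pilot OLS estimates are penalized more heavily."""
    n, p = X.shape
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-8)        # adaptive weights
    b = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]           # partial residual excluding coordinate j
            z = X[:, j] @ r / n
            b[j] = soft_threshold(z, lam * w[j]) / (X[:, j] @ X[:, j] / n)
    return b
```

On simulated data with sparse true coefficients, the heavy penalty on near-zero pilot estimates typically sets those coordinates to exactly zero, which is the sign-consistency property the abstract refers to.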
By:  Kirill Evdokimov; Yuichi Kitamura; Taisuke Otsu 
Abstract:  This paper considers robust estimation of moment condition models with time series data. Researchers frequently use moment condition models in dynamic econometric analysis. These models are particularly useful when one wishes to avoid fully parameterizing the dynamics in the data. It is nevertheless desirable to use an estimation method that is robust against deviations from the model assumptions. For example, measurement errors can contaminate observations and thereby lead to such deviations. This is an important issue for time series data: in addition to conventional sources of mismeasurement, it is known that an inappropriate treatment of seasonality can cause serially correlated measurement errors. Efficiency is also a critical issue since time series sample sizes are often limited. This paper addresses these problems. Our estimator has three features: (i) it achieves an asymptotically optimal robustness property, (ii) it treats time series dependence nonparametrically by a data blocking technique, and (iii) it is asymptotically as efficient as the optimally weighted GMM if the model assumptions do indeed hold. A small-scale simulation experiment suggests that our estimator performs favorably compared to other estimators, including GMM, thereby supporting our theoretical findings. 
Keywords:  Blocking, Generalized Empirical Likelihood, Hellinger Distance, Robustness, Efficient Estimation, Mixing 
JEL:  C14 
Date:  2014–12 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:579&r=ecm 
By:  Natalia Bailey (Queen Mary University of London); Liudas Giraitis (Queen Mary University of London) 
Abstract:  A relatively simple frequency-type testing procedure for a unit root potentially contaminated by additive stationary noise is introduced, which encompasses general settings and allows for linear trends. The proposed test of a unit root versus stationarity is based on a finite number of periodograms computed at low Fourier frequencies. It is not sensitive to the selection of the tuning parameters defining the range of frequencies, so long as they are in the vicinity of zero. The test does not require augmentation, has a parameter-free non-standard asymptotic distribution, and is correctly sized. The consistency rate under the alternative of stationarity reveals the relation between the power of the test and the long-run variance of the process. The finite sample performance of the test is explored in a Monte Carlo simulation study, and its empirical application suggests rejection of the unit root hypothesis for some of the Nelson-Plosser time series. 
Keywords:  Unit root test, Additive noise, Parameter-free distribution 
JEL:  C21 C23 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp746&r=ecm 
By:  Javier Hidalgo; Marcia M Schafgans 
Abstract:  This paper is concerned with various issues related to inference in large dynamic panel data models (where both n and T increase without bound) in the presence of, possibly, strong cross-sectional dependence. Our first aim is to provide a Central Limit Theorem for estimators of the slope parameters of the model under mild conditions. To that end, we extend and modify existing results available in the literature. Our second aim is to study two similar tests for breaks/homogeneity in the time dimension. The first test is based on the CUSUM principle, whereas the second is based on a Hausman-Durbin-Wu approach. A key feature of the tests is that they have nontrivial power when the number of individuals for which the slope parameters may differ is a "negligible" fraction, or when the break happens to be towards the end of the sample. Because the asymptotic distribution of the tests may not provide a good approximation to their finite sample distribution, we describe a simple bootstrap algorithm to obtain (asymptotically) valid critical values for our statistics. An important and surprising feature of the bootstrap is that there is no need to know the underlying model of the cross-sectional dependence; hence the bootstrap does not require selecting any bandwidth parameter for its implementation, as is the case with moving block bootstrap methods, which may not be valid under cross-sectional dependence and may depend on the particular ordering of the individuals. Finally, we present a Monte Carlo simulation analysis to shed some light on the small sample behaviour of the tests and their bootstrap analogues. 
Keywords:  Large panel data, dynamic models, cross-sectional strong dependence, central limit theorems, homogeneity, bootstrap algorithms 
JEL:  C12 C13 C23 
Date:  2015–04 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:583&r=ecm 
By:  Javier Hidalgo; Jungyoon Lee 
Abstract:  This paper examines a nonparametric CUSUM-type test for common trends in large panel data sets with individual fixed effects. We consider, as in Zhang, Su and Phillips (2012), a partial linear regression model with unknown functional form for the trend component, although our test does not involve local smoothing. This conveniently forgoes the need to choose a bandwidth parameter, which, in the absence of a clear and sensible information criterion, is difficult to do for testing purposes. We are able to do so by exploiting the fact that the number of individuals increases without bound. After removing the parametric component of the model, when the errors are homoscedastic, our test statistic converges to a Gaussian process whose critical values are easily tabulated. We also examine the consequences of heteroscedasticity and discuss the problem of how to compute valid critical values given the very complicated covariance structure of the limiting process. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample performance of the test. 
Keywords:  Common Trends, large data set, Partial linear models, Bootstrap algorithms 
JEL:  C12 C13 C23 
Date:  2014–08 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:576&r=ecm 
By:  P. Burdejova; W.K. Härdle; Kokoszka; Q. Xiong 
Abstract:  Motivated by the conjectured existence of trends in the intensity of tropical storms, this paper proposes new inferential methodology to detect a trend in the annual pattern of environmental data. The new methodology can be applied to data which can be represented as annual curves which evolve from year to year. Other examples include annual temperature or log-precipitation curves at specific locations. Within the framework of a functional regression model, we derive two tests of significance of the slope function, which can be viewed as the slope coefficient in the regression of the annual curves on year. One of the tests relies on a Monte Carlo distribution to compute the critical values; the other is pivotal, with a chi-square limit distribution. Full asymptotic justification of both tests is provided. Their finite sample properties are investigated by a simulation study. Applied to tropical storm data, these tests show that there is a significant trend in the shape of the annual pattern of upper wind speed levels of hurricanes. 
Keywords:  change point, trend test, tropical storms, expectiles, functional data analysis 
JEL:  C12 C15 C32 Q54 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2015029&r=ecm 
By:  Taisuke Otsu; Yoshiyasu Rai 
Abstract:  Abadie and Imbens (2008) showed that the standard naive bootstrap is inconsistent for estimating the distribution of the matching estimator for treatment effects with a fixed number of matches. This article proposes an asymptotically valid inference method for the matching estimators based on the wild bootstrap. The key idea is to resample not only the regression residuals of treated and untreated observations but also those used to estimate the average treatment effects. The proposed method is valid even in the case of vector covariates by incorporating the bias correction method of Abadie and Imbens (2011), and it applies both to the average treatment effect and to its counterpart for the treated population. A simulation study indicates that our wild bootstrap method compares favorably with the asymptotic normal approximation. As an empirical illustration, we apply our bootstrap method to the National Supported Work data. 
Keywords:  Treatment effect, matching, bootstrap 
JEL:  C21 
Date:  2015–01 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:580&r=ecm 
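The wild bootstrap idea underlying the paper above — perturbing estimated residuals with random signs rather than resampling observations — can be sketched in its generic regression form. This is not Otsu and Rai's matching-specific scheme, and all names are illustrative.

```python
import numpy as np

def wild_bootstrap_se(X, y, n_boot=500, rng=None):
    """Wild bootstrap standard errors for OLS coefficients.
    Residuals are multiplied by Rademacher (+/-1) weights, which
    preserves any heteroskedasticity pattern in the residuals."""
    rng = np.random.default_rng(0) if rng is None else rng
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ b
    draws = []
    for _ in range(n_boot):
        v = rng.choice([-1.0, 1.0], size=len(y))   # Rademacher weights
        y_star = X @ b + resid * v                 # perturbed outcomes
        draws.append(np.linalg.lstsq(X, y_star, rcond=None)[0])
    return np.std(draws, axis=0, ddof=1)           # bootstrap standard errors
```

On homoskedastic simulated data, the bootstrap standard error of the slope should track the usual analytic OLS standard error.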
By:  Myung Hwan Seo; Yongcheol Shin 
Abstract:  This paper addresses the important and challenging issue of how best to model nonlinear asymmetric dynamics and cross-sectional heterogeneity simultaneously in the dynamic threshold panel data framework, in which both the threshold variable and the regressors are allowed to be endogenous. Depending on whether the threshold variable is strictly exogenous or not, we propose two different estimation methods: first-differenced two-step least squares and first-differenced GMM. The former exploits the strict exogeneity of the threshold variable to achieve super-consistency of the threshold estimator. We provide asymptotic distributions of both estimators. A bootstrap-based test for the presence of a threshold effect, as well as an exogeneity test for the threshold variable, are also developed. Monte Carlo studies provide support for our theoretical predictions. Finally, using UK and US company panel data, we provide two empirical applications investigating an asymmetric sensitivity of investment to cash flows and asymmetric dividend smoothing. 
Keywords:  Dynamic Panel Threshold Models, Endogenous Threshold Effects and Regressors, FDGMM and FD2SLS Estimation, Linearity Test, Exogeneity Test, Investment and Dividend Smoothing. 
JEL:  C13 C33 G31 G35 
Date:  2014–09 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:577&r=ecm 
By:  Karun Adusumilli; Taisuke Otsu 
Abstract:  We extend the method of empirical likelihood to cover hypotheses involving the Aumann expectation of random sets. By exploiting the properties of random sets, we convert the testing problem into one involving a continuum of moment restrictions, for which we propose two inferential procedures. The first, which we term marked empirical likelihood, corresponds to constructing a nonparametric likelihood for each moment restriction and assessing the resulting process. The second, termed sieve empirical likelihood, corresponds to constructing a likelihood for a vector of moments of growing dimension. We derive the asymptotic distributions under the null and under a sequence of local alternatives for both types of tests and prove their consistency. The applicability of these inferential procedures is demonstrated in two examples: the mean of interval observations and best linear predictors for interval outcomes. 
Date:  2014–06 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:574&r=ecm 
By:  Jungyoon Lee; Peter M Robinson 
Abstract:  An asymptotic theory is developed for nonparametric and semiparametric series estimation under general cross-sectional dependence and heterogeneity. A uniform rate of consistency, asymptotic normality, and sufficient conditions for convergence are established, and a data-driven studentization new to cross-sectional data is justified. The conditions accommodate various cross-sectional settings plausible in economic applications, and apply also to panel and time series data. Strong as well as weak dependence is covered, and conditional heteroscedasticity is allowed. 
Keywords:  Series estimation, Nonparametric regression, Spatial data, Cross-sectional dependence, Uniform rate of consistency, Functional central limit theorem, Data-driven studentization 
JEL:  C12 C13 C14 C21 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:570&r=ecm 
By:  Taisuke Otsu; Martin Pesendorfer; Yuya Takahashi 
Abstract:  This paper proposes several statistical tests for finite state Markov games to examine the null hypothesis that data from distinct markets can be pooled. We formulate tests of (i) the conditional choice and state transition probabilities, (ii) the steady-state distribution, and (iii) the conditional state distribution given an initial state. If the null cannot be rejected, the data across markets can be pooled; a rejection implies that they cannot. In a Monte Carlo study we find that the test based on the steady-state distribution performs well and has high power even with small numbers of markets and time periods. We apply the tests to the empirical study of Ryan (2012), which analyzes the dynamics of the U.S. Portland Cement industry, and assess whether the single equilibrium assumption is supported by the data. 
Keywords:  Dynamic Markov game, Poolability, Multiplicity of equilibria, Hypothesis testing 
JEL:  C12 C72 D44 
Date:  2015–03 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:582&r=ecm 
By:  Wojciech Charemza; Carlos Díaz; Svetlana Makarova 
Abstract:  The paper discusses the consequences of possible misspecification in fitting skew normal distributions to empirical data. It is shown, through numerical experiments, that it is easy to choose a distribution different from the one that generated the sample if the minimum distance criterion is used. The distributions compared are the two-piece normal, the weighted skew normal and the generalized Balakrishnan skew normal distribution, which covers a variety of other skew normal distributions, including the Azzalini distribution. The estimation method applied is simulated minimum distance estimation with the Hellinger distance. It is suggested that, when different distributions yield similar values of the distance measure, the choice should be made on the grounds of the parameters' interpretation rather than goodness of fit. For monetary policy analysis, this suggests applying the weighted skew normal distribution, whose parameters are directly interpretable as signals and outcomes of monetary decisions. This is supported by empirical evidence from fitting different skew normal distributions to the ex-post monthly inflation forecast errors for Poland, Russia, Ukraine and the U.S.A., where the estimates do not allow a clear distinction between the fitted distributions for Poland and the U.S.A. 
Keywords:  Skew Normal Distributions, Ex-post Uncertainty, Inflation Forecasting, Economic Policy 
JEL:  E17 C46 E52 E37 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:15/08&r=ecm 
By:  Tatiana Komarova 
Abstract:  The paper considers nonparametric estimation of absolutely continuous distribution functions of lifetimes of non-identical components in k-out-of-n systems from observed "autopsy" data. In economics, ascending "button" or "clock" auctions with n heterogeneous bidders present 2-out-of-n systems. Classical competing risks models are examples of n-out-of-n systems. Under weak conditions on the underlying distributions the estimation problem is shown to be well-posed and the suggested extremum sieve estimator is proven to be consistent. The paper illustrates the suggested estimation method using sieve spaces of Bernstein polynomials, which allow an easy implementation of constraints on the monotonicity of the estimated distribution functions. 
Keywords:  k-out-of-n systems, competing risks, sieve estimation, Bernstein polynomials 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:564&r=ecm 
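The monotonicity device mentioned in the abstract above — Bernstein polynomial sieves make shape constraints linear in the coefficients — can be seen in a minimal evaluation routine: if the coefficients are nondecreasing, the resulting function on [0,1] is nondecreasing, which is exactly the constraint needed for an estimated distribution function. The sketch is ours, not the paper's estimator.

```python
import numpy as np
from math import comb

def bernstein(coefs, x):
    """Evaluate the Bernstein polynomial sum_k c_k * B_{k,m}(x) on [0,1],
    where B_{k,m}(x) = C(m,k) x^k (1-x)^(m-k).
    If c_0 <= ... <= c_m, the polynomial is nondecreasing, so imposing
    monotonicity on the sieve estimate reduces to ordering the coefficients."""
    m = len(coefs) - 1
    x = np.asarray(x, dtype=float)
    basis = np.array([comb(m, k) * x**k * (1 - x)**(m - k) for k in range(m + 1)])
    return coefs @ basis
```

Note also that the polynomial interpolates the endpoint coefficients: it equals c_0 at x = 0 and c_m at x = 1, so fixing c_0 = 0 and c_m = 1 pins down the boundary values of a distribution function on [0,1].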
By:  Miguel A. Delgado; Peter M Robinson 
Abstract:  We develop non-nested tests in a general spatial, spatio-temporal or panel data context. The spatial aspect can be interpreted quite generally: in a geographical sense, through notions of economic distance, or even when parametric modelling arises in part from a common factor or other structure. In the first case, observations may be regularly spaced across one or more dimensions, as is typical of much spatio-temporal data, or irregularly spaced across all dimensions; both isotropic and non-isotropic models, and a wide variety of correlation structures, can be considered. In the second case, models involving spatial weight matrices are covered, such as "spatial autoregressive models". The setting is sufficiently general to potentially cover other parametric structures such as certain factor models and vector-valued observations, and here our preliminary asymptotic theory for parameter estimates is of some independent value. The test statistic is based on a Gaussian pseudo-likelihood ratio and is shown to have an asymptotic standard normal distribution under the null hypothesis that one of the two models is correct. A small Monte Carlo study of finite-sample performance is included. 
Keywords:  Non-nested test, spatial correlation, pseudo maximum likelihood estimation 
JEL:  C12 C21 
Date:  2013–11 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:568&r=ecm 
By:  Reese, Simon (Department of Economics, Lund University) 
Abstract:  The most popular approach to modelling and forecasting mortality rates is the model of Lee and Carter (Modeling and Forecasting U.S. Mortality, Journal of the American Statistical Association, 87, 659-671, 1992). The popularity of the model rests mainly on its good fit to the data, while its theoretical properties remain obscure. The present paper provides asymptotic results for the Lee-Carter model and formally illustrates its inherent weaknesses. Requirements on the underlying data are established, and variance estimators are presented in order to allow hypothesis testing and the computation of confidence intervals. 
Keywords:  Lee-Carter model; mortality; common factor models; panel data 
JEL:  C33 C51 C53 J11 
Date:  2015–05–26 
URL:  http://d.repec.org/n?u=RePEc:hhs:lunewp:2015_016&r=ecm 
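For readers unfamiliar with the Lee-Carter model discussed above, a minimal SVD-based fit of log m_{x,t} = a_x + b_x k_t (the standard estimation approach, not Reese's asymptotic analysis) can be sketched as follows; function and variable names are ours.

```python
import numpy as np

def lee_carter(log_m):
    """Fit the Lee-Carter model log m_{x,t} = a_x + b_x * k_t by SVD of
    the row-centred log mortality matrix (ages in rows, years in columns),
    with the usual normalisation sum(b) = 1. The condition sum(k) = 0
    holds automatically because each centred row sums to zero."""
    a = log_m.mean(axis=1)                               # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b_raw = U[:, 0] * s[0]                               # leading left singular direction
    scale = b_raw.sum()
    return a, b_raw / scale, Vt[0] * scale               # (a_x, b_x, k_t)
```

On an exactly rank-one panel the decomposition is recovered without error, which is a convenient sanity check before fitting noisy data.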
By:  Yanyun Zhao 
Abstract:  In this paper we consider adaptive Bayesian semiparametric analysis of the linear regression model in the presence of conditional heteroskedasticity. The conditional distribution of the error term given the predictors is modelled by a normal distribution with covariate-dependent variance. We show that the procedure is rate-adaptive over all smoothness levels of this standard deviation function if the prior is properly chosen. More specifically, we derive the adaptive posterior contraction rate, up to a logarithmic factor, for the conditional standard deviation based on a transformation of a hierarchical Gaussian spline prior and a log-spline prior, respectively. 
Keywords:  Bayesian linear regression, Conditional heteroskedasticity, Rate of convergence, Posterior distribution, Adaptation, Hierarchical Gaussian spline prior, Log-spline prior 
Date:  2015–04 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1504&r=ecm 
By:  Lafit, Ginette; Nogales Martín, Francisco Javier; Zamar, Rubén 
Abstract:  In this article we present an approach to ranking edges in a network modelled through a Gaussian Graphical Model. We obtain a path of precision matrices such that, at each step of the procedure, an edge is added. We also guarantee that the matrices along the path are symmetric and positive definite. To select the edges, we estimate the covariates that have the largest absolute correlation with a node conditional on the set of edges estimated in previous iterations. Simulation studies show that the procedure is able to detect true edges until the sparsity level of the population network is recovered. Moreover, it can efficiently add true edges in the early iterations while avoiding false ones. We show that the top-ranked edges are associated with the variables with the largest partial correlations. Finally, we compare the graph recovery performance with that of Glasso under different settings. 
Keywords:  High-dimensional statistics, Precision Matrix, Covariance selection, Gaussian Graphical Models, Edge Ranking, Least Angle Regression 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws1511&r=ecm 
By:  Aldanondo, Ana M.; Casasnovas, Valero L. 
Abstract:  The results of an experiment with simulated data show that combining inputs according to different criteria (such as cost, material input aggregates and others) increases the accuracy of the Data Envelopment Analysis (DEA) technical efficiency estimator in data sets with dimensionality problems. The positive impact of this approach surpasses that of merely reducing the number of variables, since replacing the original inputs with an equal number of aggregates improves DEA performance in a wide range of cases. 
Keywords:  Technical efficiency, Aggregation bias, Monte Carlo, DEA Estimator accuracy 
JEL:  C14 C61 D20 
Date:  2015–04 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:64120&r=ecm 
By:  AndréMarie Taptué 
Abstract:  This paper shows how to compare the size of the middle class across income distributions using a polarization index that does not account for identification. We derive a class of polarization indices in which the antagonism function is constant in identification. Comparing distributions with an index from this class motivates the introduction of an alienation dominance surface, which is a function of an alienation threshold. We first prove that one distribution has a larger alienation component of polarization than another if the former always has a larger dominance surface, regardless of the value of the alienation threshold. We then show that the distribution with the larger dominance surface is more concentrated in the tails and has a smaller middle class. We implement statistical inference and test dominance between pairs of distributions using asymptotic theory and intersection-union tests. Our methodology is illustrated by comparing the decline of the middle class across pairwise distributions of twenty-two countries from the Luxembourg Income Study database. 
Keywords:  Alienation, Identification, Middle class, Polarization 
JEL:  C15 D31 D63 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1511&r=ecm 
By:  Andrew Binning (Norges Bank (Central Bank of Norway)); Junior Maih (Norges Bank (Central Bank of Norway) and BI Norwegian Business School) 
Abstract:  In this paper we take three well-known Sigma Point Filters, namely the Unscented Kalman Filter, the Divided Difference Filter, and the Cubature Kalman Filter, and extend them to allow for a very general class of dynamic nonlinear regime switching models. Using both a Monte Carlo study and real data, we investigate the properties of our proposed filters with a regime switching DSGE model solved using nonlinear methods. We find that the proposed filters perform well. They are both fast and reasonably accurate, and as a result they provide practitioners with a convenient alternative to Sequential Monte Carlo methods. We also investigate the concept of observability and its implications in the context of the nonlinear filters developed, and propose some heuristics. Finally, we provide, in the RISE toolbox, the code implementing these three novel filters. 
Keywords:  Regime Switching, Higher-order Perturbation, Sigma Point Filters, Nonlinear DSGE estimation, Observability 
Date:  2015–05–18 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2015_10&r=ecm 
By:  Peter M Robinson; Francesca Rossi 
Abstract:  For testing lack of correlation against spatial autoregressive alternatives, Lagrange multiplier tests enjoy their usual computational advantages, but the first-order chi-squared asymptotic approximation to critical values can be poor in small samples. We develop refined tests for lack of spatial error correlation in regressions, based on Edgeworth expansion. In Monte Carlo simulations these tests, and bootstrap ones, generally significantly outperform chi-squared-based tests. 
Keywords:  Spatial autocorrelation, Lagrange multiplier test, Edgeworth expansion, bootstrap, finitesample corrections. 
JEL:  C29 
Date:  2013–10 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:566&r=ecm 
By:  Filippo Ferroni; Stefano Grassi; Miguel A. LeonLedesma 
Abstract:  DSGE models are typically estimated assuming the existence of certain structural shocks that drive macroeconomic fluctuations. We analyze the consequences of introducing non-fundamental shocks for the estimation of DSGE model parameters and propose a method to select the structural shocks driving uncertainty. We show that forcing the existence of non-fundamental structural shocks produces a downward bias in the estimated internal persistence of the model. We then show how these distortions can be reduced by allowing the covariance matrix of the structural shocks to be rank deficient, using priors for standard deviations whose support includes zero. The method allows us to accurately select fundamental shocks and estimate model parameters with precision. Finally, we revisit the empirical evidence on an industry-standard medium-scale DSGE model and find that government, price, and wage markup shocks are non-fundamental. 
Keywords:  Reduced rank covariance matrix; DSGE models; stochastic dimension search 
JEL:  C10 E27 E32 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:ukc:ukcedp:1508&r=ecm 
By:  Roman Horváth (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábreží 6, 111 01 Prague 1, Czech Republic; Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic, Pod Vodarenskou Vezi 4, 182 00, Prague, Czech Republic); Boril Sopov (Institute of Economic Studies, Faculty of Social Sciences, Charles University in Prague, Smetanovo nábreží 6, 111 01 Prague 1, Czech Republic) 
Abstract:  We perform a large simulation study to examine the extent to which various generalized autoregressive conditional heteroskedasticity (GARCH) models capture extreme events in stock market returns. We estimate Hill's tail indexes for individual S&P 500 stock market returns from 1995 to 2014 and compare these to the tail indexes produced by simulating GARCH models. Our results suggest that actual and simulated values differ greatly for GARCH models with normal conditional distributions, which underestimate the tail risk. By contrast, the GARCH models with Student's t conditional distributions capture the tail shape more accurately, with GARCH and GJR-GARCH being the top performers. 
Keywords:  GARCH, extreme events, S&P 500 study, tail index 
JEL:  C15 C58 G17 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:fau:wpaper:wp2015_09&r=ecm 
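The tail-index comparison described in the abstract above can be sketched by applying the Hill estimator to returns simulated from a GARCH(1,1) with Student's t innovations. This is only an illustrative outline, not the paper's exact procedure; the parameter values (`omega`, `alpha`, `beta`, `nu`) are assumed for the example.

```python
import numpy as np

def simulate_garch_t(n, omega=1e-6, alpha=0.09, beta=0.90, nu=5, seed=0):
    # GARCH(1,1) returns with Student-t innovations scaled to unit variance;
    # parameter values are illustrative, not estimates from the paper.
    rng = np.random.default_rng(seed)
    z = rng.standard_t(nu, n) * np.sqrt((nu - 2) / nu)
    r = np.empty(n)
    s2 = omega / (1.0 - alpha - beta)      # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(s2) * z[t]
        s2 = omega + alpha * r[t] ** 2 + beta * s2
    return r

def hill_index(x, k):
    # Hill estimator of the tail index from the k largest absolute values
    a = np.sort(np.abs(x))[::-1]           # descending order statistics
    return k / np.sum(np.log(a[:k] / a[k]))
```

A simulated-versus-actual comparison then amounts to computing `hill_index` on the observed returns and on many simulated paths, for a range of `k`.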
By:  Sylvain Barde 
Abstract:  The present paper aims to test a new model comparison methodology by calibrating and comparing three agent-based models of financial markets on the daily returns of 18 indices. The models chosen for this empirical application are the herding model of Gilli & Winker, its asymmetric version by Alfarano, Lux & Wagner and the more recent model by Franke & Westerhoff, which all share a common lineage to the herding model introduced by Kirman (1993). In addition, standard ARCH processes are included for each financial series to provide a benchmark for the explanatory power of the models. The methodology provides a clear and consistent ranking of the three models. More importantly, it also reveals that the best performing model, Franke & Westerhoff, is generally not distinguishable from an ARCH-type process, suggesting that its explanatory power on the data is similar. 
Keywords:  Model selection; agent-based models; herding behaviour 
JEL:  C15 C52 G12 
Date:  2015–04 
URL:  http://d.repec.org/n?u=RePEc:ukc:ukcedp:1507&r=ecm 
By:  Clément Goulet (Centre d'Economie de la Sorbonne); Dominique Guegan (Centre d'Economie de la Sorbonne); Philippe De Peretti (Centre d'Economie de la Sorbonne) 
Abstract:  In this paper we introduce a simple method to compute the empirical distribution of the Lyapunov exponent, which allows one to test whether a dynamical system is chaotic or not. The key question is whether one should forecast a time series with a stochastic approach or with a chaotic evolution function inside a phase space. Our method is based on a maximum entropy bootstrap. This algorithm allows for heterogeneity in the time series, including nonstationarity or jumps. The estimators obtained satisfy both ergodic and central limit theorems. To our knowledge, this is the first time such a technique has been used to estimate the empirical distribution of the Lyapunov exponent. We apply our algorithm to the Lorenz and Rössler systems. Finally, applications to financial data are presented. 
Keywords:  Chaos; Lyapunov exponent; maximum entropy; bootstrapping; empirical distribution 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:mse:cesdoc:15045&r=ecm 
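As a point of reference for the Lorenz application above, the largest Lyapunov exponent itself can be estimated by the classical Benettin two-trajectory method. The sketch below does not reproduce the paper's contribution (the empirical distribution via a maximum entropy bootstrap); the step size and cycle counts are assumed values.

```python
import numpy as np

def lorenz(v, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, v, dt):
    # one fourth-order Runge-Kutta step
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def largest_lyapunov(f=lorenz, dt=0.01, n_transient=2000,
                     n_cycles=300, steps_per_cycle=100, d0=1e-8):
    # Benettin method: track the divergence of a tiny perturbation,
    # renormalising the separation back to d0 after every cycle.
    v = np.array([1.0, 1.0, 1.0])
    for _ in range(n_transient):          # settle onto the attractor
        v = rk4_step(f, v, dt)
    w = v + np.array([d0, 0.0, 0.0])
    log_sum = 0.0
    for _ in range(n_cycles):
        for _ in range(steps_per_cycle):
            v = rk4_step(f, v, dt)
            w = rk4_step(f, w, dt)
        d = np.linalg.norm(w - v)
        log_sum += np.log(d / d0)
        w = v + (w - v) * (d0 / d)        # renormalise the separation
    return log_sum / (n_cycles * steps_per_cycle * dt)
```

For the standard Lorenz parameters this should return a value near the known exponent of roughly 0.9, i.e., a positive exponent signalling chaos.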
By:  Claude Godreche; Satya N. Majumdar; Gregory Schehr 
Abstract:  We investigate the statistics of records in a random sequence $\{x_B(0)=0,x_B(1),\cdots, x_B(n)=x_B(0)=0\}$ of $n$ time steps. Here $x_B(k)$ represents the position at step $k$ of a random walk `bridge' of $n$ steps that starts and ends at the origin. At each step, the increment of the position is a random jump drawn from a specified symmetric distribution. We study the statistics of records and record ages for such a bridge sequence, for different jump distributions. In the absence of the bridge condition, i.e., for a free random walk sequence, the statistics of the number and ages of records exhibit a `strong' universality for all $n$: they are completely independent of the jump distribution as long as the distribution is continuous. We show that the presence of the bridge constraint destroys this strong `all $n$' universality. Nevertheless, a `weaker' universality still remains for large $n$: we show that the record statistics depend on the jump distribution only through a single parameter $0<\mu\le 2$, known as the L\'evy index of the walk, and are insensitive to the other details of the jump distribution. We derive the most general results (for arbitrary jump distributions) wherever possible and also present two exactly solvable cases. We present numerical simulations that verify our analytical results. 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1505.06053&r=ecm 
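For Gaussian increments (the $\mu = 2$ case of the abstract above), a bridge can be built from a free walk by subtracting the linear interpolation of its endpoint, and records counted directly. A minimal Monte Carlo sketch, with the Gaussian-specific bridge construction as the stated assumption:

```python
import numpy as np

def count_records(x):
    # number of strict upper records in the sequence x[0], x[1], ...
    # (the initial position always counts as the first record)
    best = -np.inf
    r = 0
    for v in x:
        if v > best:
            best = v
            r += 1
    return r

def gaussian_bridge(n, rng):
    # Random-walk bridge with Gaussian jumps: take a free walk and
    # subtract the linear interpolation of its endpoint, which is a
    # valid bridge construction for Gaussian increments only.
    steps = rng.standard_normal(n)
    w = np.concatenate(([0.0], np.cumsum(steps)))
    k = np.arange(n + 1)
    return w - (k / n) * w[-1]
```

Averaging `count_records(gaussian_bridge(n, rng))` over many draws then gives a Monte Carlo estimate of the mean record number, which can be checked against the large-$n$ predictions.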
By:  Kelly D.T. Trinh (School of Economics, The University of Queensland); Valentin Zelenyuk (School of Economics, The University of Queensland) 
Abstract:  Traditional data envelopment analysis (DEA) views a production technology process as a 'black box', while network DEA allows a researcher to look into the 'black box' to evaluate the overall performance and the performance of each subprocess of the system. The technical efficiency scores calculated from these approaches can be slightly, or sometimes vastly, different. Our aim is to develop two bootstrap-based algorithms to test whether any observed difference between the results from the two approaches is statistically significant, or whether it is due to sampling and estimation noise. We focus on testing the equality of the first moment (i.e., the mean) and of the entire distribution of the technical efficiency scores. The bootstrap-based procedures can also be used for pairwise comparison between two network DEA models to perform sensitivity analysis of the resulting estimates across various network structures. In our empirical illustration of non-life insurance companies in Taiwan, both algorithms provide fairly robust results. We find statistical evidence suggesting that the first moment and the entire distribution of the overall technical efficiencies are significantly different between the DEA and network DEA models. However, the differences are not statistically significant for the two subprocesses across these models. 
Keywords:  DEA, Network DEA, Subsampling Bootstrap 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:qld:uqcepa:103&r=ecm 
By:  Javier Hidalgo; Pedro Souza 
Abstract:  Practitioners nowadays frequently face the problem of modelling large data sets. Relevant examples include spatiotemporal or panel data models with large N and T. In these cases, choosing a particular dynamic model for each individual or population, a choice that plays a crucial role in prediction and inference, can be an onerous and complex task. The aim of this paper is thus to examine a nonparametric test for the equality of the linear dynamic models as the number of individuals increases without bound. The test has two main features: (a) there is no need to choose any bandwidth parameter and (b) the asymptotic distribution of the test is a normal random variable. 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:563&r=ecm 
By:  André-Marie Taptué 
Abstract:  In the context of polarized societies, income homogeneity is linked to the frequency and the intensity of social unrest. More homogeneous countries exhibit a lower frequency of intense social conflicts, and less homogeneous countries show a higher frequency of moderate social conflicts. This paper develops a methodology to compare the degree of homogeneity of two income distributions. For that purpose we use an index of polarization that does not account for alienation. This index is the identification component of polarization, which measures the degree to which individuals feel alike in an income distribution. This development leads to identification dominance curves and derives first-order and higher-order stochastic dominance conditions. First-order stochastic dominance is performed through identification dominance curves drawn on a support of identification thresholds. These curves are used to determine whether identification, homogeneity, or similarity of individuals is greater in one distribution than in another for general classes of polarization indices and ranges of possible identification thresholds. We also derive the asymptotic sampling distribution of identification dominance curves and test dominance between two distributions using intersection-union tests and bootstrapped p-values. Our methodology is illustrated by comparing pairs of distributions of eleven countries drawn from the Luxembourg Income Study database. 
Keywords:  Alienation, Identification, Polarization, Stochastic dominance 
JEL:  C15 D31 D63 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:lvl:lacicr:1512&r=ecm 
By:  Archil Gulisashvili; Frederi Viens; Xin Zhang 
Abstract:  We consider the class of self-similar Gaussian stochastic volatility models, and compute the small-time (near-maturity) asymptotics for the corresponding asset price density, the call and put pricing functions, and the implied volatilities. Unlike the well-known model-free behavior for extreme-strike asymptotics, the small-time behaviors of the above depend heavily on the model, and require a control of the asset price density which is uniform with respect to the asset price variable in order to translate into results for call prices and implied volatilities. Away from the money, we express the asymptotics explicitly using the volatility process' self-similarity parameter H, its first Karhunen-Lo\`{e}ve eigenvalue at time 1, and the latter's multiplicity. Several model-free estimators for H result. At the money, a separate study is required: the asymptotics for small time depend instead on the integrated variance's moments of orders 1/2 and 3/2, and the estimator for H sees an affine adjustment, while remaining model-free. 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1505.05256&r=ecm 
By:  Grigoryeva, Lyudmila; Ortega, JuanPablo; Peresetsky, Anatoly 
Abstract:  This paper introduces a method, based on various linear and nonlinear state-space models, that uses nonsynchronous data to extract global stochastic financial trends (GST). These models are specifically constructed to take advantage of the intraday arrival of closing information coming from different international markets in order to improve the quality of volatility description and forecasting performance. A set of three major asynchronous international stock market indices is used in order to empirically show that this forecasting scheme is capable of significant performance improvements when compared with those obtained with standard models like the dynamic conditional correlation (DCC) family. 
Keywords:  multivariate volatility modeling and forecasting, global stochastic trend, extended Kalman filter, CAPM, dynamic conditional correlations (DCC), nonsynchronous data 
JEL:  C32 C5 
Date:  2015 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:64503&r=ecm 
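The state-space machinery underlying the trend extraction above can be illustrated in its simplest form, a univariate local-level Kalman filter. The paper's models are multivariate and partly nonlinear (extended Kalman filter), so this is only a schematic sketch, with the noise variances `q` and `r` as assumed inputs.

```python
import numpy as np

def local_level_filter(y, q, r):
    """Kalman filter for the local-level model
        y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, q)
    Returns the filtered estimates of the latent trend mu_t."""
    n = len(y)
    mu = np.empty(n)
    m, p = y[0], r + q                 # initialise at the first observation
    for t in range(n):
        p = p + q                      # predict: state variance grows
        k = p / (p + r)                # Kalman gain
        m = m + k * (y[t] - m)         # update with observation y_t
        p = (1.0 - k) * p
        mu[t] = m
    return mu
```

In the paper's setting the observation vector would stack closing prices arriving at different times of day, with the common GST playing the role of the latent state.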
By:  Robert Engle (New York University Stern School of Business); Emil Siriwardane (Office of Financial Research) 
Abstract:  We propose a new model of volatility where financial leverage amplifies equity volatility by what we call the "leverage multiplier." The exact specification is motivated by standard structural models of credit; however, our parametrization departs from the classic Merton (1974) model and can accommodate environments where the firm's asset volatility is stochastic, asset returns can jump, and asset shocks are nonnormal. In addition, our specification nests both a standard GARCH and the Merton model, which allows for a statistical test of how leverage interacts with equity volatility. Empirically, the Structural GARCH model outperforms a standard asymmetric GARCH model for approximately 74 percent of the financial firms we analyze. We then apply the Structural GARCH model to two empirical applications: the leverage effect and systemic risk measurement. As a part of our systemic risk analysis, we define a new measure called "precautionary capital" that uses our model to quantify the advantages of regulation aimed at reducing financial firm leverage. 
Keywords:  Structural GARCH, Volatility, Leverage 
Date:  2014–10–23 
URL:  http://d.repec.org/n?u=RePEc:ofr:wpaper:1407&r=ecm 
By:  Wojciech Charemza; Carlos Díaz; Svetlana Makarova 
Abstract:  The paper introduces the concept of conditional inflation forecast uncertainty. It is proposed that the joint and conditional distributions of the bivariate forecast uncertainty can be derived by estimating the unconditional distributions of these uncertainties and applying an appropriate copula function. Empirical results have been obtained for Canada and the US. The term structure has been evaluated in the form of unconditional and conditional probabilities of hitting the inflation range of ±1% around the Canadian inflation target. The paper suggests a new measure of inflation forecast uncertainty that accounts for possible inter-country dependence. It is shown that the evaluation of targeting precision can be effectively improved with the use of ex-ante formulated conditional and unconditional probabilities of inflation being within the predefined band around the target. 
Keywords:  Macroeconomic Forecasting, Inflation, Uncertainty, Nonnormality, Density Forecasting, Forecast Term Structure, Copula Modelling 
JEL:  C53 E37 E52 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:15/07&r=ecm 
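The copula step above, a joint probability of two forecast uncertainties falling in bands around their targets, can be sketched with a Gaussian copula. The Gaussian family is one possible choice assumed here for illustration; the abstract does not commit to a specific copula.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_copula_cdf(u, v, rho):
    # C(u, v) = Phi_2(Phi^-1(u), Phi^-1(v); rho)
    cov = [[1.0, rho], [rho, 1.0]]
    return multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf(
        [norm.ppf(u), norm.ppf(v)])

def joint_band_probability(F1, F2, band1, band2, rho):
    """P(band1[0] < X <= band1[1], band2[0] < Y <= band2[1]) under a
    Gaussian copula with correlation rho and marginal CDFs F1, F2,
    via the rectangle (inclusion-exclusion) formula."""
    u0, u1 = F1(band1[0]), F1(band1[1])
    v0, v1 = F2(band2[0]), F2(band2[1])
    C = lambda u, v: gaussian_copula_cdf(u, v, rho)
    return C(u1, v1) - C(u0, v1) - C(u1, v0) + C(u0, v0)
```

With the estimated marginal uncertainty distributions as `F1` and `F2` and the ±1% target band as `band1`/`band2`, the conditional probability follows by dividing the joint probability by the marginal band probability.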
By:  Wojciech Charemza; Carlos Díaz; Svetlana Makarova 
Abstract:  Empirical evaluation of macroeconomic uncertainty and its use for probabilistic forecasting are investigated. New indicators of forecast uncertainty, which either include or exclude effects of macroeconomic policy, are developed. These indicators are derived from the weighted skew normal distribution proposed in this paper, whose parameters are interpretable in relation to monetary policy outcomes and actions. This distribution is fitted to forecast errors, obtained recursively, of annual inflation recorded monthly for 38 countries. The forecast uncertainty term structure is evaluated for the U.K. and U.S. using the new indicators and compared with earlier results. This paper has supplementary material. 
Keywords:  forecast term structure, macroeconomic forecasting, monetary policy, nonnormality 
JEL:  C54 E37 E52 
Date:  2015–05 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:15/09&r=ecm 