
on Econometrics 
By:  Thorsten Dickhaus
Abstract:  Based on the theory of multiple statistical hypothesis testing, we elaborate simultaneous statistical inference methods in dynamic factor models. In particular, we employ structural properties of multivariate chi-squared distributions in order to construct critical regions for vectors of likelihood ratio statistics in such models. In this, we make use of the asymptotic distribution of the vector of test statistics for large sample sizes, assuming that the model is identified and model restrictions are testable. Examples of important multiple test problems in dynamic factor models demonstrate the relevance of the proposed methods for practical applications. 
Keywords:  familywise error rate, false discovery rate, likelihood ratio statistic, multiple hypothesis testing, multivariate chi-squared distribution, time series regression, Wald statistic 
JEL:  C12 C32 C52 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012033&r=ecm 
By:  Hayakawa, K.; Pesaran, M.H. 
Abstract:  This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models by Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial due to the incidental parameters problem that arises, and its implications for estimation and inference. We approach the problem by working with a misspecified homoskedastic model. It is shown that the transformed maximum likelihood estimator continues to be consistent even in the presence of cross-sectional heteroskedasticity. We also obtain standard errors that are robust to cross-sectional heteroskedasticity of unknown form. By means of Monte Carlo simulation, we investigate the finite sample behavior of the transformed maximum likelihood estimator and compare it with various GMM estimators proposed in the literature. Simulation results reveal that, in terms of median absolute errors and accuracy of inference, the transformed likelihood estimator outperforms the GMM estimators in almost all cases. 
Keywords:  Dynamic Panels, Cross-sectional heteroskedasticity, Monte Carlo simulation, GMM estimation 
JEL:  C12 C13 C23 
Date:  2012–05–09 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1224&r=ecm 
By:  Eric Hillebrand (Aarhus University and CREATES); Tae-Hwy Lee (University of California, Riverside) 
Abstract:  We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased OLS estimator towards a restricted biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree of the Stein-rule shrinkage. We conduct Monte Carlo simulations to examine these parameter regions. The overall picture that emerges is that the Stein-rule shrinkage estimator can dominate both OLS and principal components estimators within an intermediate range of the signal-to-noise ratio. If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence. 
Keywords:  Stein-rule, shrinkage, risk, variance-bias trade-off, OLS, principal components. 
JEL:  C1 C2 C5 
Date:  2012–04–30 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201218&r=ecm 
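The OLS-toward-PC shrinkage described in this abstract can be sketched in a few lines. This is an illustrative combination only: the positive-part weight `w` below is a generic Stein-type rule, not the exact Hill and Judge (1987) formula, and all dimensions and parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p, k = 200, 10, 3                      # sample size, predictors, PCs kept

# simulate a linear model y = X @ beta + noise
X = rng.standard_normal((T, p))
beta = np.linspace(1.0, 0.1, p)
y = X @ beta + rng.standard_normal(T)

# unrestricted OLS estimator
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# restricted principal-component estimator: regress y on the first k PCs
# of X and map the coefficients back to the original predictor space
U, s, Vt = np.linalg.svd(X, full_matrices=False)
F = X @ Vt[:k].T                          # PC scores
g, *_ = np.linalg.lstsq(F, y, rcond=None)
b_pc = Vt[:k].T @ g

# Stein-rule combination: shrink OLS toward the PC estimator; the weight
# below is a generic positive-part shrinkage factor, not the paper's formula
resid = y - X @ b_ols
s2 = resid @ resid / (T - p)
dist = (b_ols - b_pc) @ (X.T @ X) @ (b_ols - b_pc)
w = max(0.0, 1.0 - (p - k - 2) * s2 / dist)
b_stein = b_pc + w * (b_ols - b_pc)
```

The combined estimator interpolates between the two extremes: when the data strongly reject the PC restriction (`dist` large), `w` is close to one and the estimator stays near OLS.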
By:  Abderrahim Taamouti; Taoufik Bouezmarni; Anouar El Ghouch 
Abstract:  We propose a nonparametric estimator and a nonparametric test for Granger causality measures that quantify linear and nonlinear Granger causality in distribution between random variables. We first show how to write the Granger causality measures in terms of copula densities. We suggest a consistent estimator for these causality measures based on nonparametric estimators of copula densities. Further, we prove that the nonparametric estimators are asymptotically normally distributed and we discuss the validity of a local smoothed bootstrap that we use in finite sample settings to compute a bootstrap bias-corrected estimator and test for our causality measures. A simulation study reveals that the bias-corrected bootstrap estimator of causality measures behaves well and the corresponding test has quite good finite sample size and power properties for a variety of typical data generating processes and different sample sizes. Finally, we illustrate the practical relevance of nonparametric causality measures by quantifying the Granger causality between S&P500 Index returns and many exchange rates (US/Canada, US/UK and US/Japan exchange rates). 
Keywords:  Causality measures, Nonparametric estimation, Time series, Copulas, Bernstein copula density, Local bootstrap, Conditional distribution function, Stock returns 
JEL:  C12 C14 C15 C19 G1 G12 E3 E4 
Date:  2012–03 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we1212&r=ecm 
By:  Taoufik Bouezmarni; Abderrahim Taamouti 
Abstract:  The concept of causality is naturally defined in terms of conditional distribution; however, almost all the empirical work focuses on causality in mean. This paper aims to propose a nonparametric statistic to test the conditional independence and Granger non-causality between two variables conditionally on another one. The test statistic is based on the comparison of conditional distribution functions using an L2 metric. We use the Nadaraya-Watson method to estimate the conditional distribution functions. We establish the asymptotic size and power properties of the test statistic and we motivate the validity of the local bootstrap. Further, we run a simulation experiment to investigate the finite sample properties of the test and we illustrate its practical relevance by examining the Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean-regression model, we find that the VIX index predicts excess returns both at short and long horizons. 
Keywords:  Nonparametric tests, Time series, Conditional independence, Granger non-causality, Nadaraya-Watson estimator, Conditional distribution function, VIX volatility index, S&P500 index 
JEL:  C12 C14 C15 C19 G1 G12 E3 E4 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we1211&r=ecm 
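The Nadaraya-Watson estimator of a conditional distribution function used in the test above can be sketched as a kernel-weighted empirical CDF. A minimal version with a Gaussian kernel and an arbitrary bandwidth, checked at a point where the true conditional CDF is known:

```python
import numpy as np

def nw_cond_cdf(x0, y0, X, Y, h):
    """Nadaraya-Watson estimate of P(Y <= y0 | X = x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)   # kernel weights around x0
    return np.sum(w * (Y <= y0)) / np.sum(w)

rng = np.random.default_rng(1)
n = 2000
X = rng.standard_normal(n)
Y = 0.5 * X + rng.standard_normal(n)       # Y | X=x is N(0.5 x, 1)

# at x0 = 0 the conditional median of Y is 0, so F(0 | 0) should be near 0.5
est = nw_cond_cdf(0.0, 0.0, X, Y, h=0.3)
```

The paper's test compares such estimated conditional CDFs (with and without the conditioning variable of interest) through an L2 distance.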
By:  Parrini, Alessandro 
Abstract:  Several studies have highlighted the fact that heavy-tailedness of asset returns can be the consequence of conditional heteroskedasticity. GARCH models have thus become very popular, given their ability to account for volatility clustering and, implicitly, heavy tails. However, these models encounter some difficulties in handling financial time series, as they respond equally to positive and negative shocks and their tail behavior remains too short even with Student-t error terms. To overcome these weaknesses we apply GARCH-type models with alpha-stable innovations. The stable family of distributions constitutes a generalization of the Gaussian distribution that has intriguing theoretical and practical properties. Indeed, it is stable under addition and, having four parameters, it allows for asymmetry and heavy tails. Unfortunately, stable models do not have a closed-form likelihood function, but since simulated values from α-stable distributions can be straightforwardly obtained, the indirect inference approach is particularly suited to the situation at hand. In this work we provide a description of how to estimate a GARCH(1,1) and a TGARCH(1,1) with symmetric stable shocks, using as auxiliary model a GARCH(1,1) with skew-t innovations. Monte Carlo simulations, conducted using GAUSS, are presented and finally the proposed models are fitted to the IBM weekly return series as an illustration of how they perform on real data. 
Keywords:  GARCH; alpha-stable distribution; indirect estimation; skew-t distribution; Monte Carlo simulations 
JEL:  C13 C32 C87 C15 C01 
Date:  2012–04–18 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:38544&r=ecm 
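Simulated values from symmetric alpha-stable distributions, on which the indirect inference approach relies, can be generated with the Chambers-Mallows-Stuck method. A sketch of a GARCH(1,1) path with such innovations; the parameter values are invented for illustration:

```python
import numpy as np

def sym_stable(alpha, size, rng):
    """Draw symmetric alpha-stable variates via Chambers-Mallows-Stuck."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
            * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))

def garch11_stable(omega, a, b, alpha, n, rng):
    """Simulate a GARCH(1,1) path with symmetric alpha-stable innovations."""
    z = sym_stable(alpha, n, rng)
    r, sig2 = np.empty(n), np.empty(n)
    sig2[0] = omega / (1 - a - b)          # start at the homoskedastic level
    r[0] = np.sqrt(sig2[0]) * z[0]
    for t in range(1, n):
        sig2[t] = omega + a * r[t - 1] ** 2 + b * sig2[t - 1]
        r[t] = np.sqrt(sig2[t]) * z[t]
    return r

rng = np.random.default_rng(2)
path = garch11_stable(0.05, 0.05, 0.90, 1.8, 1000, rng)
```

In the indirect inference step, such simulated paths would be fitted with the auxiliary skew-t GARCH model and the structural parameters adjusted to match the auxiliary estimates.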
By:  Stefan Lang; Nikolaus Umlauf; Peter Wechselberger; Kenneth Harttgen; Thomas Kneib 
Abstract:  Models with structured additive predictor provide a very broad and rich framework for complex regression modeling. They can deal simultaneously with nonlinear covariate effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity and complex interactions between covariates of different type. In this paper, we propose a hierarchical or multilevel version of regression models with structured additive predictor where the regression coefficients of a particular nonlinear term may obey another regression model with structured additive predictor. In that sense, the model is composed of a hierarchy of complex structured additive regression models. The proposed model may be regarded as an extended version of a multilevel model with nonlinear covariate terms in every level of the hierarchy. The model framework is also the basis for generalized random slope modeling based on multiplicative random effects. Inference is fully Bayesian and based on Markov chain Monte Carlo simulation techniques. We provide an in-depth description of several highly efficient sampling schemes that make it possible to estimate complex models with several hierarchy levels and a large number of observations within a couple of minutes (often even seconds). We demonstrate the practicability of the approach in a complex application on childhood undernutrition with a large sample size and three hierarchy levels. 
Keywords:  Bayesian hierarchical models, kriging, Markov random fields, MCMC, multiplicative random effects, P-splines 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:inn:wpaper:201207&r=ecm 
By:  Sílvia Gonçalves; Benoit Perron 
Abstract:  The main contribution of this paper is to propose and theoretically justify bootstrap methods for regressions where some of the regressors are factors estimated from a large panel of data. We derive our results under the assumption that √T/N→c, where 0≤c<∞ (N and T are the cross-sectional and the time series dimensions, respectively), thus allowing for the possibility that the factor estimation error enters the limiting distribution of the OLS estimator. We consider general residual-based bootstrap methods and provide a set of high-level conditions on the bootstrap residuals and on the idiosyncratic errors such that the bootstrap distribution of the OLS estimator is consistent. We subsequently verify these conditions for a simple wild bootstrap residual-based procedure. Our main results can be summarized as follows. When c=0, as in Bai and Ng (2006), the crucial condition for bootstrap validity is the ability of the bootstrap regression scores to mimic the serial dependence of the original regression scores. Mimicking the cross-sectional and/or serial dependence of the idiosyncratic errors in the panel factor model is asymptotically irrelevant in this case, since the limiting distribution of the original OLS estimator does not depend on these dependencies. Instead, when c>0, a two-step residual-based bootstrap is required to capture the factor estimation uncertainty, which shows up as an asymptotic bias term (as we show here and as was recently discussed by Ludvigson and Ng (2009b)). Because the bias depends on the cross-sectional dependence of the idiosyncratic error term, bootstrap validity depends crucially on the ability of the bootstrap panel factor model to capture this cross-sectional dependence. 
Keywords:  factor model, bootstrap, asymptotic bias 
Date:  2012–05–01 
URL:  http://d.repec.org/n?u=RePEc:cir:cirwor:2012s12&r=ecm 
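The wild bootstrap idea underlying the c=0 case can be illustrated on a plain regression, treating the regressor as given. This sketch omits the factor estimation step entirely and uses Rademacher multipliers; it is not the paper's two-step procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 150
x = rng.standard_normal(T)                 # stand-in for an estimated factor
# heteroskedastic errors, the setting the wild bootstrap is designed for
y = 1.0 + 0.5 * x + rng.standard_normal(T) * (1 + 0.5 * np.abs(x))

X = np.column_stack([np.ones(T), x])
b_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_hat

B = 499
boot = np.empty((B, 2))
for i in range(B):
    eta = rng.choice([-1.0, 1.0], T)       # Rademacher multipliers
    y_star = X @ b_hat + resid * eta       # wild-bootstrap sample
    boot[i], *_ = np.linalg.lstsq(X, y_star, rcond=None)

se_slope = boot[:, 1].std(ddof=1)          # bootstrap standard error of the slope
```

When c>0, the paper's two-step version would additionally resample the panel, re-extract the factors in each bootstrap replication, and so reproduce the factor estimation uncertainty.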
By:  Vincenzo Verardi; Marjorie Gassner; Darwin Ugarte Ontiveros 
Abstract:  In the robust statistics literature, a wide variety of models have been developed to cope with outliers in a rather large number of scenarios. Nevertheless, a recurrent problem for the empirical implementation of these estimators is that optimization algorithms generally do not perform well when dummy variables are present. What we propose in this paper is a simple solution: replacing the subsampling step of the maximization procedures with a projection-based method. This allows us to propose robust estimators involving categorical variables, be they explanatory or dependent. Some Monte Carlo simulations are presented to illustrate the good behavior of the method. 
Keywords:  S-estimators; Robust Regression; Dummy Variables; Outliers 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/117087&r=ecm 
By:  Yuanhua Feng (University of Paderborn); David Hand (Imperial College); Yuanhua Feng (Brunel University) 
Abstract:  A new multivariate random walk model with slowly changing drift and cross-correlations for multivariate processes is introduced and investigated in detail. In the model, not only the drifts and the cross-covariances but also the cross-correlations between single series are allowed to change slowly over time. The model can accommodate any number of components, such as a large number of assets, and is particularly useful for modelling and forecasting the value of financial portfolios under very complex market conditions. Kernel estimation of the local covariance matrix is used. The integrated effect of the estimation errors involved in estimating the integrated processes is derived. The practical relevance of the model and its estimation is illustrated by an application to several foreign exchange rates. 
Keywords:  Forecasting, Kernel estimation, Multivariate time series analysis, Portfolio return, Slowly changing multivariate random walk 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:pdn:wpaper:50&r=ecm 
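Kernel estimation of a local covariance matrix, as used in this model, can be sketched with an Epanechnikov kernel in rescaled time. The bandwidth and the drifting-correlation example below are made up:

```python
import numpy as np

def local_cov(Y, t0, h):
    """Kernel-weighted covariance matrix of a multivariate series Y at time t0."""
    T = Y.shape[0]
    u = (np.arange(T) - t0) / (h * T)
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # Epanechnikov kernel
    w /= w.sum()
    m = w @ Y                                # local mean
    Z = Y - m
    return (Z * w[:, None]).T @ Z            # weighted covariance matrix

rng = np.random.default_rng(7)
T = 500
# two series whose correlation drifts slowly from 0 to about 0.8
rho = np.linspace(0.0, 0.8, T)
e1 = rng.standard_normal(T)
e2 = rho * e1 + np.sqrt(1 - rho ** 2) * rng.standard_normal(T)
Y = np.column_stack([e1, e2])

C_early = local_cov(Y, 50, 0.1)
C_late = local_cov(Y, 450, 0.1)
corr_late = C_late[0, 1] / np.sqrt(C_late[0, 0] * C_late[1, 1])
```

The local estimates track the slowly changing cross-correlation: near the end of the sample the implied correlation is high, near the start it is close to zero.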
By:  Anders Bredahl Kock (Aarhus University and CREATES); Laurent A.F. Callot (Aarhus University and CREATES) 
Abstract:  This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order of magnitude than the sample size. Furthermore, it is shown that under suitable conditions the number of variables selected is of the right order of magnitude and that no relevant variables are excluded. Next, non-asymptotic probabilities are given for the Adaptive LASSO to select the correct sign pattern (and hence the correct sparsity pattern). Finally, conditions under which the Adaptive LASSO reveals the correct sign pattern with probability tending to one are given. Again, the number of parameters may be much larger than the sample size. Some maximal inequalities for vector autoregressions, which might be of independent interest, are contained in the appendix. 
Keywords:  Vector autoregression, LASSO, Adaptive LASSO, Oracle inequality, Variable selection. 
JEL:  C01 C02 C13 C32 
Date:  2012–04–30 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201216&r=ecm 
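A minimal illustration of the LASSO applied equation by equation to a sparse VAR, using a hand-rolled coordinate-descent solver. The penalty level `lam` and the data-generating process are arbitrary choices, not the theoretically tuned penalty of the paper:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO for 0.5*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]     # partial residual excluding j
            rho = X[:, j] @ r
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return b

# simulate a sparse VAR(1): only the first variable's own lag matters
rng = np.random.default_rng(4)
T, K = 300, 8
A = np.zeros((K, K))
A[0, 0] = 0.7
Y = np.zeros((T, K))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.standard_normal(K)

# LASSO for the first equation: regress y_{1,t} on all K lagged variables
Xlag, y1 = Y[:-1], Y[1:, 0]
b = lasso_cd(Xlag, y1, lam=30.0)
```

With a suitable penalty the relevant own-lag coefficient survives the soft-thresholding while most irrelevant lags are set exactly to zero, which is the sparsity-recovery behavior the oracle inequalities quantify.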
By:  Taoufik Bouezmarni; Anouar El Ghouch; Abderrahim Taamouti 
Abstract:  We study the asymptotic properties of the Bernstein estimator for unbounded copula density functions. We show that the estimator converges to infinity at the corner. We establish its relative convergence when the copula is unbounded and we provide the uniform strong consistency of the estimator on every compact subset of the interior region. We also check the finite sample performance of the estimator via an extensive simulation study and we compare it with other well-known nonparametric methods. Finally, we consider an empirical application in which the asymmetric dependence between international equity markets (US, Canada, UK, and France) is re-examined. 
Keywords:  Unbounded copula, Nonparametric estimation, Bernstein polynomial, Asymptotic properties, Uniform strong consistency, Relative convergence, Boundary bias 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we1143&r=ecm 
By:  László Mátyás; László Balázsi 
Abstract:  The paper introduces the appropriate within estimators for the most frequently used three-dimensional fixed effects panel data models. It analyzes the behaviour of these estimators in the case of no-self-flow data, unbalanced data and dynamic autoregressive models. Then the main results are generalised for higher-dimensional panel data sets as well. 
Date:  2012–04–23 
URL:  http://d.repec.org/n?u=RePEc:ceu:econwp:2012_2&r=ecm 
By:  A. Ronald Gallant; Han Hong; Ahmed Khwaja 
Abstract:  We consider dynamic games that can have state variables that are partially observed, serially correlated, endogenous, and heterogeneous. We propose a Bayesian method that uses a particle filter to compute an unbiased estimate of the likelihood within a Metropolis chain. Unbiasedness guarantees that the stationary density of the chain is the exact posterior, not an approximation. The number of particles required is easily determined. The regularity conditions are weak. Results are verified by simulation from two dynamic oligopolistic games with endogenous state. One is an entry game with feedback to costs based on past entry and the other a model of an industry with a large number of heterogeneous firms that compete on product quality. 
Keywords:  Dynamic Games, Partially Observed State, Endogenous State, Serially Correlated State, Particle Filter 
JEL:  E00 G12 C51 C52 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:duk:dukeec:1201&r=ecm 
By:  Merrett, Danielle 
Abstract:  This paper compares the performance of alternative estimation approaches for Public Goods Game data. A leave-one-out cross-validation was applied to test the performance of five estimation approaches. Random effects emerges as the best estimation approach because of its unbiased and precise estimates and its ability to estimate time-invariant demographics. Surprisingly, approaches that treat the choice variable as continuous outperform those that treat the choice variable as discrete. Correcting for censoring is shown to induce biased estimates. A finite Poisson mixture model produced relatively unbiased estimates; however, it lacked the precision of fixed and random effects estimation. 
Keywords:  finite mixture models; ordered logit; fixed effects; random effects; economic experiments; voluntary contributions mechanism; public goods 
Date:  2012–04 
URL:  http://d.repec.org/n?u=RePEc:syd:wpaper:2123/8256&r=ecm 
By:  FrançoisCharles Wolff (LEMNA  Laboratoire d'économie et de management de Nantes Atlantique  Université de Nantes : EA4272, INED  Institut National d'Etudes Démographiques Paris  INED) 
Abstract:  This paper proposes to decompose nonlinear models deduced from a latent regression framework using the latent dependent outcome as dependent variable and the Oaxaca-Blinder decomposition technique. Values of the unobserved latent outcome are obtained using simulated residuals. 
Keywords:  Blinder-Oaxaca; nonlinear models; simulated residuals 
Date:  2012–05–04 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:hal00694421&r=ecm 
By:  Stefan Aulbach; Verena Bayer; Michael Falk 
Abstract:  The univariate piecing-together approach (PT) fits a univariate generalized Pareto distribution (GPD) to the upper tail of a given distribution function in a continuous manner. We propose a multivariate extension. First, it is shown that an arbitrary copula is in the domain of attraction of a multivariate extreme value distribution if and only if its upper tail can be approximated by the upper tail of a multivariate GPD with uniform margins. The multivariate PT then consists of two steps: the upper tail of a given copula $C$ is cut off and substituted by a multivariate GPD copula in a continuous manner. The result is again a copula. The other step consists of the transformation of each margin of this new copula by a given univariate distribution function. This provides, altogether, a multivariate distribution function with prescribed margins whose copula coincides in its central part with $C$ and in its upper tail with a GPD copula. When applied to data, this approach also enables the evaluation of a wide range of rational scenarios for the upper tail of the underlying distribution function in the multivariate case. We apply this approach to operational loss data in order to evaluate the range of operational risk. 
Date:  2012–05 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1205.1617&r=ecm 
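The univariate piecing-together step can be sketched by splicing an empirical CDF below a threshold with a parametric GPD tail above it. For simplicity the tail here is exponential (a GPD with shape parameter 0) fitted by the mean excess; a full implementation would fit both GPD parameters, and the toy data are made up:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.standard_normal(5000) ** 2      # positive, right-skewed toy sample

q = 0.9
u = np.quantile(data, q)                   # splice threshold
excess = data[data > u] - u
scale = excess.mean()                      # ML scale of an exponential (GPD, shape 0) tail
sorted_data = np.sort(data)

def pt_cdf(x):
    """Piecing-together CDF: empirical below u, exponential GPD tail above u."""
    x = np.asarray(x, dtype=float)
    F_emp = np.searchsorted(sorted_data, x, side="right") / len(data)
    F_tail = q + (1 - q) * (1.0 - np.exp(-(x - u) / scale))
    return np.where(x <= u, F_emp, F_tail)
```

The two pieces match at the threshold (both equal q there, up to sampling error), so the spliced function is a continuous distribution function, which is the defining property of the PT construction.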
By:  Neville Francis; Michael T. Owyang; Özge Savascin 
Abstract:  Factor models have become useful tools for studying international business cycles. Block factor models [e.g., Kose, Otrok, and Whiteman (2003)] can be especially useful as the zero restrictions on the loadings of some factors may provide some economic interpretation of the factors. These models, however, require the econometrician to predefine the blocks, leading to potential misspecification. In Monte Carlo experiments, we show that even small misspecification can lead to substantial declines in fit. We propose an alternative model in which the blocks are chosen endogenously. The model is estimated in a Bayesian framework using a hierarchical prior, which allows us to incorporate series-level covariates that may influence and explain how the series are grouped. Using similar international business cycle data as Kose, Otrok, and Whiteman, we find our country clusters differ in important ways from those identified by geography alone. In particular, we find that similarities in institutions (e.g., legal systems, language diversity) may be just as important as physical proximity for analyzing business cycle comovements. 
Keywords:  Business cycles ; Economic conditions 
Date:  2012 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2012014&r=ecm 
By:  Nikolai V. Hovanov; Maria S. Yudaeva 
Abstract:  A method for estimating alternatives' probabilities under a deficiency of numeric information (obtained from different sources) is proposed. The method is based on the well-known Bayesian model of uncertainty randomization. Additional non-numeric, non-exact, and non-complete information about the sources' significance is used for the final estimation of the alternatives' probabilities. Some examples of the method's application to forecasting the dynamics of commodity prices and currency rates are presented. 
Keywords:  Ordinal and Interval Information; Randomization of Uncertainty; Random Probabilities 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:deg:conpap:c016_010&r=ecm 
By:  Kyriacou, Maria 
Abstract:  This paper studies the use of the overlapping blocking scheme in unit root autoregression. When the underlying process is that of a random walk, the blocks' initial conditions are not fixed, but are equal to the sum of all the previous observations' error terms. When non-overlapping subsamples are used, as first shown by Chambers and Kyriacou (2010), these initial conditions do not disappear asymptotically. In this paper we show that a simple way of overcoming this issue is to use overlapping blocks. By doing so, the effect of these initial conditions vanishes asymptotically. An application of these findings to jackknife estimators indicates that an estimator based on moving blocks is able to provide clear reductions in mean squared error. 
Date:  2012–05–01 
URL:  http://d.repec.org/n?u=RePEc:stn:sotoec:1203&r=ecm 
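A rough sketch of a moving-block jackknife for the AR(1) coefficient under a unit root. The half-length blocks and the simple two-point bias-correction weights below are illustrative only, not the weights derived in the paper:

```python
import numpy as np

def ar1_coef(y):
    """OLS estimate of rho in y_t = rho * y_{t-1} + e_t (no intercept)."""
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

rng = np.random.default_rng(6)
T = 240
y = np.cumsum(rng.standard_normal(T))      # random walk, true rho = 1

rho_full = ar1_coef(y)

# overlapping (moving) blocks of half length, stepped by one observation
ell = T // 2
rho_blocks = np.array([ar1_coef(y[s:s + ell]) for s in range(T - ell + 1)])

# since the O(1/n) bias of the AR estimate roughly doubles on half-length
# blocks, 2 * full-sample minus the block average cancels the leading bias
rho_jack = 2.0 * rho_full - rho_blocks.mean()
```

Using overlapping rather than non-overlapping blocks is exactly the paper's point: the random initial condition of each moving block is averaged over all starting positions, so its effect washes out asymptotically.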
By:  Michael Greenacre 
Abstract:  Correspondence analysis, when used to visualize relationships in a table of counts (for example, abundance data in ecology), has been frequently criticized as being too sensitive to objects (for example, species) that occur with very low frequency or in very few samples. In this statistical report we show that this criticism is generally unfounded. We demonstrate this in several data sets by calculating the actual contributions of rare objects to the results of correspondence analysis and canonical correspondence analysis, both to the determination of the principal axes and to the chi-square distance. It is a fact that rare objects are often positioned as outliers in correspondence analysis maps, which gives the impression that they are highly influential, but their low weight offsets their distant positions and reduces their effect on the results. An alternative scaling of the correspondence analysis solution, the contribution biplot, is proposed as a way of mapping the results in order to avoid the problem of outlying and low-contributing rare objects. 
Keywords:  Biplot, canonical correspondence analysis, contribution, correspondence analysis, influence, outlier, scaling 
JEL:  C19 C88 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:bge:wpaper:571&r=ecm 
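The column contributions discussed here can be computed from the SVD of the matrix of standardized residuals of the table. A sketch on a made-up abundance table (rows as samples, columns as species, the last one rare):

```python
import numpy as np

# toy abundance table: rows = samples, columns = species (last one rare)
N = np.array([[20, 12,  5, 1],
              [18, 15,  7, 0],
              [ 4,  9, 30, 0],
              [ 3,  8, 25, 2]], dtype=float)

P = N / N.sum()
r, c = P.sum(axis=1), P.sum(axis=0)        # row and column masses

# standardized residuals whose SVD yields the correspondence analysis axes
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
U, sv, Vt = np.linalg.svd(S, full_matrices=False)

# contribution of each column (species) to principal axis k:
# mass times squared principal coordinate, normalized by the axis inertia
k = 0
prin_coord = Vt[k] * sv[k] / np.sqrt(c)
contrib = c * prin_coord ** 2 / sv[k] ** 2
```

The contributions are nonnegative and sum to one across the columns; a rare species can sit far out on the axis (a large principal coordinate) and still contribute little, because its small mass enters the product.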
By:  AraujoEnciso, Sergio Rene 
Abstract:  Economic theory states that the spatial equilibrium condition defines a region within which prices may or may not be cointegrated. When prices lie in the interior of this region they are not cointegrated; when they lie on its boundaries they are not only cointegrated but also fulfill the Law of One Price (LOP). Nonetheless, econometric techniques assume a mean-reverting process in order to test for cointegration, either linear or nonlinear. This research shows that in the absence of such a mean-reverting process, using prices in pure equilibrium, cointegration (linear and nonlinear) is often rejected. These findings are in line with the Band Threshold Autoregressive Model, where the neutral band is a region of no cointegration. Furthermore, it can be concluded that the economic concept of perfect market integration (LOP) by itself is not sufficient for testing cointegration with some of the current econometric methods. 
Keywords:  Spatial Equilibrium Condition, Testing Cointegration, Demand and Price Analysis, Risk and Uncertainty 
JEL:  C15 E37 
Date:  2012–02–23 
URL:  http://d.repec.org/n?u=RePEc:ags:eaa123:122545&r=ecm 
By:  Balli, Hatice Ozer; Sorensen, Bent E. 
Abstract:  We provide practical advice for applied economists regarding robust specification and interpretation of linear regression models with interaction terms. We replicate a number of prominent published results using interaction effects and examine if they are robust to reasonable specification permutations. 
Keywords:  Non-Linear Regression; Interaction Terms 
JEL:  C13 C12 
Date:  2012–04–10 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:38608&r=ecm 
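One standard robustness device for interaction terms, including the main effects and centering the regressors before interacting them, can be sketched as follows. The data-generating process is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 400
x1 = rng.standard_normal(n) + 2.0          # deliberately non-centered regressors
x2 = rng.standard_normal(n) + 1.0
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.standard_normal(n)

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# misspecified: interaction only, no main effects, a common pitfall
X_bad = np.column_stack([np.ones(n), x1 * x2])
b_bad = ols(X_bad, y)

# recommended: main effects plus interaction, with centered regressors so the
# main-effect coefficients read as effects at the mean of the other variable
z1, z2 = x1 - x1.mean(), x2 - x2.mean()
X_good = np.column_stack([np.ones(n), z1, z2, z1 * z2])
b_good = ols(X_good, y)
```

In the well-specified model the interaction coefficient recovers the true value; dropping the main effects, as in `X_bad`, forces the interaction term to absorb them and distorts the estimate.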