nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒03‒30
29 papers chosen by
Sune Karlsson
Orebro University

  1. Consistent Estimation of Panel Data Models with a Multifactor Error Structure when the Cross Section Dimension is Large By Bin Peng; Giovanni Forchini
  2. A simple parametric model selection test By Susanne Schennach; Daniel Wilhelm
  3. Estimating and Testing Threshold Regression Models with Multiple Threshold Variables By Chong, Terence Tai Leung; Yan, Isabel K.
  4. Improving Likelihood-Ratio-Based Confidence Intervals for Threshold Parameters in Finite Samples By Donayre, Luiggi; Eo, Yunjong; Morley, James
  5. Testing for Leverage Effect in Financial Returns. By Christophe Chorro; Dominique Guegan; Florian Ielpo; Hanjarivo Lalaharison
  6. Nonparametric estimation of finite measures By Stephane Bonhomme; Koen Jochmans; Jean-Marc Robin
  7. Simple Le Cam Optimal Inference for the Tail Weight of Multivariate Student t Distributions: Testing Procedures and Estimation By Christophe Ley; Anouk Neven
  8. Optimal bandwidth selection for robust generalized method of moments estimation By Daniel Wilhelm
  9. Self-Selection and Direct Estimation of Across-Regime Correlation Parameter By Giorgio Calzolari; Antonino Di Pino
  10. Specific Markov-switching behaviour for ARMA parameters By Jean-François Carpantier
  11. Quantile Spectral Processes: Asymptotic Analysis and Inference By Tobias Kley; Stanislav Volgushev; Holger Dette; Marc Hallin
  12. A Nonparametric Test for Granger Causality in Distribution with Application to Financial Contagion By Bertrand Candelon; Sessi Tokpavi
  13. Optimal Rank-Based Tests for the Location Parameter of a Rotationally Symmetric Distribution on the Hypersphere By Davy Paindaveine; Thomas Verdebout
  14. The cross-quantilogram: measuring quantile dependence and testing directional predictability between time series By Heejoon Han; Oliver Linton; Tatsushi Oka; Yoon-Jae Whang
  15. Bayesian inference and model comparison for random choice structures By McCAUSLAND, William; MARLEY, A. A. J.
  16. Bootstrap prediction intervals for linear, nonlinear, and nonparametric autoregressions By Pan, Li; Politis, Dimitris N
  17. An Odd Couple: Monotone Instrumental Variables and Binary Treatments By Richey, Jeremiah
  18. Universal Asymptotics for High-Dimensional Sign Tests By Davy Paindaveine; Thomas Verdebout
  19. On Quadratic Expansions of Log-Likelihoods and a General Asymptotic Linearity Result By Marc Hallin; Ramon van den Akker; Bas Werker
  20. Counterfactual Spatial Distributions By Paul E. Carrillo; Jonathan Rothbaum
  21. On Conditions in Central Limit Theorems for Martingale Difference Arrays (Long Version) By Abdelkamel Alj; Rajae Azrak; Guy Melard
  22. Nonparametric Least Squares Methods for Stochastic Frontier Models By Leopold Simar; Ingrid Van Keilegom; Valentin Zelenyuk
  23. Econometric Filters By Stephen Pollock
  24. Currency Crisis Early Warning Systems: Why They should be Dynamic By Bertrand Candelon; Christophe Hurlin; Elena Dumitrescu
  25. Inference on the Shape of Elliptical Distribution Based on the MCD By Davy Paindaveine; Germain Van Bever
  26. Conditional Forecasts and Scenario Analysis with Vector Autoregressions for Large Cross-Sections By Marta Banbura; Domenico Giannone; Michèle Lenza
  27. Dynamic Factor Models, Cointegration and Error Correction Mechanisms By Matteo Barigozzi; Marco Lippi; Matteo Luciani
  28. Statistical inference for measures of predictive success By Demuynck T.
  29. A new concept of quantiles for directional data and the angular Mahalanobis depth By Christophe Ley; Camille Sabbah; Thomas Verdebout

  1. By: Bin Peng (Economics Discipline Group, University of Technology, Sydney); Giovanni Forchini (University of Surrey)
    Abstract: The paper studies panel data models with a multifactor structure in both the errors and the regressors in a microeconometric setting in which the time dimension is fixed and possibly very small. An estimator is proposed that is consistent for fixed T as N tends to infinity and that does not impose restrictive conditions on the number of factors, the number of regressors, or the time series properties of the panel. A small Monte Carlo simulation shows that this estimator is very accurate for very small values of T. Two empirical applications demonstrate the performance of our estimator in practice.
    Keywords: Panel data model; cross-sectional dependence; asymptotic theory
    JEL: C10 C13 C23
    Date: 2014–03–01
    URL: http://d.repec.org/n?u=RePEc:uts:ecowps:20&r=ecm
  2. By: Susanne Schennach (Institute for Fiscal Studies and Brown University); Daniel Wilhelm (Institute for Fiscal Studies and UCL)
    Abstract: We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one, or neither of the models may be correctly specified; they may be nested, non-nested, strictly non-nested, or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case the same test statistic together with a standard normal critical value can be used. The new procedure controls asymptotic size uniformly over a large class of data generating processes. We demonstrate its finite sample properties in a Monte Carlo experiment and its practical relevance in an empirical application comparing Keynesian versus new classical macroeconomic models.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:10/14&r=ecm
  3. By: Chong, Terence Tai Leung; Yan, Isabel K.
    Abstract: Conventional threshold models contain only one threshold variable. This paper provides the theoretical foundation for threshold models with multiple threshold variables. The new model is very different from a model with a single threshold variable as several novel problems arise from having an additional threshold variable. First, the model is not analogous to a change-point model. Second, the asymptotic joint distribution of the threshold estimators is difficult to obtain. Third, the estimation time increases exponentially with the number of threshold variables. This paper derives the consistency and the asymptotic joint distribution of the threshold estimators. A fast estimation algorithm to estimate the threshold values is proposed. We also develop tests for the number of threshold variables. The theoretical results are supported by simulation experiments. Our model is applied to the study of currency crises.
    Keywords: Threshold Model; Multiple Threshold Variables; Currency Crisis; Panel Data
    JEL: C12 C13 C33 F3 F31 F37
    Date: 2014–03–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:54732&r=ecm
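    To fix ideas, what a least-squares threshold estimator with two threshold variables computes can be illustrated with a deliberately naive brute-force grid search (the paper proposes a much faster algorithm; the two-regime rule q1 <= g1 and q2 <= g2 used here is a hypothetical special case chosen for simplicity):

```python
import numpy as np

def fit_two_thresholds(y, x, q1, q2, trim=0.15):
    """Naive grid search for a threshold regression with two threshold
    variables: regime 1 when q1 <= g1 and q2 <= g2, regime 2 otherwise.
    Returns the SSR-minimising pair (g1, g2).  Illustrative only; cost
    grows with the square of the grid size, which is the computational
    problem the paper's fast algorithm addresses."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    n = len(y)

    def ssr(mask):
        # per-regime OLS of y on (1, x); sum of squared residuals
        total = 0.0
        for m in (mask, ~mask):
            if m.sum() < 5:                   # guard against tiny regimes
                return np.inf
            X = np.column_stack([np.ones(m.sum()), x[m]])
            beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
            r = y[m] - X @ beta
            total += r @ r
        return total

    lo, hi = int(trim * n), int((1 - trim) * n)   # trimmed grids of observed values
    best = (np.inf, None, None)
    for g1 in np.sort(q1)[lo:hi]:
        for g2 in np.sort(q2)[lo:hi]:
            s = ssr((q1 <= g1) & (q2 <= g2))
            if s < best[0]:
                best = (s, g1, g2)
    return best[1], best[2]
```

With well-separated regimes and modest noise, the grid search recovers both threshold values; the point of the paper's algorithm is to avoid the quadratic (and, with more threshold variables, exponential) growth of this search.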
  4. By: Donayre, Luiggi; Eo, Yunjong; Morley, James
    Abstract: We propose an improved method for constructing likelihood-ratio-based confidence intervals for threshold parameters in threshold regressions. Related methods have been extensively developed in the literature and are asymptotically valid. However, their performance in finite samples is not satisfactory. We suggest two modifications to the standard inverted likelihood ratio approach. First, we consider a middle point adjustment for the boundaries of confidence intervals. Second, we propose an interpolation approach for evaluating the likelihood ratio profile at non-observable threshold values. Our extensive Monte Carlo simulations suggest that our proposed confidence intervals outperform existing methods, including bootstrap approaches, by attaining very accurate coverage rates with relatively short lengths in finite samples.
    Keywords: Threshold regression; Finite-sample inference; Inverted likelihood ratio
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2014-04&r=ecm
  5. By: Christophe Chorro (Centre d'Economie de la Sorbonne); Dominique Guegan (Centre d'Economie de la Sorbonne - Paris School of Economics); Florian Ielpo (Lombard Odier Darier Hentsch & Cie - Suisse); Hanjarivo Lalaharison (Centre d'Economie de la Sorbonne)
    Abstract: This article questions the empirical usefulness of leverage effects to describe the dynamics of equity returns. Using a recursive estimation scheme that accurately disentangles the asymmetry coming from the conditional distribution of returns and the asymmetry that is related to the past return to volatility component in GARCH models, we test for the statistical significance of the latter. Relying on both in and out of sample tests we consistently find a weak contribution of leverage effect over the past 25 years of S&P 500 returns, casting light on the importance of the conditional distribution in time series models.
    Keywords: Maximum likelihood method, related-GARCH process, recursive estimation method, mixture of Gaussian distributions, generalized hyperbolic distributions, S&P 500, forecast, leverage effect.
    JEL: C58 C13
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:14022&r=ecm
  6. By: Stephane Bonhomme; Koen Jochmans; Jean-Marc Robin (Institute for Fiscal Studies and Sciences Po)
    Abstract: The aim of this paper is to provide simple nonparametric methods to estimate finite mixture models from data with repeated measurements. Three measurements suffice for the mixture to be fully identified and so our approach can be used even with very short panel data. We provide distribution theory for estimators of the mixing proportions and the mixture distributions, and various functionals thereof. We also discuss inference on the number of components. These estimators are found to perform well in a series of Monte Carlo exercises. We apply our techniques to document heterogeneity in log annual earnings using PSID data spanning the period 1969–1998.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:11/14&r=ecm
  7. By: Christophe Ley; Anouk Neven
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/143830&r=ecm
  8. By: Daniel Wilhelm (Institute for Fiscal Studies and UCL)
    Abstract: A two-step generalized method of moments estimation procedure can be made robust to heteroskedasticity and autocorrelation in the data by using a nonparametric estimator of the optimal weighting matrix. This paper addresses the issue of choosing the corresponding smoothing parameter (or bandwidth) so that the resulting point estimate is optimal in a certain sense. We derive an asymptotically optimal bandwidth that minimizes a higher-order approximation to the asymptotic mean-squared error of the estimator of interest. We show that the optimal bandwidth is of the same order as the one minimizing the mean-squared error of the nonparametric plugin estimator, but the constants of proportionality are significantly different. Finally, we develop a data-driven bandwidth selection rule and show, in a simulation experiment, that it may substantially reduce the estimator's mean-squared error relative to existing bandwidth choices, especially when the number of moment conditions is large.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:15/14&r=ecm
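    For context, the smoothing parameter in question is the bandwidth of a kernel-weighted long-run variance estimator of the moment conditions. A minimal sketch of the familiar Bartlett-kernel (Newey-West) version follows; this is the generic HAC estimator whose bandwidth the paper's rule would choose, not the selection rule itself:

```python
import numpy as np

def newey_west_lrv(g, bandwidth):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance of
    an (n x k) array of moment conditions g_t.  The inverse of this
    matrix serves as the second-step GMM weighting matrix; `bandwidth`
    is the tuning parameter whose optimal choice the paper studies."""
    g = np.asarray(g, dtype=float)
    g = g - g.mean(axis=0)                    # demean the moments
    n, k = g.shape
    S = g.T @ g / n                           # lag-0 autocovariance
    for j in range(1, int(bandwidth) + 1):
        w = 1.0 - j / (bandwidth + 1.0)       # Bartlett weights ensure PSD
        G = g[j:].T @ g[:-j] / n              # lag-j autocovariance
        S += w * (G + G.T)
    return S
```

The paper's point is that the bandwidth minimising the mean-squared error of this plug-in matrix is of the same order as, but has different constants from, the one that is optimal for the final GMM point estimate.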
  9. By: Giorgio Calzolari (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze); Antonino Di Pino (Dipartimento S.E.A.M., Università di Messina)
    Abstract: A direct Maximum Likelihood (ML) procedure to estimate the "generally unidentified" across-regime correlation parameter in a two-regime endogenous switching model is here provided. The results of a Monte Carlo experiment confirm consistency of our direct ML procedure, and its relative efficiency over widely applied models and methods. As an empirical application, we estimate a Two-Regime simultaneous equation model of domestic work of Italian married women in which the two regimes are given by their working status (employed or unemployed).
    Keywords: Endogenous switching model, Across-regime correlation parameter
    JEL: C31 C34 J22
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:fir:econom:wp2014_04&r=ecm
  10. By: Jean-François Carpantier (CREA, Université de Luxembourg)
    Abstract: We propose an estimation method that circumvents the path dependence problem existing in Change-Point (CP) and Markov Switching (MS) ARMA models. Our model embeds a sticky infinite hidden Markov-switching structure (sticky IHMM), which makes possible a self-determination of the number of regimes as well as of the specification: CP or MS. Furthermore, CP and MS frameworks usually assume that all the model parameters vary from one regime to another. We relax this restrictive assumption. As illustrated by simulations on moderate samples (300 observations), the sticky IHMM-ARMA algorithm detects which model parameters change over time. Applications to the U.S. GDP growth and the DJIA realized volatility highlight the relevance of estimating different structural breaks for the mean and variance parameters.
    Keywords: Bayesian inference, Markov-switching model, ARMA model, infinite hidden Markov model, Dirichlet Process
    JEL: C11 C15 C22 C58
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:luc:wpaper:14-07&r=ecm
  11. By: Tobias Kley; Stanislav Volgushev; Holger Dette; Marc Hallin
    Keywords: time series; spectral analysis; periodogram; quantiles; copulas; ranks; spearman; blomqvist; gini spectra
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/156105&r=ecm
  12. By: Bertrand Candelon; Sessi Tokpavi
    Abstract: This paper introduces a kernel-based nonparametric inferential procedure for testing Granger causality in distribution.
    Keywords: Granger-causality, Distribution, Tails, Kernel-based test, Financial Spill-over.
    Date: 2014–02–25
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-162&r=ecm
  13. By: Davy Paindaveine; Thomas Verdebout
    Keywords: group invariance; rank-based tests; rotationally symmetric distributions; spherical statistics; uniform local and asymptotic normality
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/149452&r=ecm
  14. By: Heejoon Han; Oliver Linton (Institute for Fiscal Studies and Cambridge University); Tatsushi Oka; Yoon-Jae Whang (Institute for Fiscal Studies and Seoul National University)
    Abstract: This paper proposes the cross-quantilogram to measure the quantile dependence between two time series. We apply it to test the hypothesis that one time series has no directional predictability to another time series. We establish the asymptotic distribution of the cross-quantilogram and the corresponding test statistic. The limiting distributions depend on nuisance parameters. To construct consistent confidence intervals we employ the stationary bootstrap procedure; we show the consistency of this bootstrap. Also, we consider the self-normalized approach, which is shown to be asymptotically pivotal under the null hypothesis of no predictability. We provide simulation studies and two empirical applications. First, we use the cross-quantilogram to detect predictability from stock variance to excess stock return. Compared to existing tools used in the literature of stock return predictability, our method provides a more complete relationship between a predictor and stock return. Second, we investigate the systemic risk of individual financial institutions, such as JP Morgan Chase, Goldman Sachs and AIG. This article has supplementary materials online.
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:06/14&r=ecm
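    The sample cross-quantilogram has a simple closed form: it is the correlation between the quantile-hit processes of the two series. A sketch of the simplest version, using unconditional sample quantiles (the quantile levels, lag, and variable names here are illustrative):

```python
import numpy as np

def cross_quantilogram(y1, y2, alpha1, alpha2, k):
    """Sample cross-quantilogram at lag k >= 0: the correlation between
    the quantile-hit process of y1 at time t and that of y2 at time t-k,
    where the hit function is psi_a(u) = 1{u < 0} - a, evaluated at the
    sample quantiles.  A value near zero at all lags is consistent with
    no directional predictability at these quantile levels."""
    y1, y2 = np.asarray(y1, float), np.asarray(y2, float)
    psi1 = (y1 < np.quantile(y1, alpha1)).astype(float) - alpha1
    psi2 = (y2 < np.quantile(y2, alpha2)).astype(float) - alpha2
    a = psi1[k:]                      # hits of y1 at time t
    b = psi2[:len(psi2) - k]          # hits of y2 at time t - k
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))
```

At lag 0 with the same series and quantile level, the statistic equals one by construction; for independent series it is close to zero, which is the benchmark the stationary-bootstrap intervals are built around.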
  15. By: McCAUSLAND, William; MARLEY, A. A. J.
    Abstract: We complete the development of a testing ground for axioms of discrete stochastic choice. Our contribution here is to develop new posterior simulation methods for Bayesian inference, suitable for a class of prior distributions introduced by McCausland and Marley (2013). These prior distributions are joint distributions over various choice distributions over choice sets of different sizes. Since choice distributions over different choice sets can be mutually dependent, previous methods relying on conjugate prior distributions do not apply. We demonstrate by analyzing data from a previously reported experiment and report evidence for and against various axioms.
    Keywords: Random utility, discrete choice, Bayesian inference, MCMC
    JEL: C11 C35 C53 D01
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:mtl:montde:2013-06&r=ecm
  16. By: Pan, Li; Politis, Dimitris N
    Abstract: In order to construct prediction intervals without the cumbersome--and typically unjustifiable--assumption of Gaussianity, some form of resampling is necessary. The regression set-up has been well studied in the literature, but time series prediction faces additional difficulties. The paper at hand focuses on time series that can be modeled as linear, nonlinear or nonparametric autoregressions, and develops a coherent methodology for the construction of bootstrap prediction intervals. Forward and backward bootstrap methods using predictive and fitted residuals are introduced and compared. We present detailed algorithms for these different models and show that the bootstrap intervals manage to capture both sources of variability, namely the innovation error as well as estimation error. In simulations, we compare the prediction intervals associated with different methods in terms of their achieved coverage level and length of interval.
    Keywords: Physical Sciences and Mathematics, Confidence intervals, forecasting, time series
    Date: 2014–01–01
    URL: http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt67h5s74t&r=ecm
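    A forward bootstrap scheme of the kind the paper compares can be sketched for the AR(1) case. This minimal version (no intercept, fitted residuals only, simplified future-path simulation, all assumptions of this sketch rather than the paper's exact algorithm) shows how refitting on each bootstrap series lets the interval reflect estimation error as well as innovation error:

```python
import numpy as np

def ar1_forward_bootstrap_pi(y, h=1, B=499, level=0.95, seed=0):
    """Forward-bootstrap prediction interval for a zero-mean AR(1) fitted
    by OLS, resampling centred fitted residuals.  Each replication refits
    the model on a regenerated series, so the interval captures both
    innovation and estimation variability.  Minimal sketch only."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    n = len(y)
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])   # OLS slope
    resid = y[1:] - phi * y[:-1]
    resid = resid - resid.mean()                           # centre residuals
    fut = np.empty(B)
    for b in range(B):
        # regenerate a series forward from y[0] with resampled residuals
        e = rng.choice(resid, size=n - 1, replace=True)
        yb = np.empty(n)
        yb[0] = y[0]
        for t in range(1, n):
            yb[t] = phi * yb[t - 1] + e[t - 1]
        # re-estimate on the bootstrap series (estimation error)
        phi_b = np.dot(yb[:-1], yb[1:]) / np.dot(yb[:-1], yb[:-1])
        # simulate h steps ahead from the observed endpoint (innovation error)
        yf = y[-1]
        for _ in range(h):
            yf = phi_b * yf + rng.choice(resid)
        fut[b] = yf
    alpha = 1.0 - level
    lo, hi = np.quantile(fut, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

For Gaussian innovations with unit variance the one-step 95% interval should have width close to 2 x 1.96, slightly inflated by the estimation-error component.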
  17. By: Richey, Jeremiah
    Abstract: This paper investigates Monotone Instrumental Variables (MIV) and their ability to aid in identifying treatment effects when the treatment is binary in a nonparametric bounding framework. I show that an MIV can only aid in identification beyond that of a Monotone Treatment Selection assumption if for some region of the instrument the observed conditional-on-received-treatment outcomes exhibit monotonicity in the instrument in the opposite direction as that assumed by the MIV in a Simpson's Paradox-like fashion. Furthermore, an MIV can only aid in identification beyond that of a Monotone Treatment Response assumption if for some region of the instrument either the above Simpson's Paradox-like relationship exists or the instrument's indirect effect on the outcome (as through its influence on treatment selection) is the opposite of its direct effect as assumed by the MIV. The implications of the main findings for empirical work are discussed and the results are highlighted with an application investigating the effect of criminal convictions on job match quality using data from the 1997 National Longitudinal Survey of Youth. Though the main results are shown to hold only for the binary treatment case in general, they are shown to have important implications for the multi-valued treatment case as well.
    Keywords: Instrumental variables, Nonparametric bounds, Partial identification, Criminal convictions
    JEL: C14 J63 K40
    Date: 2013–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:54785&r=ecm
  18. By: Davy Paindaveine; Thomas Verdebout
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/151199&r=ecm
  19. By: Marc Hallin; Ramon van den Akker; Bas Werker
    Abstract: Irrespective of the statistical model under study, the derivation of limits, in the Le Cam sense, of sequences of local experiments (see [7]-[10]) often follows along very similar lines, essentially involving differentiability in quadratic mean of square roots of (conditional) densities. This chapter establishes two abstract and very general results providing sufficient and nearly necessary conditions for (i) the existence of a quadratic expansion, and (ii) the asymptotic linearity of local log-likelihood ratios (asymptotic linearity is needed, for instance, when unspecified model parameters are to be replaced, in some statistic of interest, with some preliminary estimator). Such results have been established, for locally asymptotically normal (LAN) models involving independent and identically distributed observations, by, e.g., [1], [11] and [12]. Similar results are provided here for models exhibiting serial dependencies which, so far, have been treated on a case-by-case basis (see [4] and [5] for typical examples) and, in general, under stronger regularity assumptions. Unlike their i.i.d. counterparts, our results extend beyond the context of LAN experiments, so that non-stationary unit-root time series and cointegration models, for instance, also can be handled (see [6]).
    Date: 2013–09
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/149099&r=ecm
  20. By: Paul E. Carrillo (Department of Economics/Institute for International Economic Policy, George Washington University); Jonathan Rothbaum (Development Research Group, The World Bank)
    Abstract: The influential contributions of DiNardo, Fortin, and Lemieux (1996), Firpo, Fortin, and Lemieux (2009), Machado and Mata (2005), and Donald, Green, and Paarsch (2000) provide researchers with a useful toolbox to estimate counterfactual distributions of scalar random variables. These techniques have been widely applied in the literature. Typically, the dependent variable of interest has been a scalar and little consideration has been given to spatial factors. In this paper we propose a simple method to construct the counterfactual distribution of the location of a variable across space. We apply the spatial counterfactual technique to assess 1) how much changes in individual characteristics of Hispanics in the Washington, DC, area account for changes in the distribution of their residential location choices, and 2) how changes in the average characteristics of shareholders account for changes in the spatial distribution of new firms in Quito, Ecuador.
    Keywords: Decomposition; Non-parametric Estimation
    JEL: C14 R23 R30
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:gwi:wpaper:2014-05&r=ecm
  21. By: Abdelkamel Alj; Rajae Azrak; Guy Melard
    Keywords: unconditional Lyapunov condition; conditional Lindeberg condition
    JEL: C13 C22
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/154446&r=ecm
  22. By: Leopold Simar (Institut de statistique, biostatistique et sciences actuarielles, Universite catholique de Louvain); Ingrid Van Keilegom (Institut de statistique, biostatistique et sciences actuarielles, Universite catholique de Louvain); Valentin Zelenyuk (School of Economics, The University of Queensland)
    Abstract: When analyzing productivity and efficiency of firms, stochastic frontier models are very attractive because they allow, as in typical regression models, for some noise in the Data Generating Process. Most of the approaches so far have used very restrictive, fully parametric specified models, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. However, the high computational complexity of the latter makes them difficult to use, in particular if bootstrap-based inference is needed. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and much easier, faster and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called “modified OLS” that was introduced in the parametric setup. Although the final evaluation of individual efficiencies requires, as in the local MLE approach, the local specification of the distributions of noise and inefficiencies, it is shown that a lot can be learned about the production process without such specifications. Even elasticities of the mean inefficiency can be analyzed with unspecified noise distribution and a general class of local one-parameter scale families for inefficiencies. This allows one to discuss the variation in inefficiency levels with respect to explanatory variables with minimal assumptions on the Data Generating Process. Our method is illustrated and compared with other methods on a real data set.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:qld:uqcepa:94&r=ecm
  23. By: Stephen Pollock
    Abstract: A variety of filters that are commonly employed by econometricians are analysed with a view to determining their effectiveness in extracting well-defined components of economic data sequences. These components can be defined in terms of their spectral structures—i.e. their frequency content—and it is argued that the process of econometric signal extraction should be guided by a careful appraisal of the periodogram of the detrended data sequence. A preliminary estimate of the trend can often be obtained by fitting a polynomial function to the data. This can provide a firm benchmark against which the deviations of the business cycle and the fluctuations of seasonal activities can be measured. The trend-cycle component may be estimated by adding the business cycle estimate to the trend function. In cases where there are evident structural breaks in the data, other means are suggested for estimating the underlying trajectory of the data. Whereas it is true that many annual and quarterly economic data sequences are amenable to relatively unsophisticated filtering techniques, it is often the case that monthly data that exhibit strong seasonal fluctuations require a far more delicate approach. In such cases, it may be appropriate to use filters that work directly in the frequency domain by selecting or modifying the spectral ordinates of a Fourier decomposition of data that have been subject to a preliminary detrending.
    Keywords: Spectral analysis, Business cycles, Turning points, Seasonality.
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:14/07&r=ecm
  24. By: Bertrand Candelon; Christophe Hurlin; Elena Dumitrescu
    Abstract: Traditionally, financial crisis Early Warning Systems (EWSs) rely on macroeconomic leading indicators to forecast the occurrence of such events. This paper extends such discrete-choice EWSs by taking into account the persistence of the crisis phenomenon. The dynamic logit EWS is estimated using an exact maximum likelihood estimation method both in a time series and panel form. This model's forecasting abilities are then scrutinized by using an evaluation methodology recently designed specifically for EWSs. When applied to predicting currency crises for 16 countries, this new EWS turns out to exhibit significantly better predictive abilities than the existing static one, both in- and out-of-sample, thus supporting the use of dynamic specifications for EWSs for financial crises.
    Keywords: dynamic models, currency crisis, Early Warning System.
    JEL: C33 F37
    Date: 2014–02–25
    URL: http://d.repec.org/n?u=RePEc:ipg:wpaper:2014-161&r=ecm
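    The core of a dynamic logit EWS is a binary-choice model with the lagged crisis indicator among the regressors. A sketch of ML estimation by Newton-Raphson for the single-country, single-regressor case (an illustration of the idea, not the paper's exact maximum likelihood method or its panel extension; the variable names are hypothetical):

```python
import numpy as np

def dynamic_logit_fit(x, y, iters=50):
    """ML estimation of a dynamic logit
        P(y_t = 1) = Lambda(b0 + b1 * x_t + d * y_{t-1})
    by Newton-Raphson, where y is a 0/1 crisis indicator and x a leading
    indicator.  The coefficient d on the lagged indicator captures the
    crisis persistence that static EWS specifications ignore."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([np.ones(len(y) - 1), x[1:], y[:-1]])
    yy = y[1:]
    beta = np.zeros(3)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted crisis probabilities
        W = p * (1.0 - p)
        H = X.T @ (X * W[:, None])            # negative Hessian of log-lik
        g = X.T @ (yy - p)                    # score
        step = np.linalg.solve(H, g)
        beta = beta + step
        if np.max(np.abs(step)) < 1e-10:      # converged
            break
    return beta                               # (b0, b1, d)
```

A static EWS corresponds to restricting d = 0, which is exactly the restriction the paper's comparison rejects.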
  25. By: Davy Paindaveine; Germain Van Bever
    Date: 2013–05
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/143831&r=ecm
  26. By: Marta Banbura; Domenico Giannone; Michèle Lenza
    Abstract: This paper describes an algorithm to compute the distribution of conditional forecasts, i.e. projections of a set of variables of interest on future paths of some other variables, in dynamic systems. The algorithm is based on Kalman filtering methods and is computationally viable for large vector autoregressions (VAR) and dynamic factor models (DFM). For a quarterly data set of 26 euro area macroeconomic and financial indicators, we show that both approaches deliver similar forecasts and scenario assessments. In addition, conditional forecasts shed light on the stability of the dynamic relationships in the euro area during the recent episodes of financial turmoil and indicate that only a small number of sources drive the bulk of the fluctuations in the euro area economy.
    Keywords: vector autoregression; bayesian shrinkage; dynamic factor model; conditional forecast; large cross-sections
    JEL: C11 C13 C33 C53
    Date: 2014–03
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/158499&r=ecm
  27. By: Matteo Barigozzi; Marco Lippi; Matteo Luciani
    Keywords: dynamic factor models for I(1) variables; cointegration; granger representation theorem
    JEL: C00 C01 E00
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/157568&r=ecm
  28. By: Demuynck T. (GSBE)
    Abstract: We provide statistical inference for measures of predictive success. These measures are frequently used to evaluate and compare the performance of different models of individual and group decision making in experimental and revealed preference studies. We provide a brief illustration of our findings by comparing the predictive success of different revealed preference tests for models of intertemporal decision making.
    Keywords: Econometric and Statistical Methods and Methodology: General; Design of Experiments: General; Consumer Economics: Empirical Analysis;
    JEL: C10 C90 D12
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:unm:umagsb:2014009&r=ecm
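    The standard measure of predictive success in this literature is Selten's hit rate minus relative area. A sketch, paired with a simple normal-approximation interval that treats the area as known (a simplifying assumption made here for illustration; the paper's inference handles the general case):

```python
import math

def predictive_success(hits, area):
    """Selten's measure of predictive success: m = r - a, where r is the
    share of observations consistent with the model (`hits` is a list of
    0/1 indicators) and a is the share of the outcome space the model
    admits, i.e. its pass rate under purely random behaviour."""
    r = sum(hits) / len(hits)
    return r - area

def predictive_success_ci(hits, area, z=1.96):
    """Normal-approximation confidence interval for m = r - a, treating
    the area a as known, so only the hit rate r contributes sampling
    variability.  Illustrative simplification only."""
    n = len(hits)
    r = sum(hits) / n
    se = math.sqrt(r * (1.0 - r) / n)     # binomial standard error of r
    m = r - area
    return m - z * se, m + z * se
```

A model with m close to zero predicts no better than its admissible area would by chance, which is why comparisons of revealed preference tests report m rather than the raw pass rate.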
  29. By: Christophe Ley; Camille Sabbah; Thomas Verdebout
    Keywords: Bahadur representation; directional statistics; DD- and QQ-Plot; Mahalanobis depth; rotationally symmetric distributions
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/143754&r=ecm

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.