nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒01‒17
33 papers chosen by
Sune Karlsson
Orebro University

  1. Non-Standard Rates of Convergence of Criterion-Function-Based Set Estimators By Jason R. Blevins
  2. Bagging Weak Predictors By Manuel Lukas; Eric Hillebrand
  3. Simultaneity in the Multivariate Count Data Autoregressive Model By Brännäs, Kurt
  4. On the limiting and empirical distributions of IV estimators when some of the instruments are actually endogenous By Jan F. KIVIET; Jerzy NIEMCZYK
  5. Extending the Scope of Cube Root Asymptotics By Taisuke Otsu; Myung Hwan Seo
  6. Mixed frequency structural VARs By Claudia Foroni; Massimiliano Marcellino
  7. Shrinkage Estimation of High-Dimensional Factor Models with Structural Instabilities By Xu Cheng; Zhipeng Liao; Frank Schorfheide
  8. Panel data models with grouped factor structure under unknown group membership By Bai, Jushan; Ando, Tomohiro
  9. Can Stock Price Fundamentals Properly be Captured?: Using Markov Switching in Heteroskedasticity Models to Test Identification Schemes By Anton Velinov
  10. Estimating the Standard Errors of Individual-Specific Parameters in Random Parameters Models By William Greene; Mark N Harris; Christopher Spencer
  11. Covariance Averaging for Improved Estimation and Portfolio Allocation By Dimitrios D. Thomakos; Fotis Papailias
  12. A simple and effective misspecification test for the double-hurdle model By Riccardo LUCCHETTI; Claudia PIGINI
  13. Testing the linearity of a time series By Dimitra Chatzi; Dikaios Tserkezos
  14. Temporal Aggregation and Ramsey's (RESET) Test for Functional Form: Results from Empirical and Monte Carlo Experiments By Dikaios Tserkezos
  15. Structural Vector Autoregressive Analysis in a Data Rich Environment: A Survey By Helmut Lütkepohl
  16. Temporal Aggregation and Systematic Sampling Effects on Non Linear Granger Causality Tests between Trade Volume and Returns. Some Monte Carlo and Empirical Results from the Athens Stocks Exchange. By Dikaios Tserkezos
  17. About predictions in spatial autoregressive models: Optimal and almost optimal strategies By Thomas-Agnan, Christine
  18. Data envelope fitting with constrained polynomial splines By Daouia, Abdelaati; Noh, Hohsuk; Park, Byeong U.
  19. Some Tools for Robustifying Econometric Analyses By Hoornweg, V.
  20. Multifactor asset pricing with a large number of observable risk factors and unobservable common and group-specific factors By Bai, Jushan; Ando, Tomohiro
  21. HIERARCHICAL GRAPHICAL MODELS, WITH APPLICATION TO SYSTEMIC RISK By Daniel Felix Ahelegbey; Paolo Giudici
  22. Approximate Maximum Likelihood Estimation of the Autologistic Model By Marco Bee; Diego GIuliani; Giuseppe Espa
  23. Identification of series with common trends to improve aggregate forecasts By Marcos Bujosa; Alfredo García-Hiernaux
  24. Mathematical framework for pseudo-spectra of linear stochastic difference equations By Andrés Bujosa Brun; Marcos Bujosa; Antonio García-Ferrer
  25. Structural Break Inference using Information Criteria in Models Estimated by Two Stage Least Squares By Alastair R. Hall; Denise R. Osborn; Nikolaos Sakkas
  26. Measures of Causality in Complex Datasets with application to financial data By Anna Zaremba; Tomaso Aste
  27. On various confidence intervals post-model-selection By Leeb, Hannes; Pötscher, Benedikt M.; Ewald, Karl
  28. System Priors: Formulating Priors about DSGE Models' Properties By Michal Andrle; Jaromir Benes
  29. Comparing two methods for the identification of news shocks By Beaudry, Paul; Portier, Franck; Seymen, Atılım
  30. The role of sensitivity analysis in estimating causal pathways from observational data By George Ploubidis
  31. A Zero Inflated Regression Model for Grouped Data By Sarah Brown; Alan S Duncan; Mark N Harris; Jennifer Roberts; Karl Taylor
  32. The Role of Indicator Selection in Nowcasting Euro Area GDP in Pseudo Real Time By A. Girardi; R. Golinelli; C. Pappalardo
  33. Ranking Leading Econometrics Journals Using Citations Data from ISI and RePEc By Chang, C-L.; McAleer, M.J.

  1. By: Jason R. Blevins (Department of Economics, Ohio State University)
    Abstract: This paper establishes conditions for consistency and potentially non-standard rates of convergence for set estimators based on contour sets of criterion functions. These conditions cover the standard parametric rate $n^{-1/2}$, non-standard polynomial rates such as $n^{-1/3}$, and an extreme case of arbitrarily fast convergence. We also establish the validity of a subsampling procedure for constructing confidence sets for the identified set. We then provide more convenient sufficient conditions on the underlying empirical processes for cube root convergence. We show that these conditions apply to a class of transformation models under weak semiparametric assumptions which may be partially identified due to potentially limited-support regressors. We focus in particular on a semiparametric binary response model under a conditional median restriction and show that a set estimator analogous to the maximum score estimator is essentially cube-root consistent for the identified set when a continuous but possibly bounded regressor is present. Arbitrarily fast convergence occurs when all regressors are discrete. Finally, we carry out a series of Monte Carlo experiments which verify our theoretical findings and shed light on the finite sample performance of the proposed procedures.
    Keywords: partial identification, cube-root asymptotics, semiparametric models, limited support regressors, transformation model, binary response model, maximum score estimator
    JEL: C13 C14 C25
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:osu:osuewp:13-02&r=ecm
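
As a concrete illustration of a contour-set estimator, the following sketch applies the idea to the maximum-score criterion mentioned in the abstract: the set estimate collects every grid value whose sample criterion lies within a shrinking slack of the maximum. The data-generating process, grid, and slack constant are illustrative assumptions, not the paper's specification.

```python
# Contour-set estimation sketch for the maximum-score criterion (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 2))
# binary response with symmetric (logistic) errors, so the conditional
# median restriction holds; true slope vector normalised to (1, -0.5)
y = (x @ np.array([1.0, -0.5]) + rng.logistic(size=n) > 0).astype(float)

grid = np.linspace(-2.0, 2.0, 401)        # candidates for the second slope
# maximum-score sample criterion with the first coefficient fixed at one
score = np.array([np.mean((2 * y - 1) * np.sign(x[:, 0] + b * x[:, 1]))
                  for b in grid])
eps = n ** (-2 / 3)                       # slack shrinking with the sample size
set_estimate = grid[score >= score.max() - eps]
print("contour-set estimate: [%.2f, %.2f]"
      % (set_estimate.min(), set_estimate.max()))
```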
  2. By: Manuel Lukas (Aarhus University and CREATES); Eric Hillebrand (Aarhus University and CREATES)
    Abstract: Relations between economic variables can often not be exploited for forecasting, suggesting that predictors are weak in the sense that estimation uncertainty is larger than bias from ignoring the relation. In this paper, we propose a novel bagging predictor designed for such weak predictor variables. The predictor is based on a test for finite-sample predictive ability. Our predictor shrinks the OLS estimate not to zero, but towards the null of the test, which equates squared bias with estimation variance. We derive the asymptotic distribution and show that the predictor can substantially lower the MSE compared to standard t-test bagging. An asymptotic shrinkage representation for the predictor is provided that simplifies computation of the estimator. Monte Carlo simulations show that the predictor works well in small samples. In the empirical application, we find that the new predictor works well for inflation forecasts.
    Keywords: Inflation forecasting, bootstrap aggregation, estimation uncertainty, weak predictors
    JEL: C32 E37
    Date: 2014–01–07
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-01&r=ecm
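
For readers unfamiliar with the benchmark the paper improves upon, here is a minimal sketch of standard t-test bagging for a single weak predictor: pre-test the OLS slope on each bootstrap resample and average the resulting forecasts. All numbers are illustrative, and the authors' predictor shrinks towards a different null than the zero used here.

```python
# t-test bagging sketch: average pre-test forecasts over bootstrap resamples.
import numpy as np

rng = np.random.default_rng(0)
T, B, c = 100, 500, 1.96              # sample size, bootstrap draws, cutoff
x = rng.normal(size=T)
y = 0.1 * x + rng.normal(size=T)      # weak relation: small slope vs. noise

def pretest_forecast(xs, ys, x_new, c):
    """No-intercept OLS slope, kept only if its t-statistic exceeds c."""
    beta = xs @ ys / (xs @ xs)
    resid = ys - beta * xs
    se = np.sqrt(resid @ resid / (len(ys) - 1) / (xs @ xs))
    return beta * x_new if abs(beta / se) > c else 0.0

x_new = 1.0
boot = []
for _ in range(B):
    idx = rng.integers(0, T, T)       # resample (x, y) pairs with replacement
    boot.append(pretest_forecast(x[idx], y[idx], x_new, c))
print("bagged forecast:   ", round(float(np.mean(boot)), 4))
print("plain OLS forecast:", round(float(x @ y / (x @ x) * x_new), 4))
```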
  3. By: Brännäs, Kurt (Department of Economics, Umeå School of Business and Economics)
    Abstract: This short paper proposes a simultaneous equations model formulation for time series of count data. Some of the basic moment properties of the model are obtained. Real-valued exogenous variables are suggested to enter through the parameters of the model. Some remarks on the application of the model to spatial data are made. Instrumental variable and generalized method of moments estimators of the structural form parameters are also discussed.
    Keywords: Integer-valued; Spatial; INAR; Interdependence; Properties; Estimation
    JEL: C35 C36 C39 C51
    Date: 2014–01–07
    URL: http://d.repec.org/n?u=RePEc:hhs:umnees:0870&r=ecm
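
The model class can be illustrated by simulating a bivariate INAR(1) with binomial thinning, the standard building block of count-data autoregressions; the off-diagonal thinning probabilities create the cross-equation interdependence the paper studies. The coefficient matrix and innovation means below are arbitrary illustrative choices.

```python
# Bivariate INAR(1) simulation sketch with binomial thinning (illustrative).
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0.4, 0.2],
              [0.1, 0.5]])        # thinning probabilities, incl. cross terms
lam = np.array([1.0, 2.0])        # Poisson innovation means
T = 200
y = np.zeros((T, 2), dtype=int)
for t in range(1, T):
    # binomial thinning: each past count survives with probability A[i, j]
    carry = [sum(rng.binomial(y[t - 1, j], A[i, j]) for j in range(2))
             for i in range(2)]
    y[t] = np.array(carry) + rng.poisson(lam)

print(y.mean(axis=0))                            # simulated means
print(np.linalg.solve(np.eye(2) - A, lam))       # implied means (I - A)^{-1} lam
```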
  4. By: Jan F. KIVIET (Division of Economics, School of Humanities and Social Sciences, Nanyang Technological University, Singapore, 637332.); Jerzy NIEMCZYK (European Central Bank, Frankfurt, Germany)
    Abstract: IV estimation is examined when some instruments may be invalid. This is relevant because the initial just-identifying orthogonality conditions are untestable, whereas their validity is required when testing the orthogonality of additional instruments by so-called over-identification restriction tests. Moreover, these tests have limited power when samples are small, especially when instruments are weak. Distinguishing between conditional and unconditional settings, we analyze the limiting distribution of inconsistent IV and examine normal first-order asymptotic approximations to its density in finite samples. For simple classes of models we compare these approximations with their simulated empirical counterparts over almost the full parameter space. The latter is expressed in measures for: model fit, simultaneity, instrument invalidity and instrument weakness. Our major findings are that for the accuracy of large sample asymptotic approximations instrument weakness is much more detrimental than instrument invalidity. Also, IV estimators obtained from strong but possibly invalid instruments are usually much closer to the true parameter values than those obtained from valid but weak instruments.
    Keywords: empirical density, inconsistent estimators, invalid instruments, (un)conditional asymptotic distribution, weak instruments
    JEL: C13 C15 C30
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:nan:wpaper:1311&r=ecm
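
The paper's headline finding is easy to reproduce in a toy Monte Carlo: compare a strong but slightly invalid instrument with a valid but weak one. The design below is a deliberately simple stand-in for the authors' full parameter-space sweep; all values are illustrative.

```python
# Toy Monte Carlo: strong/invalid vs. weak/valid instrument (illustrative).
import numpy as np

rng = np.random.default_rng(2)
beta, n, R = 1.0, 200, 2000

def iv_estimate(strength, invalidity):
    u = rng.normal(size=n)
    z = rng.normal(size=n)
    x = strength * z + 0.5 * u + rng.normal(size=n)  # endogenous regressor
    y = beta * x + invalidity * z + u                # invalid: z enters y directly
    return (z @ y) / (z @ x)                         # simple IV estimator

strong_invalid = [iv_estimate(1.0, 0.1) for _ in range(R)]
weak_valid     = [iv_estimate(0.1, 0.0) for _ in range(R)]
# median absolute error, since the weak-IV sampling distribution is heavy-tailed
for name, est in [("strong/invalid", strong_invalid), ("weak/valid", weak_valid)]:
    print(name, "median abs error:",
          round(float(np.median(np.abs(np.array(est) - beta))), 3))
```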
  5. By: Taisuke Otsu; Myung Hwan Seo
    Abstract: This article extends the scope of cube root asymptotics for M-estimators in two directions: it allows weakly dependent observations and criterion functions drifting with the sample size, typically due to a bandwidth sequence. For dependent empirical processes that characterize criterion functions inducing cube root phenomena, maximal inequalities are established to derive the convergence rates and limit laws of the M-estimators. The limit theory is applied not only to extend existing examples, such as the maximum score estimator, the nonparametric maximum likelihood density estimator under monotonicity, and least median of squares, toward weakly dependent observations, but also to address some open questions, such as the asymptotic properties of the minimum volume predictive region, the conditional maximum score estimator for a panel data discrete choice model, and the Hough transform estimator with a drifting tuning parameter.
    Keywords: Cube root asymptotics, M-estimator, Maximal inequality
    JEL: C13
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:cep:stiecm:/2014/571&r=ecm
  6. By: Claudia Foroni (Norges Bank (Central Bank of Norway)); Massimiliano Marcellino (Bocconi University and CEPR)
    Abstract: A mismatch between the time scale of a structural VAR (SVAR) model and that of the time series data used for its estimation can have serious consequences for identification, estimation and interpretation of the impulse response functions. However, the use of mixed frequency data, combined with a proper estimation approach, can alleviate the temporal aggregation bias, mitigate the identification issues, and yield more reliable responses to shocks. The problems and possible remedy are illustrated analytically and with both simulated and actual data.
    Keywords: Phillips curve, neoclassical, indexation, trend inflation, regime switch
    JEL: C32 C43 E32
    Date: 2014–01–13
    URL: http://d.repec.org/n?u=RePEc:bno:worpap:2014_01&r=ecm
  7. By: Xu Cheng; Zhipeng Liao; Frank Schorfheide
    Abstract: In high-dimensional factor models, both the factor loadings and the number of factors may change over time. This paper proposes a shrinkage estimator that detects and disentangles these instabilities. The new method simultaneously and consistently estimates the number of pre- and post-break factors, which liberates researchers from sequential testing and achieves uniform control of the family-wise model selection errors over an increasing number of variables. The shrinkage estimator only requires the calculation of principal components and the solution of a convex optimization problem, which makes its computation efficient and accurate. The finite sample performance of the new method is investigated in Monte Carlo simulations. In an empirical application, we study the change in factor loadings and emergence of new factors during the Great Recession.
    JEL: C13 C33 C52
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:19792&r=ecm
  8. By: Bai, Jushan; Ando, Tomohiro
    Abstract: This paper studies panel data models with unobserved group factor structures. The group membership of each unit and the number of groups are left unspecified. The number of explanatory variables can be large. We estimate the model by minimizing the sum of least squared errors with a shrinkage penalty. The regression coefficients can be homogeneous or group-specific. The consistency and asymptotic normality of the estimator are established. We also introduce new $C_p$-type criteria for selecting the number of groups, the numbers of group-specific common factors, and the relevant regressors. Monte Carlo results show that the proposed method works well. We apply the method to the study of US mutual fund returns under homogeneous regression coefficients, and to the mainland China stock market under group-specific regression coefficients.
    Keywords: Clustering, penalized method, lasso, SCAD, serial and cross-sectional error correlations, factor structure
    JEL: C23 C52
    Date: 2013–12–16
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52782&r=ecm
  9. By: Anton Velinov
    Abstract: Structural identification schemes are of essential importance to vector autoregressive (VAR) analysis. This paper tests a commonly used structural parameter identification scheme to assess whether it can properly capture fundamental and non-fundamental shocks to stock prices. In particular, five related structural models, which are widely used in the literature on assessing stock price determinants are considered. They are either specified in vector error correction (VEC) or in VAR form. Restrictions on the long-run effects matrix are used to identify the structural parameters. These identifying restrictions are tested by means of a Markov switching in heteroskedasticity model. It is found that for two of the five models considered, the long-run identification scheme appropriately classifies shocks as being either fundamental or non-fundamental. A series of robustness tests are performed, which largely confirm the initial findings.
    Keywords: Markov switching model, vector autoregression, vector error correction, heteroskedasticity, stock prices
    JEL: C32 C34
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1350&r=ecm
  10. By: William Greene (New York University, USA); Mark N Harris (Curtin University, Perth, Australia); Christopher Spencer (School of Business and Economics, Loughborough University)
    Abstract: We consider the estimation of the standard errors of individual-specific parameters calculated ex post from a non-linear random parameters model. Our key contribution lies in introducing a simple method of appropriately calculating these standard errors, which explicitly takes into account the sampling variability of the estimation of the model's parameters. To demonstrate the applicability of the technique, we use it in a model of the voting behaviour of Bank of England MPC members. Our results have clear implications for drawing statistical inference on the estimated random parameters.
    Keywords: Random parameters, individual-specific parameters, standard errors, voting, Monetary Policy Committee.
    JEL: C25
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:lbo:lbowps:2014_01&r=ecm
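
A generic way to account for the sampling variability of the model's parameters, in the spirit of the abstract, is to simulate from the estimator's asymptotic distribution and recompute the individual-specific quantity on each draw. The function g_i, the parameter values and the covariance matrix below are placeholders, not the authors' method.

```python
# Simulated standard error for an ex post individual-specific quantity g_i(theta).
import numpy as np

rng = np.random.default_rng(3)
theta_hat = np.array([0.5, -0.2])        # estimated model parameters (example)
V_hat = np.array([[0.04, 0.01],
                  [0.01, 0.09]])         # estimated asymptotic covariance

def g_i(theta, x_i=1.5):
    # illustrative individual-specific parameter: nonlinear in theta
    return np.exp(theta[0] + theta[1] * x_i)

# propagate parameter uncertainty by drawing from the asymptotic normal law
draws = rng.multivariate_normal(theta_hat, V_hat, size=5000)
g_draws = np.array([g_i(d) for d in draws])
print("point estimate:          ", round(float(g_i(theta_hat)), 4))
print("simulated standard error:", round(float(g_draws.std(ddof=1)), 4))
```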
  11. By: Dimitrios D. Thomakos (Department of Economics, University of Peloponnese, Greece; Quantf Research, Greece; Rimini Centre for Economic Analysis, Italy); Fotis Papailias (Queen's University Management School, Queen's University Belfast, UK; Quantf Research, Greece)
    Abstract: We propose a new method for estimating the covariance matrix of a multivariate time series of financial returns. The method is based on estimating sample covariances from overlapping windows of observations, which are then appropriately weighted to obtain the final covariance estimate. We extend the idea of (model) covariance averaging offered in the covariance shrinkage approach by means of greater ease of use, flexibility and robustness in averaging information over different data segments. The suggested approach does not suffer from the curse of dimensionality and can be used without approximation problems or any need for numerical optimization.
    Keywords: averaging, covariance estimation, financial returns, multivariate time series, portfolio allocation, risk management, rolling window
    JEL: C32 C58 G11
    Date: 2013–12
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:66_13&r=ecm
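
The core idea, sample covariances from overlapping windows combined with weights, can be sketched in a few lines. Equal weights are used here as a placeholder; choosing the weights is the substance of the paper.

```python
# Rolling-window covariance averaging sketch (equal weights, illustrative data).
import numpy as np

rng = np.random.default_rng(4)
T, N, w = 300, 5, 60                      # observations, assets, window length
R = rng.normal(size=(T, N))               # stand-in for a return matrix

# overlapping windows of length w, started every 10 observations
windows = [R[s:s + w] for s in range(0, T - w + 1, 10)]
covs = [np.cov(W, rowvar=False) for W in windows]
weights = np.full(len(covs), 1.0 / len(covs))   # placeholder weighting scheme
sigma_avg = sum(wt * C for wt, C in zip(weights, covs))
print(sigma_avg.round(3))
```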
  12. By: Riccardo LUCCHETTI (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali); Claudia PIGINI (Università di Perugia)
    Abstract: The commonly-used version of the double-hurdle model rests on a rather restrictive set of statistical assumptions, which are very seldom tested by practitioners, mainly because of the lack of a standard procedure for doing so, although violation of such assumptions can lead to serious modelling flaws. We propose here a bootstrap-corrected conditional moment portmanteau test which is simple to implement and has good size and power properties.
    Keywords: Bootstrap, Double-Hurdle model, Information Matrix Test
    JEL: C02 C12 C15
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:anc:wpaper:397&r=ecm
  13. By: Dimitra Chatzi (University of Crete); Dikaios Tserkezos (Department of Economics, University of Crete, Greece)
    Abstract: This letter proposes a simple test for the linearity of a time series. We compare the small- and large-sample properties of the suggested test with those of well-known time-domain linearity tests via Monte Carlo techniques. Our results suggest that the proposed test outperforms the competing tests in terms of power in small samples.
    Keywords: Testing nonlinearity, Hinich portmanteau bicorrelation test, Keenan, McLeod-Li tests, ARCH & Luukkonen LST test
    Date: 2014–01–10
    URL: http://d.repec.org/n?u=RePEc:crt:wpaper:1401&r=ecm
  14. By: Dikaios Tserkezos (Department of Economics, University of Crete, Greece)
    Abstract: This short paper demonstrates that the use of temporally aggregated data may affect the power and the size of the well-known Ramsey (1969) RESET test, which is widely used for testing the functional specification of a model. Using empirical data and Monte Carlo techniques, we find that temporal aggregation can seriously affect both the power and the size of the test.
    Keywords: temporally aggregated data, RESET, Empirical data, Monte Carlo Experiment
    JEL: C32
    Date: 2013–12–01
    URL: http://d.repec.org/n?u=RePEc:crt:wpaper:1309&r=ecm
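
The experiment is straightforward to emulate: apply the RESET test to data generated from a nonlinear model, before and after summing non-overlapping blocks of observations. The data-generating process and aggregation level below are illustrative, not the paper's design.

```python
# Monte Carlo sketch: RESET rejection rates on raw vs. temporally aggregated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def reset_pvalue(y, x):
    """Ramsey RESET: F-test on squared and cubed fitted values added to y ~ x."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    f = X @ b
    Xa = np.column_stack([X, f ** 2, f ** 3])
    ba = np.linalg.lstsq(Xa, y, rcond=None)[0]
    rss0 = ((y - f) ** 2).sum()
    rss1 = ((y - Xa @ ba) ** 2).sum()
    df2 = len(y) - Xa.shape[1]
    F = (rss0 - rss1) / 2 / (rss1 / df2)
    return stats.f.sf(F, 2, df2)

def rejection_rate(aggregate, R=500, T=240, k=3):
    rej = 0
    for _ in range(R):
        x = rng.normal(size=T)
        y = 1 + 0.5 * x + 0.3 * x ** 2 + rng.normal(size=T)  # nonlinear DGP
        if aggregate:  # sum non-overlapping blocks of k observations
            x, y = x.reshape(-1, k).sum(1), y.reshape(-1, k).sum(1)
        rej += reset_pvalue(y, x) < 0.05
    return rej / R

print("power on disaggregated data:", rejection_rate(False))
print("power on aggregated data:   ", rejection_rate(True))
```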
  15. By: Helmut Lütkepohl
    Abstract: Large panels of variables are used by policy makers in deciding on policy actions. Therefore it is desirable to include large information sets in models for economic analysis. This survey reviews methods for accounting for the information in large sets of variables in vector autoregressive (VAR) models. This can be done by aggregating the variables or by reducing the parameter space to a manageable dimension. Factor models reduce the space of variables, whereas large Bayesian VAR models and panel VARs reduce the parameter space. Global VARs use a mixed approach: they aggregate the variables and use a parsimonious parametrisation. All these methods are discussed in this survey, although the main emphasis is on factor models.
    Keywords: factor models, structural vector autoregressive model, global vector autoregression, panel data, Bayesian vector autoregression
    JEL: C32
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1351&r=ecm
  16. By: Dikaios Tserkezos (Department of Economics, University of Crete, Greece)
    Abstract: Several tests for nonlinear causality are available in the literature. In this paper we investigate the effects of temporal aggregation and systematic sampling on some well-known linear and nonlinear Granger causality tests. The Monte Carlo simulation experiments and the empirical applications, using data from the Athens Stock Exchange, show that the use of temporally aggregated and systematically sampled data can seriously affect our conclusions about linear or nonlinear causality between trade volume and returns.
    Keywords: Granger Non Linear Tests, Temporal Aggregation, Systematic Sampling, Trade Volume and Returns
    JEL: C32 C43 C51
    Date: 2013–12–02
    URL: http://d.repec.org/n?u=RePEc:crt:wpaper:1310&r=ecm
  17. By: Thomas-Agnan, Christine
    Abstract: We address the problem of prediction in the classical spatial autoregressive lag model for areal data. In contrast with the spatial econometrics literature, the geostatistical literature has devoted much attention to prediction using the Best Linear Unbiased Prediction (BLUP) approach. From the methodological point of view, we explore the limits of the extension of BLUP formulas in the context of spatial autoregressive lag models, for in-sample prediction as well as out-of-sample prediction simultaneously at several sites. We propose a more tractable “almost best” alternative. From an empirical perspective, we present data-based simulations to compare the efficiency of the classical formulas with the best and almost best predictions.
    Keywords: Spatial simultaneous autoregressive models, out-of-sample prediction, best linear unbiased prediction
    Date: 2013–12–18
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:27788&r=ecm
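
For orientation, the classical (non-BLUP) predictor in the SAR lag model y = ρWy + Xβ + ε is the reduced-form mean (I − ρW)⁻¹Xβ; the paper's best and almost-best predictors refine this baseline. A toy computation, with made-up weights and parameters:

```python
# Baseline reduced-form prediction in a SAR lag model (illustrative values).
import numpy as np

rng = np.random.default_rng(6)
n = 6
W = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # toy row-standardised weights
rho, beta = 0.4, np.array([1.0, 2.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])

A_inv = np.linalg.inv(np.eye(n) - rho * W)    # spatial multiplier (I - rho W)^{-1}
y = A_inv @ (X @ beta) + A_inv @ rng.normal(size=n)  # one draw from the model
y_hat = A_inv @ (X @ beta)                    # classical reduced-form prediction
print(np.column_stack([y, y_hat]).round(2))
```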
  18. By: Daouia, Abdelaati; Noh, Hohsuk; Park, Byeong U.
    Abstract: Estimation of support frontiers and boundaries often involves monotone and/or concave edge data smoothing. This estimation problem arises in various unrelated contexts, such as optimal cost and production assessments in econometrics and master curve prediction in the reliability programs of nuclear reactors. Very few constrained estimators of the support boundary of a bivariate distribution have been introduced in the literature. They are based on simple envelopment techniques which often suffer from lack of precision and smoothness. Combining the edge estimation idea of Hall, Park and Stern with the quadratic spline smoothing method of He and Shi, we develop a novel constrained fit of the boundary curve which benefits from the smoothness of spline approximation and the computational efficiency of linear programs. Using cubic splines is also feasible and more attractive under multiple shape constraints; computing the optimal spline smoother is then formulated as a second-order cone programming problem. Both constrained quadratic and cubic spline frontiers have a similar level of computational complexity to the unconstrained fits and inherit their asymptotic properties. The utility of this method is illustrated through applications to some real datasets and simulation evidence is also presented to show its superiority over the best known methods.
    Keywords: Boundary curve; Concavity; Least majorant; Linear programming; Monotone smoothing; Multiple shape constraints; Polynomial spline; Second-order cone programming
    Date: 2013–11
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:27752&r=ecm
  19. By: Hoornweg, V.
    Abstract: We use automated algorithms to update and evaluate ad hoc judgments that are made in applied econometrics. Such an application of automated algorithms robustifies empirical econometric analyses, achieves lower and more consistent prediction errors, and helps to prevent data snooping. Tools are introduced to evaluate the algorithm, to see how configurations are updated by the algorithm, to study how forecasting accuracy is affected by the choice of configurations, and to find out which configurations can safely be ignored in order to increase the speed of the algorithm. In our case study we develop an algorithm that updates ad hoc judgments made in Capistrán and Timmermann's (2009) attempt to beat the mean survey forecast. Many of these ad hoc judgments are often made in time series forecasting and have hitherto been overlooked. We show that our algorithm improves their models and at the same time we further robustify the stylized fact that the mean survey forecast is difficult to beat.
    Keywords: robust, ad hoc, automated, algorithm, update, combine, forecast
    JEL: C52
    Date: 2013–11–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:50163&r=ecm
  20. By: Bai, Jushan; Ando, Tomohiro
    Abstract: This paper analyzes multifactor models in the presence of a large number of potential observable risk factors and unobservable common and group-specific pervasive factors. We show how relevant observable factors can be found from a large given set and how to determine the number of common and group-specific unobservable factors. The method allows consistent estimation of the beta coefficients in the presence of correlations between the observable and unobservable factors. The theory and method are applied to the study of asset returns for A-shares/B-shares traded on the Shanghai and Shenzhen stock exchanges, and to the study of risk prices in the cross section of returns.
    Keywords: factor models, panel data analysis, penalized method, LASSO, SCAD, heterogenous coefficients
    JEL: C31 C33 C52 G12
    Date: 2013–07–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52785&r=ecm
  21. By: Daniel Felix Ahelegbey (Department of Economics, University of Venice Ca' Foscari); Paolo Giudici (Department of Economics and Management, University of Pavia)
    Abstract: The latest financial crisis has stressed the need to understand the world financial system as a network of interconnected institutions, where financial linkages play a fundamental role in the spread of systemic risks. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of Bayesian graphical Gaussian models. From a statistical viewpoint, we propose a new class of hierarchical Bayesian graphical models that can split correlations between institutions into country-specific and idiosyncratic ones, in a way that parallels the decomposition of returns in the well-known Capital Asset Pricing Model. From a financial economics viewpoint, we suggest a way to model systemic risk that can explicitly take into account frictions between different financial markets, particularly suited to studying the ongoing banking union process in Europe. From a computational viewpoint, we develop a novel Markov Chain Monte Carlo algorithm based on Bayes factor thresholding.
    Keywords: Applied Bayesian models, Graphical Gaussian Models, Systemic financial risk
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:pav:demwpp:demwp0063&r=ecm
  22. By: Marco Bee; Diego Giuliani; Giuseppe Espa
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:trn:utwpem:2013/12&r=ecm
  23. By: Marcos Bujosa (Dpto. de Fundamentos del Análisis Económico II. Facultad CC. Económicas y Empresariales. Univ. Complutense de Madrid. Campus de Somosaguas, s/n. 28223 POZUELO DE ALARCÓN (MADRID)); Alfredo García-Hiernaux (Dpto. de Fundamentos del Análisis Económico II. Facultad CC. Económicas y Empresariales. Univ. Complutense de Madrid. Campus de Somosaguas, s/n. 28223 POZUELO DE ALARCÓN (MADRID))
    Abstract: Espasa and Mayo-Burgos (2013) provide consistent forecasts for an aggregate economic indicator and its basic components, as well as for useful sub-aggregates. To do this, they develop a procedure based on single-equation models that includes the restrictions arising from the fact that some components share common features. The classification by common features provides a disaggregation map useful in several applications. We discuss their procedure and suggest some issues that should be taken into account when designing an algorithm to identify subsets of series with one common trend. We also provide a naive algorithm following those suggestions.
    Keywords: Cointegration, Common trends, Multiple Comparison Procedures, Statistical power, Disaggregation map
    JEL: D24 R12 R40
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:ucm:doctra:13-02&r=ecm
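
A pairwise version of such a classification step can be sketched with Engle-Granger cointegration tests: series whose pairs cointegrate are candidates for sharing one common trend. This is only in the spirit of the naive algorithm discussed; the data and threshold are illustrative.

```python
# Pairwise cointegration classification sketch (illustrative data and cutoff).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(7)
T = 300
trend = rng.normal(size=T).cumsum()              # one common stochastic trend
series = {"a": trend + rng.normal(size=T),
          "b": 2 * trend + rng.normal(size=T),
          "c": rng.normal(size=T).cumsum()}      # independent random walk

names = list(series)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        _, pval, _ = coint(series[names[i]], series[names[j]])
        tag = "common trend" if pval < 0.05 else "no common trend"
        print(names[i], names[j], round(pval, 3), tag)
```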
  24. By: Andrés Bujosa Brun (Dpto. de Matemáticas, ETSI Telecomunicación. Universidad Politécnica de Madrid. Spain); Marcos Bujosa (Departamento de Economía Cuantitativa (Department of Quantitative Economics), Facultad de Ciencias Económicas y Empresariales (Faculty of Economics and Business), Universidad Complutense de Madrid (Complutense University of Madrid)); Antonio García-Ferrer (Dpto. de Análisis Económico: Economía cuantitativa. Universidad Autónoma de Madrid. Spain)
    Abstract: Although spectral analysis of stationary stochastic processes has solid mathematical foundations, this is not always so for the non-stationary case. Here, we establish a sound mathematical framework for the spectral analysis of non-stationary solutions of linear stochastic difference equations. To achieve this, the classical problem is embedded in a wider framework, the Rigged Hilbert space; the Fourier Transform is extended, and a new Extended Fourier Transform pair pseudo-covariance function/pseudo-spectrum is defined. Our approach is a proper extension of classical spectral analysis, in which the Fourier Transform pair auto-covariance function/spectrum is a particular case; consequently, the spectrum and the pseudo-spectrum are identical whenever the former is defined.
    Keywords: Pseudo-spectrum, Time series, Non-stationarity, Frequency domain, Pseudo-covariance function, Linear stochastic difference equations, Rigged Hilbert space, Partial inner product, Extended Fourier Transform.
    Date: 2013–04–07
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1313&r=ecm
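
For reference, the classical pair that the extended transform generalises is the standard spectral density/autocovariance relation for a stationary process:

```latex
% Classical Fourier-transform pair extended by the paper's pseudo-spectrum:
% spectral density f(\lambda) and autocovariance function \gamma(k)
f(\lambda) = \frac{1}{2\pi} \sum_{k=-\infty}^{\infty} \gamma(k)\, e^{-i\lambda k},
\qquad
\gamma(k) = \int_{-\pi}^{\pi} f(\lambda)\, e^{i\lambda k}\, d\lambda .
```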
  25. By: Alastair R. Hall; Denise R. Osborn; Nikolaos Sakkas
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1328&r=ecm
  26. By: Anna Zaremba; Tomaso Aste
    Abstract: This article investigates the causality structure of financial time series. We concentrate on three main approaches to measuring causality: linear Granger causality, kernel generalisations of Granger causality (based on ridge regression and the Hilbert-Schmidt norm of the cross-covariance operator) and transfer entropy, examining each method and comparing their theoretical properties, with special attention given to the ability to capture nonlinear causality. We also analyse the theoretical benefits of applying non-symmetrical rather than symmetrical measures of dependence. We apply the measures to a range of simulated and real data. The simulated data sets have been generated with linear and several types of nonlinear dependence, in bivariate as well as multivariate settings. Application to real-world financial data highlights the practical difficulties as well as the potential of the methods. We use two sets of real data: (1) US inflation and the 1-month Libor, (2) S&P data and exchange rates for the following currencies: AUDJPY, CADJPY, NZDJPY, AUDCHF, CADCHF, NZDCHF. Overall, we reach the conclusion that no single method can be recognised as the best in all circumstances, and each of the methods has its domain of best applicability. We also describe areas for improvement and future research.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1401.1457&r=ecm
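
The simplest of the compared measures, linear Granger causality, reduces to an F-test on lagged terms. A minimal one-lag implementation (illustrative, not the authors' code):

```python
# One-lag linear Granger causality F-test in plain NumPy/SciPy.
import numpy as np
from scipy import stats

def granger_F(y, x):
    """F-test of 'lagged x helps predict y' beyond lagged y (one lag each)."""
    Y, ylag, xlag = y[1:], y[:-1], x[:-1]
    X0 = np.column_stack([np.ones_like(ylag), ylag])    # restricted model
    X1 = np.column_stack([X0, xlag])                    # unrestricted model
    rss = lambda X: ((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2).sum()
    df2 = len(Y) - X1.shape[1]
    F = (rss(X0) - rss(X1)) / (rss(X1) / df2)
    return F, stats.f.sf(F, 1, df2)

rng = np.random.default_rng(8)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):          # x Granger-causes y by construction
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()
print(granger_F(y, x))           # expect a large F, tiny p-value
print(granger_F(x, y))           # no feedback: expect a large p-value
```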
  27. By: Leeb, Hannes; Pötscher, Benedikt M.; Ewald, Karl
    Abstract: We compare several confidence intervals after model selection in the setting recently studied by Berk et al. (2013), where the goal is to cover not the true parameter but a certain non-standard quantity of interest that depends on the selected model. In particular, we compare the PoSI-intervals that are proposed in that reference with the `naive' confidence interval, which is constructed as if the selected model were correct and fixed a priori (thus ignoring the presence of model selection). Overall, we find that the actual coverage probabilities of all these intervals deviate only moderately from the desired nominal coverage probability. This finding is in stark contrast to several papers in the existing literature, where the goal is to cover the true parameter.
    Keywords: Confidence intervals, model selection
    JEL: C1
    Date: 2014
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:52858&r=ecm
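
The phenomenon mentioned in the last sentence, naive intervals failing to cover the true parameter, is easy to see in simulation: condition on a weak regressor surviving a t-test and check how often the standard 95% interval covers the true coefficient. Design values are illustrative; note this targets the true parameter, not the non-standard quantity the paper studies.

```python
# Conditional coverage of a naive 95% CI after a t-test selection step.
import numpy as np

rng = np.random.default_rng(9)
n, R = 100, 2000
kept = cover = 0
for _ in range(R):
    x = rng.normal(size=n)
    y = 0.05 * x + rng.normal(size=n)        # weak signal: true slope 0.05
    b = x @ y / (x @ x)
    se = np.sqrt(((y - b * x) ** 2).sum() / (n - 1) / (x @ x))
    if abs(b / se) > 1.96:                   # selection: keep x only if 'significant'
        kept += 1
        cover += (b - 1.96 * se <= 0.05 <= b + 1.96 * se)
print("coverage of the true coefficient, conditional on selection:",
      round(cover / kept, 3))                # well below 0.95
```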
  28. By: Michal Andrle; Jaromir Benes
    Abstract: This paper proposes a novel way of formulating priors for estimating economic models. System priors are priors about the model's features and behavior as a system, such as the sacrifice ratio or the maximum duration of the response of inflation to a particular shock. System priors represent a very transparent and economically meaningful way of formulating priors about parameters, without the unintended consequences of independent priors about individual parameters. System priors may complement or substitute for independent marginal priors. The new philosophy of formulating priors is motivated, explained and illustrated using a structural model for monetary policy.
    Keywords: Economic models; Monetary policy; System priors; Bayesian inference; DSGE; Smell test
    Date: 2013–12–19
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:13/257&r=ecm
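
A system prior can be sketched in miniature: combine a marginal prior on a parameter with a prior on a model-implied property, here the half-life of a shock in an AR(1). All distributions below are illustrative assumptions, not the paper's model.

```python
# Minimal system-prior sketch: marginal prior on rho plus a prior on the
# model-implied half-life h(rho) = log(0.5) / log(rho) of an AR(1) shock.
import numpy as np
from scipy import stats

def log_prior(rho):
    if not (0 < rho < 1):
        return -np.inf
    marginal = stats.beta(2, 2).logpdf(rho)               # independent prior on rho
    half_life = np.log(0.5) / np.log(rho)                 # system property
    system = stats.gamma(a=3, scale=2).logpdf(half_life)  # prior about behaviour
    return marginal + system

for rho in (0.3, 0.7, 0.9, 0.99):
    print(rho, round(log_prior(rho), 3))    # system prior penalises extreme rho
```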
  29. By: Beaudry, Paul; Portier, Franck; Seymen, Atılım
    Abstract: The recent empirical literature has delivered, based on different structural VAR approaches, controversial results concerning the role of anticipated technology (news) shocks in business cycle fluctuations. We address this controversy and investigate (i) the extent to which two prominent structural VAR approaches can be useful in recovering news shock dynamics from artificially generated data in general, and (ii) why and to what extent these SVAR approaches differ in the results they deliver in particular. In doing so, we provide several insights for users of both VAR techniques in small-sample practice.
    Keywords: News Shocks, Structural VAR, Identification
    JEL: C32 E32
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:zewdip:13110&r=ecm
  30. By: George Ploubidis (London School of Hygiene and Tropical Medicine)
    Abstract: Causal inference in observational data is a nearly alchemic task because parameter estimates depend on the model being correctly specified. Researchers strive to include all potential confounders in their models, but this assumption cannot be directly tested. Further complications arise in causal mediation analyses where the decomposition to direct and indirect effects is of interest. We argue that sensitivity analysis is an effective method for probing the plausibility of this nonrefutable assumption. The goal of sensitivity analysis in the context of causal mediation is to quantify the degree to which the key assumption of no unmeasured confounders must be violated for a researcher's original conclusion regarding the decomposition to direct and indirect effects to be reversed. Three general scenarios where the assumption of no unmeasured confounders is violated will be discussed, and results derived from sensitivity analyses appropriate for each scenario will be presented.
    Date: 2013–11–01
    URL: http://d.repec.org/n?u=RePEc:boc:isug13:01&r=ecm
  31. By: Sarah Brown (Department of Economics, University of Sheffield); Alan S Duncan (Bankwest Curtin Economics Centre (BCEC), Curtin University); Mark N Harris (School of Economics and Finance, Curtin University); Jennifer Roberts (Department of Economics, University of Sheffield); Karl Taylor (Department of Economics, University of Sheffield)
    Abstract: We introduce the (panel) zero-inflated interval regression (ZIIR) model, which is ideally suited when data are in the form of groups, which is commonly the case in survey data, and there is an ‘excess’ of zero observations. We apply our new modelling framework to the analysis of visits to general practitioners (GPs) using individual-level panel data from the British Household Panel Survey (BHPS). The ZIIR model simultaneously estimates the probability of visiting the GP and the frequency of visits (defined by given numerical intervals in the data). The results show that different socio-economic factors influence the probability of visiting the GP and the frequency of visits, thereby providing potentially valuable information to policy-makers concerned with health care allocation.
    Keywords: interval regression, inflated responses, health care allocation, general practitioners
    JEL: C3 D1 I1
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:ozl:bcecwp:wp1401&r=ecm
  32. By: A. Girardi; R. Golinelli; C. Pappalardo
    Abstract: Building on the literature on regularization and dimension reduction methods, we have developed a quarterly forecasting model for euro area GDP. This method consists in bridging quarterly national accounts data using factors extracted from a large panel of monthly and quarterly series including business surveys and financial indicators. The pseudo real-time nature of the information set is accounted for as the pattern of publication lags is considered. Forecast evaluation exercises show that predictions obtained through various dimension reduction methods outperform both the benchmark AR and the diffusion index model without pre-selected indicators. Moreover, forecast combination significantly reduces forecast error.
    JEL: C53 C22 E37 F47
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:bol:bodewp:wp919&r=ecm
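
The diffusion-index benchmark mentioned above can be sketched as: extract principal-component factors from a standardised indicator panel and regress GDP growth on their lags. Mixed frequencies and ragged publication edges, central to the paper, are deliberately skipped; the data are simulated placeholders.

```python
# Diffusion-index nowcast sketch: PCA factors, then a lagged factor regression.
import numpy as np

rng = np.random.default_rng(10)
T, N, r = 80, 40, 2                        # quarters, indicators, factors kept
common = rng.normal(size=(T, r))           # latent common factors
panel = common @ rng.normal(size=(r, N)) + rng.normal(size=(T, N))
gdp = common @ np.array([0.8, -0.4]) + 0.3 * rng.normal(size=T)

Z = (panel - panel.mean(0)) / panel.std(0)       # standardise indicators
eigval, eigvec = np.linalg.eigh(Z.T @ Z / T)     # eigenvalues in ascending order
F = Z @ eigvec[:, -r:]                           # leading principal components
X = np.column_stack([np.ones(T - 1), F[:-1]])    # factors lagged one period
b = np.linalg.lstsq(X, gdp[1:], rcond=None)[0]
nowcast = np.array([1.0, *F[-1]]) @ b            # prediction for the next quarter
print("nowcast:", round(float(nowcast), 3))
```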
  33. By: Chang, C-L.; McAleer, M.J.
    Abstract: The paper focuses on the robustness of rankings of academic journal quality and research impact of 10 leading econometrics journals taken from the Thomson Reuters ISI Web of Science (ISI) Category of Economics, using citations data from ISI and the highly accessible Research Papers in Economics (RePEc) database that is widely used in economics, finance and related disciplines. The journals are ranked using quantifiable static and dynamic Research Assessment Measures (RAMs), with 15 RAMs from ISI and 5 RAMs from RePEc. The similarities and differences in the various RAMs, which are based on alternative weighted and unweighted transformations of citations, are highlighted to show which RAMs are able to provide informational value relative to others. The RAMs include the impact factor, mean citations and non-citations, journal policy, number of high quality papers, and journal influence and article influence. The paper highlights robust rankings based on the harmonic mean of the ranks of 20 RAMs, which in some cases are closely related. It is shown that emphasizing the most widely-used RAM, the 2-year impact factor of a journal, can lead to a distorted evaluation of journal quality, impact and influence relative to the harmonic mean of the ranks.
    Keywords: research assessment measures, citations, impact, influence, harmonic mean, robust journal rankings, econometrics
    JEL: C1 C81 Y10
    Date: 2013–10–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:50130&r=ecm
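
The aggregation step is simple to reproduce: rank the journals under each RAM, then combine the ranks by their harmonic mean. The scores below are made-up numbers for three hypothetical journals.

```python
# Harmonic-mean-of-ranks aggregation sketch (made-up scores, 3 journals, 3 RAMs).
import numpy as np
from scipy import stats

scores = np.array([[3.1, 0.8, 120],     # journal A under three hypothetical RAMs
                   [2.4, 1.1,  95],     # journal B
                   [1.9, 0.5, 140]])    # journal C
ranks = stats.rankdata(-scores, axis=0)          # rank 1 = best on each RAM
harmonic = stats.hmean(ranks, axis=1)            # combine ranks across RAMs
print(dict(zip("ABC", harmonic.round(2))))       # lower = better overall
```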

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.