nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒02‒27
twenty-one papers chosen by
Sune Karlsson
Örebro University

  1. Sieve Inference on Semi-nonparametric Time Series Models By Xiaohong Chen; Zhipeng Liao; Yixiao Sun
  2. Bootstrapping Anderson-Rubin Statistic and J Statistic in Linear IV Models with Many Instruments By Wenjie Wang
  3. A Nonlinear Panel Data Model of Cross-Sectional Dependence By James Mitchell; George Kapetanios; Yongcheol Shin
  4. Comparison of several estimation procedures for long term behavior. By Dominique Guegan; Zhiping Lu; BeiJia Zhu
  5. Modelling Changes in the Unconditional Variance of Long Stock Return Series By Cristina Amado; Timo Teräsvirta
  6. Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes By Johanna Kappus
  7. U-MIDAS: MIDAS regressions with unrestricted lag polynomials By Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
  8. Spurious relationships: near-collinearity and the "classical suppressor", development aid and growth. By Jean-Bernard Chatelain; Kirsten Ralf
  9. Pregibit: A Family of Discrete Choice Models By Vijverberg, Chu-Ping C.; Vijverberg, Wim P.
  10. Discriminant analysis of multivariate time series using wavelets By Ann Elizabeth Maharaj; M. Andrés Alonso
  11. Tests for weak form market efficiency in stock prices: Monte Carlo evidence By Khaled, Mohammed S; Keef, Stephen P
  12. Are Forecast Combinations Efficient? By Pablo Pincheira
  13. Characterizing the Instrumental Variable Identifying Assumption as Sample Selection Conditions By Belzil, Christian; Hansen, Jörgen
  14. Assessing market uncertainty by means of a time-varying intermittency parameter for asset price fluctuations By Martin Rypdal; Espen Sirnes; Ola Løvsletten; Kristoffer Rypdal
  15. Asymptotic properties of U-processes under long-range dependence. By Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad
  16. The formulation and estimation of random effects panel data models of trade By Matyas, Laszlo; Hornok, Cecilia; Pus, Daria
  17. The cult of statistical significance. What economists should and should not do to make their data talk By Walter Krämer
  18. Robust estimation of the scale and of the autocovariance function of Gaussian short- and long-range dependent processes. By Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad
  19. Inference on sets in finance By Victor Chernozhukov; Emre Kocatulum; Konrad Menzel
  20. Existence and Uniqueness of Perturbation Solutions to DSGE Models By Hong Lan; Alexander Meyer-Gohde
  21. Large sample behaviour of some well-known robust estimators under long-range dependence. By Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad

  1. By: Xiaohong Chen (Cowles Foundation, Yale University); Zhipeng Liao (Dept. of Economics, UC Los Angeles); Yixiao Sun (Dept. of Economics, UC San Diego)
    Abstract: The method of sieves has been widely used in estimating semiparametric and nonparametric models. In this paper, we first provide a general theory on the asymptotic normality of plug-in sieve M estimators of possibly irregular functionals of semi/nonparametric time series models. Next, we establish a surprising result that the asymptotic variances of plug-in sieve M estimators of irregular (i.e., slower than root-T estimable) functionals do not depend on temporal dependence. Nevertheless, ignoring the temporal dependence in small samples may not lead to accurate inference. We then propose an easy-to-compute and more accurate inference procedure based on a "pre-asymptotic" sieve variance estimator that captures temporal dependence. We construct a "pre-asymptotic" Wald statistic using an orthonormal series long run variance (OS-LRV) estimator. For sieve M estimators of both regular (i.e., root-T estimable) and irregular functionals, a scaled "pre-asymptotic" Wald statistic is asymptotically F distributed when the number of orthonormal series terms in the OS-LRV estimator is held fixed. Simulations indicate that our scaled "pre-asymptotic" Wald test with F critical values has more accurate size in finite samples than the usual Wald test with chi-square critical values.
    Keywords: Weak dependence, Sieve M estimation, Sieve Riesz representor, Irregular functional, Misspecification, Pre-asymptotic variance, Orthogonal series long run variance estimation, F distribution
    JEL: C12 C14 C32
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1849&r=ecm
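    The "pre-asymptotic" variance described above rests on an orthonormal series long run variance (OS-LRV) estimator. The following minimal sketch, assuming NumPy and not taken from the paper, shows the scalar version with a cosine basis; K = 12 terms and the AR(1) example are illustrative choices only.
      import numpy as np

      def os_lrv(u, K=12):
          # Orthonormal-series long-run variance estimate of a demeaned scalar score series u.
          u = np.asarray(u, dtype=float)
          T = len(u)
          t = (np.arange(1, T + 1) - 0.5) / T             # rescaled time points in (0, 1)
          lam = np.empty(K)
          for j in range(1, K + 1):
              phi = np.sqrt(2.0) * np.cos(np.pi * j * t)  # cosine basis function
              lam[j - 1] = phi @ u / np.sqrt(T)           # projection coefficient
          return lam @ lam / K                            # average of squared projections

      # With K terms held fixed, the studentized mean is compared against Student-t(K)
      # (or the scaled Wald against F) rather than normal or chi-square critical values.
      rng = np.random.default_rng(0)
      e = rng.standard_normal(500)
      x = np.empty(500)
      x[0] = e[0]
      for s in range(1, 500):
          x[s] = 0.5 * x[s - 1] + e[s]
      t_stat = np.sqrt(len(x)) * x.mean() / np.sqrt(os_lrv(x - x.mean(), K=12))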
  2. By: Wenjie Wang (Graduate School of Economics, Kyoto University)
    Abstract: A bootstrap method is proposed for the Anderson-Rubin test and the J test for overidentifying restrictions in linear instrumental variable models with many instruments. We show the bootstrap validity of these test statistics when the number of instruments increases at the same rate as the sample size. Moreover, since the bootstrap has been shown in the literature to be valid when the number of instruments is small, the technique is practically robust to the number of moment conditions. A small-scale Monte Carlo experiment shows that our procedure has outstanding small sample performance compared with some existing asymptotic procedures.
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:kyo:wpaper:810&r=ecm
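    The sketch below, assuming NumPy, gives the Anderson-Rubin statistic for a single endogenous regressor together with a plain residual bootstrap that imposes the null; the design and resampling scheme are illustrative and need not coincide with the procedure proposed in the paper.
      import numpy as np

      def ar_stat(y, x, Z, beta0):
          # Anderson-Rubin statistic for H0: beta = beta0 in y = x*beta + u, instruments Z.
          u0 = y - x * beta0
          Pu = Z @ np.linalg.lstsq(Z, u0, rcond=None)[0]       # projection of u0 onto the span of Z
          n, k = Z.shape
          return (u0 @ Pu / k) / ((u0 @ u0 - u0 @ Pu) / (n - k))

      def bootstrap_ar_pvalue(y, x, Z, beta0, B=499, seed=0):
          # Plain residual bootstrap under the imposed null (illustrative scheme only).
          rng = np.random.default_rng(seed)
          n = len(y)
          u0 = y - x * beta0                                   # restricted residuals
          stat = ar_stat(y, x, Z, beta0)
          boot = np.empty(B)
          for b in range(B):
              idx = rng.integers(0, n, n)
              y_b = x * beta0 + (u0[idx] - u0.mean())          # resampled, recentred errors
              boot[b] = ar_stat(y_b, x, Z, beta0)
          return stat, float((boot >= stat).mean())            # statistic and bootstrap p-value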
  3. By: James Mitchell; George Kapetanios; Yongcheol Shin
    Abstract: This paper proposes a nonlinear panel data model which can generate endogenously both `weak' and `strong' cross-sectional dependence. The model's distinguishing characteristic is that a given agent's behaviour is influenced by an aggregation of the views or actions of those around them. The model allows for considerable flexibility in terms of the genesis of this herding or clustering type behaviour. At an econometric level, the model is shown to nest various extant dynamic panel data models. These include panel AR models, spatial models, which accommodate weak dependence only, and panel models where cross-sectional averages or factors exogenously generate strong, but not weak, cross sectional dependence. An important implication is that the appropriate model for the aggregate series becomes intrinsically nonlinear, due to the clustering behaviour, and thus requires the disaggregates to be simultaneously considered with the aggregate. We provide the associated asymptotic theory for estimation and inference. This is supplemented with Monte Carlo studies and two empirical applications which indicate the utility of our proposed model as both a structural and reduced form vehicle to model different types of cross-sectional dependence, including evolving clusters.
    Keywords: Nonlinear Panel Data Model; Clustering; Cross-section Dependence; Factor Models; Monte Carlo Simulations; Application to Stock Returns and Inflation Expectations
    JEL: C31 C33 C51 E31 G14
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:12/01&r=ecm
  4. By: Dominique Guegan (Centre d'Economie de la Sorbonne); Zhiping Lu (East China Normal University (ECNU)); BeiJia Zhu (Centre d'Economie de la Sorbonne et East China Normal University (ECNU))
    Abstract: In this paper, nine memory parameter estimation procedures for the fractionally integrated I(d) process, both semi-parametric and parametric, which prevail in the existing literature are reviewed; through a simulation study under the ARFIMA(p,d,q) setting we shed light on the finite sample performance of these estimation procedures for non-stationary long memory time series. As a by-product of this study, we provide, from a practical standpoint, a bandwidth parameter selection strategy for the frequency domain estimation and an upper-and-lower scale trimming strategy for the wavelet domain estimation. The other objective of this paper is to give a useful reference to applied researchers and practitioners.
    Keywords: Finite sample performance comparison, Fourier frequency, GPH, non-stationary long memory time series, wavelet.
    JEL: C12 C15 C22
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:12008&r=ecm
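    One of the semi-parametric frequency-domain procedures reviewed in the paper is the GPH log-periodogram regression; the compact NumPy sketch below is a generic textbook version, and the bandwidth m = n^0.5 is only one of the choices that the paper's bandwidth-selection discussion is concerned with.
      import numpy as np

      def gph_estimate(x, power=0.5):
          # Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d.
          x = np.asarray(x, dtype=float)
          n = len(x)
          m = int(n ** power)                                  # bandwidth: number of frequencies used
          lam = 2.0 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
          dft = np.fft.fft(x - x.mean())[1:m + 1]
          periodogram = np.abs(dft) ** 2 / (2.0 * np.pi * n)
          regressor = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
          return np.polyfit(regressor, np.log(periodogram), 1)[0]   # slope estimates d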
  5. By: Cristina Amado (Universidade do Minho - NIPE); Timo Teräsvirta (CREATES, Department of Economics and Business, Aarhus University)
    Abstract: In this paper we develop a testing and modelling procedure for describing the long-term volatility movements over very long return series. For this purpose, we assume that volatility is multiplicatively decomposed into a conditional and an unconditional component as in Amado and Teräsvirta (2011). The latter component is modelled by incorporating smooth changes so that the unconditional variance is allowed to evolve slowly over time. Statistical inference is used for specifying the parameterization of the time-varying component by applying a sequence of Lagrange multiplier tests. The model building procedure is illustrated with an application to daily returns of the Dow Jones Industrial Average stock index covering a period of more than ninety years. The main conclusions are as follows. First, the LM tests strongly reject the assumption of constancy of the unconditional variance. Second, the results show that the long-memory property in volatility may be explained by ignored changes in the unconditional variance of the long series. Finally, based on a formal statistical test we find evidence of the superiority of volatility forecast accuracy of the new model over the GJR-GARCH model at all horizons for a subset of the long return series.
    Keywords: Model specification; Conditional heteroskedasticity; Lagrange multiplier test; Time-varying unconditional variance; Long financial time series; Volatility persistence
    JEL: C12 C22 C51 C52 C53
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:nip:nipewp:02/2012&r=ecm
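    A toy NumPy sketch of the multiplicative decomposition described above, with a single logistic transition in the unconditional component; the transition form, the parameter values and the omission of a fitted GARCH part are illustrative simplifications, not the paper's specification or its LM-test-based selection procedure.
      import numpy as np

      def logistic_transition(time_frac, gamma, c):
          # Smooth logistic transition G(t/T; gamma, c) taking values in (0, 1).
          return 1.0 / (1.0 + np.exp(-gamma * (time_frac - c)))

      def unconditional_component(T, delta0, delta1, gamma, c):
          # g_t = delta0 + delta1 * G(t/T): slowly evolving unconditional variance.
          time_frac = np.arange(1, T + 1) / T
          return delta0 + delta1 * logistic_transition(time_frac, gamma, c)

      # Multiplicative decomposition sigma_t^2 = h_t * g_t: returns standardized by
      # sqrt(g_t) are approximately stationary and can be passed to a standard GARCH
      # routine to estimate the conditional component h_t.
      T = 2000
      g = unconditional_component(T, delta0=1.0, delta1=3.0, gamma=25.0, c=0.6)
      rng = np.random.default_rng(1)
      returns = np.sqrt(g) * rng.standard_normal(T)    # toy returns with only the g_t part
      standardized = returns / np.sqrt(g)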
  6. By: Johanna Kappus
    Abstract: For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data driven choice of the smoothing parameter.
    Keywords: Statistics of stochastic processes, Low frequency observed Lévy processes, Nonparametric statistics, Adaptive estimation, Model selection with unknown variance
    JEL: C14
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-016&r=ecm
  7. By: Foroni, Claudia; Marcellino, Massimiliano; Schumacher, Christian
    Abstract: Mixed-data sampling (MIDAS) regressions make it possible to estimate dynamic equations that explain a low-frequency variable by high-frequency variables and their lags. When the difference in sampling frequencies between the regressand and the regressors is large, distributed lag functions are typically employed to model dynamics while avoiding parameter proliferation. In macroeconomic applications, however, differences in sampling frequencies are often small. In such a case, it might not be necessary to employ distributed lag functions. In this paper, we discuss the pros and cons of unrestricted lag polynomials in MIDAS regressions. We derive unrestricted MIDAS regressions (U-MIDAS) from linear high-frequency models, discuss identification issues, and show that their parameters can be estimated by OLS. In Monte Carlo experiments, we compare U-MIDAS to MIDAS with functional distributed lags estimated by NLS. We show that U-MIDAS generally performs better than MIDAS when mixing quarterly and monthly data. On the other hand, with larger differences in sampling frequencies, distributed lag functions outperform unrestricted polynomials. In an empirical application on out-of-sample nowcasting of GDP in the US and the Euro area using monthly predictors, we find a good performance of U-MIDAS for a number of indicators, although the results depend on the evaluation sample. We suggest considering U-MIDAS as a potential alternative to the existing MIDAS approach, in particular for mixing monthly and quarterly variables. In practice, the choice between the two approaches should be made on a case-by-case basis, depending on their relative performance.
    Keywords: mixed data sampling, distributed lag polynomials, time aggregation, now-casting
    JEL: E37 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdp1:201135&r=ecm
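    The paper's point that each high-frequency lag can enter with its own coefficient and be estimated by OLS is easy to make concrete. The sketch below, assuming NumPy, builds an unrestricted design matrix for a quarterly regressand and a monthly regressor; the alignment convention and the lag count are illustrative.
      import numpy as np

      def umidas_ols(y_q, x_m, n_lags=6):
          # Unrestricted MIDAS: every monthly lag gets its own coefficient, estimated by OLS.
          # y_q: quarterly regressand (length Q); x_m: monthly regressor (length 3*Q),
          # aligned so that x_m[3*q + 2] is the last month of quarter q.
          rows, targets = [], []
          for q in range(len(y_q)):
              end = 3 * q + 2                                   # last month of quarter q
              if end - n_lags + 1 < 0:
                  continue                                      # not enough monthly history yet
              rows.append(np.asarray(x_m[end - n_lags + 1:end + 1])[::-1])  # most recent month first
              targets.append(y_q[q])
          X = np.column_stack([np.ones(len(rows)), np.array(rows)])
          return np.linalg.lstsq(X, np.array(targets), rcond=None)[0]       # OLS coefficients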
  8. By: Jean-Bernard Chatelain (Centre d'Economie de la Sorbonne - Paris School of Economics); Kirsten Ralf (Ecole Supérieure du Commerce Extérieur (ESCE))
    Abstract: This paper shows that a multiple regression with two highly correlated explanatory variables, both of which have a near-zero correlation with the dependent variable, may correspond to a spurious regression or to a homeostatic model, with estimates that are highly sensitive to outliers. The regression method alone does not allow one to decide which of the two models is relevant. Statistical significance of the (very large) parameters is easily obtained, as Monte Carlo simulations show. An example is provided by the Burnside and Dollar [2000] article on aid, policies and growth.
    Keywords: Spurious regression, near-multicollinearity, classical suppressor, parameter inflation factor (PIF).
    JEL: C12 C18 C52 F35 O47
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:12011&r=ecm
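    A NumPy Monte Carlo sketch of the classical-suppressor configuration the abstract describes; the data-generating process and parameter values are invented for illustration and are not taken from the paper.
      import numpy as np

      def suppressor_mc(n=100, reps=1000, seed=0):
          # Two near-collinear regressors, each almost uncorrelated with y, yet the multiple
          # regression delivers large and highly "significant" coefficients of opposite sign.
          rng = np.random.default_rng(seed)
          t_stats, simple_corrs = [], []
          for _ in range(reps):
              z = rng.standard_normal(n)
              x1 = z + 0.1 * rng.standard_normal(n)
              x2 = z + 0.1 * rng.standard_normal(n)         # corr(x1, x2) is roughly 0.99
              y = 5.0 * (x1 - x2) + rng.standard_normal(n)  # y barely correlated with x1 or x2
              X = np.column_stack([np.ones(n), x1, x2])
              beta = np.linalg.lstsq(X, y, rcond=None)[0]
              u = y - X @ beta
              cov = (u @ u / (n - 3)) * np.linalg.inv(X.T @ X)
              t_stats.append(beta[1] / np.sqrt(cov[1, 1]))
              simple_corrs.append(np.corrcoef(y, x1)[0, 1])
          return np.mean(np.abs(t_stats)), np.mean(np.abs(simple_corrs))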
  9. By: Vijverberg, Chu-Ping C. (Wichita State University); Vijverberg, Wim P. (CUNY Graduate Center)
    Abstract: The pregibit discrete choice model is built on a distribution that allows symmetry or asymmetry and thick tails, thin tails or no tails. Thus the model is much richer than the traditional models that are typically used to study behavior that generates discrete choice outcomes. Pregibit nests logit, approximately nests probit, loglog, cloglog and gusset models, and yields a linear probability model that is solidly founded on the discrete choice framework that underlies logit and probit.
    Keywords: discrete choice, asymmetry, logit, probit, post-secondary education, mortgage application
    JEL: C25 G21 I21
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6359&r=ecm
  10. By: Ann Elizabeth Maharaj; M. Andrés Alonso
    Abstract: In analyzing ECG data, the main aim is to differentiate between the signal patterns of healthy subjects and those of individuals with specific heart conditions. We propose an approach for classifying multivariate ECG signals based on discriminant and wavelet analyses. For this purpose we use multiple-scale wavelet variances and wavelet correlations to distinguish between the patterns of multivariate ECG signals, based on the variability of the individual components of each ECG signal and on the relationships between every pair of these components. Using the results of other ECG classification studies in the literature as references, we demonstrate that our approach, applied to 12-lead ECG signals from a particular database, displays quite favourable performance. We also demonstrate with real and synthetic ECG data that our approach to classifying multivariate time series outperforms other well-known approaches for classifying multivariate time series. In simulation studies using multivariate time series whose patterns differ from those of the ECG signals, we also demonstrate very favourable performance of this approach compared to these other approaches.
    Keywords: Time series, Wavelet Variances, Wavelet Correlations, Discriminant Analysis
    JEL: C38 C22
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws120603&r=ecm
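    A rough sketch of the feature construction, assuming NumPy, PyWavelets and scikit-learn are available; the ordinary DWT detail coefficients stand in for the wavelet transform used in the paper, and the toy bivariate records are purely synthetic.
      import numpy as np
      import pywt
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def wavelet_features(signals, wavelet="db4", level=4):
          # signals: array of shape (n_leads, n_samples) for one multivariate record.
          details = [pywt.wavedec(s, wavelet, level=level)[1:] for s in signals]
          feats = []
          for lead in details:
              feats.extend(np.var(d) for d in lead)                 # wavelet variances per scale
          n = len(details)
          for i in range(n):
              for j in range(i + 1, n):
                  for di, dj in zip(details[i], details[j]):
                      feats.append(np.corrcoef(di, dj)[0, 1])       # wavelet correlations per scale
          return np.array(feats)

      # Toy usage: discriminate two groups of bivariate records via a linear discriminant.
      rng = np.random.default_rng(0)
      records = [rng.standard_normal((2, 512)) * (1 + g) for g in (0, 1) for _ in range(20)]
      labels = [g for g in (0, 1) for _ in range(20)]
      X = np.vstack([wavelet_features(r) for r in records])
      clf = LinearDiscriminantAnalysis().fit(X, labels)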
  11. By: Khaled, Mohammed S; Keef, Stephen P
    Abstract: Efficiency in financial markets is tested by applying variance ratio (VR) tests, but unit root tests are also used by many, sometimes in addition to the VR tests. There is a lack of clarity in the literature about the implication of these test results when they seem to disagree. We distinguish between two different types of predictability, called "structural predictability" and "error predictability". Standard unit root tests pick up structural predictability. VR tests pick up both structural and error predictability.
    Keywords: Unit Root, Weak Form Efficiency, Random Walk, Autocorrelation, Variance Ratio
    Date: 2011–12–20
    URL: http://d.repec.org/n?u=RePEc:vuw:vuwecf:1993&r=ecm
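    A textbook homoskedastic version of the Lo-MacKinlay variance ratio statistic mentioned above, assuming NumPy; the paper's Monte Carlo study covers several test variants beyond this simple one.
      import numpy as np

      def variance_ratio(prices, q):
          # Variance ratio of q-period to one-period log returns, plus its z-statistic
          # under the homoskedastic random-walk null.
          logp = np.log(np.asarray(prices, dtype=float))
          r1 = np.diff(logp)                        # one-period returns
          rq = logp[q:] - logp[:-q]                 # overlapping q-period returns
          mu = r1.mean()
          var1 = np.mean((r1 - mu) ** 2)
          varq = np.mean((rq - q * mu) ** 2) / q
          vr = varq / var1
          n = len(r1)
          se = np.sqrt(2.0 * (2 * q - 1) * (q - 1) / (3.0 * q * n))
          return vr, (vr - 1.0) / se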
  12. By: Pablo Pincheira
    Abstract: It is well known that weighted averages of two competing forecasts may reduce Mean Squared Prediction Errors (MSPE) and may also introduce certain inefficiencies. In this paper we take an in-depth view of one particular type of inefficiency stemming from simple combination schemes. We identify testable conditions under which every linear convex combination of two forecasts displays this type of inefficiency. In particular, we show that the process of taking averages of forecasts may induce inefficiencies in the combination, even when the individual forecasts are efficient. Furthermore, we show that the so-called "optimal weighted average" traditionally presented in the literature may indeed be suboptimal. We propose a simple testable condition to detect if this traditional weighted factor is optimal in a broader sense. An optimal "recombination weight" is introduced. Finally, we illustrate our findings with simulations and an empirical application in the context of the combination of inflation forecasts.
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:chb:bcchwp:661&r=ecm
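    The "optimal weighted average" referred to above is the classical MSPE-minimizing (Bates-Granger) weight, sketched below with NumPy; the paper's recombination weight and efficiency tests are not reproduced here.
      import numpy as np

      def bates_granger_weight(e1, e2):
          # MSPE-minimizing weight on forecast 1, given the two forecast-error series.
          s11, s22 = np.mean(e1 ** 2), np.mean(e2 ** 2)
          s12 = np.mean(e1 * e2)
          return (s22 - s12) / (s11 + s22 - 2.0 * s12)

      def combine(f1, f2, y):
          # Combine two forecasts of y using the estimated optimal weight.
          w = bates_granger_weight(y - f1, y - f2)
          return w * f1 + (1.0 - w) * f2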
  13. By: Belzil, Christian (Ecole Polytechnique, Paris); Hansen, Jörgen (Concordia University)
    Abstract: We build on Rosenzweig and Wolpin (2000) and Keane (2010) and show that in order to fulfill the instrumental variable (IV) identifying moment condition, a policy must be designed so that compliers and non-compliers either have the same average error term, or have an error term ratio equal to their relative share of the population. The former condition (labeled Choice Orthogonality) is essentially a no-selection condition. The latter, referred to as Weighted Opposite Choices, may be viewed as a distributional (functional form) assumption necessary to match the degree of selectivity between compliers and non-compliers to their relative population proportions. These conditions form a core of implicit IV assumptions that are present in any empirical application. They allow the econometrician to gain substantial insight into the validity of a specific instrument, and they illustrate the link between identification and the statistical strength of an instrument. Finally, our characterization may also help in designing a policy that generates a valid instrument.
    Keywords: instrumental variable methods, implicit assumptions, treatment effects
    JEL: B4 C1 C3
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6339&r=ecm
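    A toy NumPy simulation, not the paper's framework, that splits the IV moment condition into complier and non-complier contributions for a binary instrument and a binary treatment; the group shares, treatment effect and error distribution are invented for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 200_000
      z = rng.integers(0, 2, n)                       # binary instrument (e.g., a policy offer)
      u = rng.standard_normal(n)                      # outcome error term
      always = rng.random(n) < 0.2                    # always-takers (illustrative share)
      complier = (~always) & (rng.random(n) < 0.5)    # compliers take treatment iff z == 1
      d = np.where(always, 1, np.where(complier, z, 0))
      y = 1.0 * d + u                                 # outcome with a unit treatment effect
      # The moment condition E[(z - E z) * u] = 0 holds here because u is independent of
      # complier status; selective error terms that are not offset in the way the
      # abstract's conditions require would break it.
      zc = z - z.mean()
      print("complier contribution:    ", np.mean(zc * u * complier))
      print("non-complier contribution:", np.mean(zc * u * ~complier))
      print("Wald/IV estimate:         ", (zc @ y) / (zc @ d))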
  14. By: Martin Rypdal; Espen Sirnes; Ola Løvsletten; Kristoffer Rypdal
    Abstract: Maximum likelihood estimation applied to high-frequency data allows us to quantify intermittency in the fluctuations of asset prices. From time records as short as one month these methods permit extraction of a meaningful intermittency parameter λ characterising the degree of volatility clustering of asset prices. We can therefore study the time evolution of volatility clustering and test the statistical significance of this variability. By analysing data from the Oslo Stock Exchange, and comparing the results with the investment grade spread, we find that the estimates of λ are lower at times of high market uncertainty.
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1202.4877&r=ecm
  15. By: Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univ-tlse1.fr/3043/&r=ecm
  16. By: Matyas, Laszlo; Hornok, Cecilia; Pus, Daria
    Abstract: The paper introduces several random effects model specifications for the most frequently used three-dimensional panel data sets. It derives appropriate estimation methods for the balanced and unbalanced cases. An application is also presented in which the bilateral trade of 20 EU countries is analysed for the period 2001-2006. The differences between the fixed and random effects specifications are highlighted through this empirical exercise.
    Keywords: panel data; multidimensional panel data; random effects; error components model; trade model; gravity model
    JEL: C13 F11 C23 F17 F1 C21 C01
    Date: 2012–02–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:36789&r=ecm
  17. By: Walter Krämer
    Abstract: This article takes issue with a recent book by Ziliak and McCloskey (2008) of the same title. Ziliak and McCloskey argue that statistical significance testing is a barrier rather than a booster for empirical research in economics and should therefore be abandoned altogether. The present article argues that this is good advice in some research areas but not in others. Taking as examples all issues of the German Economic Review which have appeared so far and a recent epidemiological meta-analysis, it shows that there has indeed been a lot of misleading work in the context of significance testing, but also that many promising avenues for fruitfully employing statistical significance tests, disregarded by Ziliak and McCloskey, have not been exploited.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:rsw:rswwps:rswwps176&r=ecm
  18. By: Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univ-tlse1.fr/3044/&r=ecm
  19. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Emre Kocatulum; Konrad Menzel
    Abstract: In this paper we introduce various set inference problems as they appear in finance and propose practical and powerful inferential tools. Our tools will be applicable to any problem where the set of interest solves a system of smooth estimable inequalities, though we will particularly focus on the following two problems: the admissible mean-variance sets of stochastic discount factors and the admissible mean-variance sets of asset portfolios. We propose to make inference on such sets using weighted likelihood-ratio and Wald type statistics, building upon and substantially enriching the available methods for inference on sets.
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:ifs:cemmap:04/12&r=ecm
  20. By: Hong Lan; Alexander Meyer-Gohde
    Abstract: We prove that standard regularity and saddle stability assumptions for linear approximations are sufficient to guarantee the existence of a unique solution for all undetermined coefficients of nonlinear perturbations of arbitrary order to discrete time DSGE models. We derive the perturbation using a matrix calculus that preserves linear algebraic structures to arbitrary orders of derivatives, enabling the direct application of theorems from matrix analysis to prove our main result. As a consequence, we provide insight into several invertibility assumptions from linear solution methods, prove that the local solution is independent of terms first order in the perturbation parameter, and relax the assumptions needed for the local existence theorem of perturbation solutions.
    Keywords: Perturbation, matrix calculus, DSGE, solution methods, Bézout theorem, Sylvester equations
    JEL: C61 C63 E17
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-015&r=ecm
  21. By: Boistard, Hélène; Levy-Leduc, Céline; Moulines, Eric; Reisen, Valdério Anselmo; Taqqu, Murad
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ner:toulou:http://neeo.univ-tlse1.fr/3045/&r=ecm

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.