nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒01‒26
eighteen papers chosen by
Sune Karlsson
Örebro University

  1. Fractionally Integrated VAR Models with a Fractional Lag Operator and Deterministic Trends: Finite Sample Identification and Two-step Estimation By Tschernig, Rolf; Weber, Enzo; Weigand, Roland
  2. Multiscale Adaptive Inference on Conditional Moment Inequalities By Timothy B. Armstrong; Hock Peng Chan
  3. Spectrum estimation: a unified framework for covariance matrix estimation and PCA in large dimensions By Olivier Ledoit; Michael Wolf
  4. Group Invariance, Likelihood Ratio Tests, and the Incidental Parameter Problem in a High-Dimensional Linear Model By Marc Hallin; Marcelo J. Moreira; Alexei Onatski
  5. Exploiting infinite variance through Dummy Variables in non-stationary autoregressions By Giuseppe Cavaliere; Iliyan Georgiev
  6. The best estimation for high-dimensional Markowitz mean-variance optimization By Bai, Zhidong; Li, Hua; Wong, Wing-Keung
  7. Detecting dependence between spatial processes By Herrera Gómez, Marcos; Ruiz Marín, Manuel; Mur Lacambra, Jesús
  8. Random cascade model in the limit of infinite integral scale as the exponential of a non-stationary $1/f$ noise. Application to volatility fluctuations in stock markets By J. F. Muzy; R. Baile; E. Bacry
  9. Local Powers of Optimal One- and Multi-Sample Tests for the Concentration of Fisher-von Mises Langevin Distributions By Christophe Ley; Thomas Verdebout
  10. Obtaining reliable Likelihood Ratio tests from simulated likelihood functions By Laura Mørch Andersen
  11. Empirical studies in a multivariate non-stationary, nonparametric regression model for financial returns By Gürtler, Marc; Rauh, Ronald
  12. A Generalized Dynamic Factor Model for Panel Data: Estimation with a Two-Cycle Conditional Expectation-Maximization Algorithm By Nikolaos Zirogiannis; Yorghos Tripodis
  13. Treatment effect identification using alternative parallel assumptions By Ricardo Mora; Iliana Reggio
  14. Posterior-Predictive Evidence on US Inflation using Phillips Curve Models with Non-Filtered Time Series By Nalan Basturk; Cem Cakmakli; Pinar Ceyhan; Herman K. van Dijk
  15. Limit theorems for power variations of ambit fields driven by white noise By Mikko S. Pakkanen
  16. A Model-Free Measure of Aggregate Idiosyncratic Volatility and the Prediction of Market Returns By René Garcia; Daniel Mantilla-Garcia; Lionel Martellini
  17. Bayesian Non-Parametric Portfolio Decisions with Financial Time Series By Audrone Virbickaite; M. Concepción Ausín; Pedro Galeano
  18. Financial Dependence Analysis: Applications of Vine Copulae By David E. Allen; Mohammad A. Ashraf; Michael McAleer; Robert J. Powell; Abhay K. Singh

  1. By: Tschernig, Rolf; Weber, Enzo; Weigand, Roland
    Abstract: Fractionally integrated vector autoregressive models make it possible to capture persistence in time series data in a very flexible way. Additional flexibility for the short-memory properties of the model can be attained by using the fractional lag operator of Johansen (2008) in the vector autoregressive polynomial. However, this also makes maximum likelihood estimation more difficult. In this paper we first identify parameter settings for univariate and bivariate models that suffer from poor identification in finite samples and may therefore lead to estimation problems. Second, we propose to investigate the extent of poor identification by using expected log-likelihoods and variations thereof, which are faster to simulate than multivariate finite-sample distributions of parameter estimates. Third, we provide a line of reasoning that explains the finding, from several univariate and bivariate simulation examples, that the two-step estimator suggested by Tschernig, Weber, and Weigand (2010) can be more robust with respect to estimating the deterministic components than the maximum likelihood estimator.
    Keywords: fractional integration; long memory; maximum likelihood estimation; fractional lag operator
    JEL: C32 C51
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bay:rdwiwi:27269&r=ecm
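For intuition, the fractional differencing at the core of such models can be sketched in a few lines of Python. This illustrates only the standard truncated filter (1-L)^d via its binomial expansion, not the authors' fractional lag operator or their two-step estimator:

```python
# Coefficients of the fractional difference filter (1 - L)^d:
# pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j  (binomial expansion).
def frac_diff_weights(d, n):
    w = [1.0]
    for j in range(1, n):
        w.append(w[-1] * (j - 1 - d) / j)
    return w

def frac_diff(x, d):
    """Apply the truncated filter (1 - L)^d to a series x (pre-sample values treated as 0)."""
    w = frac_diff_weights(d, len(x))
    return [sum(w[j] * x[t - j] for j in range(t + 1)) for t in range(len(x))]

# d = 1 recovers the ordinary first difference; fractional d in (0, 1)
# gives the slowly decaying weights that generate long memory.
print(frac_diff([1.0, 2.0, 3.0], 1.0))   # [1.0, 1.0, 1.0]
print(frac_diff_weights(0.5, 3))         # [1.0, -0.5, -0.125]
```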
  2. By: Timothy B. Armstrong (Cowles Foundation, Yale University); Hock Peng Chan (National University of Singapore)
    Abstract: This paper considers inference for conditional moment inequality models using a multiscale statistic. We derive the asymptotic distribution of this test statistic and use the result to propose feasible critical values that have a simple analytic formula. We also propose critical values based on a modified bootstrap procedure and prove their asymptotic validity. The asymptotic distribution is extreme value, and the proof uses new techniques to overcome several technical obstacles. We provide power results showing that our test detects local alternatives that approach the identified set at the best possible rate under a set of conditions that hold generically in the set-identified case in a broad class of models, and that our test is adaptive to the smoothness properties of the data generating process. Our results also have implications for the use of moment selection procedures in this setting. We provide a Monte Carlo study and an empirical illustration of inference in a regression model with endogenously censored and missing data.
    Keywords: Moment inequalities, Set inference, Adaptive inference
    JEL: C01 C14 C34
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1885&r=ecm
  3. By: Olivier Ledoit; Michael Wolf
    Abstract: Covariance matrix estimation and principal component analysis (PCA) are two cornerstones of multivariate analysis. Classic textbook solutions perform poorly when the dimension of the data is of a magnitude similar to the sample size, or even larger. In such settings, there is a common remedy for both statistical problems: nonlinear shrinkage of the eigenvalues of the sample covariance matrix. The optimal nonlinear shrinkage formula depends on unknown population quantities and is thus not available. It is, however, possible to consistently estimate an oracle nonlinear shrinkage, which is motivated on asymptotic grounds. A key tool to this end is consistent estimation of the set of eigenvalues of the population covariance matrix (also known as spectrum), an interesting and challenging problem in its own right. Extensive Monte Carlo simulations demonstrate that our methods have desirable finite-sample properties and outperform previous proposals.
    Keywords: Large-dimensional asymptotics, covariance matrix eigenvalues, nonlinear shrinkage, principal component analysis
    JEL: C13
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:105&r=ecm
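For intuition, the linear predecessor of nonlinear shrinkage pulls the sample covariance matrix toward a scaled identity, which contracts every sample eigenvalue toward their grand mean by a common amount. A minimal sketch (the shrinkage intensity delta is hand-picked here, not estimated, and this is not the paper's nonlinear estimator, which moves each eigenvalue individually):

```python
# Linear shrinkage of a sample covariance matrix toward tr(S)/p * I.
# All sample eigenvalues are pulled toward their grand mean mu by a
# common intensity delta; the trace of S is preserved.
def shrink(S, delta):
    p = len(S)
    mu = sum(S[i][i] for i in range(p)) / p   # grand mean of eigenvalues = tr(S)/p
    return [[(1 - delta) * S[i][j] + (delta * mu if i == j else 0.0)
             for j in range(p)] for i in range(p)]

S = [[4.0, 1.0], [1.0, 0.25]]     # ill-conditioned sample covariance
print(shrink(S, 0.5))             # [[3.0625, 0.5], [0.5, 1.1875]]
```

Note that the trace is unchanged (3.0625 + 1.1875 = 4.25 = tr(S)); only the spread of the eigenvalues is reduced.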
  4. By: Marc Hallin; Marcelo J. Moreira; Alexei Onatski
    Abstract: This paper considers a linear panel data model with reduced rank regressors and interactive fixed effects. The leading example is a factor model where some of the factors are observed, some others not. Invariance considerations yield a maximal invariant statistic whose density does not depend on incidental parameters. It is natural to consider a likelihood ratio test based on the maximal invariant statistic. Its density can be found by using as a prior the unique invariant distribution for the incidental parameters. That invariant distribution is least favorable and leads to minimax optimality properties. Combining the invariant distribution with a prior for the remaining parameters gives a class of admissible tests. A particular choice of distribution yields the spiked covariance model of Johnstone (2001). Numerical simulations suggest that the maximal invariant likelihood ratio test outperforms the standard likelihood ratio test. Tests which are not invariant to data transformations (i) are uniquely represented as randomized tests of the maximal invariant statistic and (ii) do not solve the incidental parameter problem.
    Keywords: panel data models; factor model; incidental parameters; invariance; integrated likelihood; minimax; likelihood ratio test
    JEL: C12 C44
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/137736&r=ecm
  5. By: Giuseppe Cavaliere (Università di Bologna); Iliyan Georgiev (Universidade Nova de Lisboa)
    Abstract: We consider estimation and testing in finite-order autoregressive models with a (near) unit root and infinite-variance innovations. We study the asymptotic properties of estimators obtained by dummying out "large" innovations, i.e., those exceeding a given threshold. These estimators reflect the common practice of dealing with large residuals by including impulse dummies in the estimated regression. Iterative versions of the dummy-variable estimator are also discussed. We provide conditions on the preliminary parameter estimator and on the threshold which ensure that (i) the dummy-based estimator is consistent at higher rates than the OLS estimator, (ii) an asymptotically normal test statistic for the unit root hypothesis can be derived, and (iii) order of magnitude gains of local power are obtained.
    Keywords: Autoregressive processes; Infinite variance; Dummy variables
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:118&r=ecm
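The dummy-variable device can be illustrated in a toy AR(1): adding an impulse dummy for observation t is numerically equivalent to dropping t from the OLS fit, so a minimal sketch (with a hand-picked threshold, not the paper's conditions on it) is:

```python
# OLS for an AR(1) without intercept: rho_hat = sum(y_t y_{t-1}) / sum(y_{t-1}^2).
# Dummying out observations whose first-stage residual exceeds a threshold
# is numerically equivalent to excluding them from the second-stage OLS.
def ols_ar1(y, keep=None):
    idx = [t for t in range(1, len(y)) if keep is None or t in keep]
    return sum(y[t] * y[t - 1] for t in idx) / sum(y[t - 1] ** 2 for t in idx)

def dummy_ar1(y, threshold):
    rho0 = ols_ar1(y)                                   # preliminary estimator
    resid = {t: y[t] - rho0 * y[t - 1] for t in range(1, len(y))}
    keep = {t for t, e in resid.items() if abs(e) <= threshold}
    return ols_ar1(y, keep)                             # dummy-based estimator

y = [1.0, 1.0, 1.0, 9.0, 9.0, 9.0]    # one huge innovation at t = 3
print(ols_ar1(y))                     # contaminated by the outlier (~1.048)
print(dummy_ar1(y, 2.0))              # exactly 1.0 once the outlier is dummied out
```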
  6. By: Bai, Zhidong; Li, Hua; Wong, Wing-Keung
    Abstract: The traditional (plug-in) return for the Markowitz mean-variance (MV) optimization has been demonstrated to seriously overestimate the theoretical optimal return, especially when the dimension-to-sample-size ratio $p/n$ is large. The newly developed bootstrap-corrected estimator corrects the overestimation, but it incurs the "under-prediction problem," does not do well on the estimation of the corresponding allocation, and carries higher risk. To circumvent these limitations and to improve the optimal return estimation further, this paper develops the theory of spectral-corrected estimation. We first establish a theorem to explain why the plug-in return greatly overestimates the theoretical optimal return. We prove that in some situations the plug-in return is $\sqrt{\gamma}$ times bigger than the theoretical optimal return, while in others it is bigger than but not necessarily $\sqrt{\gamma}$ times larger than its theoretical counterpart, where $\gamma = \frac{1}{1-y}$ with $y$ the limit of the ratio $p/n$. Thereafter, we develop the spectral-corrected estimation for the Markowitz MV model, which performs much better than both the plug-in estimation and the bootstrap-corrected estimation, not only in terms of the return but also in terms of the allocation and the risk. We further develop properties of our proposed estimation and conduct a simulation to examine its performance. Our simulation shows that the proposed estimation not only overcomes the problem of "over-prediction" but also circumvents the "under-prediction," "allocation estimation," and "risk" problems. It also shows that the proposed spectral-corrected estimation is stable for different values of the sample size $n$, the dimension $p$, and their ratio $p/n$. In addition, we relax the normality assumption so that the spectral-corrected estimators can be obtained when the returns of the assets under study follow any distribution with finite fourth moments.
    Keywords: Markowitz mean-variance optimization; Optimal Return; Optimal Portfolio Allocation; Large Random Matrix; Bootstrap Method
    JEL: G11 C3
    Date: 2013–01–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:43862&r=ecm
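The $\sqrt{\gamma}$ inflation factor is easy to tabulate; a quick worked illustration (the values of $y$ below are chosen arbitrarily):

```python
# Inflation factor for the plug-in Markowitz return: under the conditions
# stated in the paper it is sqrt(gamma), gamma = 1/(1 - y), y = lim p/n.
import math

def inflation(y):
    return math.sqrt(1.0 / (1.0 - y))

for y in (0.1, 0.5, 0.9):
    print(f"p/n = {y}: plug-in return ~ {inflation(y):.2f} x optimal")
```

Even a moderate ratio p/n = 0.5 inflates the plug-in return by about 41%, and the factor diverges as p/n approaches 1.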
  7. By: Herrera Gómez, Marcos; Ruiz Marín, Manuel; Mur Lacambra, Jesús
    Abstract: Testing the assumption of independence between variables is a crucial aspect of spatial data analysis. However, the literature is limited and somewhat confusing. To our knowledge, the only available procedure is the bivariate generalization of Moran's statistic. This test suffers from several restrictions: it is applicable only to pairs of variables; a weighting matrix and the assumption of linearity are needed; and the null hypothesis of the test is not totally clear. Given these limitations, we develop a new non-parametric test, based on symbolic dynamics, with better properties. We show that the test can be extended to a multivariate framework, is robust to departures from linearity, does not need a weighting matrix, and can be adapted to different specifications of the null. The test is consistent and computationally simple, with good size and power, as shown by a Monte Carlo experiment. An application to the productivity of the manufacturing sector in the Ebro Valley illustrates our approach.
    Keywords: Non-parametric methods; Spatial bootstrapping; Spatial independence; Symbolic dynamics
    JEL: C12 R12 C15 C21
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:43861&r=ecm
  8. By: J. F. Muzy; R. Baile; E. Bacry
    Abstract: In this paper we propose a new model for volatility fluctuations in financial time series. This model relies on a non-stationary Gaussian process that exhibits aging behavior. It turns out that its properties, over any finite time interval, are very close to continuous cascade models. These latter models are well known to reproduce faithfully the main stylized facts of financial time series. However, they involve a large scale parameter (the so-called "integral scale", where the cascade is initiated) that is hard to interpret in finance. Moreover, the empirical value of the integral scale is in general deeply correlated with the overall length of the sample. This feature is precisely predicted by our model, which turns out, as illustrated on various examples from daily stock index data, to quantitatively reproduce the empirical observations.
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1301.4160&r=ecm
  9. By: Christophe Ley; Thomas Verdebout
    Abstract: One-sample and multi-sample tests on the concentration parameter of Fisher-von Mises-Langevin (FvML) distributions have been well studied in the literature. However, only very little is known about their behavior under local alternatives, which is due to complications inherent to the curved nature of the parameter space. The aim of the present paper therefore consists in filling that gap by having recourse to the Le Cam methodology, which has been adapted from the linear to the spherical setup in Ley et al. (2013a). We obtain explicit expressions of the powers for the most efficient one- and multi-sample tests; these tests are those considered in Watamori and Jupp (2005). As a nice by-product, we are also able to write down the powers (against local FvML alternatives) of the celebrated Rayleigh (1919) test of uniformity. A Monte Carlo simulation study confirms our theoretical findings and shows the finite-sample behavior of the above-mentioned procedures.
    Keywords: concentration parameter; directional statistics; Fisher-von Mises-Langevin distributions; Le Cam's third Lemma; uniform local asymptotic normality
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/138256&r=ecm
  10. By: Laura Mørch Andersen (Department of Food and Resource Economics, University of Copenhagen)
    Abstract: It is standard practice by researchers, and the default option in many statistical programs, to base test statistics for mixed models on simulations using asymmetric draws (e.g. Halton draws). This paper shows that when the estimated likelihood functions depend on standard deviations of mixed parameters, this practice is very likely to produce misleading test results at the numbers of draws commonly used today. Increasing the number of draws is an inefficient remedy, requiring very large numbers of draws to guard against misleading test statistics. Using one-dimensionally antithetic draws does not solve the problem, but fully antithetic draws solve it completely. Even when fully antithetic draws are used, however, models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood; again, this is not standard practice in research or in statistical programs. The paper therefore recommends using fully antithetic draws, replicating the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood, and making this the default option in statistical programs.
    Keywords: Quasi-Monte Carlo integration; Antithetic draws; Likelihood Ratio tests; simulated likelihood; panel Mixed MultiNomial Logit; Halton draws
    JEL: C15 C25
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2013_1&r=ecm
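The key property of fully antithetic draws, exact symmetry of the simulated sample, can be sketched in one dimension (using pseudo-random normals rather than the paper's Halton-based quasi-random draws):

```python
# Fully antithetic standard-normal draws in one dimension: each base draw z
# is paired with -z, so the simulated sample is exactly symmetric around 0
# and odd moments are simulated without error.
import random

def antithetic_normals(n_pairs, seed=42):
    rng = random.Random(seed)
    base = [rng.gauss(0.0, 1.0) for _ in range(n_pairs)]
    return [s * z for z in base for s in (1.0, -1.0)]

draws = antithetic_normals(500)
print(sum(draws))   # exactly 0.0: z + (-z) cancels in floating point
print(len(draws))   # 1000
```

In higher dimensions, fully antithetic schemes take all sign-flip combinations of each base draw, which is what distinguishes them from the one-dimensionally antithetic draws the paper shows to be insufficient.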
  11. By: Gürtler, Marc; Rauh, Ronald
    Abstract: In this paper we analyze a multivariate non-stationary regression model empirically. Given the evidence on unconditional heteroscedasticity of financial returns from univariate studies and the congruent paradigm in Gürtler and Rauh (2009), we first test for a time-varying covariance structure. Based on these results, a central component of our non-stationary model is a kernel regression for pairwise covariances and the covariance matrix. Residual terms are fitted with an asymmetric Pearson type VII distribution. In an extensive study we estimate the linear dependence of a broad portfolio of equities and fixed-income securities (including credit and currency risks) and fit the whole approach to provide distributional forecasts. Our evaluations verify a reasonable approximation and a satisfactory forecasting quality, with an outperformance against a traditional risk model.
    Keywords: heteroscedasticity, non-stationarity, nonparametric regression, volatility, covariance matrix, innovation modeling, asymmetric heavy-tails, multivariate distributional forecast, empirical studies
    JEL: C14 C5
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:tbsifw:if43v1&r=ecm
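A kernel regression for a time-varying covariance can be sketched as a Nadaraya-Watson smoother of cross-products of demeaned returns (Gaussian kernel and bandwidth hand-picked here; this is an illustration of the general device, not the authors' exact specification):

```python
# Nadaraya-Watson estimate of a time-varying covariance: smooth the
# cross-products x_s * y_s of demeaned returns with a Gaussian kernel
# centered at the target date t.
import math

def kernel_cov(x, y, t, h):
    num = den = 0.0
    for s in range(len(x)):
        w = math.exp(-0.5 * ((s - t) / h) ** 2)   # Gaussian kernel weight
        num += w * x[s] * y[s]
        den += w
    return num / den

x = [0.1, -0.2, 0.15, 2.0, -1.8, 2.1]   # co-movement strengthens in the second half
y = [0.05, -0.1, 0.1, 1.5, -1.4, 1.6]
print(kernel_cov(x, y, 0, h=1.0))   # small local covariance early on
print(kernel_cov(x, y, 5, h=1.0))   # much larger near the end
```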
  12. By: Nikolaos Zirogiannis (Department of Resource Economics, University of Massachusetts Amherst); Yorghos Tripodis (Department of Biostatistics, Boston University School of Public Health)
    Abstract: We develop a generalized dynamic factor model for panel data with the goal of estimating an unobserved index. While similar models have been developed in the literature of dynamic factor analysis, our contribution is threefold. First, contrary to simple dynamic factor analysis, where multiple attributes of the same subject are measured at each time period, our model also accounts for multiple subjects; it is therefore suitable for a panel data framework. Second, our model estimates a unique unobserved index for every subject for every time period, as opposed to previous work where a temporal index common to all subjects was used. Third, we develop a novel iterative estimation process, which we call the Two-Cycle Conditional Expectation-Maximization (2CCEM) algorithm and which is flexible enough to handle a variety of different types of datasets. The model is applied to a panel measuring attributes related to the operation of water and sanitation utilities.
    Keywords: Dynamic Factor Models, EM algorithm, Panel Data, State-Space models, IBNET
    JEL: C32 C33 C51 Q25
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:dre:wpaper:2013-1&r=ecm
  13. By: Ricardo Mora; Iliana Reggio
    Abstract: The core assumption to identify the treatment effect in difference-in-differences estimators is the so-called Parallel Paths assumption, namely that the average change in outcome for the treated in the absence of treatment equals the average change in outcome for the non-treated. We define a family of alternative Parallel assumptions and show, for a number of frequently used empirical specifications, which parameters of the model identify the treatment effect under the alternative Parallel assumptions. We further propose a fully flexible model which has two desirable features not present in the usual econometric specifications implemented in applied research. First, it allows for flexible dynamics and for testing restrictions on these dynamics. Second, it does not impose equivalence between alternative Parallel assumptions. We illustrate the usefulness of our approach by revisiting the results of several recent papers in which the difference-in-differences technique has been applied.
    Keywords: Difference-in-differences, Parallel paths, Treatment effect
    Date: 2012–12
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1233&r=ecm
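Under the baseline Parallel Paths assumption, the treatment effect is the familiar double difference; a toy computation with made-up numbers:

```python
# Difference-in-differences under Parallel Paths: the counterfactual change
# for the treated equals the observed change for the controls, so
# DiD = (treated post - treated pre) - (control post - control pre).
def mean(xs):
    return sum(xs) / len(xs)

def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical outcomes: both groups trend up by 1; treatment adds 2 on top.
effect = did(treat_pre=[10.0, 12.0], treat_post=[13.0, 15.0],
             ctrl_pre=[8.0, 10.0], ctrl_post=[9.0, 11.0])
print(effect)  # 2.0
```

The paper's point is that this identification hinges on which Parallel assumption is imposed; alternative assumptions (e.g. on differenced rather than level outcomes) map to different parameters in common specifications.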
  14. By: Nalan Basturk (Erasmus University Rotterdam); Cem Cakmakli (University of Amsterdam); Pinar Ceyhan (Erasmus University Rotterdam); Herman K. van Dijk (Erasmus University Rotterdam, and VU University Amsterdam)
    Abstract: Changing time series properties of US inflation and economic activity are analyzed within a class of extended Phillips Curve (PC) models. First, the misspecification effects of mechanical removal of low-frequency movements of these series on posterior inference of a basic PC model are analyzed using a Bayesian simulation-based approach. Next, structural time series models that describe changing patterns in low and high frequencies, and backward- as well as forward-looking inflation expectation mechanisms, are incorporated in the class of extended PC models. Empirical results indicate that the proposed models compare favorably with existing Bayesian Vector Autoregressive and Stochastic Volatility models in terms of fit and predictive performance. Weak identification and dynamic persistence appear less important when time-varying dynamics of high and low frequencies are carefully modeled. Modeling inflation expectations using survey data and adding level shifts and stochastic volatility substantially improves in-sample fit and out-of-sample predictions. No evidence is found of a stable long-run cointegration relation between US inflation and marginal costs. The tails of the complete predictive distributions indicate an increase in the probability of disinflation in recent years.
    Keywords: New Keynesian Phillips curve; unobserved components; level shifts; inflation expectations
    JEL: C11 C32 E31 E37
    Date: 2013–01–10
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130011&r=ecm
  15. By: Mikko S. Pakkanen (Aarhus University and CREATES)
    Abstract: We study the asymptotic behavior of lattice power variations of two-parameter ambit fields driven by white noise. Our first result is a law of large numbers for such power variations. Under a constraint on the memory of the ambit field, normalized power variations are shown to converge to certain integral functionals of the volatility field associated with the ambit field as the lattice spacing tends to zero. This law of large numbers also holds for thinned power variations, computed by including only increments that are separated by gaps with a particular asymptotic behavior. Our second result is a related stable central limit theorem for thinned power variations. Additionally, we provide concrete examples of ambit fields that satisfy the assumptions of our limit theorems.
    Keywords: ambit field, power variation, law of large numbers, central limit theorem, chaos decomposition
    JEL: C10 C14
    Date: 2013–10–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2013-01&r=ecm
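For intuition, the one-parameter analogue of a power variation is simply the sum of absolute increments raised to the power p (the paper's object is the two-parameter lattice version over an ambit field):

```python
# p-th order power variation of a discretely observed path:
# the sum of |increments|^p over the observation lattice.
def power_variation(x, p):
    return sum(abs(x[i + 1] - x[i]) ** p for i in range(len(x) - 1))

x = [0.0, 1.0, -1.0, 0.5]           # increments: 1, -2, 1.5
print(power_variation(x, 2))        # quadratic variation: 1 + 4 + 2.25 = 7.25
print(power_variation(x, 1))        # total variation: 1 + 2 + 1.5 = 4.5
```

The thinned variants in the paper restrict this sum to increments separated by suitably growing gaps.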
  16. By: René Garcia; Daniel Mantilla-Garcia; Lionel Martellini
    Abstract: In this paper, we formally show that the cross-sectional variance of stock returns is a consistent and asymptotically efficient estimator for aggregate idiosyncratic volatility. This measure has two key advantages: it is model-free and observable at any frequency. Previous approaches have used monthly model-based measures constructed from time series of daily returns. The newly proposed cross-sectional volatility measure is a strong predictor of future returns on the aggregate stock market at the daily frequency. Using the cross-section of size and book-to-market portfolios, we show that the portfolios' exposures to aggregate idiosyncratic volatility risk predict the cross-section of expected returns.
    Keywords: Aggregate idiosyncratic volatility, cross-sectional dispersion, prediction of market returns
    Date: 2013–01–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2013s-01&r=ecm
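The measure itself is elementary to compute on any date, which is what makes it observable at any frequency; a minimal sketch with hypothetical returns:

```python
# Cross-sectional variance of stock returns on a single date:
# CSV_t = (1/(N-1)) * sum_i (r_it - rbar_t)^2, where the sum runs over
# the N stocks observed on date t.
def csv(returns_t):
    n = len(returns_t)
    rbar = sum(returns_t) / n
    return sum((r - rbar) ** 2 for r in returns_t) / (n - 1)

# Hypothetical one-day cross-section of four stock returns (in %).
print(csv([1.0, -1.0, 2.0, -2.0]))   # 10/3, i.e. about 3.33
```

No return model is estimated at any point, which is the sense in which the measure is model-free.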
  17. By: Audrone Virbickaite; M. Concepción Ausín; Pedro Galeano
    Abstract: A Bayesian non-parametric approach for efficient risk management is proposed. A dynamic model is considered where optimal portfolio weights and hedging ratios are adjusted at each period. The covariance matrix of the returns is described using an asymmetric MGARCH model. Restrictive parametric assumptions for the errors are avoided by relying on Bayesian non-parametric methods, which allow for a better evaluation of the uncertainty in financial decisions. Illustrative risk management problems using real data are solved, and significant differences in the posterior distributions of the optimal weights and ratios arise from different assumptions for the errors in the time series model.
    Date: 2013–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1301.5129&r=ecm
  18. By: David E. Allen (Edith Cowan University, Australia); Mohammad A. Ashraf (Indian Institute of Technology, Kharagpur, India); Michael McAleer (Erasmus University Rotterdam, Complutense University of Madrid, Spain, and Kyoto University, Japan); Robert J. Powell (Edith Cowan University, Australia); Abhay K. Singh (Edith Cowan University, Australia)
    Abstract: This paper features the application of a novel and recently developed method of statistical and mathematical analysis to the assessment of financial risk: namely Regular Vine copulas. Dependence modelling using copulas is a popular tool in financial applications, but is usually applied to pairs of securities. Vine copulas offer greater flexibility and permit the modelling of complex dependency patterns using the rich variety of bivariate copulas, which can be arranged and analysed in a tree structure to facilitate the analysis of multiple dependencies. We apply Regular Vine copula analysis to a sample of stocks comprising the Dow Jones Index to assess their interdependencies and how their correlations change in different economic circumstances, using three sample periods: pre-GFC (Jan 2005 - July 2007), GFC (July 2007 - Sep 2009), and post-GFC (Sep 2009 - Dec 2011). The empirical results suggest that the dependencies change in a complex manner, and there is evidence of greater reliance on the Student t copula in the copula choice within the tree structures for the GFC period, which is consistent with the existence of larger tails in the distributions of returns for this period. One of the attractions of this approach to risk modelling is the flexibility in the choice of distributions used to model co-dependencies.
    Keywords: Regular Vine Copulas; Tree structures; Co-dependence modelling
    JEL: G11 C02
    Date: 2013–01–22
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20130022&r=ecm
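A vine arranges bivariate (pair) copulas in a tree; the simplest such building block, a bivariate Gaussian copula, can be sampled as follows (the paper's GFC-period fits favor Student t pairs, which this sketch does not implement):

```python
# Sampling from a bivariate Gaussian copula, the kind of pair-copula that
# vines arrange in a tree. u and v are uniform marginals whose dependence
# is governed by the correlation rho of the underlying standard normals.
import math
import random

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_copula_sample(rho, n, seed=1):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + math.sqrt(1.0 - rho ** 2) * rng.gauss(0.0, 1.0)
        out.append((norm_cdf(z1), norm_cdf(z2)))   # map to uniform marginals
    return out

for u, v in gaussian_copula_sample(rho=0.8, n=5):
    print(f"u = {u:.3f}, v = {v:.3f}")   # uniform on (0,1), positively dependent
```

In a vine, such pair-copulas are fitted along the edges of a tree, with conditional copulas on the deeper levels, which is what allows each pair to carry its own copula family.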

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.