nep-ecm New Economics Papers
on Econometrics
Issue of 2014‒05‒24
fifteen papers chosen by
Sune Karlsson
Örebro University

  1. Simulation of multivariate diffusion bridges By Mogens Bladt; Samuel Finch; Michael Sørensen
  2. Uniform Inference in Nonlinear Models with Mixed Identification Strength By Xu Cheng
  3. Univariate versus multivariate modeling of panel data By Juan Carlos Bou; Albert Satorra
  4. Semi-parametric Expected Shortfall Forecasting By Chen, Cathy W.S.; Gerlach, Richard
  5. Alternative Tests for Correct Specification of Conditional Predictive Densities By Barbara Rossi; Tatevik Sekhposyan
  6. Confidence intervals for percentiles in stationary ARMA processes: An empirical application to environmental data By Halkos, George; Kevork, Ilias
  7. Confidence Corridors for Multivariate Generalized Quantile Regression By Shih-Kang Chao; Katharina Proksch; Holger Dette; Wolfgang Härdle
  8. Improving the graphical lasso estimation for the precision matrix through roots of the sample covariance matrix By Vahe Avagyan; Andrés M. Alonso; Francisco J. Nogales
  9. Estimation of Affine Term Structure Models with Spanned or Unspanned Stochastic Volatility By Drew D. Creal; Jing Cynthia Wu
  10. Independent components techniques based on kurtosis for functional data analysis By Daniel Peña; Javier Prieto Fernández; Carolina Rendón
  11. On tail dependence coefficients of transformed multivariate Archimedean copulas By Elena Di Bernardino; Didier Rullière
  12. Cost Constrained Industry Inefficiency By Antonio Peyrache
  13. How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It By Gary King; Margaret Roberts
  14. Multivariate bubbles and antibubbles By Fry, John
  15. Towards a new viewpoint on causality for time series By Michel Fliess; Cédric Join

  1. By: Mogens Bladt (Universidad Nacional Autónoma de México); Samuel Finch (University of Copenhagen, Dept. of Mathematical Sciences); Michael Sørensen (University of Copenhagen, Dept. of Mathematical Sciences and CREATES)
    Abstract: We propose simple methods for multivariate diffusion bridge simulation, which plays a fundamental role in simulation-based likelihood and Bayesian inference for stochastic differential equations. By a novel application of classical coupling methods, the new approach generalizes a previously proposed simulation method for one-dimensional bridges to the multivariate setting. First, a method of simulating approximate, but often very accurate, diffusion bridges is proposed. These approximate bridges are used as proposals for easily implementable MCMC algorithms that produce exact diffusion bridges. The new method is much more generally applicable than previous methods. Another advantage is that the new method works well for diffusion bridges over long intervals because the computational complexity of the method is linear in the length of the interval. In a simulation study the new method performs well, and its usefulness is illustrated by an application to Bayesian estimation for the multivariate hyperbolic diffusion model.
    Keywords: Bayesian inference, coupling, discretely sampled diffusions, likelihood inference, stochastic differential equation, time-reversal.
    JEL: C22 C15
    Date: 2014–05–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2014-16&r=ecm
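The coupling-based algorithms of entry 1 handle general multivariate diffusions; as a point of reference only, the simplest diffusion bridge, the Brownian bridge, can be simulated exactly from an unconditioned Brownian path. The Python sketch below is that textbook construction, not the authors' MCMC scheme.

```python
import numpy as np

def brownian_bridge(a, b, T=1.0, n_steps=500, sigma=1.0, rng=None):
    """Exact simulation of a Brownian bridge pinned at a (time 0) and b (time T),
    via X_t = a + (b - a) t / T + sigma (W_t - (t / T) W_T) for a Brownian path W."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.linspace(0.0, T, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(T / n_steps), size=n_steps)
    W = np.concatenate(([0.0], np.cumsum(dW)))
    return t, a + (b - a) * t / T + sigma * (W - (t / T) * W[-1])

t, x = brownian_bridge(a=0.0, b=1.5)
print(x[0], x[-1])   # the path is pinned at both endpoints
```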
  2. By: Xu Cheng (Department of Economics, University of Pennsylvania)
    Abstract: The paper studies inference in nonlinear models where loss of identification is present in multiple parts of the parameter space. For uniform inference, we develop a local limit theory that models mixed identification strength. Building on this non-standard asymptotic approximation, we suggest robust tests and confidence intervals in the presence of non-identified and weakly identified nuisance parameters. In particular, this covers applications where some nuisance parameters are non-identified under the null (Davies (1977, 1987)) and some nuisance parameters are subject to a full range of identification strength. The asymptotic results involve both inconsistent estimators that depend on a localization parameter and consistent estimators with different rates of convergence. A sequential argument is used to peel the criterion function based on the identification strength of the parameters. The robust test is uniformly valid and non-conservative.
    Keywords: Mixed rates, nonlinear regression, robust inference, uniformity, weak identification.
    JEL: C12 C15
    Date: 2014–05–08
    URL: http://d.repec.org/n?u=RePEc:pen:papers:14-018&r=ecm
  3. By: Juan Carlos Bou; Albert Satorra
    Abstract: Panel data can be arranged into a matrix in two ways, called the 'long' and 'wide' formats (LF and WF). The two formats suggest two alternative model approaches for analyzing panel data: (i) univariate regression with varying intercept; and (ii) multivariate regression with latent variables (a particular case of structural equation model, SEM). The present paper compares the two approaches, showing in which circumstances they yield equivalent (in some cases, even numerically equal) results. We show that the univariate approach gives results equivalent to the multivariate approach when restrictions of time invariance (in the paper, the TI assumption) are imposed on the parameters of the multivariate model. It is shown that the restrictions implicit in the univariate approach can be assessed by chi-square difference testing of two nested multivariate models. In addition, common tests encountered in the econometric analysis of panel data, such as the Hausman test, are shown to have an equivalent representation as chi-square difference tests. Commonalities and differences between the univariate and multivariate approaches are illustrated using an empirical panel data set of firms' profitability as well as simulated panel data.
    Keywords: panel data modeling
    Date: 2014–02
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1417&r=ecm
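Entry 3 turns on the 'long' versus 'wide' arrangement of a panel. A minimal pandas illustration of the two layouts (with hypothetical firm-profitability columns) is:

```python
import pandas as pd

# Wide format: one row per firm, one column per year (hypothetical data).
wide = pd.DataFrame({
    "firm": ["A", "B"],
    "profit2001": [0.10, 0.07],
    "profit2002": [0.12, 0.05],
})

# Long format: one row per firm-year, the layout behind the univariate
# varying-intercept regression.
long = pd.wide_to_long(wide, stubnames="profit", i="firm", j="year").reset_index()
print(long)

# Back to wide format, the layout used by the multivariate (SEM) approach.
print(long.pivot(index="firm", columns="year", values="profit"))
```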
  4. By: Chen, Cathy W.S.; Gerlach, Richard
    Abstract: Intra-day sources of data have proven effective for dynamic volatility and tail risk estimation. Expected shortfall, a tail risk measure now recommended by the Basel Committee, involves a conditional expectation that can be estimated semi-parametrically via an asymmetric sum of squares function. The conditional autoregressive expectile class of model, used to indirectly model expected shortfall, is generalised to incorporate information on the intra-day range. An asymmetric Gaussian density model error formulation allows a likelihood to be developed that leads to semi-parametric estimation and forecasts of expectiles, and subsequently of expected shortfall. Adaptive Markov chain Monte Carlo sampling schemes are employed for estimation, and their performance is assessed via a simulation study. The proposed models compare favourably with a large range of competitors in an empirical study forecasting seven financial return series over a ten-year period.
    Keywords: Semi-parametric; Markov chain Monte Carlo method; Expected shortfall; Asymmetric Gaussian distribution; Nonlinear; CARE model
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/10457&r=ecm
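The models of entry 4 target expected shortfall indirectly through expectiles. A rough, self-contained sketch of the two empirical ingredients, a sample expectile found by minimising an asymmetric sum of squares and a plain empirical expected shortfall, follows; it is not the authors' CARE model or their MCMC estimation scheme.

```python
import numpy as np

def expectile(x, tau, tol=1e-10, max_iter=1000):
    """tau-expectile of a sample: minimiser of sum_i |tau - 1{x_i <= m}| (x_i - m)^2,
    computed by iterating the asymmetrically weighted mean."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    for _ in range(max_iter):
        w = np.where(x > m, tau, 1.0 - tau)
        m_new = np.sum(w * x) / np.sum(w)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def empirical_es(returns, alpha=0.025):
    """Plain empirical expected shortfall: mean of the worst alpha fraction of returns."""
    r = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * r.size)))
    return r[:k].mean()

rng = np.random.default_rng(0)
r = 0.01 * rng.standard_t(df=5, size=2500)          # hypothetical daily returns
print(expectile(r, tau=0.01), empirical_es(r, alpha=0.025))
```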
  5. By: Barbara Rossi; Tatevik Sekhposyan
    Abstract: We propose new methods for evaluating predictive densities that focus on the models’ actual predictive ability in finite samples. The tests offer a simple way of evaluating the correct specification of predictive densities, either parametric or non-parametric. The results indicate that our tests are well sized and have good power in detecting mis-specification in predictive densities. An empirical application to the Survey of Professional Forecasters and a baseline Dynamic Stochastic General Equilibrium model shows the usefulness of our methodology.
    Keywords: predictive density, dynamic mis-specification, forecast evaluation
    JEL: C22 C52 C53
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:758&r=ecm
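Entry 5's abstract does not spell out the test statistics. A standard ingredient in this literature, shown here only as a rough illustration and not as the authors' test, is to check whether the probability integral transforms (PITs) of the realisations under the forecast densities look uniform:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical one-step-ahead N(0, 1) forecast densities, with realisations that are
# in fact more dispersed than the forecasts.
y = rng.normal(0.0, 1.3, size=300)
pit = stats.norm.cdf(y, loc=0.0, scale=1.0)   # correct densities give i.i.d. U(0,1) PITs

# Naive Kolmogorov-Smirnov uniformity check; it ignores parameter estimation error and
# serial dependence, which is what refined tests in this literature correct for.
print(stats.kstest(pit, "uniform"))
```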
  6. By: Halkos, George; Kevork, Ilias
    Abstract: Percentile estimation plays an important role at the decision-making stage in many scientific fields. However, research to date on estimation methods for percentiles has been based on the assumption that the data in the sample are formed independently. In the current paper we relax this restrictive assumption by assuming that the values of the variable under study are generated by a general linear process. After deriving the asymptotic distribution of the maximum likelihood estimator of the 100×Pth percentile, we give the general form of the corresponding asymptotic confidence interval. The performance of the estimated asymptotic confidence interval is then evaluated in finite samples from stationary AR(1) and ARMA(1,1) processes through Monte Carlo simulations using two statistical criteria: (a) the actual confidence level, and (b) the expected half-length as a percentage of the true value of the percentile. Simulation results show that the validity of the estimated asymptotic confidence interval depends upon the sample size, the magnitude of the first-order theoretical autocorrelation coefficient, and the true cumulative probability P associated with the percentile. Finally, an application example is given using the series of CO2 annual emissions intensity in Greece (kg per kg of oil equivalent energy use) for the period 1961-2010. Confidence intervals for percentiles are constructed for this series, and the validity of the estimation procedure is discussed in light of the simulation findings for the aforementioned criteria.
    Keywords: Percentiles; environmental data; time series models; confidence intervals.
    JEL: C13 C22 C53 Q50 Q54
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:56134&r=ecm
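A back-of-the-envelope version of the simulation exercise in entry 6: check the coverage of the naive confidence interval for a percentile, built under an independence assumption, when the data actually follow a Gaussian AR(1). The interval below deliberately ignores serial dependence; the corrected asymptotic interval derived in the paper is not reproduced here.

```python
import numpy as np
from scipy import stats

def ar1(n, phi, rng, burn=200):
    e = rng.normal(size=n + burn)
    x = np.zeros(n + burn)
    for t in range(1, n + burn):
        x[t] = phi * x[t - 1] + e[t]
    return x[burn:]

def naive_percentile_ci(x, p, level=0.95):
    """CI from the i.i.d. asymptotic variance p(1 - p) / (n f(q)^2), with the density f
    estimated by a Gaussian plug-in (illustration only)."""
    q = np.quantile(x, p)
    f_hat = stats.norm.pdf(q, loc=x.mean(), scale=x.std(ddof=1))
    half = stats.norm.ppf(0.5 + level / 2) * np.sqrt(p * (1 - p) / x.size) / f_hat
    return q - half, q + half

rng = np.random.default_rng(2)
phi, p, n = 0.8, 0.95, 200
true_q = stats.norm.ppf(p, scale=1.0 / np.sqrt(1 - phi**2))   # marginal quantile of the AR(1)
hits = [low <= true_q <= high
        for low, high in (naive_percentile_ci(ar1(n, phi, rng), p) for _ in range(2000))]
print("actual coverage:", np.mean(hits))   # typically well below the nominal 0.95
```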
  7. By: Shih-Kang Chao; Katharina Proksch; Holger Dette; Wolfgang Härdle
    Abstract: We focus on the construction of confidence corridors for multivariate nonparametric generalized quantile regression functions. This construction is based on asymptotic results for the maximal deviation between a suitable nonparametric estimator and the true function of interest which follow after a series of approximation steps including a Bahadur representation, a new strong approximation theorem and exponential tail inequalities for Gaussian random fields. As a byproduct we also obtain confidence corridors for the regression function in the classical mean regression. In order to deal with the problem of slowly decreasing error in coverage probability of the asymptotic confidence corridors, which results in meager coverage for small sample sizes, a simple bootstrap procedure is designed based on the leading term of the Bahadur representation. The finite sample properties of both procedures are investigated by means of a simulation study and it is demonstrated that the bootstrap procedure considerably outperforms the asymptotic bands in terms of coverage accuracy. Finally, the bootstrap confidence corridors are used to study the efficacy of the National Supported Work Demonstration, which is a randomized employment enhancement program launched in the 1970s. This article has supplementary materials online.
    Keywords: Bootstrap, expectile regression, Goodness-of-fit tests, quantile treatment effect, smoothing and nonparametric regression
    JEL: C2 C12 C14
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2014-028&r=ecm
  8. By: Vahe Avagyan; Andrés M. Alonso; Francisco J. Nogales
    Abstract: In this paper, we focus on the estimation of a high-dimensional precision matrix. We propose a simple improvement of the graphical lasso framework (glasso) that attains better statistical performance without sacrificing much computational cost. The proposed improvement is based on computing a root of the sample covariance matrix to reduce the spread of the associated eigenvalues, and it maintains the original convergence rate. Through extensive numerical results, using both simulated and real datasets, we show that the proposed modification outperforms the glasso procedure. Finally, our results show that the square-root improvement may be a reasonable choice in practice.
    Keywords: Gaussian Graphical Models, Gene expression, High-dimensionality, Inverse covariance matrix, Penalized estimation, Portfolio selection, Root of a matrix
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141208&r=ecm
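Entry 8 is built around a root of the sample covariance matrix, whose eigenvalues are less spread out than those of the matrix itself. The sketch below shows only the two generic ingredients, a symmetric square root and scikit-learn's graphical lasso; squaring the root-based estimate to recover a precision matrix is my own shortcut for illustration, and the paper's exact construction should be consulted.

```python
import numpy as np
from sklearn.covariance import graphical_lasso

def sym_sqrt(S):
    """Symmetric positive semidefinite square root via the eigendecomposition."""
    vals, vecs = np.linalg.eigh(S)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 20))                      # hypothetical data, n = 80, p = 20
S = np.cov(X, rowvar=False)

# Baseline: graphical lasso applied directly to the sample covariance.
_, prec_glasso = graphical_lasso(S, alpha=0.2)

# Root-based variant: run the same machinery on the square root of S, then square the
# result (an exact identity for population matrices, used here only as an illustration).
root_prec = graphical_lasso(sym_sqrt(S), alpha=0.2)[1]
prec_root_based = root_prec @ root_prec

print(np.round(prec_glasso[:3, :3], 2))
print(np.round(prec_root_based[:3, :3], 2))
```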
  9. By: Drew D. Creal; Jing Cynthia Wu
    Abstract: We develop new procedures for maximum likelihood estimation of affine term structure models with spanned or unspanned stochastic volatility. Our approach uses linear regression to reduce the dimension of the numerical optimization problem yet it produces the same estimator as maximizing the likelihood. It improves the numerical behavior of estimation by eliminating parameters from the objective function that cause problems for conventional methods. We find that spanned models capture the cross-section of yields well but not volatility while unspanned models fit volatility at the expense of fitting the cross-section.
    JEL: C13 E43 G12
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:20115&r=ecm
  10. By: Daniel Peña; Javier Prieto Fernández; Carolina Rendón
    Abstract: The motivation for this paper arises from an article written by Peña et al. [40] in 2010, where they propose the eigenvectors associated with the extreme values of a kurtosis matrix as interesting directions to reveal the possible cluster structure of a dataset. In recent years many research papers have proposed generalizations of multivariate techniques to the functional data case. In this paper we introduce an extension of the multivariate kurtosis for functional data, and we analyze some of its properties. In particular, we explore if our proposal preserves some of the properties of the kurtosis procedures applied to the multivariate case, regarding the identification of outliers and cluster structures. This analysis is conducted considering both theoretical and experimental properties of our proposal.
    Keywords: Functional Data Analysis, Functional Kurtosis, Cluster Analysis, Kurtosis Operator
    Date: 2014–05
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws141006&r=ecm
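Entry 10 extends to functional data the kurtosis-matrix directions of Peña et al. In the plain multivariate case, one common version of that matrix and its extreme eigenvectors can be computed as below; this sketch is only the multivariate starting point, under my reading of the construction, not the functional extension proposed in the paper.

```python
import numpy as np

def kurtosis_matrix(X):
    """Sample kurtosis matrix of standardized data, (1/n) sum_i (z_i' z_i) z_i z_i'
    (one common definition in the projection-pursuit literature)."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    Z = Xc @ (vecs / np.sqrt(vals)) @ vecs.T        # standardized observations
    r2 = np.sum(Z**2, axis=1)
    return (Z * r2[:, None]).T @ Z / Z.shape[0]

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (150, 5)),          # hypothetical two-cluster data
               rng.normal(3, 1, (150, 5))])

eigval, eigvec = np.linalg.eigh(kurtosis_matrix(X))
# Directions attached to the extreme eigenvalues are the candidates for revealing
# clusters and outliers; project the data onto them for inspection.
proj = X @ eigvec[:, [0, -1]]
print(eigval[0], eigval[-1])
```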
  11. By: Elena Di Bernardino (CEDRIC - Centre d'Etude et De Recherche en Informatique du Cnam - Conservatoire National des Arts et Métiers (CNAM)); Didier Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - Université Claude Bernard - Lyon I : EA2429)
    Abstract: This paper presents the impact of a class of transformations of copulas on their upper and lower multivariate tail dependence coefficients. In particular we focus on multivariate Archimedean copulas. In the first part of this paper, we calculate multivariate tail dependence coefficients when the generator of the considered copula exhibits some regular variation properties, and we investigate the behaviour of these coefficients in cases that are close to tail independence. This first part exploits previous works of Charpentier and Segers (2009) and extends some results of Juri and Wüthrich (2003) and De Luca and Rivieccio (2012). We also introduce a new Regular Index Function (RIF) exhibiting some interesting properties. In the second part of the paper we analyse the impact of a large class of transformations of dependence structures on the upper and lower multivariate tail dependence coefficients. These results are based on the transformations exploited by Di Bernardino and Rullière (2013). We extend some bivariate results of Durante et al. (2010) to a multivariate setting by calculating multivariate tail dependence coefficients for transformed copulas. We obtain new results under specific conditions involving regularly varying hazard rates of components of the transformation. In the third part, we show the utility of using transformed Archimedean copulas, as they make it possible to build Archimedean generators exhibiting any chosen pair of lower and upper tail dependence coefficients. The interest of such a study is also illustrated through applications in bivariate settings. Finally, we discuss possible applications to Markov chains with a specific dependence structure.
    Keywords: Archimedean copulas; tail dependence coefficients; regular variation; transformations of Archimedean copulas; Regular Index Function
    Date: 2014–05–19
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00992707&r=ecm
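As a concrete bivariate reference point for entry 11: a Clayton copula with parameter θ > 0 has lower tail dependence coefficient λ_L = lim_{u→0} C(u,u)/u = 2^{-1/θ}, which is easy to confirm numerically.

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Bivariate Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u**(-theta) + v**(-theta) - 1.0) ** (-1.0 / theta)

theta = 2.0
for u in [1e-2, 1e-4, 1e-6]:
    print(u, clayton_cdf(u, u, theta) / u)          # converges to the limit as u -> 0
print("closed form 2**(-1/theta):", 2 ** (-1.0 / theta))
```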
  12. By: Antonio Peyrache (School of Economics, The University of Queensland)
    Abstract: When analyzing the productivity and efficiency of firms, stochastic frontier models are very attractive because they allow one, as in typical regression models, to introduce some noise in the Data Generating Process. Most of the approaches so far have used very restrictive, fully parametric specifications, both for the frontier function and for the components of the stochastic terms. Recently, local MLE approaches were introduced to relax these parametric hypotheses. However, the high computational complexity of the latter makes them difficult to use, in particular if bootstrap-based inference is needed. In this work we show that most of the benefits of the local MLE approach can be obtained with fewer assumptions and much easier, faster and numerically more robust computations, by using nonparametric least-squares methods. Our approach can also be viewed as a semi-parametric generalization of the so-called "modified OLS" that was introduced in the parametric setup. Although the final evaluation of individual efficiencies requires, as in the local MLE approach, a local specification of the distributions of noise and inefficiency, it is shown that a lot can be learned about the production process without such specifications. Even elasticities of mean inefficiency can be analyzed with an unspecified noise distribution and a general class of local one-parameter scale families for inefficiency. This allows us to discuss the variation in inefficiency levels with respect to explanatory variables with minimal assumptions on the Data Generating Process. Our method is illustrated and compared with other methods on a real data set.
    Date: 2014–04
    URL: http://d.repec.org/n?u=RePEc:qld:uqcepa:95&r=ecm
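Entry 12 takes the parametric "modified OLS" idea as its point of departure. The classical corrected-OLS benchmark (textbook material only, not the semi-parametric method of the paper) fits the frontier by OLS and shifts the intercept so that no observation lies above it; unlike stochastic frontier methods it attributes all deviation to inefficiency, ignoring noise.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
x = rng.uniform(1.0, 5.0, size=n)                  # hypothetical log input
u = rng.exponential(0.3, size=n)                   # one-sided inefficiency
v = rng.normal(0.0, 0.1, size=n)                   # noise
y = 1.0 + 0.6 * x - u + v                          # log output below the frontier

# OLS: slopes are consistent, the intercept is biased downward by E[u].
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Corrected OLS: shift the intercept by the largest residual so the estimated
# frontier envelopes the data; efficiency scores lie in (0, 1].
beta_cols = beta.copy()
beta_cols[0] += resid.max()
efficiency = np.exp(resid - resid.max())
print(beta, beta_cols, efficiency.min(), efficiency.max())
```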
  13. By: Gary King; Margaret Roberts
    Abstract: "Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. Even though this message is well known to methodologists, it has failed to reach most applied researchers. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help applied researchers realize these gains via an alternative perspective that offers a productive way to use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic; and practical illustrations via simulations and real examples from published research. Instead of jettisoning this extremely popular tool, as some suggest, we show how robust and classical standard error differences can provide effective clues about model misspecification, likely biases, and a guide to more reliable inferences.
    Date: 2014–01
    URL: http://d.repec.org/n?u=RePEc:qsh:wpaper:32225&r=ecm
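A minimal statsmodels illustration of the diagnostic use of robust standard errors discussed in entry 13, on a toy data set with heteroskedasticity induced deliberately:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
x = rng.normal(size=n)
# Deliberately heteroskedastic errors: the error spread grows with |x|.
y = 1.0 + 2.0 * x + rng.normal(scale=0.5 + 2.0 * np.abs(x), size=n)

res = sm.OLS(y, sm.add_constant(x)).fit()

# A sharp divergence between classical and robust standard errors is a red flag about
# the model itself, not something the robust errors have already fixed.
print("classical SEs:", res.bse.round(3))
print("HC1 robust SEs:", res.HC1_se.round(3))
print("ratio:", (res.HC1_se / res.bse).round(2))
```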
  14. By: Fry, John
    Abstract: In this paper we develop models for multivariate financial bubbles and antibubbles based on statistical physics. In particular, we extend a rich set of univariate models to higher dimensions. Changes in market regime can be explicitly shown to represent a phase transition from random to deterministic behaviour in prices. Moreover, our multivariate models are able to capture some of the contagious effects that occur during such episodes. We are able to show that declining lending quality helped fuel a bubble in the US stock market prior to 2008. Further, our approach offers interesting insights into the spatial development of UK house prices.
    Keywords: Econophysics; Bubbles; Antibubbles; Contagion
    JEL: C0 C02 C10 G0 G01
    Date: 2014–05–19
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:56081&r=ecm
  15. By: Michel Fliess (LIX - Laboratoire d'informatique de l'école polytechnique - CNRS : UMR7161 - Polytechnique - X, AL.I.E.N. - ALgèbre pour Identification & Estimation Numériques - ALIEN); Cédric Join (AL.I.E.N. - ALgèbre pour Identification & Estimation Numériques - ALIEN, INRIA Lille - Nord Europe - Non-A - INRIA : LILLE - NORD EUROPE, CRAN - Centre de Recherche en Automatique de Nancy - CNRS : UMR7039 - Université de Lorraine)
    Abstract: Causation between time series is a most important topic in econometrics, financial engineering, biological and psychological sciences, and many other fields. A new setting is introduced for examining this rather abstract concept. The corresponding calculations, which are much easier than those required by the celebrated Granger-causality, do not necessitate any deterministic or probabilistic modeling. Some convincing computer simulations are presented.
    Keywords: Time series; causation; Granger-causality; beta; return; nonstandard analysis; trends; quick fluctuations.
    Date: 2014–05–29
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-00991942&r=ecm
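For entry 15, the classical benchmark the authors contrast their approach with is Granger causality, which is available in statsmodels; their own model-free procedure is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(7)
n = 400
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.normal(scale=0.5)   # x leads y by one period

# Column order matters: the test asks whether the second column helps predict the first.
results = grangercausalitytests(np.column_stack([y, x]), maxlag=2)
print(results[1][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 1
```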

This nep-ecm issue is ©2014 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.