nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒10‒11
nine papers chosen by
Sune Karlsson
Örebro University

  1. Bayesian Structured Additive Distributional Regression By Nadja Klein; Thomas Kneib; Stefan Lang
  2. Nets: Network Estimation for Time Series By Matteo Barigozzi; Christian Brownlees
  3. Efficient Estimation Using the Characteristic Function By Marine Carrasco; Rachidi Kotchoni
  4. The Indirect Continuous-GMM Estimation By Rachidi Kotchoni
  5. Model Selection in the Presence of Incidental Parameters By Yoonseok Lee; Peter C.B. Phillips
  6. Modeling Hyperinflation Phenomenon: A Bayesian Approach By Rolando Gonzales Martínez
  7. A discreet approach to study the distribution-free downward biases of Gini coefficient and the methods of correction in cases of small observations By Amlan Majumder; Takayoshi Kusago
  8. Closed form solution of correlation in doubly truncated or censored sample of bivariate log-normal distribution By Vilmunen, Jouko; Palmroos, Peter
  9. Random Matrix Application to Correlations Among Volatility of Assets By Ajay Singh; Dinghai Xu

  1. By: Nadja Klein; Thomas Kneib; Stefan Lang
    Abstract: In this paper, we propose a generic Bayesian framework for inference in distributional regression models in which each parameter of a potentially complex response distribution, and not only the mean, is related to a structured additive predictor. The latter is composed additively of a variety of different functional effect types such as nonlinear effects, spatial effects, random coefficients, interaction surfaces or other (possibly non-standard) basis function representations. To enforce specific properties of the functional effects, such as smoothness, informative multivariate Gaussian priors are assigned to the basis function coefficients. Inference is then based on efficient Markov chain Monte Carlo simulation techniques, where a generic procedure makes use of distribution-specific iteratively weighted least squares approximations to the full conditionals. We study properties of the resulting model class and provide detailed guidance on practical aspects of model choice, including selecting an appropriate response distribution and predictor specification. The importance and flexibility of Bayesian structured additive distributional regression for estimating all parameters as functions of explanatory variables, and therefore obtaining more realistic models, is exemplified in two applications with complex response distributions.
    Keywords: generalised additive models for location scale and shape, iteratively weighted least squares proposal, Markov chain Monte Carlo simulation, penalised splines, semiparametric regression
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2013-23&r=ecm
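    Illustrative sketch: a minimal, hedged example of the distributional regression idea, not the authors' implementation. Both the mean and the log standard deviation of a Gaussian response are modelled as linear functions of a single covariate and the posterior is explored with a plain random-walk Metropolis sampler; the paper instead uses structured additive predictors and iteratively weighted least squares proposals. All data and tuning constants below are invented for illustration.

# Gaussian location-scale "distributional regression": mean and log standard
# deviation both depend on x; posterior sampled by random-walk Metropolis.
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: mean and scale both vary with x.
n = 300
x = rng.uniform(-2, 2, n)
y = rng.normal(loc=1.0 + 0.5 * x, scale=np.exp(-0.5 + 0.3 * x))

def log_post(theta):
    b0, b1, g0, g1 = theta                         # mean and log-scale coefficients
    mu = b0 + b1 * x
    sigma = np.exp(g0 + g1 * x)
    loglik = np.sum(-0.5 * np.log(2 * np.pi) - np.log(sigma)
                    - 0.5 * ((y - mu) / sigma) ** 2)
    logprior = -0.5 * np.sum(theta ** 2) / 100.0   # vague N(0, 100) priors
    return loglik + logprior

theta = np.zeros(4)
lp = log_post(theta)
draws = []
for it in range(20000):
    prop = theta + 0.05 * rng.standard_normal(4)   # naive random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 10000:                                # keep post burn-in draws
        draws.append(theta.copy())

print("posterior means:", np.mean(draws, axis=0))
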
  2. By: Matteo Barigozzi; Christian Brownlees
    Abstract: This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote non-zero long run partial correlations. We then introduce a two-step lasso procedure, called NETS, to estimate high-dimensional sparse long run partial correlation networks. This approach is based on a VAR approximation of the process and allows us to decompose the long run linkages into the contributions of the dynamic and contemporaneous dependence relations of the system. The large sample properties of the estimator are analysed and we establish conditions for consistent selection and estimation of the non-zero long run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
    Keywords: Networks, Multivariate Time Series, Long Run Covariance, Lasso
    JEL: C01 C32 C52
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:723&r=ecm
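    Illustrative sketch: a simplified version of the two-step idea described in the abstract, not the NETS code itself. Step one fits a sparse VAR(1) equation by equation with the lasso; step two runs a nodewise lasso on the VAR residuals to approximate sparse contemporaneous partial correlations. The simulated data, the VAR order and the penalty level (alpha=0.05) are illustrative assumptions, and the sketch relies on scikit-learn's Lasso.

# Two-step lasso sketch: sparse VAR(1) + nodewise lasso on residuals.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
T, N = 500, 10

# Simulate a sparse VAR(1): each series depends on its own lag only.
A = np.diag(np.full(N, 0.5))
X = np.zeros((T, N))
for t in range(1, T):
    X[t] = X[t - 1] @ A.T + rng.standard_normal(N)

Y, Z = X[1:], X[:-1]          # responses and lagged regressors

# Step 1: sparse VAR coefficients, one lasso per equation.
A_hat = np.zeros((N, N))
for i in range(N):
    A_hat[i] = Lasso(alpha=0.05, fit_intercept=False).fit(Z, Y[:, i]).coef_
resid = Y - Z @ A_hat.T

# Step 2: nodewise lasso on residuals -> sparse contemporaneous structure.
C_hat = np.zeros((N, N))
for i in range(N):
    others = np.delete(np.arange(N), i)
    coef = Lasso(alpha=0.05, fit_intercept=False).fit(
        resid[:, others], resid[:, i]).coef_
    C_hat[i, others] = coef

# Edges of the contemporaneous network, symmetrised by an "and" rule.
edges = (np.abs(C_hat) > 1e-8) & (np.abs(C_hat.T) > 1e-8)
print("number of contemporaneous edges:", int(edges[np.triu_indices(N, 1)].sum()))
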
  3. By: Marine Carrasco (CIREQ - Centre interuniversitaire de recherche en économie quantitative - Université de Montréal); Rachidi Kotchoni (THEMA - Théorie économique, modélisation et applications - CNRS : UMR8184 - Université de Cergy Pontoise)
    Abstract: The method of moments proposed by Carrasco and Florens (2000) makes it possible to fully exploit the information contained in the characteristic function and yields an estimator which is asymptotically as efficient as the maximum likelihood estimator. However, this estimation procedure depends on a regularization or tuning parameter \alpha that needs to be selected. The aim of the present paper is to provide a way to choose \alpha optimally by minimizing the approximate mean square error (AMSE) of the estimator. Following an approach similar to that of Newey and Smith (2004), we derive a higher-order expansion of the estimator from which we characterize the finite sample dependence of the AMSE on \alpha. We provide a data-driven procedure for selecting the regularization parameter that relies on parametric bootstrap. We show that this procedure delivers a root-T consistent estimator of \alpha. Moreover, the data-driven selection of the regularization parameter preserves the consistency, asymptotic normality and efficiency of the CGMM estimator. Simulation experiments based on a CIR model show the relevance of the proposed approach.
    Keywords: Conditional moment restriction, Continuum of moment conditions, Generalized method of moments, Mean square error, Stochastic expansion, Tikhonov regularization
    Date: 2013–09–30
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00867850&r=ecm
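    Illustrative sketch: a toy version of the selection idea, far simpler than the paper's CGMM. The mean of a N(mu, 1) sample is estimated from characteristic-function moments evaluated on a discrete grid of points t, using a Tikhonov-style regularized weighting matrix (S + alpha*I)^{-1}, and alpha is chosen by a crude parametric bootstrap that mimics the "minimize the approximate MSE" idea described in the abstract. The grid, the candidate alphas and the bootstrap size are arbitrary illustrative choices.

# Characteristic-function moments on a grid with Tikhonov-style weighting,
# and bootstrap choice of the regularization parameter alpha.
import numpy as np

rng = np.random.default_rng(2)
t_grid = np.linspace(0.1, 2.0, 20)

def cf_theoretical(mu):
    # Characteristic function of N(mu, 1) on the grid: real and imaginary parts.
    damp = np.exp(-t_grid**2 / 2)
    return np.concatenate([damp * np.cos(t_grid * mu), damp * np.sin(t_grid * mu)])

def cgmm_like_estimate(x, alpha, mu_grid=np.linspace(-1, 3, 401)):
    # Empirical characteristic function parts, one column per observation.
    emp = np.concatenate([np.cos(np.outer(t_grid, x)),
                          np.sin(np.outer(t_grid, x))])
    S = np.cov(emp)                                       # moment covariance
    W = np.linalg.inv(S + alpha * np.eye(S.shape[0]))     # Tikhonov-style weight
    emp_mean = emp.mean(axis=1)
    obj = [(emp_mean - cf_theoretical(m)) @ W @ (emp_mean - cf_theoretical(m))
           for m in mu_grid]
    return mu_grid[int(np.argmin(obj))]

x = rng.normal(1.0, 1.0, 500)
mu_prelim = np.mean(x)

# Parametric bootstrap over a small alpha grid, as a stand-in for AMSE.
alphas, B = [1e-4, 1e-2, 1e0], 50
mse = []
for a in alphas:
    est = [cgmm_like_estimate(rng.normal(mu_prelim, 1.0, x.size), a)
           for _ in range(B)]
    mse.append(np.mean((np.array(est) - mu_prelim) ** 2))
best = alphas[int(np.argmin(mse))]
print("chosen alpha:", best, "estimate:", cgmm_like_estimate(x, best))
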
  4. By: Rachidi Kotchoni (THEMA - Théorie économique, modélisation et applications - CNRS : UMR8184 - Université de Cergy Pontoise)
    Abstract: A curse of dimensionality arises when using the Continuum-GMM (CGMM) procedure to estimate large dimensional models. Two solutions are proposed, both of which convert the high dimensional model into a continuum of reduced information sets. Under certain regularity conditions, each reduced information set can be used to produce a consistent estimator of the parameter of interest. An indirect CGMM estimator is obtained by optimally aggregating all such consistent estimators. The simulation results suggest that the indirect CGMM procedure makes an efficient use of the information content of the moment restrictions.
    Keywords: Conditional moment restriction, Continuum of moment conditions, Covariance operator, Empirical characteristic function, Generalized method of moments, Indirect estimation
    Date: 2013–09–30
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00867804&r=ecm
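    Illustrative sketch: a toy illustration of the aggregation step only, not the paper's indirect CGMM. Several consistent estimators of the same scalar parameter, each computed from a reduced information set (here simply disjoint subsamples), are combined by inverse-variance weighting, which is the minimum-variance linear combination when the estimators are independent.

# Optimal (precision-weighted) aggregation of consistent sub-estimators.
import numpy as np

rng = np.random.default_rng(3)
theta_true = 2.0

# Each "reduced information set" is represented here by a separate subsample.
subsamples = [rng.normal(theta_true, 1.0, n) for n in (50, 100, 200)]
estimates = np.array([s.mean() for s in subsamples])
variances = np.array([s.var(ddof=1) / s.size for s in subsamples])

weights = (1 / variances) / np.sum(1 / variances)   # precision weights
theta_hat = np.sum(weights * estimates)
se = np.sqrt(1 / np.sum(1 / variances))
print(f"aggregated estimate {theta_hat:.3f} (s.e. {se:.3f})")
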
  5. By: Yoonseok Lee (Dept. of Economics, Syracuse University); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper considers model selection in nonlinear panel data models where incidental parameters or large-dimensional nuisance parameters are present. Primary interest typically centres on selecting a model that best approximates the underlying structure involving parameters that are common within the panel after concentrating out the incidental parameters. It is well known that conventional model selection procedures are often inconsistent in panel models, and this can be so even without nuisance parameters (Han et al., 2012). Modifications are then needed to achieve consistency. New model selection information criteria are developed here that use either the Kullback-Leibler information criterion based on the profile likelihood or the Bayes factor based on the integrated likelihood with the robust prior of Arellano and Bonhomme (2009). These model selection criteria impose heavier penalties than those associated with standard information criteria such as AIC and BIC. The additional penalty, which is data-dependent, properly reflects the model complexity arising from the presence of incidental parameters. A particular example is studied in detail involving lag order selection in dynamic panel models with fixed individual effects. The new criteria are shown to control over/under-selection probabilities in these models and to deliver consistent order selection.
    Keywords: (Adaptive) model selection, incidental parameters, profile likelihood, Kullback-Leibler information, Bayes factor, integrated likelihood, robust prior, model complexity, fixed effects, lag order
    JEL: C23 C52
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1919&r=ecm
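    Illustrative sketch: a sketch of the setting only, not of the paper's criteria. It performs lag order selection in a dynamic panel with fixed effects using the within (profile) Gaussian likelihood and a standard BIC penalty. The abstract's point is precisely that such standard penalties can be too light once the incidental fixed effects are concentrated out; the heavier, data-dependent penalties the paper derives are not reproduced here. The panel dimensions and parameters are invented.

# Lag order selection in a fixed-effects dynamic panel via within estimation
# and a standard BIC-type penalty (the paper's modified penalty is not used).
import numpy as np

rng = np.random.default_rng(4)
N, T = 100, 20
alpha = rng.normal(0, 1, N)                       # individual fixed effects

y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = alpha + 0.5 * y[:, t - 1] + rng.standard_normal(N)

def within_bic(p):
    # Use periods t = p..T-1; regressors are p lags, demeaned within each unit.
    Y = y[:, p:]
    X = np.stack([y[:, p - j - 1:T - j - 1] for j in range(p)], axis=2)
    Yd = Y - Y.mean(axis=1, keepdims=True)        # within transformation
    Xd = X - X.mean(axis=1, keepdims=True)
    Yv, Xv = Yd.reshape(-1), Xd.reshape(-1, p)
    beta, *_ = np.linalg.lstsq(Xv, Yv, rcond=None)
    rss = np.sum((Yv - Xv @ beta) ** 2)
    n_obs = Yv.size
    return n_obs * np.log(rss / n_obs) + p * np.log(n_obs)

bics = {p: within_bic(p) for p in (1, 2, 3)}
print("BIC by lag order:", bics, "-> selects", min(bics, key=bics.get))
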
  6. By: Rolando Gonzales Martínez (Unidad de Análisis de Políticas Económicas y Sociales, Bolivian Government)
    Abstract: Hyperinflations are short-lived episodes of economic instability in prices which characteristically last twenty months or less. Classical statistical techniques applied to such small samples can lead to incorrect inference. This paper describes a Bayesian approach for modeling hyperinflations which improves modeling accuracy using small-sample inference based on specific parametric assumptions. A theory-congruent model for the Bolivian hyperinflation was estimated as a case study.
    Keywords: Hyperinflation, Bayesian methods
    JEL: E31 C11
    Date: 2013–04
    URL: http://d.repec.org/n?u=RePEc:cml:docinv:8&r=ecm
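    Illustrative sketch: a minimal illustration of the small-sample point, not the paper's theory-congruent model. With only a handful of observations from a persistent AR(1), an informative conjugate prior on the persistence parameter visibly shapes the posterior relative to OLS. The error variance is treated as known so that the update is a single normal-normal step; all numbers are invented.

# Conjugate Bayesian update for an AR(1) persistence parameter on a tiny sample.
import numpy as np

rng = np.random.default_rng(5)

# Tiny sample from a persistent AR(1) process (a stand-in for inflation data).
T, rho_true, sigma = 15, 0.9, 0.5
pi = np.zeros(T)
for t in range(1, T):
    pi[t] = rho_true * pi[t - 1] + sigma * rng.standard_normal()

x, y = pi[:-1], pi[1:]

# Prior rho ~ N(m0, v0); likelihood y_t | x_t ~ N(rho * x_t, sigma^2).
m0, v0 = 0.8, 0.1**2            # informative prior centred on high persistence
v_post = 1 / (1 / v0 + np.sum(x**2) / sigma**2)
m_post = v_post * (m0 / v0 + np.sum(x * y) / sigma**2)
print(f"OLS: {np.sum(x*y)/np.sum(x**2):.3f}, "
      f"posterior mean: {m_post:.3f} (sd {np.sqrt(v_post):.3f})")
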
  7. By: Amlan Majumder (Dinhata College, West Bengal, India); Takayoshi Kusago (Kansai University, Osaka, Japan)
    Abstract: It is well known that the Gini coefficient is influenced by the granularity of measurements. When there are only a few observations, or when they are reduced due to grouping, standard measures exhibit a non-negligible downward bias. At times, the bias may be positive when there is an apparent reduction in sample size. Although authors agree on the distribution-free and distribution-specific parts of the bias, there is no consensus regarding the types of bias, their magnitude and the methods of correction in the former. This paper deals only with the distribution-free downward biases, which arise in two forms. One is related to scale and occurs in all the cases stated above when the number of observations is small. Both occur together if the initial number of observations is not sufficiently large and it is further reduced due to grouping. Underestimation associated with the former is demonstrated and addressed, for the discontinuous case, through a simple alternative formulation following the principle of mean difference without repetition. Equivalent forms are also derived under the geometric and covariance approaches. However, when it arises together with the other, a straightforward claim of it in its full magnitude may be unwarranted and quite paradoxical. Some exercises are done consequently to make the Gini coefficient standardized and comparable for a fixed number of observations. Corrections in the case of the latter are done accordingly with a newly proposed operational procedure synchronizing the relevant previous and present concerns. The paper concludes after addressing some definitional issues regarding convention and adjustments in cases of small observations.
    Keywords: Gini coefficient, maximum inequality Lorenz curve, mean difference approach, small observations, underestimation.
    JEL: D31 D63
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2013-298&r=ecm
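    Illustrative sketch: a worked example of the distribution-free part of the bias, not the authors' code. The Gini coefficient computed from the mean difference "with repetition" (dividing by n squared) is downward biased when n is small, whereas dividing by n(n-1), i.e. the mean difference without repetition mentioned in the abstract, removes that part of the bias; the two differ exactly by the factor n/(n-1).

# Gini coefficient from the mean difference, with and without repetition.
import numpy as np

def gini_with_repetition(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    d = np.abs(x[:, None] - x[None, :]).sum()     # sum of all pairwise gaps
    return d / (2 * n**2 * x.mean())

def gini_without_repetition(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    d = np.abs(x[:, None] - x[None, :]).sum()
    return d / (2 * n * (n - 1) * x.mean())

x = [1, 2, 3, 4, 10]                              # five observations only
print("with repetition   :", round(gini_with_repetition(x), 4))    # 0.4
print("without repetition:", round(gini_without_repetition(x), 4)) # 0.5
# The two differ by the factor n/(n-1) = 1.25 here; the gap shrinks as n grows.
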
  8. By: Vilmunen, Jouko (Bank of Finland Research); Palmroos, Peter (Financial Supervisory Authority)
    Abstract: In this study we present a closed form solution for the moments, and in particular the correlation, of two log-normally distributed random variables when the underlying bivariate log-normal distribution is potentially truncated or censored at both tails. The closed form solution that we derive also covers the cases where one tail is truncated and the other is censored. Throughout the derivations we further assume that the moments of the unconstrained bivariate log-normal distribution are known.
    Keywords: bivariate log-normal distribution; Pearson's product-moment correlation; truncated; censored; tail correlation; solvency II
    JEL: C18 C46 G28
    Date: 2013–08–21
    URL: http://d.repec.org/n?u=RePEc:hhs:bofrdp:2013_017&r=ecm
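    Illustrative sketch: a Monte Carlo illustration only; the paper's closed form solution is not reproduced here. The sketch simulates a bivariate log-normal sample and shows how truncating (dropping) or censoring (clipping) both tails changes the empirical Pearson correlation. The truncation points exp(-1) and exp(1) and the latent correlation 0.7 are arbitrary choices.

# Empirical Pearson correlation of a bivariate log-normal under double
# truncation versus double censoring.
import numpy as np

rng = np.random.default_rng(6)
n, rho = 200_000, 0.7
cov = [[1.0, rho], [rho, 1.0]]
z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
x = np.exp(z)                                   # bivariate log-normal sample

lo, hi = np.exp(-1.0), np.exp(1.0)              # truncation / censoring points

# Truncation: keep only observations with both coordinates inside [lo, hi].
keep = np.all((x >= lo) & (x <= hi), axis=1)
rho_trunc = np.corrcoef(x[keep].T)[0, 1]

# Censoring: clip both coordinates to [lo, hi] but keep all observations.
xc = np.clip(x, lo, hi)
rho_cens = np.corrcoef(xc.T)[0, 1]

print(f"untruncated {np.corrcoef(x.T)[0,1]:.3f}, "
      f"doubly truncated {rho_trunc:.3f}, doubly censored {rho_cens:.3f}")
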
  9. By: Ajay Singh; Dinghai Xu
    Abstract: In this paper, we apply tools from random matrix theory (RMT) to estimates of correlations across the volatility of various assets in the S&P 500. The volatility inputs are estimated by modeling price fluctuations as a GARCH(1,1) process. The corresponding correlation matrix is constructed. It is found that the distribution of a significant number of eigenvalues of the volatility correlation matrix matches the analytical result from the RMT. Furthermore, the empirical estimates of short- and long-range correlations among eigenvalues, which are within the RMT bounds, match the analytical results for the Gaussian Orthogonal Ensemble (GOE) of the RMT. To understand the information content of the largest eigenvectors, we estimate the contribution of GICS industry groups to each eigenvector. In comparison with the eigenvectors of the correlation matrix for price fluctuations, only a few of the largest eigenvectors of the volatility correlation matrix are dominated by a single industry group. We also study correlations among 'volatility returns' and obtain similar results.
    Date: 2013–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1310.1601&r=ecm
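    Illustrative sketch: a hedged version of the pipeline on simulated data, not the paper's S&P 500 application. Each return series is filtered through a GARCH(1,1) recursion with fixed (not estimated) parameters to obtain conditional volatilities, their correlation matrix is formed, and its eigenvalues are compared with the Marchenko-Pastur bounds from random matrix theory. Because filtered volatilities are strongly autocorrelated, the i.i.d. Marchenko-Pastur bounds are only a rough benchmark here.

# GARCH(1,1)-filtered volatility correlation matrix versus Marchenko-Pastur bounds.
import numpy as np

rng = np.random.default_rng(7)
T, N = 2000, 100
returns = rng.standard_normal((T, N)) * 0.01    # placeholder for asset returns

def garch11_vol(r, omega=1e-6, alpha=0.05, beta=0.90):
    # Conditional variance recursion with fixed, not estimated, parameters.
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, r.size):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return np.sqrt(h)

vol = np.column_stack([garch11_vol(returns[:, i]) for i in range(N)])
C = np.corrcoef(vol, rowvar=False)              # volatility correlation matrix
eigs = np.linalg.eigvalsh(C)

q = N / T
lam_minus, lam_plus = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
inside = np.mean((eigs >= lam_minus) & (eigs <= lam_plus))
print(f"MP bounds [{lam_minus:.2f}, {lam_plus:.2f}]; "
      f"{100*inside:.0f}% of eigenvalues fall inside")
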

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.