nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒04‒30
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Efficient estimation of parameters in marginals in semiparametric multivariate models By Panchenko, Valentyn; Prokhorov, Artem
  2. Accounting for unobserved heterogeneity in micro-econometric agricultural production models: a random parameter approach By Koutchade, Philippe; Carpentier, Alain; Femenia, Fabienne
  3. On Asymptotic Properties of the Separating Hill Estimator By Matias Heikkila; Yves Dominicy; Sirkku Pauliina Ilmonen
  4. A new combination approach to reducing forecast errors with an application to volatility forecasting By Till Weigt; Bernd Wilfling
  5. Estimating individual effects and their spatial spillovers in linear panel data models By Miranda, Karen; Martínez Ibáñez, Oscar; Manjón Antolín, Miquel Carlos
  6. Random factor approach for large sets of equity time-series By Antti Tanskanen; Jani Lukkarinen; Kari Vatanen
  7. Macroeconomic forecasting and structural changes in steady states By Dimitrios P. Louzis
  8. Estimating Border Effects: The Impact of Spatial Aggregation By Coughlin, Cletus C.; Novy, Dennis
  9. On a Possible Problem in the Estimation of Saddle-point Dynamic Economic Models By Audrey Laporte; Adrian Rohit Dass; Brian Ferguson
  10. A true measure of dependence By Li, Hui
  11. Specification tests for lattice processes By Javier Hidalgo; Myung Hwan Seo
  12. Efficient Two-Step Estimation via Targeting By David T. Frazier; Éric Renault

  1. By: Panchenko, Valentyn; Prokhorov, Artem
    Abstract: We consider a general multivariate model where the univariate marginal distributions are known up to a common parameter vector and we are interested in estimating that vector without assuming anything about the joint distribution except for the marginals. If we assume independence between the marginals and maximize the resulting quasi-likelihood, we obtain a consistent but inefficient estimate. If we assume a parametric copula (other than independence) we obtain a full MLE, which is efficient only under correct copula specification and badly biased if the copula is misspecified. Instead, we propose a sieve MLE estimator which improves over the QMLE but does not suffer the drawbacks of the full MLE. We model the unknown part of the joint distribution using the Bernstein-Kantorovich polynomial copula and assess the resulting improvement over the QMLE and over the misspecified FMLE in terms of relative efficiency and robustness. We derive the asymptotic distribution of the new estimator and show that it reaches the semiparametric efficiency bound. Simulations suggest that the sieve MLE can be almost as efficient as the FMLE relative to the QMLE provided there is enough dependence between the marginals. An application using insurance company loss and expense data demonstrates the empirical relevance of the estimator.
    Keywords: sieve MLE; copula; semiparametric efficiency;
    Date: 2016–03–18
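    The independence QMLE that the abstract takes as its baseline is easy to sketch. The toy below is our own construction, not the paper's data or its sieve estimator: two exponential marginals sharing a common rate, with dependence induced through a shared uniform shock. Maximizing the product of the marginal likelihoods recovers the common parameter even though the joint distribution is left unspecified:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (not the paper's estimator): two exponential marginals
# sharing a common rate lam; the joint dependence is deliberately
# left unspecified.  The independence QMLE maximizes the product of
# the marginal likelihoods, i.e. it pretends the margins are independent.
lam_true = 2.0
n = 50_000
u = rng.uniform(size=n)
v = (u + 0.2 * rng.uniform(size=n)) % 1.0   # uniform, but dependent on u
x = -np.log(u) / lam_true                   # Exp(lam) margin
y = -np.log(1.0 - v) / lam_true             # Exp(lam) margin, dependent on x

# For exponential margins the independence QMLE has a closed form:
# lam_hat = 2n / (sum(x) + sum(y)).
lam_hat = 2 * n / (x.sum() + y.sum())
print(lam_hat)   # consistent for lam_true despite the ignored dependence
```

The paper's sieve step would replace the independence assumption with a Bernstein-Kantorovich polynomial copula estimated jointly with the marginal parameter; only the QMLE baseline is shown here.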
  2. By: Koutchade, Philippe; Carpentier, Alain; Femenia, Fabienne
    Abstract: Accounting for the effects of heterogeneity in micro-econometric models has been a major concern in labor economics, empirical industrial organization and trade economics for at least two decades. The micro-econometric agricultural production choice models found in the literature, however, largely ignore the impact of unobserved heterogeneity. This can partly be explained by the dimension of these models, which involve a large set of choices, e.g., acreage choices, input demands and yield supplies. We propose a random parameter framework to account for unobserved heterogeneity in micro-econometric agricultural production choice models. This approach accounts for unobserved farm and farmer heterogeneity in a fairly flexible way. We estimate a system of yield supply and acreage choice equations on a panel of French crop growers. Our results show that heterogeneity matters significantly in our empirical application and that ignoring the heterogeneity of farmers’ choice processes can have important impacts on simulation outcomes. Due to the dimension of the estimation problem and the functional form of the considered production choice model, the simulated maximum likelihood approach usually adopted in the applied econometrics literature in this context is empirically intractable. We show that specific versions of the stochastic Expectation-Maximization (SEM) algorithm proposed in the statistics literature are easily implemented for estimating random parameter agricultural production models.
    Keywords: Heterogeneity, random parameter models, agricultural production choices, Agricultural and Food Policy, Q12, C13, C15,
    Date: 2015
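    The stochastic EM loop the authors advocate can be illustrated on a much smaller problem. The sketch below is a one-equation random-intercept model, nothing like the paper's multi-equation production system, but it shows the simulate-then-maximize structure: draw the random parameters from their current posterior, then update the hyperparameters from the draws:

```python
import numpy as np

rng = np.random.default_rng(8)

# Minimal stochastic EM (SEM) sketch for y_it = theta_i + eps_it with
# theta_i ~ N(mu, tau^2) and eps_it ~ N(0, sig^2).  All values are
# simulated for illustration.
N, T = 2_000, 5
mu_true, tau_true, sig_true = 1.0, 1.5, 0.8
theta = mu_true + tau_true * rng.standard_normal(N)
y = theta[:, None] + sig_true * rng.standard_normal((N, T))
ybar = y.mean(axis=1)

mu, tau2, sig2 = 0.0, 1.0, 1.0            # crude starting values
for _ in range(200):
    # S-step: draw the random parameters from their current posterior.
    prec = T / sig2 + 1.0 / tau2
    m = (T * ybar / sig2 + mu / tau2) / prec
    th = m + rng.standard_normal(N) / np.sqrt(prec)
    # M-step: update the hyperparameters from the simulated draws.
    mu, tau2 = th.mean(), th.var()
    sig2 = ((y - th[:, None]) ** 2).mean()
print(mu, np.sqrt(tau2), np.sqrt(sig2))   # near the true hyperparameters
```

The point of SEM here is that no high-dimensional integral over the random parameters is ever computed; the draws stand in for it, which is what makes the approach tractable where simulated maximum likelihood is not.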
  3. By: Matias Heikkila; Yves Dominicy; Sirkku Pauliina Ilmonen
    Abstract: Modeling and understanding multivariate extreme events is challenging, but of great importance in various applications, e.g. in biostatistics, climatology, and finance. The separating Hill estimator can be used in estimating the extreme value index of a heavy-tailed multivariate elliptical distribution. We consider the asymptotic behavior of the separating Hill estimator under estimated location and scatter. The asymptotic properties of the separating Hill estimator are known under elliptical distributions with known location and scatter. However, the effect of estimating the location and scatter has previously been examined only in a simulation study. We show, analytically, that the separating Hill estimator is consistent and asymptotically normal under estimated location and scatter, when certain mild conditions are met.
    Keywords: extreme value theory; Hill estimator; multivariate analysis
    Date: 2015–12
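    For readers unfamiliar with the classical Hill estimator the paper builds on, a minimal version is below. The separating variant additionally handles the multivariate elliptical structure and the estimated location and scatter, all of which this univariate sketch ignores:

```python
import numpy as np

def hill(x, k):
    """Classical Hill estimator of the extreme value index from the k
    largest order statistics of a positive sample x."""
    xs = np.sort(x)
    top = xs[-k:]                    # k largest observations
    return np.mean(np.log(top / xs[-k - 1]))

rng = np.random.default_rng(1)
# Pareto tail with index alpha = 2, so the true extreme value index is 0.5.
alpha = 2.0
x = rng.pareto(alpha, size=100_000) + 1.0    # classical Pareto(alpha)
gamma_hat = hill(x, k=2_000)
print(gamma_hat)   # close to 1/alpha = 0.5
```

The choice of k (how far into the tail to look) drives the bias-variance trade-off; the paper's asymptotics concern what happens when the data are first centered and sphered with estimated location and scatter.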
  4. By: Till Weigt; Bernd Wilfling
    Abstract: This paper formally establishes a new forecast combination approach, which is based on VAR modeling of the forecast errors resulting from alternative forecast models. We apply our approach to volatility forecasting by combining several structural time series models with implied volatility. Using a multi-currency data set, we conduct in-sample and out-of-sample forecasting analyses in order (a) to demonstrate the statistical significance of our approach, and (b) to assess its forecasting superiority over alternative forecasting models and combinations.
    Keywords: Forecast combination, volatility forecasting, realized volatility, implied volatility, exchange rates
    JEL: C53 G17
    Date: 2016–04
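    The core idea, modeling the forecast errors of rival models with a VAR and using the predicted errors to adjust a combination, can be sketched in a few lines. The specification and data below are invented for illustration and are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration: model the forecast errors of two rival models with a
# VAR(1) and use the predicted errors to correct a simple-average
# combination.
T = 400
y = rng.standard_normal(T)                    # toy target series
f1 = y + 0.3 + 0.2 * rng.standard_normal(T)   # model 1: biased forecast
f2 = y - 0.1 + 0.3 * rng.standard_normal(T)   # model 2: noisier forecast
e = np.column_stack([f1 - y, f2 - y])         # 2-variable error series

# VAR(1) by equation-wise OLS: e_t = c + A e_{t-1} + u_t
X = np.column_stack([np.ones(T - 1), e[:-1]])
coef, *_ = np.linalg.lstsq(X, e[1:], rcond=None)
e_hat = X @ coef                              # predicted errors

# Error-corrected combination vs the plain average of the two forecasts.
raw = 0.5 * e[1:].sum(axis=1)                 # error of the plain average
corrected = 0.5 * (e[1:] - e_hat).sum(axis=1)
print((raw ** 2).mean(), (corrected ** 2).mean())
```

In this toy case the gain comes mostly from the VAR intercepts absorbing the forecast biases; the paper's contribution is the formal treatment of this error-correction step and its application to volatility forecasts combined with implied volatility.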
  5. By: Miranda, Karen; Martínez Ibáñez, Oscar; Manjón Antolín, Miquel Carlos
    Abstract: Individual-specific effects and their spatial spillovers are generally not identified in linear panel data models. In this paper we present identification conditions under the assumption that covariates are correlated with the individual-specific effects. We also derive appropriate GLS and IV estimators for the resulting correlated random effects spatial panel data model with strictly exogenous and predetermined explanatory variables, respectively. Lastly, we illustrate the proposed estimators using a Cobb-Douglas production function specification and US state-level data from Munnell (1990). As in previous studies, we find no evidence of public capital spillovers. However, public capital does play a role in the positive spatial contagion of the nevertheless negative spillovers that states produce in and receive from their neighbours.
    Keywords: correlated random effects, spatial panel data, panel data analysis
    JEL: C23
    Date: 2015
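    The correlated-random-effects idea underlying the paper can be illustrated without its spatial terms. In the Mundlak-style sketch below (a simplification we introduce for illustration, not the authors' GLS/IV estimators), adding the individual time-average of the covariate absorbs its correlation with the individual effect:

```python
import numpy as np

rng = np.random.default_rng(7)

# Mundlak-style correlated random effects: the covariate x is correlated
# with the individual effect alpha, so pooled OLS of y on x is biased;
# adding the individual means of x as a regressor fixes this.
N, T = 500, 8
alpha = rng.standard_normal(N)                     # individual effects
x = alpha[:, None] + rng.standard_normal((N, T))   # x correlated with alpha
y = 1.0 + 2.0 * x + alpha[:, None] + 0.5 * rng.standard_normal((N, T))

xbar = np.repeat(x.mean(axis=1), T)                # unit-level means of x
X = np.column_stack([np.ones(N * T), x.ravel(), xbar])
b, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
print(b[1])   # close to the true slope 2.0
```

The paper extends exactly this kind of CRE setup with spatial spillover terms and derives GLS and IV estimators for strictly exogenous and predetermined covariates, respectively.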
  6. By: Antti Tanskanen; Jani Lukkarinen; Kari Vatanen
    Abstract: Factor models are commonly used in financial applications to analyze portfolio risk and to decompose it into loadings on risk factors. A linear factor model often depends on a small number of carefully chosen factors, and it has been assumed that an arbitrary selection of factors does not yield a feasible factor model. We develop a statistical factor model, the random factor model, in which factors are chosen at random based on the random projection method. Random selection of factors has the important consequence that the factors are almost orthogonal to each other. The random factor model is expected to preserve the covariances between time-series. We derive probabilistic bounds for the accuracy of the random factor representation of time-series, their cross-correlations and covariances. As an application, we analyze the reproduction of correlation coefficients in the well-diversified Russell 3000 equity index using the random factor model. Comparison with principal component analysis (PCA) shows that the random factor model requires significantly fewer factors to provide an equally accurate reproduction of correlation coefficients. This occurs despite the finding that PCA reproduces single equity return time-series more faithfully than the random factor model. The accuracy of a random factor model is not very sensitive to which particular set of randomly chosen factors is used. A more general kind of universality is also present: it does not much matter which particular method is used to construct the random factor model; the accuracy of the resulting model is almost identical.
    Date: 2016–04
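    A minimal version of the random projection construction (our reading of the abstract, not the authors' code) is below: project the N return series onto K random directions, use the projections as factor time-series, and recover loadings by OLS:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of a random factor model: factors are random projections of the
# return panel itself.  The data-generating process is simulated.
T, N, K = 500, 100, 30
latent = rng.standard_normal((T, 5))               # 5 true latent factors
B = rng.standard_normal((5, N))
ret = latent @ B + 0.5 * rng.standard_normal((T, N))   # N equity series

R = rng.standard_normal((N, K)) / np.sqrt(N)       # random projection matrix
factors = ret @ R                                  # K random factor series
loadings, *_ = np.linalg.lstsq(factors, ret, rcond=None)
fitted = factors @ loadings

# Compare correlation matrices of the data and the factor reproduction.
c_true = np.corrcoef(ret, rowvar=False)
c_fit = np.corrcoef(fitted, rowvar=False)
err = np.abs(c_true - c_fit).mean()
print(err)   # small average absolute error despite K << N
```

Because the factors are random combinations of the series, they are almost orthogonal to each other with high probability, which is the property the paper's bounds formalize.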
  7. By: Dimitrios P. Louzis (Bank of Greece)
    Abstract: This article proposes methods for estimating a Bayesian vector autoregression (VAR) model with an informative steady state prior which also accounts for possible structural changes in the long-term trend of the macroeconomic variables. I show that, overall, the proposed time-varying steady state VAR model can lead to superior point and density macroeconomic forecasting compared to constant steady state VAR specifications.
    Keywords: Steady states; time-varying parameters; macroeconomic forecasting
    JEL: C32
    Date: 2016–03
  8. By: Coughlin, Cletus C. (Federal Reserve Bank of St. Louis); Novy, Dennis (University of Warwick, UK)
    Abstract: Trade data are typically reported at the level of regions or countries and are therefore aggregates across space. In this paper, we investigate the sensitivity of standard gravity estimation to spatial aggregation. We build a model in which initially symmetric micro regions are combined to form aggregated macro regions. We then apply the model to the large literature on border effects in domestic and international trade. Our theory shows that larger countries are systematically associated with smaller border effects. The reason is that due to spatial frictions, aggregation across space increases the relative cost of trading within borders. The cost of trading across borders therefore appears relatively smaller. This mechanism leads to border effect heterogeneity and is independent of multilateral resistance effects in general equilibrium. Even if no border frictions exist at the micro level, gravity estimation on aggregate data can still produce large border effects. We test our theory on domestic and international trade flows at the level of U.S. states. Our results confirm the model’s predictions, with quantitatively large effects.
    Keywords: Gravity; Geography; Borders; Trade Costs; Heterogeneity; Home Bias; Spatial Attenuation; Modifiable Areal Unit Problem (MAUP)
    JEL: F10 F15 R12
    Date: 2016–04–01
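    The kind of gravity regression whose aggregation sensitivity the paper studies can be written down in a few lines. The simulated data and coefficient values below are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(5)

# Standard log-linear gravity regression with a border dummy; the paper's
# point is that estimates of exactly this dummy depend systematically on
# how micro regions are aggregated.
n = 5_000
log_dist = rng.uniform(4, 8, size=n)                 # log bilateral distance
same_country = rng.integers(0, 2, size=n).astype(float)
beta_border = 1.5                                    # true log border effect
log_trade = (10 - 1.1 * log_dist + beta_border * same_country
             + rng.standard_normal(n))

X = np.column_stack([np.ones(n), log_dist, same_country])
bhat, *_ = np.linalg.lstsq(X, log_trade, rcond=None)
border_effect = np.exp(bhat[2])   # within/across-border trade ratio
print(border_effect)              # close to exp(1.5)
```

In the paper's model the estimated dummy can be large even when no micro-level border friction exists, because aggregation raises the measured cost of within-border trade relative to cross-border trade.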
  9. By: Audrey Laporte; Adrian Rohit Dass; Brian Ferguson
    Keywords: rational addiction model, dynamic time series, dynamic panel
    JEL: I12 C22 C23
    Date: 2016–04
  10. By: Li, Hui
    Abstract: The strength of dependence between random variables is an important property that is useful in many areas. Various measures have been proposed, most of which detect divergence from independence. However, a true measure of dependence should also be able to characterize complete dependence, where one variable is a function of the other. Most previous measures are symmetric, which is shown to be insufficient to capture complete dependence. A new type of nonsymmetric dependence measure is presented that can unambiguously identify both independence and complete dependence. The original Rényi axioms for symmetric measures are reviewed and modified for nonsymmetric measures.
    Keywords: Nonsymmetric dependence measure, complete dependence, ∗-product on copulas, Data Processing Inequality (DPI)
    JEL: C02
    Date: 2016–02–26
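    A quick numerical illustration of the symmetric-measure failure the abstract describes: with Y = X², Y is completely dependent on X yet the Pearson correlation is near zero, while a simple nonsymmetric correlation-ratio statistic (used here only as a stand-in; it is not the measure proposed in the paper) separates the two directions:

```python
import numpy as np

rng = np.random.default_rng(4)

# Y = X**2 is completely dependent on X, yet Pearson correlation is ~0.
x = rng.uniform(-1.0, 1.0, size=200_000)
y = x ** 2
pearson = np.corrcoef(x, y)[0, 1]

def corr_ratio(a, b, bins=50):
    """Nonsymmetric eta^2: fraction of Var(b) explained by binning on a."""
    edges = np.quantile(a, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(a, edges[1:-1]), 0, bins - 1)
    within = sum(b[idx == j].var() * (idx == j).mean() for j in range(bins))
    return 1.0 - within / b.var()

print(pearson)                 # near zero: the symmetric measure sees nothing
print(corr_ratio(x, y))        # near 1: y is (nearly) a function of x
print(corr_ratio(y, x))        # much smaller: x is not a function of y
```

The asymmetry of the last two numbers is exactly the kind of directional information a symmetric measure throws away.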
  11. By: Javier Hidalgo; Myung Hwan Seo
    Abstract: We consider an omnibus test for the correct specification of the dynamics of a process observed on a d-dimensional lattice. As happens with causal models and d = 1, its asymptotic distribution is not pivotal and depends on the estimator of the unknown parameters of the model under the null hypothesis. A first main goal of the paper is to provide a transformation that yields an asymptotic distribution free of nuisance parameters. Secondly, we propose a bootstrap analogue of the transformation and show its validity. Thirdly, we discuss the results when the observations are the errors of a parametric regression model. As a by-product, we also discuss the asymptotic normality of the least squares estimator of the parameters of the regression model under very mild conditions. Finally, we present a small Monte Carlo experiment to shed some light on the finite-sample behavior of our test.
    JEL: C21 C23
    Date: 2015–04
  12. By: David T. Frazier; Éric Renault
    Abstract: The standard description of two-step extremum estimation amounts to plugging in a first-step estimator of nuisance parameters to simplify the optimization problem and then deducing a user-friendly, but potentially inefficient, estimator for the parameters of interest. In this paper, we consider a more general setting of two-step estimation where we do not necessarily have “nuisance parameters” but rather awkward occurrences of the parameters of interest. The efficiency problem associated with two-step estimators in this context is more difficult than with standard nuisance parameters: even if the true unknown value of the parameters were plugged in to alleviate the awkward occurrences of the parameters, the resulting second-step estimator may not be efficient. In addition, standard approaches to restoring efficiency for two-step procedures may not work due to a consistency issue. To alleviate this potential issue, we propose a new computationally simple two-step estimation procedure that relies on targeting and penalization to enforce consistency, with the second-step estimators maintaining asymptotic efficiency. We compare this new method with existing iterative methods in the framework of copula models and asset pricing models. Simulation results illustrate that this new method performs better than existing iterative procedures and is (nearly) computationally equivalent.
    Keywords: Targeting, Penalization, Multivariate Time Series Models, Asset Pricing,
    Date: 2016–04–08
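    The plug-in structure being refined can be seen in the classic two-step copula recipe ("inference functions for margins"), sketched below with a Gaussian copula. The paper's targeting and penalization steps are not implemented here; this only shows the two-step structure they improve upon:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-step copula estimation: estimate the margins first, then plug the
# first-step estimates in and estimate the copula parameter.  Data and
# marginal families are chosen for illustration.
n = 100_000
rho = 0.6
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
x = 2.0 + 0.5 * z[:, 0]          # Normal(2, 0.5) margin
y = np.exp(z[:, 1])              # log-normal margin

# Step 1: estimate each margin separately.
mu, sd = x.mean(), x.std()
lmu, lsd = np.log(y).mean(), np.log(y).std()

# Step 2: plug in the first-step estimates, map to normal scores, and
# estimate the Gaussian-copula parameter.
u = (x - mu) / sd
v = (np.log(y) - lmu) / lsd
rho_hat = np.mean(u * v)
print(rho_hat)   # close to the true copula parameter 0.6
```

The paper's concern is the case where the first-step quantities are not genuinely nuisance parameters but the parameters of interest themselves, so this simple plug-in logic can lose efficiency or even consistency.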

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.