nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒08‒07
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Large dynamic covariance matrices By Robert F. Engle; Olivier Ledoit; Michael Wolf
  2. Maximum Likelihood Estimation of Autoregressive Models with a Near Unit Root and Cauchy Errors By Jungjun Choi; In Choi
  3. Equation-by-Equation Estimation of Multivariate Periodic Electricity Price Volatility By Escribano, Alvaro; Sucarrat, Genaro
  4. A novel, divergence based, regression for compositional data By Tsagris, Michail
  5. Measures of variance for smoothed disturbances in linear state-space models: a clarification By Allin Cottrell; Riccardo (Jack) Lucchetti; Matteo Pelagatti
  6. Estimating trade policy effects with structural gravity By Piermartini, Roberta; Yotov, Yoto V.
  7. Understanding the Sources of Macroeconomic Uncertainty By Barbara Rossi; Tatevik Sekhposyan; Matthiew Soupre
  8. Nonparametric hypothesis testing for equality of means on the simplex By Tsagris, Michail; Preston, Simon; T.A. Wood, Andrew
  9. A Bayesian Estimate of the Pricing Kernel By Giovanni Barone-Adesi; Chiara Legnazzi; Antonietta Mira
  10. External Validity in a Stochastic World By Mark Rosenzweig; Christopher Udry
  11. State dependence and unobserved heterogeneity in a double hurdle model for remittances: evidence from immigrants to Germany By Giulia Bettin; Riccardo Lucchetti; Claudia Pigini
  12. Viewpoint: Estimating the Causal Effects of Policies and Programs By Smith, Jeffrey A.; Sweetman, Arthur
  13. Feature Selection with the R Package MXM: Discovering Statistically-Equivalent Feature Subsets By Lagani, Vincenzo; Athineou, Giorgos; Farcomeni, Alessio; Tsagris, Michail; Tsamardinos, Ioannis

  1. By: Robert F. Engle; Olivier Ledoit; Michael Wolf
    Abstract: Second moments of asset returns are important for risk management and portfolio selection. The problem of estimating second moments can be approached from two angles: time series and the cross-section. In time series, the key is to account for conditional heteroskedasticity; a favored model is Dynamic Conditional Correlation (DCC), derived from the ARCH/GARCH family started by Engle (1982). In the cross-section, the key is to correct in-sample biases of sample covariance matrix eigenvalues; a favored model is nonlinear shrinkage, derived from Random Matrix Theory (RMT). The present paper aims to marry these two strands of literature in order to deliver improved estimation of large dynamic covariance matrices.
    Keywords: Composite likelihood, dynamic conditional correlations, GARCH, Markowitz portfolio selection, nonlinear shrinkage.
    JEL: C13 C58 G11
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:231&r=ecm
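    A minimal sketch of the two-step idea, for readers who want to experiment: fit univariate GARCH models to handle the time-series dimension, then shrink the correlation matrix of the devolatilized residuals for the cross-section. This is an illustration under stated substitutions, not the paper's estimator: scikit-learn's LedoitWolf is linear shrinkage (the paper uses nonlinear shrinkage), and a static shrunk correlation stands in for the full DCC recursion.
      import numpy as np
      from arch import arch_model
      from sklearn.covariance import LedoitWolf

      def dynamic_covariance(returns):              # returns: T x N array
          T, N = returns.shape
          vol = np.empty((T, N))
          for i in range(N):                        # time series: univariate GARCH(1,1)
              res = arch_model(returns[:, i], p=1, q=1).fit(disp='off')
              vol[:, i] = res.conditional_volatility
          std_resid = returns / vol                 # devolatilized residuals
          corr = LedoitWolf().fit(std_resid).covariance_  # cross-section: shrinkage
          d = np.sqrt(np.diag(corr))
          corr = corr / np.outer(d, d)              # normalize to a correlation matrix
          return np.outer(vol[-1], vol[-1]) * corr  # conditional covariance, last date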
  2. By: Jungjun Choi (School of Economics, Sogang University, Seoul); In Choi (School of Economics, Sogang University, Seoul)
    Abstract: This paper studies maximum likelihood estimation of autoregressive models of order 1 with a near unit root and Cauchy errors. Autoregressive models with an intercept, and with an intercept and a linear time trend, are also considered. The maximum likelihood estimator (MLE) for the autoregressive coefficient is n^(3/2)-consistent, with n denoting the sample size, and has a mixture-normal distribution in the limit. The MLE for the scale parameter of the Cauchy distribution is n^(1/2)-consistent and its limiting distribution is normal. The MLEs of the intercept and the linear time trend are n^(1/2)- and n^(3/2)-consistent, respectively. It is also shown that the t-statistic for a unit root based on the MLE has a standard normal distribution in the limit. In addition, finite-sample properties of the MLE are compared with those of the least squares estimator (LSE). The MLE is found to be more efficient than the LSE when the errors have a Cauchy distribution or a distribution which is a mixture of Cauchy and normal distributions. It is also shown that the empirical power of the MLE-based t-test for a unit root is much higher than that of the Dickey-Fuller t-test.
    Keywords: autoregressive model, near unit root, Cauchy distribution, maximum likelihood estimator, infinite variance
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:sgo:wpaper:1612&r=ecm
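    The estimator itself is easy to reproduce numerically. A minimal sketch for the pure AR(1) case (no intercept or trend), maximizing the Cauchy log-likelihood directly; the starting values rho0 and s0 are arbitrary choices, not taken from the paper:
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import cauchy

      def cauchy_ar1_mle(y, rho0=0.5, s0=1.0):
          def negloglik(theta):
              rho, log_s = theta
              resid = y[1:] - rho * y[:-1]          # AR(1) innovations
              return -cauchy.logpdf(resid, scale=np.exp(log_s)).sum()
          res = minimize(negloglik, x0=[rho0, np.log(s0)], method='Nelder-Mead')
          return res.x[0], np.exp(res.x[1])         # (rho_hat, scale_hat)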
  3. By: Escribano, Alvaro; Sucarrat, Genaro
    Abstract: Electricity prices are characterised by strong autoregressive persistence, periodicity (e.g. intraday, day-of-the-week and month-of-the-year effects), large spikes or jumps, GARCH and -- as evidenced by recent findings -- periodic volatility. We propose a multivariate model of volatility that decomposes volatility multiplicatively into a non-stationary (e.g. periodic) part and a stationary part with log-GARCH dynamics. Since the model belongs to the log-GARCH class, it is robust to spikes or jumps, allows for a rich variety of volatility dynamics without restrictive positivity constraints, can be estimated equation-by-equation by means of standard methods even in the presence of feedback, and allows for Dynamic Conditional Correlations (DCCs) that can -- optionally -- be estimated subsequent to the volatilities. We use the model to study the hourly day-ahead system prices at Nord Pool, and find extensive evidence of periodic volatility and volatility feedback. We also find that volatility is characterised by (positive) leverage in half of the hours, and that a DCC model provides a better fit of the conditional correlations than a Constant Conditional Correlation (CCC) model.
    Keywords: Electricity prices, financial return, volatility, ARCH, exponential GARCH, log-GARCH, Multivariate GARCH, Dynamic Conditional Correlations, inverse leverage, Nord Pool
    JEL: C22 C32 C51 C58
    Date: 2016–07–22
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:72736&r=ecm
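    A crude sketch of the multiplicative decomposition sigma_t^2 = g_t * h_t for a single price series, with hour-of-day averages as the non-stationary periodic part g_t. The paper estimates the stationary part as a log-GARCH; here an ARMA(1,1) fitted to log squared residuals stands in for that step (the log-GARCH admits an ARMA representation, though this sketch ignores the E[log z^2] level correction):
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      def periodic_volatility(e, hour):             # e: residuals, hour: labels 0..23
          g = np.array([np.mean(e[hour == h] ** 2) for h in range(24)])
          u = e / np.sqrt(g[hour])                  # strip the periodic part
          logu2 = np.log(u ** 2 + 1e-12)            # log-GARCH via its ARMA form
          fit = ARIMA(logu2, order=(1, 0, 1)).fit()
          h_t = np.exp(fit.fittedvalues)            # stationary volatility component
          return g[hour] * h_t                      # sigma_t^2 = g_t * h_t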
  4. By: Tsagris, Michail
    Abstract: In compositional data, an observation is a vector with non-negative components which sum to a constant, typically 1. Data of this type arise in many areas, such as geology, archaeology, biology, economics and political science among others. The goal of this paper is to propose a new, divergence based, regression modelling technique for compositional data. To do so, a recently proved metric which is a special case of the Jensen-Shannon divergence is employed. A strong advantage of this new regression technique is that zeros are naturally handled. An example with real data and simulation studies are presented and are both compared with the log-ratio based regression suggested by Aitchison in 1986.
    Keywords: compositional data, Jensen-Shannon divergence, regression, zero values, φ-divergence
    JEL: C1
    Date: 2015–04–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:72769&r=ecm
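    The estimator can be sketched in a few lines: fitted compositions come from a multinomial-logit link, and the coefficients minimize the total Jensen-Shannon divergence between observed and fitted shares. This is a generic sketch of the idea, not Tsagris's exact scaled divergence; note how zero components are handled naturally, since 0 * log(0/m) = 0:
      import numpy as np
      from scipy.optimize import minimize

      def js_div(p, q):                             # Jensen-Shannon divergence
          m = 0.5 * (p + q)
          def kl(a, b):
              mask = a > 0                          # zeros drop out naturally
              return np.sum(a[mask] * np.log(a[mask] / b[mask]))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      def js_regression(y, X):                      # y: n x D compositions, X: n x k
          n, D = y.shape
          k = X.shape[1]
          def objective(b):
              B = np.vstack([np.zeros((1, k)), b.reshape(D - 1, k)])
              eta = np.exp(X @ B.T)                 # first category as reference
              fitted = eta / eta.sum(axis=1, keepdims=True)
              return sum(js_div(y[i], fitted[i]) for i in range(n))
          res = minimize(objective, np.zeros((D - 1) * k), method='BFGS')
          return res.x.reshape(D - 1, k)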
  5. By: Allin Cottrell (Wake Forest University); Riccardo (Jack) Lucchetti (Universita' Politecnica delle Marche, Dipartimento di Scienze Economiche Sociali); Matteo Pelagatti (Universita' di Milano - Bicocca)
    Abstract: We clarify a point regarding the appropriate measure(s) of the variance of smoothed disturbances in the context of linear state-space models. This involves explaining how two different concepts, which are sometimes given the same name in the literature, relate to each other. We also describe the behavior of several common software packages in this regard.
    Keywords: State-space models, Disturbance smoother, Auxiliary residuals.
    JEL: C32 C63
    Date: 2016–07–29
    URL: http://d.repec.org/n?u=RePEc:anc:wgretl:3&r=ecm
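    For readers unfamiliar with the issue, the two concepts are presumably the standard pair from Gaussian smoothing theory (a hedged reconstruction, not a quotation of the paper). With smoothed disturbance \hat{\varepsilon}_t = E(\varepsilon_t \mid Y_n), the candidates are
      \operatorname{Var}(\varepsilon_t \mid Y_n)
        \quad \text{(conditional variance, the MSE of } \hat{\varepsilon}_t \text{)}
      \qquad \text{vs.} \qquad
      \operatorname{Var}(\hat{\varepsilon}_t)
        = \operatorname{Var}(\varepsilon_t) - \operatorname{Var}(\varepsilon_t \mid Y_n)
        \quad \text{(unconditional variance of the smoothed estimate).}
    Auxiliary residuals standardize \hat{\varepsilon}_t by the latter, which is one place the distinction matters in practice.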
  6. By: Piermartini, Roberta; Yotov, Yoto V.
    Abstract: The objective of this manuscript is to serve as a practical guide for estimation with the structural gravity model. After a brief review of the theoretical foundations, we summarize the main challenges with gravity estimations and review the solutions to address them. Then, we integrate the latest developments in the empirical gravity literature and offer six recommendations for obtaining reliable partial equilibrium estimates of the effects of bilateral and non-discriminatory trade policies within the same comprehensive and theoretically consistent econometric specification. Our recommendations apply equally to analyses with aggregate and disaggregated data. Interpretation, consistent aggregation methods, and data challenges and sources for gravity estimations are discussed as well. Empirical exercises demonstrate the usefulness, validity, and applicability of our methods.
    Keywords: Structural Gravity,Trade Policy,Estimation,Partial Equilibrium Analysis
    JEL: F13 F14 F16
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:zbw:wtowps:ersd201610&r=ecm
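    The workhorse behind several of the recommendations in this literature is PPML with exporter-time and importer-time fixed effects. A minimal sketch with statsmodels; the column names (trade, ln_dist, contig, rta, exp_year, imp_year) are placeholders for a typical bilateral panel, not variables from the paper:
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      def gravity_ppml(df):
          # PPML: Poisson pseudo-maximum-likelihood, robust to heteroskedasticity
          # and able to handle zero trade flows; the C(...) terms absorb the
          # multilateral resistances via exporter-time and importer-time dummies.
          model = smf.glm('trade ~ ln_dist + contig + rta'
                          ' + C(exp_year) + C(imp_year)',
                          data=df, family=sm.families.Poisson())
          return model.fit(cov_type='HC1')          # robust standard errors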
  7. By: Barbara Rossi; Tatevik Sekhposyan; Matthiew Soupre
    Abstract: We propose a decomposition to distinguish between Knightian uncertainty (ambiguity) and risk, where the first measures the uncertainty about the probability distribution generating the data, while the second measures uncertainty about the odds of the outcomes when the probability distribution is known. We use the Survey of Professional Forecasters (SPF) density forecasts to quantify overall uncertainty as well as the evolution of the different components of uncertainty over time, and investigate their importance for macroeconomic fluctuations. We also study the behavior and evolution of the various components of our decomposition in a model that features ambiguity and risk.
    Keywords: uncertainty, risk, ambiguity, knightian uncertainty, survey of professional forecasters, predictive densities
    JEL: C22 C52 C53
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:920&r=ecm
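    The paper's decomposition is its own; as a first-pass illustration of how survey densities can be split into components, the familiar mixture-variance identity separates average individual (within-density) variance from disagreement across forecasters. This is only a commonly used proxy, not the authors' ambiguity/risk decomposition:
      import numpy as np

      def uncertainty_split(means, variances):
          # means, variances: per-forecaster predictive means and variances
          risk = np.mean(variances)                 # average within-density variance
          disagreement = np.var(means)              # spread of point predictions
          return risk + disagreement, risk, disagreement  # total = risk + disagreement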
  8. By: Tsagris, Michail; Preston, Simon; T.A. Wood, Andrew
    Abstract: In the context of data that lie on the simplex, we investigate the use of empirical and exponential empirical likelihood, and of Hotelling and James statistics, to test the null hypothesis of equal population means based on two independent samples. We perform an extensive numerical study using data simulated from various distributions on the simplex. The results, taken together with practical considerations regarding implementation, support the use of the bootstrap-calibrated James statistic.
    Keywords: Compositional data, hypothesis testing, Hotelling test, James test, non parametric, empirical likelihood, bootstrap
    JEL: C1
    Date: 2016–07–19
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:72771&r=ecm
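    The recommended procedure is straightforward to implement. A sketch of the bootstrap-calibrated James test for two mean vectors with unequal covariances, in its generic Euclidean form (the paper applies it to simplex data, possibly after transformation):
      import numpy as np

      def james_stat(x, y):                         # James (1954) statistic
          d = x.mean(0) - y.mean(0)
          V = np.cov(x.T) / len(x) + np.cov(y.T) / len(y)
          return d @ np.linalg.solve(V, d)

      def james_test_boot(x, y, B=999, seed=0):
          rng = np.random.default_rng(seed)
          t_obs = james_stat(x, y)
          mu = np.vstack([x, y]).mean(0)            # pooled mean
          xc, yc = x - x.mean(0) + mu, y - y.mean(0) + mu  # impose H0
          count = 0
          for _ in range(B):
              xb = xc[rng.integers(0, len(xc), len(xc))]
              yb = yc[rng.integers(0, len(yc), len(yc))]
              count += james_stat(xb, yb) >= t_obs
          return t_obs, (count + 1) / (B + 1)       # bootstrap p-value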
  9. By: Giovanni Barone-Adesi (University of Lugano - Swiss Finance Institute); Chiara Legnazzi (University of Lugano - Swiss Finance Institute); Antonietta Mira (Swiss Finance Institute, University of Lugano)
    Abstract: The focus of this article is the Pricing Kernel (PK), the building block of asset pricing theory. In the classical framework the PK is monotonically decreasing in the stock price; empirical evidence, however, suggests that the PK is locally increasing in the interval around the 0% return and behaves irregularly in the tails of the distribution. We argue that these deviations, known as the pricing kernel puzzle, derive from a mismatch between the physical and the risk-neutral (RN) measures and can be corrected by embedding some forward-looking information into the physical measure. Our proposed methodology combines the information from historical returns with that coming from the RN measure through a non-parametric Poisson-Dirichlet process. As a result, the irregular behaviour of the PK in the tails of the distribution is reduced and monotonicity is ensured in almost all cases.
    Keywords: Pricing Kernel, pricing kernel puzzle, Poisson-Dirichlet Process
    JEL: G13 G19
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp1614&r=ecm
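    As background, an empirical pricing kernel is typically estimated, up to the discount factor, as the ratio of a risk-neutral density to a physical density. The sketch below is that generic construction with kernel density estimates, not the authors' Bayesian Poisson-Dirichlet blending of the two measures:
      import numpy as np
      from scipy.stats import gaussian_kde

      def empirical_pricing_kernel(rn_sample, hist_returns, grid):
          q = gaussian_kde(rn_sample)(grid)         # RN density, e.g. option-implied
          p = gaussian_kde(hist_returns)(grid)      # physical density from history
          return q / p                              # PK up to the discount factor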
  10. By: Mark Rosenzweig (Economic Growth Center,Yale University); Christopher Udry (Economic Growth Center, Yale University)
    Abstract: We examine the generalizability of internally valid estimates of causal effects in a fixed population over time when that population is subject to aggregate shocks. This temporal external validity is shown to depend upon the distribution of the aggregate shocks and the interaction between these shocks and the causal effects. We show that returns to investment in agriculture, small and medium enterprises and human capital differ significantly from year to year. We also show how returns to investments interact with specific aggregate shocks, and estimate the parameters of the distributions of these shocks. We show how to use these estimates to appropriately widen estimated confidence intervals to account for aggregate shocks.
    Keywords: returns to investment, heterogeneity, treatment effect
    JEL: C93 O1 O13 O14 O15
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:egc:wpaper:1054&r=ecm
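    A stylized version of the final step, under the simplifying assumption that the causal effect shifts additively with an i.i.d. year shock: the externally valid interval adds the between-year variance of the effect to the usual single-year sampling variance. This illustrates the logic only, not the paper's estimator:
      import numpy as np

      def widened_ci(beta_hat, se_within, yearly_effects, z=1.96):
          var_between = np.var(yearly_effects, ddof=1)  # effect variation across years
          se_total = np.sqrt(se_within ** 2 + var_between)
          return beta_hat - z * se_total, beta_hat + z * se_total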
  11. By: Giulia Bettin (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali, MoFiR); Riccardo Lucchetti (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali); Claudia Pigini (Università Politecnica delle Marche, Dipartimento di Scienze Economiche e Sociali)
    Abstract: The empirical modelling of remitting behaviour has been the object of a considerable amount of micro-level literature. The increasing availability of panel datasets makes it possible to explore the persistence in transfer decisions as a result of intertemporal choices that may be consistent with several motivations to remit. Building a dynamic model with panel data poses the additional problem of dealing properly with permanent unobserved heterogeneity; moreover, the specific censored nature of international transfers has to be accounted for as well. In this paper, we propose a dynamic, random-effects double hurdle model for remittances: we combine the maximum likelihood estimator of the traditional double hurdle model for cross-section data (Jones, 1989) with the approach put forward by Heckman (1981b) for dealing with state dependence and unobserved heterogeneity in a non-linear setting. Our empirical evidence based on the German SOEP dataset suggests that there is significant state dependence in remitting behaviour consistent with migrants' intertemporal allocation of savings; at the same time, transaction costs are likely to affect the steadiness of transfers over time.
    Keywords: Migration, Remittances, State dependence, Double hurdle, Intertemporal choices
    JEL: F22 F24 C23 C34 C35
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:anc:wmofir:127&r=ecm
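    The cross-section building block (Cragg 1971 / Jones 1989) is compact enough to state in code; the paper's contribution adds state dependence and random effects on top, which the sketch below omits. Assuming independence of the two hurdles, the log-likelihood is:
      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      def double_hurdle_negloglik(theta, y, X, Z):
          kx, kz = X.shape[1], Z.shape[1]
          beta, gamma = theta[:kx], theta[kx:kx + kz]
          s = np.exp(theta[-1])                     # scale, kept positive
          p1 = norm.cdf(Z @ gamma)                  # hurdle 1: decision to remit
          zero = y == 0
          # P(y = 0) = 1 - Phi(Z'gamma) * Phi(X'beta / s)
          ll = np.log(1 - p1[zero] * norm.cdf(X[zero] @ beta / s)).sum()
          # density for y > 0: Phi(Z'gamma) * phi((y - X'beta) / s) / s
          ll += (np.log(p1[~zero])
                 + norm.logpdf((y[~zero] - X[~zero] @ beta) / s)
                 - np.log(s)).sum()
          return -ll
      # e.g.: minimize(double_hurdle_negloglik, x0, args=(y, X, Z), method='BFGS')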
  12. By: Smith, Jeffrey A. (University of Michigan); Sweetman, Arthur (McMaster University)
    Abstract: Estimation, inference and interpretation of the causal effects of programs and policies have all advanced dramatically over the past 25 years. We highlight three particularly important intellectual trends: an improved appreciation of the substantive importance of heterogeneous responses and of their methodological implications, a stronger focus on internal validity brought about by the "credibility revolution," and the scientific value that follows from grounding estimation and interpretation in economic theory. We discuss a menu of commonly employed partial equilibrium approaches to the identification of causal effects, emphasizing that the researcher's central intellectual contribution always consists of making an explicit case for a specific causal interpretation given the relevant economic theory, the data, the institutional context and the economic question of interest. We also touch on the importance of general equilibrium effects and full cost-benefit analyses.
    Keywords: causal effects, heterogeneous treatment effects, partial equilibrium identification
    JEL: C18 C21 C26 C50 C90
    Date: 2016–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp10108&r=ecm
  13. By: Lagani, Vincenzo; Athineou, Giorgos; Farcomeni, Alessio; Tsagris, Michail; Tsamardinos, Ioannis
    Abstract: The statistically equivalent signature (SES) algorithm is a method for feature selection inspired by the principles of constraint-based learning of Bayesian networks. Most currently available feature-selection methods return only a single subset of features, supposedly the one with the highest predictive power. We argue that in several domains multiple subsets can achieve close to maximal predictive accuracy, and that arbitrarily providing only one has several drawbacks. The SES method attempts to identify multiple predictive feature subsets whose performances are statistically equivalent. In this respect, SES subsumes and extends previous feature-selection algorithms, like the max-min parents and children algorithm. SES is implemented in a homonymous function included in the R package MXM, standing for mens ex machina, meaning 'mind from the machine' in Latin. The MXM implementation of SES handles several data-analysis tasks, namely classification, regression and survival analysis. In this paper we present the SES algorithm and its implementation, and provide examples of use of the SES function in R. Furthermore, we analyze three publicly available data sets to illustrate the equivalence of the signatures retrieved by SES and to contrast SES against the state-of-the-art feature-selection method LASSO. Our results provide initial evidence that the two methods perform comparably well in terms of predictive accuracy and that multiple, equally predictive signatures are actually present in real-world data.
    Keywords: feature selection, constraint-based algorithms, multiple predictive signatures
    JEL: C88
    Date: 2016
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:72772&r=ecm
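    To convey what "statistically equivalent subsets" means in practice, here is a conceptual sketch only: keep every small subset whose cross-validated performance is statistically indistinguishable from the best one, instead of reporting a single winner. This brute-force illustration is not the constraint-based SES algorithm implemented in MXM:
      from itertools import combinations
      from scipy.stats import ttest_rel
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      def equivalent_subsets(X, y, subset_size=2, alpha=0.05):
          scores = {
              cols: cross_val_score(LogisticRegression(max_iter=1000),
                                    X[:, list(cols)], y, cv=5)
              for cols in combinations(range(X.shape[1]), subset_size)
          }
          best = max(scores, key=lambda c: scores[c].mean())
          return [c for c in scores                 # all subsets tied with the best
                  if c == best
                  or ttest_rel(scores[best], scores[c]).pvalue > alpha]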

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.