nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒11‒18
twenty papers chosen by
Sune Karlsson
Örebro universitet

  1. Testing for the Sandwich-Form Covariance Matrix Applied to Quasi-Maximum Likelihood Estimation Using Economic and Energy Price Growth Rates By Lijuan Huo; Jin Seo Cho
  2. Parametric Inference on the Mean of Functional Data Applied to Lifetime Income Curves By Jin Seo Cho; Peter C. B. Phillips; Ju Won Seo
  3. Estimation and Inference of Fractional Continuous-Time Model with Discrete-Sampled Data By Wang, Xiaohu; Xiao, Weilin; Yu, Jun
  4. On (bootstrapped) cointegration tests in partial systems By Sven Schreiber
  5. How Serious is the Measurement-Error Problem in a Popular Risk-Aversion Task? By Fabien Perez; Guillaume Hollard; Radu Vranceanu; Delphine Dubart
  6. Quadratic shrinkage for large covariance matrices By Olivier Ledoit; Michael Wolf
  7. Group Average Treatment Effects for Observational Studies By Daniel Jacob; Wolfgang Karl H\"ardle; Stefan Lessmann
  8. Exponential-type GARCH models with linear-in-variance risk premium By HAFNER Christian,; KYRIAKOPOULOU Dimitra,
  9. Improving portfolios global performance using a cleaned and robust covariance matrix estimate By Emmanuelle Jay; Thibault Soler; Eugénie Terreaux; Jean-Philippe Ovarlez; Frédéric Pascal; Philippe De Peretti; Christophe Chorro
  10. A Bayesian Approach to Account for Misclassification in Prevalence and Trend Estimation By van Hasselt, Martijn; Bollinger, Christopher; Bray, Jeremy
  11. Novel approaches to coherency conditions in dynamic LDV models: quantifying financing constraints and a firm's decision and ability to innovate By Hajivassiliou, Vassilis; Savignac, Frédérique
  12. PENALIZED MAXIMUM LIKELIHOOD ESTIMATION OF LOGIT-BASED EARLY WARNING SYSTEMS By Claudia Pigini
  13. Nowcasting GDP with a large factor model space By Eraslan, Sercan; Schröder, Maximilian
  14. Identification of Sign-Dependency of Impulse Responses By Nadav Ben Zeev
  15. Too LATE for Natural Experiments: A Critique of Local Average Treatment Effects Using the Example of Angrist and Evans (1998) By Öberg, Stefan
  16. Testing Correlation in Error-Component Models By Jochmans, K.
  17. Checking if the straitjacket fits By Adrian Pagan; Michael Wickens
  18. Causal Mediation Analysis in Economics: objectives, assumptions, models By Viviana Celli
  19. High-order coverage of smoothed Bayesian bootstrap intervals for population quantiles By David M. Kaplan; Lonnie Hofmann
  20. Instrumental-Variable Estimation of Gravity Equations By Jochmans, K.; Verardi, V.

  1. By: Lijuan Huo (Beijing Institute of Technology); Jin Seo Cho (Yonsei Univ)
    Abstract: This study develops direct tests for the sandwich-form asymptotic covariance matrix that conditional heteroskedasticity and autocorrelation in the regression error entail. If the errors are neither conditionally heteroskedastic nor autocorrelated, the asymptotic covariance matrix of the least squares estimator does not take the sandwich form, and there is no need to estimate it with the heteroskedasticity-consistent (HC) or heteroskedasticity and autocorrelation-consistent (HAC) covariance matrix estimator. We therefore test for the sandwich form before applying the HC or HAC estimator. To this end, we adapt the testing methodologies of Cho and White (2015) and Cho and Phillips (2018) to the present context, extending the scope of their maximum test statistic for greater power and establishing a sequential procedure for detecting the influence of heteroskedastic and autocorrelated regression errors on the asymptotic covariance matrix. Simulations support the theory, and we apply the test statistics to economic and energy price growth rate data for illustration.
    Keywords: Information matrix equality; sandwich-form covariance matrix; heteroskedasticity-consistent covariance matrix estimator; heteroskedasticity and autocorrelation-consistent covariance matrix estimator; economic growth rate; energy price growth rate.
    JEL: C12 C22 O47 G17 Q47
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2019rwp-152&r=all
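    A minimal sketch, not the authors' test: it only illustrates the premise above that under homoskedastic, serially uncorrelated errors the classical OLS covariance and the HC0 sandwich estimator agree, while under heteroskedasticity they diverge. All names and parameter values are illustrative.
      # Compare classical and sandwich (HC0) covariance estimates for OLS.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 5000
      x = rng.normal(size=(n, 2))
      X = np.column_stack([np.ones(n), x])
      for label, scale in [("homoskedastic", np.ones(n)),
                           ("heteroskedastic", np.exp(x[:, 0]))]:
          y = X @ np.array([1.0, 0.5, -0.3]) + scale * rng.normal(size=n)
          beta = np.linalg.lstsq(X, y, rcond=None)[0]
          u = y - X @ beta
          XtX_inv = np.linalg.inv(X.T @ X)
          V_classical = (u @ u) / (n - X.shape[1]) * XtX_inv   # s^2 (X'X)^-1
          meat = (X * (u ** 2)[:, None]).T @ X                 # X' diag(u^2) X
          V_sandwich = XtX_inv @ meat @ XtX_inv                # HC0 sandwich
          print(label, np.round(np.diag(V_classical) / np.diag(V_sandwich), 2))
    The variance ratios are close to one in the first case and far from it in the second; this is the feature the proposed test statistics formalize.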
  2. By: Jin Seo Cho (Yonsei Univ); Peter C. B. Phillips (Yale Univ); Ju Won Seo (National Univ of Singapore)
    Abstract: We propose a framework for estimation of the conditional mean function in a parametric model with function space covariates. The approach employs a functional mean squared error objective criterion and allows for possible model misspecification. Under regularity conditions, consistency and asymptotic normality are established. The analysis extends to situations where the asymptotic properties are influenced by estimation errors arising from the presence of nuisance parameters. Wald, Lagrange multiplier, and quasi-likelihood ratio statistics are studied and asymptotic theory is provided. These procedures enable inference about curve shapes in the observed functional data. Several model specifications where our results are useful are analyzed, including random coefficient models, distributional mixtures, and copula mixture models. Simulations exploring the finite sample properties of our methods are provided. An empirical application conducts lifetime income path comparisons across different demographic groups according to years of work experience. Gender and education levels produce differences in mean income paths corroborating earlier research. However, the mean income paths are found to be proportional so that, upon rescaling, the paths match over gender and across education levels.
    Keywords: Functional data; Mean function; Wald test statistic; Lagrange multiplier test statistic; Quasi-likelihood ratio test statistic.
    JEL: C11 C12 C80
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:yon:wpaper:2019rwp-153&r=all
  3. By: Wang, Xiaohu (The Chinese University of Hong Kong); Xiao, Weilin (Zhejiang University); Yu, Jun (School of Economics, Singapore Management University)
    Abstract: This paper proposes a two-stage method for estimating parameters in a parametric fractional continuous-time model based on discrete-sampled observations. In the first stage, the Hurst parameter is estimated based on the ratio of two second-order differences of observations from different time scales. In the second stage, the other parameters are estimated by the method of moments. All estimators have closed-form expressions and are easy to obtain. A large sample theory of the proposed estimators is derived under either the in-fill asymptotic scheme or the double asymptotic scheme. Extensive simulations show that the proposed estimators perform well in finite samples. Two empirical studies are carried out. The first, based on the daily realized volatility of equities from 2011 to 2017, shows that the Hurst parameter is much lower than 0.5, which suggests that the realized volatility is too rough for continuous-time models driven by standard Brownian motion or fractional Brownian motion with Hurst parameter larger than 0.5. The second empirical study is of the daily realized volatility of exchange rates from 1986 to 1999. The estimate of the Hurst parameter is again much lower than 0.5. Moreover, the proposed fractional continuous-time model performs better than the autoregressive fractionally integrated moving average (ARFIMA) model out-of-sample.
    Keywords: Rough Volatility; Hurst Parameter; Second-order Difference; Different Time Scales; Method of Moments; ARFIMA
    JEL: C15 C22 C32
    Date: 2019–09–16
    URL: http://d.repec.org/n?u=RePEc:ris:smuesw:2019_017&r=all
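    A toy version of the first-stage idea, under the assumption that E[(second-order difference at scale d)^2] scales as d^(2H); the paper's exact construction may differ. Standard Brownian motion (H = 0.5) is used as a check because it is easy to simulate.
      # Change-of-frequency estimate of the Hurst parameter from second differences.
      import numpy as np

      def hurst_second_diff(x):
          d1 = x[2:] - 2 * x[1:-1] + x[:-2]        # second differences, scale 1
          d2 = x[4:] - 2 * x[2:-2] + x[:-4]        # second differences, scale 2
          return 0.5 * np.log2(np.mean(d2 ** 2) / np.mean(d1 ** 2))

      rng = np.random.default_rng(1)
      bm = np.cumsum(rng.normal(size=100_000))     # Brownian motion has H = 0.5
      print(hurst_second_diff(bm))                 # approximately 0.5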
  4. By: Sven Schreiber
    Abstract: As applied cointegration analysis faces the challenge that (a) potentially relevant variables are unobservable and (b) it is uncertain which covariates are relevant, partial systems are often used and potential (stationary) covariates are ignored. Recently it has been argued that a nominally significant cointegration outcome using the bootstrapped rank test (Cavaliere, Rahbek, and Taylor, 2012) in a bivariate setting might be due to test size distortions when a larger data-generating process (DGP) with covariates is assumed. This study reviews the issue systematically and generally finds noticeable but only mild size distortions, even when the specified DGP includes a large borderline stationary root. The previously found drastic test size problems in an application of a long-run Phillips curve (inflation and unemployment in the euro area) appear to hinge on the particular construction of a time series for the output gap as a covariate. We conclude that the problems of the bootstrapped rank test are not severe and that it is still to be recommended for applied research.
    Keywords: bootstrap, cointegration rank test, empirical size
    JEL: C32 C15 E31
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:imk:wpaper:199-2019&r=all
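    A heavily simplified i.i.d. residual bootstrap of the Johansen trace test under the null of no cointegration, in the spirit of (but far short of) the Cavaliere-Rahbek-Taylor algorithm; lag length and deterministic terms are fixed for brevity, and all data are simulated.
      import numpy as np
      from statsmodels.tsa.vector_ar.vecm import coint_johansen

      def trace_stat(y):
          # trace statistic for H0: rank = 0 (constant, one lagged difference)
          return coint_johansen(y, det_order=0, k_ar_diff=1).lr1[0]

      def bootstrap_rank_test(y, n_boot=199, seed=0):
          rng = np.random.default_rng(seed)
          stat = trace_stat(y)
          dy = np.diff(y, axis=0)
          X, Y = dy[:-1], dy[1:]
          G = np.linalg.lstsq(X, Y, rcond=None)[0]  # VAR(1) in differences = H0 model
          resid = Y - X @ G
          exceed = 0
          for _ in range(n_boot):
              e = resid[rng.integers(0, len(resid), size=len(resid))]
              dy_b = np.zeros_like(dy)
              for t in range(1, len(dy)):
                  dy_b[t] = dy_b[t - 1] @ G + e[t]  # rebuild differences under H0
              y_b = np.vstack([y[:1], y[:1] + np.cumsum(dy_b, axis=0)])
              exceed += trace_stat(y_b) >= stat
          return stat, (1 + exceed) / (1 + n_boot)

      rng = np.random.default_rng(2)
      y = np.cumsum(rng.normal(size=(300, 2)), axis=0)  # two independent random walks
      print(bootstrap_rank_test(y))                     # p-value should be large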
  5. By: Fabien Perez (ENSAE - Ecole Nationale de la Statistique et de l'Analyse Economique); Guillaume Hollard (CES - Centre d'économie de la Sorbonne - CNRS - Centre National de la Recherche Scientifique - UP1 - Université Panthéon-Sorbonne); Radu Vranceanu (THEMA - Théorie économique, modélisation et applications - UCP - Université de Cergy Pontoise - Université Paris-Seine - CNRS - Centre National de la Recherche Scientifique); Delphine Dubart (ESSEC Business School)
    Abstract: This paper uses the test/retest data from the Holt and Laury (2002) experiment to provide estimates of the measurement error in this popular risk-aversion task. Maximum likelihood estimation suggests that the variance of the measurement error is approximately equal to the variance of the number of safe choices. Simulations confirm that the coefficient on the risk measure in univariate OLS regressions is approximately half of its true value. Unlike measurement error, the discrete transformation of continuous risk aversion is not a major issue. We discuss the merits of a number of different solutions: increasing the number of observations, instrumental variables (IV), and the ORIV method developed by Gillen et al. (2019).
    Keywords: ORIV,Experiments,Measurement error,Risk-aversion,Test/retest
    Date: 2019–09–17
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02291224&r=all
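    A simulation of the two headline results above, with all parameter values hypothetical: when the error variance equals the signal variance, the univariate OLS slope is attenuated by about one half, and instrumenting one noisy measure with a second, independently mismeasured one (the idea behind ORIV) restores consistency.
      import numpy as np

      rng = np.random.default_rng(3)
      n, beta = 100_000, 1.0
      r = rng.normal(size=n)            # true risk aversion
      x1 = r + rng.normal(size=n)       # test measurement (equal noise variance)
      x2 = r + rng.normal(size=n)       # retest measurement
      y = beta * r + rng.normal(size=n)

      ols = (x1 @ y) / (x1 @ x1)        # attenuated: about beta / 2
      iv = (x2 @ y) / (x2 @ x1)         # x2 instruments x1: about beta
      print(round(ols, 3), round(iv, 3))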
  6. By: Olivier Ledoit; Michael Wolf
    Abstract: This paper constructs a new estimator for large covariance matrices by drawing a bridge between the classic Stein (1975) estimator in finite samples and recent progress under large-dimensional asymptotics. Our formula is quadratic: it has two shrinkage targets weighted by quadratic functions of the concentration ratio (matrix dimension divided by sample size, a standard measure of the curse of dimensionality). The first target dominates mid-level concentrations and the second one higher levels. This extra degree of freedom enables us to outperform linear shrinkage when optimal shrinkage is not linear (which is the general case). Both of our targets are based on what we term the “Stein shrinker”, a local attraction operator that pulls sample covariance matrix eigenvalues towards their nearest neighbors, but whose force diminishes with distance, like gravitation. We prove that no cubic or higher-order nonlinearities beat quadratic with respect to Frobenius loss under large-dimensional asymptotics. Non-normality and the case where the matrix dimension exceeds the sample size are accommodated. Monte Carlo simulations confirm state-of-the-art performance in terms of accuracy, speed, and scalability.
    Keywords: Inverse shrinkage, Hilbert transform, large-dimensional asymptotics, signal amplitude, Stein shrinkage
    JEL: C13
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:zur:econwp:335&r=all
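    The quadratic estimator itself is not in standard libraries, but scikit-learn ships the authors' earlier linear shrinkage estimator (Ledoit and Wolf, 2004), which the quadratic formula generalizes; the sketch below shows that baseline and the concentration ratio p/n on which the quadratic weights depend. The population covariance here is an arbitrary diagonal choice.
      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(4)
      n, p = 200, 100                                # concentration ratio p/n = 0.5
      X = rng.normal(size=(n, p)) * np.sqrt(np.linspace(0.5, 2.0, p))
      lw = LedoitWolf().fit(X)
      print("linear shrinkage weight:", round(lw.shrinkage_, 3))
      print("condition number, sample vs shrunk:",
            round(np.linalg.cond(np.cov(X, rowvar=False)), 1),
            round(np.linalg.cond(lw.covariance_), 1))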
  7. By: Daniel Jacob; Wolfgang Karl H\"ardle; Stefan Lessmann
    Abstract: The paper proposes an estimator for making inference on key features of heterogeneous treatment effects sorted by impact groups (GATES) in non-randomised experiments. Observational studies are standard in policy evaluation, from labour markets and educational surveys to other empirical settings. To control for potential selection bias we implement a doubly robust estimator in the first stage. Any machine learning method can be used to learn the conditional mean functions and the propensity score, and we also use machine learning to learn a function for the conditional average treatment effect. The group average treatment effect is then estimated via a parametric linear model to provide p-values and confidence intervals; the result is a best linear predictor of effect heterogeneity based on impact groups. Cross-splitting and averaging over each observation further avoid the biases introduced by sample splitting. The advantage of the proposed method is a robust estimation of heterogeneous group treatment effects under mild assumptions, while retaining flexibility in the choice of machine learning methods and delivering interpretable results.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1911.02688&r=all
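    A schematic version of the pipeline described above, with hypothetical data and plain random forests as the machine learning stage; cross-splitting is omitted for brevity, so this is a simplification of what the paper's procedure does.
      import numpy as np
      import statsmodels.api as sm
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

      rng = np.random.default_rng(5)
      n = 4000
      X = rng.normal(size=(n, 5))
      e = 1 / (1 + np.exp(-X[:, 0]))                 # true propensity score
      D = rng.binomial(1, e)                         # non-random treatment assignment
      tau = 1.0 + (X[:, 1] > 0)                      # group effects: 1 and 2
      y = X[:, 0] + tau * D + rng.normal(size=n)

      ehat = np.clip(RandomForestClassifier(min_samples_leaf=50, random_state=0)
                     .fit(X, D).predict_proba(X)[:, 1], 0.02, 0.98)
      m1 = RandomForestRegressor(min_samples_leaf=50, random_state=0).fit(X[D == 1], y[D == 1]).predict(X)
      m0 = RandomForestRegressor(min_samples_leaf=50, random_state=0).fit(X[D == 0], y[D == 0]).predict(X)

      # doubly robust (AIPW) score: a noisy but unbiased signal of the individual effect
      score = m1 - m0 + D * (y - m1) / ehat - (1 - D) * (y - m0) / (1 - ehat)
      groups = sm.add_constant((X[:, 1] > 0).astype(float))    # impact-group dummy
      print(sm.OLS(score, groups).fit(cov_type="HC1").params)  # approx. [1, 1]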
  8. By: HAFNER Christian, (Université catholique de Louvain); KYRIAKOPOULOU Dimitra, (Université catholique de Louvain, CORE, Belgium)
    Abstract: One of the implications of the intertemporal capital asset pricing model (CAPM) is that the risk premium of the market portfolio is a linear function of its variance. Yet, estimation theory of classical GARCH-in-mean models with linear-in-variance risk premium requires strong assumptions and is incomplete. We show that exponential-type GARCH models such as EGARCH or Log-GARCH are more natural in dealing with linear-in-variance risk premia. For the popular and more difficult case of EGARCH-in-mean, we derive conditions for the existence of a unique stationary and ergodic solution and invertibility following a stochastic recurrence equation approach. We then show consistency and asymptotic normality of the quasi maximum likelihood estimator under weak moment assumptions. An empirical application estimates the dynamic risk premia of a variety of stock indices using both EGARCH-M and Log-GARCH-M models.
    Keywords: GARCH-in-Mean, EGARCH, Log-GARCH, CAPM, risk premium, maximum likelihood, stochastic recurrence equation
    JEL: C71 C78
    Date: 2019–07–10
    URL: http://d.repec.org/n?u=RePEc:cor:louvco:2019013&r=all
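    A simulation sketch of an EGARCH(1,1)-in-mean process with the linear-in-variance risk premium discussed above, r_t = lambda * sigma_t^2 + sigma_t * eps_t; the parameter values are illustrative, not estimates from the paper.
      import numpy as np

      rng = np.random.default_rng(6)
      T, lam = 2000, 2.0
      omega, alpha, gamma, beta = -0.1, 0.1, -0.05, 0.95
      kappa = np.sqrt(2 / np.pi)                 # E|eps| for standard normal eps
      logv = omega / (1 - beta)                  # start at the unconditional level
      eps = 0.0
      r, v = np.empty(T), np.empty(T)
      for t in range(T):
          if t > 0:                              # EGARCH(1,1) log-variance recursion
              logv = omega + beta * logv + alpha * (abs(eps) - kappa) + gamma * eps
          v[t] = np.exp(logv)
          eps = rng.normal()
          r[t] = lam * v[t] + np.sqrt(v[t]) * eps    # linear-in-variance premium
      print("mean return:", r.mean(), "lam * mean variance:", lam * v.mean())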
  9. By: Emmanuelle Jay (Fidéas Capital, Quanted & Europlace Institute of Finance); Thibault Soler (Fidéas Capital and Centre d'Economie de la Sorbonne); Eugénie Terreaux (DEMR, ONERA - Université Paris-Saclay); Jean-Philippe Ovarlez (DEMR, ONERA - Université Paris-Saclay); Frédéric Pascal (L2S, CentraleSupélec - Université Paris-Saclay); Philippe De Peretti (Centre d'Economie de la Sorbonne - Université Paris 1 Panthéon-Sorbonne; https://centredeconomiesorbonne.univ-paris1.fr); Christophe Chorro (Centre d'Economie de la Sorbonne - Université Paris 1 Panthéon-Sorbonne; https://centredeconomiesorbonne.univ-paris1.fr)
    Abstract: This paper presents how the most recent improvements in covariance matrix estimation and model order selection can be applied to the portfolio optimization problem. The particular case of the Maximum Variety Portfolio is treated, but the same improvements apply equally to other optimization problems such as the Minimum Variance Portfolio. We assume that the most important information (the latent factors) is embedded in correlated Elliptical Symmetric noise, extending classical Gaussian assumptions. We focus on a recent method of model order selection that allows efficient estimation of the subspace of main factors describing the market. This non-standard model order selection problem is solved through Random Matrix Theory and robust covariance matrix estimation. Moreover, we extend the method to non-homogeneous asset returns. The proposed procedure is explained on synthetic data and then applied to real market data, where it is compared with standard techniques and shows promising improvements.
    Keywords: Robust Covariance Matrix Estimation; Model Order Selection; Random Matrix Theory; Portfolio Optimization; Financial Time Series; Multi-Factor Model; Elliptical Symmetric Noise; Maximum Variety Portfolio
    JEL: C5 G11
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:mse:cesdoc:19022&r=all
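    Not the authors' procedure, but a textbook Random Matrix Theory "clipping" cleaner that conveys the flavor: sample eigenvalues below the Marchenko-Pastur upper edge (1 + sqrt(p/n))^2 are treated as noise and averaged, while those above are kept as signal. The noise-variance proxy via the median eigenvalue is a crude assumption.
      import numpy as np

      def clip_covariance(X):
          n, p = X.shape
          lam, V = np.linalg.eigh(np.cov(X, rowvar=False))
          edge = (1 + np.sqrt(p / n)) ** 2 * np.median(lam)  # MP edge, rough scaling
          lam_clean = lam.copy()
          lam_clean[lam < edge] = lam[lam < edge].mean()     # flatten the noise bulk
          return V @ np.diag(lam_clean) @ V.T

      rng = np.random.default_rng(7)
      n, p = 400, 100
      X = rng.normal(size=(n, 3)) @ rng.normal(size=(3, p)) + rng.normal(size=(n, p))
      print("condition number, sample vs cleaned:",
            round(np.linalg.cond(np.cov(X, rowvar=False)), 1),
            round(np.linalg.cond(clip_covariance(X)), 1))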
  10. By: van Hasselt, Martijn (University of North Carolina at Greensboro, Department of Economics); Bollinger, Christopher (University of Kentucky); Bray, Jeremy (University of North Carolina at Greensboro, Department of Economics)
    Abstract: In this paper we present a Bayesian approach to estimate the mean of a binary variable and changes in the mean over time, when the variable is subject to misclassification error. These parameters are partially identified and we derive identified sets under various assumptions about the misclassification rates. We apply our method to estimating the prevalence and trend of prescription opioid misuse, using data from the 2002-2014 National Survey on Drug Use and Health. Using a range of priors, the posterior distribution provides evidence that the prevalence of opioid misuse increases multiple times between 2002 and 2012.
    Keywords: Misclassification; partial identification; Bayesian estimation
    JEL: C11 C13 C15 I12
    Date: 2019–10–24
    URL: http://d.repec.org/n?u=RePEc:ris:uncgec:2019_013&r=all
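    The identification logic can be made concrete: with false-positive rate a0 and false-negative rate a1, the observed rate satisfies p = pi * (1 - a1) + (1 - pi) * a0, so pi = (p - a0) / (1 - a0 - a1). Sweeping assumed error rates traces out an identified set; the paper instead places priors over these rates. The numbers below are hypothetical, not taken from the survey data.
      import numpy as np

      def true_prevalence(p_obs, a0, a1):
          # invert p_obs = pi * (1 - a1) + (1 - pi) * a0
          return (p_obs - a0) / (1 - a0 - a1)

      p_obs = 0.05                             # observed misuse rate (illustrative)
      grid = np.linspace(0.0, 0.02, 5)         # assumed range of error rates
      vals = [true_prevalence(p_obs, a0, a1) for a0 in grid for a1 in grid]
      print("identified set, roughly:", round(min(vals), 4), "to", round(max(vals), 4))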
  11. By: Hajivassiliou, Vassilis; Savignac, Frédérique
    Abstract: We develop novel methods for establishing coherency conditions in static and dynamic Limited Dependent Variables (LDV) models. We propose estimation strategies based on Conditional Maximum Likelihood Estimation for simultaneous LDV models without imposing recursivity. Monte Carlo experiments confirm substantive mean-squared-error improvements of our approach over other estimators. We analyse the impact of financing constraints on innovation: ceteris paribus, a firm facing binding finance constraints is substantially less likely to undertake innovation, while the probability that a firm encounters a binding finance constraint more than doubles if the firm is innovative. A strong role for state dependence in dynamic versions of our models is also established.
    JEL: C51 C52 C15
    Date: 2019–10
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:102544&r=all
  12. By: Claudia Pigini (Dipartimento di Scienze Economiche e Sociali - Universita' Politecnica delle Marche)
    Abstract: Panel logit models have proved to be simple and effective tools for building Early Warning Systems (EWS) for financial crises. But because crises are rare events, EWS are usually estimated without country fixed effects, so as to avoid losing all the information on countries that never face a crisis. I propose using a penalized maximum likelihood estimator for fixed-effects logit-based EWS in which all the observations are retained. I show that including country effects, while preserving the entire sample, greatly improves the predictive power of EWS relative to the pooled, random-effects and standard fixed-effects models.
    Keywords: Banking Crisis, Bias Reduction, Fixed-Effects Logit, Separated Data
    JEL: C23 C25 G17 G21
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:anc:wpaper:441&r=all
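    One standard penalized-likelihood estimator of the kind the paper builds on is Firth's (1993) bias-reduced logit, whose adjusted score keeps estimates finite even under the separation that plagues rare-event crisis data. A compact sketch (pooled, without the paper's fixed effects, so simpler than its estimator):
      import numpy as np

      def firth_logit(X, y, n_iter=50, tol=1e-8):
          beta = np.zeros(X.shape[1])
          for _ in range(n_iter):
              p = 1 / (1 + np.exp(-X @ beta))
              W = p * (1 - p)
              XWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
              # hat diagonals of W^(1/2) X (X'WX)^(-1) X' W^(1/2)
              h = W * np.einsum("ij,jk,ik->i", X, XWX_inv, X)
              score = X.T @ (y - p + h * (0.5 - p))   # Firth-adjusted score
              step = XWX_inv @ score                  # Newton update
              beta += step
              if np.max(np.abs(step)) < tol:
                  break
          return beta

      rng = np.random.default_rng(8)
      n = 500
      x = rng.normal(size=n)
      y = (x + rng.logistic(size=n) > 2.5).astype(float)   # low event rate
      X = np.column_stack([np.ones(n), x])
      print("Firth estimates:", firth_logit(X, y).round(3))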
  13. By: Eraslan, Sercan; Schröder, Maximilian
    Abstract: We propose a novel time-varying-parameter mixed-frequency dynamic factor model, integrated into a dynamic model averaging framework for macroeconomic nowcasting. The model can efficiently deal with the nature of the real-time data flow as well as with parameter uncertainty and time-varying volatility. In addition, we develop a fast estimation algorithm, which enables us to generate nowcasts based on a large factor model space. We apply the framework to nowcast German GDP. Our recursive out-of-sample forecast evaluation shows that the framework generates forecasts superior to those obtained from both naive and more competitive benchmark models. These forecast gains emerge especially during unstable periods, such as the Great Recession, but persist in more tranquil periods as well.
    Keywords: dynamic factor model,forecasting,GDP,mixed-frequency,model averaging,time-varying-parameter
    JEL: C11 C32 C51 C52 C53
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:412019&r=all
  14. By: Nadav Ben Zeev (BGU)
    Keywords: Sign-dependency of impulse responses, Local projections, Second-order specification, Dichotomous specification
    JEL: E32
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:bgu:wpaper:1907&r=all
  15. By: Öberg, Stefan (Department of Economic History, School of Business, Economics and Law, Göteborg University)
    Abstract: There has been a fundamental flaw in the conceptual design of many natural experiments used in the economics literature, particularly among studies aiming to estimate a local average treatment effect (LATE). When we use an instrumental variable (IV) to estimate a LATE, the IV only has an indirect effect on the treatment of interest. Such IVs do not work as intended and will produce severely biased and/or uninterpretable results. This comment demonstrates that the LATE does not work as previously thought and explains why, using the natural experiment proposed by Angrist and Evans (1998) as the example.
    Keywords: causal inference; natural experiment; local average treatment effect; complier average causal effect; instrumental variable
    JEL: C21 C90 J13
    Date: 2019–11–05
    URL: http://d.repec.org/n?u=RePEc:hhs:gunhis:0025&r=all
  16. By: Jochmans, K.
    Abstract: This paper concerns linear models for grouped data with group-specific effects. We construct a portmanteau test for the null of no within-group correlation beyond that induced by the group-specific effect. The approach allows for heteroskedasticity and is applicable to models with exogenous, predetermined, or endogenous regressors. The test can be implemented as soon as three observations per group are available and is applicable to unbalanced data. A test with such general applicability is not available elsewhere. We provide theoretical results on size and power under asymptotics where the number of groups grows but their size is held fixed. Extensive power comparisons with other tests available in the literature for special cases of our setup reveal that our test compares favorably. In a simulation study we find that, under heteroskedasticity, only our procedure yields a test that is both size-correct and powerful. In a large data set on mothers with multiple births we find that infant birthweight is correlated across children even after controlling for mother fixed effects and a variety of prenatal care factors. This suggests that such a strategy may be inadequate to take care of all confounding factors that correlate with the mother's decision to engage in activities that are detrimental to the infant's health.
    Keywords: analysis of variance, clustered standard errors, error components, fixed effects, heteroskedasticity, within-group correlation, portmanteau test, short panel data
    JEL: C12 C23
    Date: 2019–09–04
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1993&r=all
  17. By: Adrian Pagan; Michael Wickens
    Abstract: Pesaran and Smith (2011) concluded that DSGE models can be a straitjacket that hampers the ability to match certain features of the data. In this paper we look at how one might assess the fit of these models using a variety of measures, rather than what seems to be an increasingly common device, the marginal data density. We apply these measures to the models of Christiano et al. (2014) and Ireland (2004), finding that both fail to match the data by a large margin. Against this, there is a strong argument for having a straitjacket, as it enforces some desirable behaviour on models and makes researchers think about how to account for any non-stationarity in the data. We illustrate this with examples drawn from the SVAR literature and from more eclectic models such as Holston et al. (2017) for extracting an estimate of the natural real rate.
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2019-81&r=all
  18. By: Viviana Celli (Department of Methods and Models for Economics, Territory and Finance.)
    Abstract: The aim of mediation analysis is to identify and evaluate the mechanisms through which a treatment affects an outcome. The goal is to disentangle the total treatment effect into two components: the indirect effect that operates through one or more intermediate variables, called mediators, and the direct effect that captures the other mechanisms. This paper reviews the methodological advancements in causal mediation literature in economics, in particular focusing on quasi-experimental designs. It defines the parameters of interest under the counterfactual approach, the assumptions and the identification strategies, presenting the Instrumental Variables (IV), Difference-in-Differences (DID) and the Synthetic Control (SC) methods.
    Keywords: mediation, policy evaluation, direct effect, indirect effect, sequential conditional independence, quasi-experimental designs.
    JEL: B41 C18 C21 C52 D04
    Date: 2019–11
    URL: http://d.repec.org/n?u=RePEc:saq:wpaper:12/19&r=all
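    The counterfactual decomposition at the heart of this literature can be stated compactly (standard potential-outcome notation, not taken verbatim from the paper), where Y(d, M(d')) denotes the outcome under treatment d with the mediator fixed at its value under treatment d':
      \[
      \underbrace{E[Y(1,M(1)) - Y(0,M(0))]}_{\text{total effect}}
        = \underbrace{E[Y(1,M(1)) - Y(1,M(0))]}_{\text{indirect effect (via } M)}
        + \underbrace{E[Y(1,M(0)) - Y(0,M(0))]}_{\text{direct effect}}
      \]
    The IV, DID, and SC strategies surveyed in the paper differ in the assumptions under which these counterfactual terms are identified.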
  19. By: David M. Kaplan (Department of Economics, University of Missouri); Lonnie Hofmann (Department of Economics, University of Missouri)
    Abstract: Using fractional order statistics, we characterize the high-order frequentist coverage probability of smoothed and unsmoothed Bayesian bootstrap credible intervals for population quantiles. The original Rubin (1981) unsmoothed intervals have O(n^{-1/2}) coverage error, whereas intervals based on the smoothed Bayesian bootstrap of Banks (1988) have much smaller O(n^{-3/2}[log(n)]^3) coverage error. No smoothing parameter is required. In special cases, the smoothed intervals are exact. Simulations illustrate our results.
    Keywords: continuity correction, credibility, high-order accuracy, smoothing
    JEL: C21
    Date: 2019–11–04
    URL: http://d.repec.org/n?u=RePEc:umc:wpaper:1914&r=all
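    A sketch of the unsmoothed Rubin (1981) interval that serves as the paper's baseline: draw Dirichlet(1, ..., 1) weights over the sample, invert the weighted empirical CDF at the quantile of interest, and take posterior quantiles of the draws. The smoothed Banks (1988) variant would interpolate the weighted CDF rather than step through it.
      import numpy as np

      def bayes_boot_quantile_ci(x, q=0.5, level=0.95, draws=5000, seed=0):
          rng = np.random.default_rng(seed)
          xs = np.sort(x)
          stats = np.empty(draws)
          for b in range(draws):
              w = rng.dirichlet(np.ones(xs.size))        # posterior weights
              stats[b] = xs[np.searchsorted(np.cumsum(w), q)]
          a = (1 - level) / 2
          return np.quantile(stats, [a, 1 - a])

      rng = np.random.default_rng(9)
      x = rng.exponential(size=200)                      # true median = log 2
      print("95% interval for the median:", bayes_boot_quantile_ci(x).round(3))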
  20. By: Jochmans, K.; Verardi, V.
    Abstract: We present an instrumental-variable approach to estimate gravity equations. Our procedure accommodates the potential endogeneity of policy variables and is fully theory-consistent. It is based on the model in levels and accounts for multilateral resistance terms by means of importer and exporter fixed effects. The implementation is limited-information in nature, and so is silent on the form of the mechanism that drives the actual policy decisions. The procedure spawns specification tests for the validity of the instruments used as well as a test for exogeneity. We estimate gravity equations from five cross-sections of bilateral-trade data where the policy decision of interest is the engagement in a free trade agreement. We rely on the interaction of the countries in the pair with third-party trading partners to construct a credible instrumental variable based on the substantial transitivity in the formation of trade agreements that is observed in the data. This instrument is strongly correlated with the policy variable. Our point estimate of the average impact of a free trade agreement increases over the sampling period, starting at 61% and clocking off at a 117% increase in bilateral trade volume. Not correcting for endogeneity yields stable estimates of around 25%.
    Keywords: endogeneity, fixed effects, gravity equation, instrumental variable, multilateral resistance, free trade agreement
    JEL: C26 F14
    Date: 2019–11–11
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1994&r=all
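    The mechanics can be sketched with entirely synthetic data: log trade regressed on an endogenous FTA dummy plus importer and exporter fixed effects, with a stand-in "transitivity" instrument z. Everything below is hypothetical; it only reproduces the qualitative point that OLS and 2SLS diverge when the policy is endogenous.
      import numpy as np

      rng = np.random.default_rng(10)
      C = 30                                         # number of countries
      pairs = [(i, j) for i in range(C) for j in range(C) if i != j]
      imp = np.array([i for i, _ in pairs])
      exp_ = np.array([j for _, j in pairs])
      n = len(pairs)
      z = rng.normal(size=n)                         # stand-in third-party FTA links
      u = rng.normal(size=n)                         # unobserved pair heterogeneity
      fta = (0.8 * z + 0.8 * u + rng.normal(size=n) > 1).astype(float)
      log_trade = 0.5 * fta + u + rng.normal(size=n) # true FTA effect: 0.5

      FE = np.column_stack([np.eye(C)[imp], np.eye(C)[exp_][:, 1:]])  # fixed effects

      def ols(X, y):
          return np.linalg.lstsq(X, y, rcond=None)[0]

      Z1 = np.column_stack([z, FE])
      fta_hat = Z1 @ ols(Z1, fta)                    # first stage: project FTA on z, FE
      b_2sls = ols(np.column_stack([fta_hat, FE]), log_trade)[0]
      b_ols = ols(np.column_stack([fta, FE]), log_trade)[0]
      print("2SLS:", round(b_2sls, 3), " OLS:", round(b_ols, 3))  # ~0.5 vs biased up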

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.