nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒05‒07
twelve papers chosen by
Sune Karlsson
Örebro universitet

  1. Essays on functional coefficient models By Koo, Chao
  2. Interpreting Quantile Independence By Matthew A. Masten; Alexandre Poirier
  3. Order Invariant Tests for Proper Calibration of Multivariate Density Forecasts By Jonas Dovern; Hans Manner
  4. Targeted undersmoothing By Christian Hansen; Damian Kozbur; Sanjog Misra
  5. Model-based forecast adjustment; with an illustration to inflation By Franses, Ph.H.B.F.
  6. Nonstationary cointegration in the fractionally cointegrated VAR model By Søren Johansen; Morten Ørregaard Nielsen
  7. Series estimation for single-index models under constraints By Chaohua Dong; Jiti Gao; Bin Peng
  8. Event Count Estimation By Laszlo Balazsi; Felix Chan; Laszlo Matyas
  9. Dealing with heterogeneity in panel VARs using sparse finite mixtures By Huber, Florian
  10. Models with Multiplicative Decomposition of Conditional Variances and Correlations By Cristina Amado; Annastiina Silvennoinen; Timo Teräsvirta
  11. Estimating Treatment Effects in Mover Designs By Peter Hull
  12. Nonparametric estimation of non-exchangeable latent-variable models By Stéphane Bonhomme; Koen Jochmans; Jean-Marc Robin

  1. By: Koo, Chao (Tilburg University, School of Economics and Management)
    Abstract: This dissertation is composed of three essays on functional coefficient models (also referred to as varying-coefficient models) in the time series context. The first essay proposes two estimators for a functional coefficient model with discontinuities in the coefficient functions. One is based on the weighted residual mean squared error, which works well only if the conditional error variance is continuous. The other is based on the local Wald test statistic, which is applicable even if the conditional error variance contains discontinuities. In the second essay, we introduce a new model, the semiparametric transition model, and propose an iterative estimation procedure based on the straightforward application of (local) least squares. Simulations demonstrate that the proposed procedure provides precise estimates for many types of transition functions. The third essay proposes an estimator for a functional coefficient model with endogenous variables. In contrast to the existing functional coefficient IV literature, our estimator is adapted to the case in which the coefficients are functions of an endogenous variable. To illustrate the utility of our approach, we provide an empirical example based on the relationship among the hourly wage rate, education level, and work experience.
    Date: 2018
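To give a flavor of the local least-squares machinery behind functional (varying-)coefficient models, here is a minimal toy sketch, not the estimators proposed in the dissertation: the coefficient function beta(u) in y_i = beta(u_i) x_i + e_i is estimated at a point u0 by kernel-weighted least squares. The Gaussian kernel, bandwidth, and simulated data are all illustrative choices.

```python
import numpy as np

def local_ls_coef(y, x, u, u0, h):
    # Kernel-weighted least-squares estimate of beta(u0) in y_i = beta(u_i) * x_i + e_i,
    # using a Gaussian kernel in the smoothing variable u with bandwidth h.
    w = np.exp(-0.5 * ((u - u0) / h) ** 2)
    return np.sum(w * x * y) / np.sum(w * x * x)

rng = np.random.default_rng(1)
n = 5000
u = rng.uniform(0, 1, n)                    # smoothing variable
x = rng.normal(size=n)                      # regressor
y = (1.0 + u) * x + 0.01 * rng.normal(size=n)   # true coefficient function beta(u) = 1 + u
beta_hat = local_ls_coef(y, x, u, 0.5, 0.1)     # true beta(0.5) = 1.5
```

With the coefficient function linear in u and an interior evaluation point, the kernel-weighted fit is nearly unbiased, so the estimate lands close to 1.5.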
  2. By: Matthew A. Masten; Alexandre Poirier
    Abstract: How should one assess the credibility of assumptions weaker than statistical independence, like quantile independence? In the context of identifying causal effects of a treatment variable, we argue that such deviations should be chosen based on the form of selection on unobservables they allow. For quantile independence, we characterize this form of treatment selection. Specifically, we show that quantile independence is equivalent to a constraint on the average value of either a latent propensity score (for a binary treatment) or the cdf of treatment given the unobservables (for a continuous treatment). In both cases, this average value constraint requires a kind of non-monotonic treatment selection. Using these results, we show that several common treatment selection models are incompatible with quantile independence. We introduce a class of assumptions which weakens quantile independence by removing the average value constraint, and therefore allows for monotonic treatment selection. In a potential outcomes model with a binary treatment, we derive identified sets for the ATT and QTT under both classes of assumptions. In a numerical example we show that the average value constraint inherent in quantile independence has substantial identifying power. Our results suggest that researchers should carefully consider the credibility of this non-monotonicity property when using quantile independence to weaken full independence.
    Date: 2018–04
  3. By: Jonas Dovern (Alfred-Weber-Institute for Economics, Heidelberg University); Hans Manner (University of Graz, Austria)
    Abstract: Established tests for proper calibration of multivariate density forecasts based on Rosenblatt probability integral transforms can be manipulated by changing the order of variables in the forecasting model. We derive order invariant tests. The new tests are applicable to densities of arbitrary dimensions and can deal with parameter estimation uncertainty and dynamic misspecification. Monte Carlo simulations show that they often have superior power relative to established approaches. We use the tests to evaluate GARCH-based multivariate density forecasts for a vector of stock market returns.
    Keywords: Density calibration; Goodness-of-fit test; Predictive density; Rosenblatt transformation
    JEL: C12 C32 C52 C53
    Date: 2018–04
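As background for why variable ordering matters here, a minimal sketch of the Rosenblatt probability integral transform for a bivariate Gaussian forecast (an assumed toy model, not the order invariant tests derived in the paper): each coordinate is transformed by its conditional cdf given the preceding coordinates, so the transformed values depend on which variable comes first.

```python
import numpy as np
from scipy.stats import norm

def rosenblatt_bivariate_normal(x, rho):
    # Rosenblatt PITs for a standard bivariate normal forecast with correlation rho,
    # using the ordering (x1 first, then x2 given x1).
    u1 = norm.cdf(x[:, 0])
    u2 = norm.cdf((x[:, 1] - rho * x[:, 0]) / np.sqrt(1.0 - rho**2))
    return np.column_stack([u1, u2])

rng = np.random.default_rng(0)
rho = 0.7
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5000)
u_12 = rosenblatt_bivariate_normal(z, rho)            # ordering (1, 2)
u_21 = rosenblatt_bivariate_normal(z[:, ::-1], rho)   # ordering (2, 1)
# Under the correct model both orderings yield approximately iid U(0,1) PITs,
# but the transformed values themselves differ across orderings, which is what
# can be exploited when a calibration test is not order invariant.
```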
  4. By: Christian Hansen; Damian Kozbur; Sanjog Misra
    Abstract: This paper proposes a post-model selection inference procedure, called targeted undersmoothing, designed to construct uniformly valid confidence sets for functionals of sparse high-dimensional models, including dense functionals that may depend on many or all elements of the high-dimensional parameter vector. The confidence sets are based on an initially selected model and two additional models which enlarge the initial model. By varying the enlargements of the initial model, one can also conduct sensitivity analysis of the strength of empirical conclusions to model selection mistakes in the initial model. We apply the procedure in two empirical examples: estimating heterogeneous treatment effects in a job training program and estimating profitability from an estimated mailing strategy in a marketing campaign. We also illustrate the procedure’s performance through simulation experiments.
    Keywords: model selection, sparsity, dense functionals, hypothesis testing, sensitivity analysis
    JEL: C12 C51
    Date: 2016–08
  5. By: Franses, Ph.H.B.F.
    Abstract: This paper introduces the idea of adjusting forecasts from a linear time series model, where the adjustment relies on the assumption that this linear model is an approximation of, for example, a nonlinear time series model. This way of creating forecasts can be convenient when inference for the nonlinear model is impossible, complicated, or unreliable in small samples. The size of the forecast adjustment can be based on the estimation results for the linear model and on other data properties such as the first few moments or autocorrelations. An illustration is given for an ARMA(1,1) model, which is known to approximate a first-order diagonal bilinear time series model. For this case, the forecast adjustment is easy to derive, which is convenient as the particular bilinear model is indeed cumbersome to analyze. An application to a range of inflation series for low-income countries shows that such adjustment can lead to improved forecasts, although the gain is neither large nor frequent.
    Keywords: ARMA(1, 1), Inflation, First-order diagonal bilinear time series model, Method of Moments, Adjustment of forecasts
    JEL: C22 C53
    Date: 2018–03–01
  6. By: Søren Johansen (University of Copenhagen and CREATES); Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: We consider the fractional cointegrated vector autoregressive (CVAR) model of Johansen and Nielsen (2012a) and make two distinct contributions. First, in their consistency proof, Johansen and Nielsen (2012a) imposed moment conditions on the errors that depend on the parameter space, such that when the parameter space is larger, stronger moment conditions are required. We show that these moment conditions can be relaxed, and for consistency we require just eight moments regardless of the parameter space. Second, Johansen and Nielsen (2012a) assumed that the cointegrating vectors are stationary, and we extend the analysis to include the possibility that the cointegrating vectors are nonstationary. Both contributions require new analysis and results for the asymptotic properties of the likelihood function of the fractional CVAR model, which we provide. Finally, our analysis follows recent research and applies a parameter space large enough that the usual (non-fractional) CVAR model constitutes an interior point and hence can be tested against the fractional model using a χ²-test.
    Keywords: cointegration, fractional integration, likelihood inference, vector autoregressive model
    JEL: C32
    Date: 2018–05
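For readers unfamiliar with fractional integration, a minimal sketch of the truncated fractional difference operator (1 - L)^d that underlies fractional CVAR models; this is the generic textbook recursion for the filter coefficients, not code from the paper.

```python
import numpy as np

def frac_diff_weights(d, n):
    # Coefficients of the binomial expansion of (1 - L)^d:
    # pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

def frac_diff(x, d):
    # Truncated fractional difference of a series x (pre-sample values set to zero)
    n = len(x)
    w = frac_diff_weights(d, n)
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

x = np.array([1.0, 3.0, 6.0, 10.0])
dx = frac_diff(x, 1.0)   # for d = 1 this reduces to the ordinary first difference
```

For d = 0 the filter is the identity, for d = 1 it is the ordinary first difference, and non-integer d interpolates between the two with slowly decaying weights.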
  7. By: Chaohua Dong; Jiti Gao; Bin Peng
    Abstract: This paper discusses a semiparametric single-index model. The link function is allowed to be unbounded and to have unbounded support, which fills a gap in the literature. The link function is treated as a point in an infinite-dimensional function space, which enables us to derive the estimates for the index parameter and the link function simultaneously. This approach is different from the profile method commonly used in the literature. The estimator is derived from an optimization under the constraint of an identification condition for the index parameter, which solves an important problem in the literature on single-index models. In addition, making use of a property of Hermite orthogonal polynomials, an explicit estimator for the index parameter is obtained. Asymptotic properties of the two estimators of the index parameter are established, and their efficiency is discussed in some special cases. The finite sample properties of the two estimators are demonstrated through an extensive Monte Carlo study and an empirical example.
    Keywords: Asymptotic theory; closed-form estimation; cross-sectional model; Hermite series expansion.
    JEL: C13 C14 C51
    Date: 2018
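A minimal sketch of the Hermite-series idea: a smooth link function is approximated by a finite expansion in probabilists' Hermite polynomials fitted by least squares. The target function, evaluation grid, and truncation order below are illustrative assumptions, not the paper's estimator.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

def hermite_series_fit(v, y, k):
    # Least-squares fit of y on the first k probabilists' Hermite polynomials in v.
    basis = hermevander(v, k - 1)          # columns He_0(v), ..., He_{k-1}(v)
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return coef, basis @ coef

v = np.linspace(-2, 2, 400)
y = np.exp(0.5 * v)                        # a smooth "link function" to approximate
coef, y_hat = hermite_series_fit(v, y, 8)  # 8-term truncated Hermite expansion
```

Because the target is analytic, even a short truncated expansion reproduces it to high accuracy on this grid; in the series-estimation setting the truncation order grows with the sample size.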
  8. By: Laszlo Balazsi; Felix Chan; Laszlo Matyas
    Abstract: This paper proposes a new estimation procedure called Event Count Estimator (ECE). The estimator is straightforward to implement and it is robust against outliers, censoring and excess zeros in the data. The paper establishes asymptotic properties of the new estimator and the theoretical results are supported by several Monte Carlo experiments. These also show that the estimator has reasonable properties in moderate to large samples. As such, the cost of inefficiency for robustness is negligible from an applied viewpoint. The practical usefulness of the new estimator is demonstrated via an empirical application of the Gravity Model of trade.
    Date: 2018–04–11
  9. By: Huber, Florian
    Abstract: In this paper, we provide a parsimonious means of estimating panel VARs with stochastic volatility. We assume that coefficients associated with domestic lagged endogenous variables arise from a finite mixture of Gaussian distributions. Shrinkage on the cluster size is introduced through suitable priors on the component weights, and cluster-relevant quantities are identified through novel normal-gamma shrinkage priors. To assess whether dynamic interdependencies between units are needed, we also impose shrinkage priors on the coefficients related to other countries' endogenous variables. Finally, our model controls for static interdependencies by assuming that the reduced form shocks of the model feature a factor stochastic volatility structure. We assess the merits of the proposed approach using synthetic data as well as a real data application. In the empirical application, we forecast Eurozone unemployment rates and show that our proposed approach works well in terms of predictions.
    Keywords: multi country models, density predictions, hierarchical modeling, factor stochastic volatility models
    Date: 2018–04
  10. By: Cristina Amado (University of Minho and NIPE, CREATES and Aarhus University); Annastiina Silvennoinen (School of Economics and Finance, Queensland University of Technology); Timo Teräsvirta (CREATES and Aarhus University, C.A.S.E., Humboldt-Universität zu Berlin)
    Abstract: Univariate and multivariate GARCH-type models with a multiplicative decomposition of the variance into short-run and long-run components are surveyed. The latter component can be either deterministic or stochastic. Examples of both types are studied.
    Keywords: Conditional heteroskedasticity; Deterministically varying correlations; Multiplicative decomposition; Nonstationary volatility
    JEL: C12 C32 C51 C52
    Date: 2018
  11. By: Peter Hull
    Abstract: Researchers increasingly leverage movement across multiple treatments to estimate causal effects. While these "mover regressions" are often motivated by a linear constant-effects model, it is not clear what they capture under weaker quasi-experimental assumptions. I show that binary treatment mover regressions recover a convex average of four difference-in-difference comparisons and are thus causally interpretable under a standard parallel trends assumption. Estimates from multiple-treatment models, however, need not be causal without stronger restrictions on the heterogeneity of treatment effects and time-varying shocks. I propose a class of two-step estimators to isolate and combine the large set of difference-in-difference quasi-experiments generated by a mover design, identifying mover average treatment effects under conditional-on-covariate parallel trends and effect homogeneity restrictions. I characterize the efficient estimators in this class and derive specification tests based on the model's overidentifying restrictions. Future drafts will apply the theory to the Finkelstein et al. (2016) movers design, analyzing the causal effects of geography on healthcare utilization.
    Date: 2018–04
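To fix ideas, the basic difference-in-differences comparison that a binary-treatment mover regression aggregates can be sketched as follows. This is a stylized two-period simulation with homogeneous effects and parallel trends, not the estimators proposed in the paper.

```python
import numpy as np

def did_estimate(y_pre, y_post, moved):
    # Difference-in-differences: change for movers minus change for stayers.
    moved = np.asarray(moved, dtype=bool)
    return ((y_post[moved].mean() - y_pre[moved].mean())
            - (y_post[~moved].mean() - y_pre[~moved].mean()))

rng = np.random.default_rng(2)
n = 4000
alpha = rng.normal(size=n)                 # unit fixed effects, differenced out
moved = rng.random(n) < 0.5                # half the units move between periods
y_pre = alpha + rng.normal(scale=0.1, size=n)
# Common time trend of 0.3 (parallel trends) plus a treatment effect of 2.0 for movers:
y_post = alpha + 0.3 + 2.0 * moved + rng.normal(scale=0.1, size=n)
tau = did_estimate(y_pre, y_post, moved)   # recovers the effect of 2.0
```

The within-unit differencing removes the fixed effects and the stayer comparison removes the common trend, which is why parallel trends suffices for causal interpretation in the binary-treatment case.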
  12. By: Stéphane Bonhomme (University of Chicago); Koen Jochmans (Département d'économie); Jean-Marc Robin (Département d'économie)
    Abstract: We propose a two-step method to nonparametrically estimate multivariate models in which the observed outcomes are independent conditional on a discrete latent variable. Applications include microeconometric models with unobserved types of agents, regime-switching models, and models with misclassification error. In the first step, we estimate weights that transform moments of the marginal distribution of the data into moments of the conditional distribution of the data for given values of the latent variable. In the second step, these conditional moments are estimated as weighted sample averages. We illustrate the method by estimating a model of wages with unobserved heterogeneity on PSID data.
    Keywords: Latent variable models; Unobserved heterogeneity; Finite mixtures; Hidden Markov models; Nonparametric estimation; Panel data; Wage dynamics
    JEL: C14 C33 C38 J31
    Date: 2017–12

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found on the NEP homepage. For comments, please write to the director of NEP, Marco Novarese. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.