nep-ecm New Economics Papers
on Econometrics
Issue of 2018‒09‒03
fifteen papers chosen by
Sune Karlsson
Örebro universitet

  1. Sensitivity Analysis using Approximate Moment Condition Models By Timothy B. Armstrong; Michal Kolesár
  2. Density Forecasts in Panel Models: A semiparametric Bayesian Perspective By Laura Liu
  3. Inference for nonlinear state space models: A comparison of different methods applied to Markov-switching multifractal models By Lux, Thomas
  4. Robust inference in structural VARs with long-run restrictions By Chevillon, Guillaume; Mavroeidis, Sophocles; Zhan, Zhaoguo
  5. A Panel Quantile Approach to Attrition Bias in Big Data: Evidence from a Randomized Experiment By Matthew Harding; Carlos Lamarche
  6. Structural Estimation of Dynamic Macroeconomic Models using Higher-Frequency Financial Data By Max Ole Liemen; Michel van der Wel; Olaf Posch
  7. Estimation in a Generalization of Bivariate Probit Models with Dummy Endogenous Regressors By Sukjin Han; Sungwon Lee
  8. Score Permutation Based Finite Sample Inference for Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) Models By Balázs Csanád Csáji
  9. About Local Projection Impulse Response Function Reliability By Luca Brugnolini
  10. Farm heterogeneity and agricultural policy impacts on size dynamics: evidence from France By Saint-Cyr, Legrand D. F.
  11. Concentration Based Inference in High Dimensional Generalized Regression Models (I: Statistical Guarantees) By Zhu, Ying
  12. Optimal Design of Experiments in the Presence of Interference, Second Version By Sarah Baird; Aislinn Bohren; Craig McIntosh; Berk Ozler
  13. Optimizing the tie-breaker regression discontinuity design By Art B. Owen; Hal Varian
  14. Reflected maxmin copulas and modelling quadrant subindependence By Tomaž Košir; Matjaž Omladič
  15. A New Nonparametric Estimate of the Risk-Neutral Density with Application to Variance Swap By Liyuan Jiang; Shuang Zhou; Keren Li; Fangfang Wang; Jie Yang

  1. By: Timothy B. Armstrong; Michal Kolesár
    Abstract: We consider inference in models defined by approximate moment conditions. We show that near-optimal confidence intervals (CIs) can be formed by taking a generalized method of moments (GMM) estimator, and adding and subtracting the standard error times a critical value that takes into account the potential bias from misspecification of the moment conditions. In order to optimize performance under potential misspecification, the weighting matrix for this GMM estimator takes into account this potential bias, and therefore differs from the one that is optimal under correct specification. To formally show the near-optimality of these CIs, we develop asymptotic efficiency bounds for inference in the locally misspecified GMM setting. These bounds may be of independent interest, due to their implications for the possibility of using moment selection procedures when conducting inference in moment condition models. We apply our methods in an empirical application to automobile demand, and show that adjusting the weighting matrix can shrink the CIs by a factor of up to 5 or more.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.07387&r=ecm
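A minimal sketch of the interval construction described above, assuming a worst-case bias bound for the moment conditions is already in hand. The critical value is the 1-alpha quantile of |N(b,1)|, where b is the bias bound divided by the standard error, so it reduces to the usual 1.96 under correct specification; the function names and root-finding device are illustrative, not the authors' code.

```python
# Bias-aware critical value: the 1 - alpha quantile of |N(b, 1)|,
# where b = (worst-case bias) / (standard error).
from scipy.optimize import brentq
from scipy.stats import norm

def bias_aware_cv(b, alpha=0.05):
    """Solve P(|Z + b| <= c) = 1 - alpha for c; equals 1.96 when b = 0."""
    f = lambda c: (norm.cdf(c - b) - norm.cdf(-c - b)) - (1 - alpha)
    return brentq(f, 1e-8, abs(b) + 10.0)

def bias_aware_ci(theta_hat, se, bias_bound, alpha=0.05):
    cv = bias_aware_cv(bias_bound / se, alpha)
    return theta_hat - cv * se, theta_hat + cv * se

print(bias_aware_cv(0.0))                       # ~1.96: no misspecification
print(bias_aware_ci(1.0, 0.2, bias_bound=0.1))  # wider than the usual CI
```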
  2. By: Laura Liu (Federal Reserve Bank)
    Abstract: This paper constructs individual-specific density forecasts for a panel of firms or households using a dynamic linear model with common and heterogeneous coefficients and cross-sectional heteroskedasticity. The panel considered in this paper features a large cross-sectional dimension (N) but short time series (T). Due to short T, traditional methods have difficulty in disentangling the heterogeneous parameters from the shocks, which contaminates the estimates of the heterogeneous parameters. To tackle this problem, I assume that there is an underlying distribution of heterogeneous parameters, model this distribution nonparametrically allowing for correlation between heterogeneous parameters and initial conditions as well as individual-specific regressors, and then estimate this distribution by pooling the information from the whole cross-section together. I develop a simulation-based posterior sampling algorithm specifically addressing the nonparametric density estimation of unobserved heterogeneous parameters. I prove that both the estimated common parameters and the estimated distribution of the heterogeneous parameters achieve posterior consistency, and that the density forecasts asymptotically converge to the oracle forecast, an (infeasible) benchmark that is defined as the individual-specific posterior predictive distribution under the assumption that the common parameters and the distribution of the heterogeneous parameters are known. Monte Carlo simulations demonstrate improvements in density forecasts relative to alternative approaches. An application to young firm dynamics also shows that the proposed predictor provides more accurate density predictions.
    Keywords: Bayesian, Semiparametric Methods, Panel Data, Density Forecasts, Posterior Consistency, Young Firm Dynamics
    JEL: C11 C14 C23 C53 L25
    Date: 2017–04–28
    URL: http://d.repec.org/n?u=RePEc:pen:papers:17-006&r=ecm
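A deliberately parametric sketch of the pooling idea in the abstract: the paper estimates the distribution of the heterogeneous parameters nonparametrically, but a normal random-effects prior already shows how cross-sectional pooling sharpens individual predictive densities when T is short. Hyperparameters are treated as known here, which the paper does not assume.

```python
# Panel y_it = lam_i + u_it with u_it ~ N(0, sigma2) and lam_i ~ N(mu, tau2).
# With short T, unit i's predictive density shrinks its own mean toward the
# cross-sectional center; the shrinkage weight grows as T falls.
import numpy as np

def predictive_normal(y_i, mu, tau2, sigma2):
    """Mean and variance of the one-step-ahead predictive for unit i."""
    T = len(y_i)
    prec = T / sigma2 + 1.0 / tau2
    post_mean = (T / sigma2 * np.mean(y_i) + mu / tau2) / prec
    return post_mean, 1.0 / prec + sigma2

rng = np.random.default_rng(0)
lam = rng.normal(0.0, 1.0, 500)                    # heterogeneous parameters
y = lam[:, None] + rng.standard_normal((500, 4))   # N = 500, T = 4
print(predictive_normal(y[0], mu=0.0, tau2=1.0, sigma2=1.0))
```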
  3. By: Lux, Thomas
    Abstract: Nonlinear, non-Gaussian state space models have found wide application in many areas. Since such models usually do not admit an analytical representation of their likelihood function, sequential Monte Carlo or particle filter methods are mostly applied to estimate their parameters. Since such stochastic approximations lead to non-smooth likelihood functions, finding the best-fitting parameters of a model is a non-trivial task. In this paper, we compare recently proposed iterative filtering algorithms developed for this purpose with simpler online filters and more traditional methods of inference. We use a highly nonlinear class of Markov-switching models, the so-called Markov-switching multifractal model (MSM), as our workhorse in the comparison of different optimisation routines. Besides the well-established univariate discrete-time MSM, we introduce univariate and multivariate continuous-time versions of MSM. Monte Carlo simulation experiments indicate that across a variety of MSM specifications, the classical Nelder-Mead or simplex algorithm still appears more efficient and robust than a number of online and iterated filters. A very close competitor is the iterated filter recently proposed by Ionides et al. (2006), while other alternatives are mostly dominated by these two algorithms. An empirical application of both discrete- and continuous-time MSM to seven financial time series shows that both models dominate GARCH and FIGARCH models in terms of in-sample goodness-of-fit. Out-of-sample forecast comparisons show in the majority of cases a clear dominance of the continuous-time MSM under a mean absolute error criterion, and less conclusive results under a mean squared error criterion.
    Keywords: partially observed Markov processes, state space models, Markov-switching multifractal model, nonlinear filtering, forecasting of volatility
    JEL: C20 G15
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:201807&r=ecm
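A sketch of the exercise the paper runs at scale: a bootstrap particle filter delivers a noisy log-likelihood for a nonlinear state space model, and Nelder-Mead maximizes it. A toy stochastic-volatility model stands in for MSM here, and fixing the seed inside the objective (common random numbers) tames the non-smoothness the abstract mentions.

```python
# Particle-filter likelihood + Nelder-Mead on a toy SV model:
# h_t = phi * h_{t-1} + sigma * eta_t,  y_t = exp(h_t / 2) * eps_t.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def pf_loglik(params, y, n_part=500, seed=1):
    phi, sigma = params
    if not (-0.99 < phi < 0.99) or sigma <= 0:
        return -1e10                                 # keep optimizer in bounds
    rng = np.random.default_rng(seed)                # common random numbers
    h = rng.normal(0.0, sigma / np.sqrt(1 - phi**2), n_part)
    ll = 0.0
    for yt in y:
        h = phi * h + sigma * rng.standard_normal(n_part)
        w = norm.pdf(yt, scale=np.exp(h / 2))        # measurement density
        ll += np.log(np.mean(w) + 1e-300)
        h = rng.choice(h, n_part, p=w / w.sum())     # multinomial resampling
    return ll

rng = np.random.default_rng(0)
h, y = 0.0, []
for _ in range(200):                                 # simulate toy data
    h = 0.9 * h + 0.3 * rng.standard_normal()
    y.append(np.exp(h / 2) * rng.standard_normal())
res = minimize(lambda p: -pf_loglik(p, y), [0.5, 0.5], method="Nelder-Mead")
print(res.x)                                         # rough (phi, sigma)
```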
  4. By: Chevillon, Guillaume (ESSEC Research Center, ESSEC Business School); Mavroeidis, Sophocles (Department of Economics and Institute for New Economic Thinking at the Oxford Martin School, Oxford University); Zhan, Zhaoguo (Department of Economics, Finance and Quantitative Analysis, Kennesaw State University)
    Abstract: Long-run restrictions are a very popular method for identifying structural vector autoregressions, but they suffer from weak identification when the data is very persistent, i.e., when the highest autoregressive roots are near unity. Near unit roots introduce additional nuisance parameters and make standard weak-instrument-robust methods of inference inapplicable. We develop a method of inference that is robust to both weak identification and strong persistence. The method is based on a combination of the Anderson-Rubin test with instruments derived by filtering potentially non-stationary variables to make them near stationary. We apply our method to obtain robust confidence bands on impulse responses in two leading applications in the literature.
    Keywords: weak instruments; identification; SVARs; near unit roots; IVX
    JEL: C12 C32 E32
    Date: 2016–11–22
    URL: http://d.repec.org/n?u=RePEc:ebg:essewp:dr-17002&r=ecm
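A sketch of the filtering step named in the keywords: IVX-type instrument construction turns a (near) unit-root series into a mildly integrated one by accumulating its differences with a decaying weight. The constants c and a are illustrative choices, not the paper's; the resulting instrument would then enter an Anderson-Rubin-type statistic.

```python
# IVX-style filter: z_t = rho_z * z_{t-1} + (x_t - x_{t-1}),
# with rho_z = 1 - c / T**a for some a in (0, 1).
import numpy as np

def ivx_instrument(x, c=1.0, a=0.95):
    T = len(x) - 1
    rho_z = 1.0 - c / T**a
    dx = np.diff(x)
    z = np.empty(T)
    z[0] = dx[0]
    for t in range(1, T):
        z[t] = rho_z * z[t - 1] + dx[t]     # mildly integrated by design
    return z

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(400))     # a unit-root input series
print(ivx_instrument(x)[:5])
```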
  5. By: Matthew Harding; Carlos Lamarche
    Abstract: This paper introduces a quantile regression estimator for panel data models with individual heterogeneity and attrition. The method is motivated by the fact that attrition bias is often encountered in Big Data applications. For example, many users sign up for the latest program but few remain active users several months later, making the evaluation of such interventions inherently very challenging. Building on earlier work by Hausman and Wise (1979), we provide a simple identification strategy that leads to a two-step estimation procedure. In the first step, the coefficients of interest in the selection equation are consistently estimated using parametric or nonparametric methods. In the second step, standard panel quantile methods are employed on a subset of weighted observations. The estimator is computationally easy to implement in Big Data applications with a large number of subjects. We investigate the conditions under which the parameter estimator is asymptotically Gaussian, and we carry out a series of Monte Carlo simulations to investigate the finite sample properties of the estimator. Lastly, using a simulation exercise, we apply the method to the evaluation of a recent Time-of-Day electricity pricing experiment inspired by the work of Aigner and Hausman (1980).
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.03364&r=ecm
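A minimal two-step sketch in the spirit of the estimator: first a probit for the probability of staying in the sample, then a quantile regression on the retained observations, each weighted by the inverse of its estimated retention probability. The paper's panel structure and exact weighting are richer than this cross-sectional illustration.

```python
import numpy as np
import statsmodels.api as sm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2000
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)
keep = (rng.standard_normal(n) + 0.5 * x > 0).astype(float)  # attrition

# Step 1: selection equation estimated by probit.
p_hat = sm.Probit(keep, sm.add_constant(x)).fit(disp=0).predict()

# Step 2: inverse-probability-weighted check (pinball) loss at quantile tau.
def wqr_loss(beta, y, X, w, tau):
    u = y - X @ beta
    return np.sum(w * u * (tau - (u < 0)))

X, obs = sm.add_constant(x), keep == 1
res = minimize(wqr_loss, np.zeros(2), method="Nelder-Mead",
               args=(y[obs], X[obs], 1.0 / p_hat[obs], 0.5))
print(res.x)                   # reweighted median-regression coefficients
```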
  6. By: Max Ole Liemen (Universität Hamburg); Michel van der Wel (Erasmus University Rotterdam); Olaf Posch (Universität Hamburg)
    Abstract: In this paper we show how high-frequency financial data can be used in a combined macro-finance framework to estimate the underlying structural parameters. Our formulation of the model allows for substituting macro variables by asset prices in a way that enables casting the relevant estimation equations partly (or completely) in terms of financial data. We show that using only financial data allows for identification of the majority of the relevant parameters. Adding macro data allows for identification of all parameters. In our simulation study, we find that it also improves the accuracy of the parameter estimates. In the empirical application we use interest rate, macro, and S&P500 stock index data, and compare the results using different combinations of macro and financial variables.
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:red:sed018:1049&r=ecm
  7. By: Sukjin Han; Sungwon Lee
    Abstract: The purpose of this paper is to provide guidelines for empirical researchers who use a class of bivariate threshold crossing models with dummy endogenous variables. A common practice employed by the researchers is the specification of the joint distribution of the unobservables as a bivariate normal distribution, which results in a bivariate probit model. To address the problem of misspecification in this practice, we propose an easy-to-implement semiparametric estimation framework with a parametric copula and nonparametric marginal distributions. We establish asymptotic theory, including root-n normality, for the sieve maximum likelihood estimators that can be used to conduct inference on the individual structural parameters and the average treatment effect (ATE). In order to show the practical relevance of the proposed framework, we conduct a sensitivity analysis via extensive Monte Carlo simulation exercises. The results suggest that the estimates of the parameters, especially the ATE, are sensitive to parametric specification, while semiparametric estimation exhibits robustness to underlying data generating processes. In this paper, we also show that the absence of excluded instruments may result in identification failure, in contrast to what some practitioners believe.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.05792&r=ecm
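A sketch of the likelihood building block behind the proposed framework: with marginal success probabilities p1 and p2 and a parametric copula C, the four cell probabilities of a bivariate binary outcome are C(p1,p2), p1-C, p2-C and 1-p1-p2+C. The paper pairs a parametric copula with nonparametric sieve marginals; this sketch keeps probit marginals and a Frank copula, and fitting it to Gaussian-copula data mirrors the sensitivity question the authors study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def frank(u, v, theta):
    if abs(theta) < 1e-8:
        return u * v                                  # independence limit
    num = np.expm1(-theta * u) * np.expm1(-theta * v)
    return -np.log1p(num / np.expm1(-theta)) / theta

def negloglik(params, y1, y2, x):
    b0, b1, g0, g1, theta = params
    p1, p2 = norm.cdf(b0 + b1 * x), norm.cdf(g0 + g1 * x)
    c = frank(p1, p2, theta)
    cell = np.where(y1 & y2, c,
           np.where(y1 & ~y2, p1 - c,
           np.where(~y1 & y2, p2 - c, 1 - p1 - p2 + c)))
    return -np.sum(np.log(np.clip(cell, 1e-12, None)))

rng = np.random.default_rng(0)
x = rng.standard_normal(3000)
e = rng.multivariate_normal([0, 0], [[1, .5], [.5, 1]], 3000)  # Gaussian DGP
y1, y2 = (0.2 + 0.8 * x + e[:, 0] > 0), (-0.1 + 0.5 * x + e[:, 1] > 0)
res = minimize(negloglik, [0, .5, 0, .5, 1.0], args=(y1, y2, x),
               method="Nelder-Mead")
print(res.x)
```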
  8. By: Balázs Csanád Csáji
    Abstract: A standard model of (conditional) heteroscedasticity, i.e., the phenomenon that the variance of a process changes over time, is the Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) model, which is especially important in economics and finance. GARCH models are typically estimated by the Quasi-Maximum Likelihood (QML) method, which works under mild statistical assumptions. Here, we suggest a finite sample approach, called ScoPe, to construct distribution-free confidence regions around the QML estimate, which have exact coverage probabilities even though no additional moment assumptions are made. ScoPe is inspired by the recently developed Sign-Perturbed Sums (SPS) method, which, however, cannot be applied in the GARCH case. ScoPe works by perturbing the score function using randomly permuted residuals. This produces alternative samples which lead to exact confidence regions. Experiments on simulated and stock market data are also presented, and ScoPe is compared with asymptotic theory and bootstrap approaches.
    Date: 2018–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1807.08390&r=ecm
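A skeleton of the permutation logic behind ScoPe, transplanted to a toy fixed-design regression (the paper's construction for GARCH scores is more involved). Under the true parameter the residuals are exchangeable, so the reference score is equally likely to land anywhere in the ranking against scores rebuilt from permuted residuals; keeping every parameter whose reference score is not among the most extreme yields exact finite-sample coverage without moment assumptions.

```python
import numpy as np

def in_region(theta, y, x, m=100, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    e = y - theta * x                                  # residuals at theta
    s_ref = abs(np.sum(x * e))                         # reference score
    s_perm = [abs(np.sum(x * rng.permutation(e))) for _ in range(m - 1)]
    rank = sum(s > s_ref for s in s_perm)              # permuted scores above
    return rank >= int(np.ceil(alpha * m))             # keep unless extreme

rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = 2.0 * x + rng.standard_cauchy(200)                 # heavy tails, no moments
kept = [t for t in np.linspace(1.0, 3.0, 81) if in_region(t, y, x)]
print(min(kept), max(kept))                            # confidence interval
```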
  9. By: Luca Brugnolini (Central Bank of Malta's Research Department)
    Abstract: I compare the performance of the vector autoregressive (VAR) model impulse response function estimator with the Jordà (2005) local projection (LP) methodology. In a Monte Carlo experiment, I demonstrate that when the data generating process is a well-specified VAR, the standard impulse response function estimator is the best option. However, when the sample size is small and the model lag-length is misspecified, I show that the local projection estimator is a competitive alternative. Finally, I show how to improve the local projection performance by fixing the lag-length at each horizon.
    Keywords: VAR, information criteria, lag-length, Monte Carlo
    JEL: C32 C52 C53 E52
    Date: 2018–06–09
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:440&r=ecm
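A minimal local projection sketch for comparison: the horizon-h impulse response is the coefficient on the (here observed) shock in a regression of y_{t+h} on the shock and a few lags of y, run separately for each h.

```python
import numpy as np

def lp_irf(y, shock, horizons=5, lags=4):
    irf = []
    for h in range(horizons + 1):
        rows = range(lags, len(y) - h)
        X = np.column_stack(
            [np.ones(len(y) - h - lags), [shock[t] for t in rows]]
            + [[y[t - j] for t in rows] for j in range(1, lags + 1)])
        yh = np.array([y[t + h] for t in rows])
        irf.append(np.linalg.lstsq(X, yh, rcond=None)[0][1])  # shock coef
    return np.array(irf)

rng = np.random.default_rng(0)
eps = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):                      # AR(1) DGP: true IRF is 0.8**h
    y[t] = 0.8 * y[t - 1] + eps[t]
print(np.round(lp_irf(y, eps), 2))           # ~ [1.0, 0.8, 0.64, ...]
```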
  10. By: Saint-Cyr, Legrand D. F.
    Abstract: This article investigates the impact of agricultural policies on structural change in farming. Since not all farmers may behave alike, a non-stationary mixed-Markov chain modeling (M-MCM) approach is applied to capture unobserved heterogeneity in the transition process of farms. A multinomial logit specification is used for transition probabilities and the parameters are estimated by the maximum likelihood method and the Expectation-Maximization (EM) algorithm. An empirical application to an unbalanced panel dataset from 2000 to 2013 shows that French farming mainly consists of a mixture of two farm types characterized by specific transition processes. The main finding is that the impact of farm subsidies from both pillars of the Common Agricultural Policy (CAP) highly depends on the farm type. A comparison between the non-stationary M-MCM and a homogeneous non-stationary MCM shows that the latter model leads to either overestimation or underestimation of the impact of agricultural policy on change in farm size. This suggests that more attention should be paid to both observed and unobserved farm heterogeneity in assessing the impact of agricultural policy on structural change in farming.
    Keywords: Farm Management
    Date: 2017–06–23
    URL: http://d.repec.org/n?u=RePEc:ags:inrasl:258013&r=ecm
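A compact sketch of the estimation machinery: a two-type mixture of Markov chains fit by EM, where the E-step weights each farm's entire transition path by its likelihood under each type and the M-step re-estimates mixing shares and transition matrices from weighted counts. The paper adds non-stationarity and a multinomial-logit parameterization on top of this skeleton.

```python
import numpy as np

def em_markov_mixture(paths, n_states, n_types=2, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    P = rng.dirichlet(np.ones(n_states), size=(n_types, n_states))
    pi = np.full(n_types, 1.0 / n_types)
    for _ in range(iters):
        ll = np.zeros((len(paths), n_types))           # E-step: path loglik
        for i, path in enumerate(paths):
            for k in range(n_types):
                ll[i, k] = sum(np.log(P[k, a, b])
                               for a, b in zip(path[:-1], path[1:]))
        w = pi * np.exp(ll - ll.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)              # responsibilities
        pi = w.mean(axis=0)                            # M-step
        counts = np.full((n_types, n_states, n_states), 1e-6)
        for i, path in enumerate(paths):
            for a, b in zip(path[:-1], path[1:]):
                counts[:, a, b] += w[i]
        P = counts / counts.sum(axis=2, keepdims=True)
    return pi, P

rng = np.random.default_rng(1)
A = np.array([[.8, .2, 0], [.1, .8, .1], [0, .2, .8]])    # persistent farms
B = np.array([[.4, .4, .2], [.3, .4, .3], [.2, .4, .4]])  # mobile farms
def sim(P, T=15):
    s = [0]
    for _ in range(T):
        s.append(rng.choice(3, p=P[s[-1]]))
    return s
paths = [sim(A) for _ in range(150)] + [sim(B) for _ in range(150)]
pi, P = em_markov_mixture(paths, n_states=3)
print(np.round(pi, 2))                                    # ~ [0.5, 0.5]
```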
  11. By: Zhu, Ying
    Abstract: We develop simple and non-asymptotically justified methods for hypothesis testing about the coefficients ($\theta^{*}\in\mathbb{R}^{p}$) in high dimensional generalized regression models where $p$ can exceed the sample size. Given a function $h:\,\mathbb{R}^{p}\mapsto\mathbb{R}^{m}$, we consider $H_{0}:\,h(\theta^{*})=\mathbf{0}_{m}$ against $H_{1}:\,h(\theta^{*})\neq\mathbf{0}_{m}$, where $m$ can be any integer in $\left[1,\,p\right]$ and $h$ can be nonlinear in $\theta^{*}$. Our test statistic is based on the sample "quasi score" vector evaluated at an estimate $\hat{\theta}_{\alpha}$ that satisfies $h(\hat{\theta}_{\alpha})=\mathbf{0}_{m}$, where $\alpha$ is the prespecified Type I error. By exploiting the concentration phenomenon in Lipschitz functions, the key component reflecting the dimension complexity in our non-asymptotic thresholds uses a Monte Carlo approximation to mimic the expectation around which the statistic is concentrated, and it automatically captures the dependencies between the coordinates. We provide probabilistic guarantees in terms of the Type I and Type II errors for the quasi score test. Confidence regions are also constructed for the population quasi-score vector evaluated at $\theta^{*}$. The first set of our results is specific to standard Gaussian linear regression models; the second set allows for reasonably flexible forms of non-Gaussian responses, heteroscedastic noise, and nonlinearity in the regression coefficients, while only requiring the correct specification of $\mathbb{E}\left(Y_{i}|X_{i}\right)$s. The novelty of our methods is that their validity does not rely on good behavior of $\left\Vert \hat{\theta}_{\alpha}-\theta^{*}\right\Vert _{2}$ (or even $n^{-1/2}\left\Vert X\left(\hat{\theta}_{\alpha}-\theta^{*}\right)\right\Vert _{2}$ in the linear regression case), non-asymptotically or asymptotically.
    Keywords: Nonasymptotic inference, concentration inequalities, high dimensional inference, hypothesis testing, confidence sets
    JEL: C1 C12 C2 C21
    Date: 2018–08–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:88502&r=ecm
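A simplified rendering of the testing recipe for the Gaussian linear case: under the null, the quasi score X'(y - X·theta)/n behaves like X'eps/n, so Monte Carlo draws of Gaussian noise give a data-dependent threshold for the sup-norm of the score that captures the dependence across all p coordinates even when p > n. The paper's thresholds are non-asymptotic and more refined; sigma is treated as known here.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 100, 300, 1.0                    # p exceeds the sample size
X = rng.standard_normal((n, p))
theta0 = np.zeros(p)                           # null hypothesis: theta* = 0
y = X @ theta0 + sigma * rng.standard_normal(n)

stat = np.abs(X.T @ (y - X @ theta0) / n).max()          # sup-norm of score
mc = np.abs(X.T @ (sigma * rng.standard_normal((n, 5000)))).max(axis=0) / n
threshold = np.quantile(mc, 0.95)              # Monte Carlo 95% threshold
print(stat, threshold, stat > threshold)       # reject when stat > threshold
```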
  12. By: Sarah Baird (Department of Global Health, George Washington University); Aislinn Bohren (Department of Economics, University of Pennsylvania); Craig McIntosh (Department of Economics, UC San Diego); Berk Ozler (The World Bank)
    Abstract: This paper formalizes the optimal design of randomized controlled trials (RCTs) in the presence of interference between units, where an individual's outcome depends on the behavior and outcomes of others in her group. We focus on randomized saturation (RS) designs, which are two-stage RCTs that first randomize the treatment saturation of a group, then randomize individual treatment assignment. Our main contributions are to map the potential outcomes framework with partial interference to a regression model with clustered errors, calculate the statistical power of different RS designs, and derive analytical insights for how to optimally design an RS experiment. We show that the power to detect average treatment effects declines precisely with the ability to identify novel treatment and spillover estimands, such as how effects vary with the intensity of treatment. We provide software that assists researchers in designing RS experiments.
    Keywords: Experimental Design, Causal Inference
    JEL: C93 O22 I25
    Date: 2017–11–30
    URL: http://d.repec.org/n?u=RePEc:pen:papers:16-025&r=ecm
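A Monte Carlo companion to the power analysis the paper does analytically, under illustrative effect sizes: groups are first assigned a treatment saturation, then individuals within them, and outcomes include a spillover on untreated members and a cluster effect. The naive treated-minus-control gap below is deliberately simple; the paper's regression mapping with clustered errors is the rigorous version.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n_g, reps = 60, 20, 500
saturations = np.array([0.0, 0.5, 1.0])        # randomized saturation stage

def one_draw(beta_t=0.3, beta_s=0.1):
    y_t, y_c = [], []
    for g in range(G):
        pi = rng.choice(saturations)           # group-level saturation
        t = rng.random(n_g) < pi               # individual assignment
        cl = 0.5 * rng.standard_normal()       # cluster effect
        y = beta_t * t + beta_s * pi * (~t) + cl + rng.standard_normal(n_g)
        y_t += list(y[t]); y_c += list(y[~t])
    return np.mean(y_t) - np.mean(y_c)         # naive treated-control gap

draws = np.array([one_draw() for _ in range(reps)])
null = np.array([one_draw(0.0, 0.0) for _ in range(reps)])
crit = np.quantile(np.abs(null - null.mean()), 0.95)
print("power:", np.mean(np.abs(draws) > crit))
```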
  13. By: Art B. Owen; Hal Varian
    Abstract: Motivated by customer loyalty plans, we study tie-breaker designs, which are hybrids of randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). We quantify the statistical efficiency of a tie-breaker design in which a proportion $\Delta$ of observed customers are in the RCT. In a two line regression, statistical efficiency increases monotonically with $\Delta$, so efficiency is maximized by an RCT. That same regression model quantifies the short-term value of the treatment allocation, and this comparison favors smaller $\Delta$, with the RDD being best. We solve for the optimal tradeoff between these exploration and exploitation goals. The usual tie-breaker design experiments on the middle $\Delta$ subjects as ranked by the running variable. We quantify the efficiency of other designs, such as experimenting only in the second decile from the top. We also consider more general models such as quadratic regressions.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.07563&r=ecm
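A numerical check of the monotonicity claim in the abstract: in a tie-breaker design the top subjects by the running variable get the treatment, the bottom get control, and the middle proportion Delta is randomized; for the two-line regression the variance of the treatment coefficient falls as Delta grows toward a full RCT. Design details are illustrative.

```python
import numpy as np

def treat_coef_variance(delta, n=4000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, n)                       # running variable
    z = np.where(x < 0, -1.0, 1.0)                  # bottom/top assignment
    mid = np.abs(x) <= delta                        # middle Delta share...
    z[mid] = rng.choice([-1.0, 1.0], mid.sum())     # ...randomized
    X = np.column_stack([np.ones(n), x, z, x * z])  # two-line regression
    return np.linalg.inv(X.T @ X)[2, 2]             # Var of z coef (sigma2=1)

for d in [0.0, 0.25, 0.5, 0.75, 1.0]:
    print(d, treat_coef_variance(d))                # decreasing in Delta
```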
  14. By: Tomaž Košir; Matjaž Omladič
    Abstract: Copula models have become popular in different applications, including the modeling of shocks, in view of their ability to better describe dependence in stochastic systems. The class of maxmin copulas was recently introduced by Omladič and Ružić. It extends the well-known classes of Marshall-Olkin and Marshall copulas by allowing the external shocks to have different effects on the two components of the system. By a reflection (flip) in one of the variables we introduce a new class of bivariate copulas called reflected maxmin (RMM) copulas. We explore their properties and show that symmetric RMM copulas relate to general RMM copulas in the same way that semilinear copulas relate to Marshall copulas. We transfer that relation to maxmin copulas as well. We also characterize the possible diagonal functions of symmetric RMM copulas.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.07646&r=ecm
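For readers who want the flip spelled out: the standard reflection in one variable (stated here for the first variable; the abstract's construction can be read this way up to the choice of variable) is

```latex
% If (U, V) has copula C, then (1 - U, V) has the reflected copula
\[
  C^{r}(u, v) \;=\; v - C(1 - u,\, v), \qquad (u, v) \in [0, 1]^{2},
\]
% which is again a copula; applied to a maxmin copula it yields an RMM copula.
```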
  15. By: Liyuan Jiang; Shuang Zhou; Keren Li; Fangfang Wang; Jie Yang
    Abstract: In this paper, we develop a new nonparametric approach for estimating the risk-neutral density of the asset price and reformulate its estimation into a double-constrained optimization problem. We implement our approach in R and evaluate it using the S&P 500 market option prices from 1996 to 2015. A comprehensive cross-validation study shows that our approach outperforms the existing nonparametric quartic B-spline and cubic spline methods, as well as the parametric method based on the Normal Inverse Gaussian distribution. More specifically, our approach is capable of recovering option prices much better over a broad spectrum of strikes and expirations. While the other methods essentially fail for long-term options (1 year or 2 years to maturity), our approach still works reasonably well. As an application, we use the proposed density estimator to price long-term variance swaps, and our prices match reasonably well with those of variance futures downloaded from the Chicago Board Options Exchange website.
    Date: 2018–08
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1808.05289&r=ecm
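For context, the classical nonparametric starting point that such estimators refine is the Breeden-Litzenberger identity: the risk-neutral density is the discounted second derivative of the call price in the strike, q(K) = exp(rT) * d2C/dK2. A finite-difference sketch on a synthetic Black-Scholes price curve (the paper's estimator instead solves a double-constrained optimization on market data):

```python
import numpy as np
from scipy.stats import norm

S0, r, T, sigma = 100.0, 0.02, 1.0, 0.2
K = np.linspace(50, 160, 221)                  # strike grid
d1 = (np.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * np.sqrt(T))
call = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

h = K[1] - K[0]
q = np.exp(r * T) * (call[2:] - 2 * call[1:-1] + call[:-2]) / h**2
print(q.sum() * h)                             # ~1: the density integrates to one
```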

This nep-ecm issue is ©2018 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.