nep-ecm New Economics Papers
on Econometrics
Issue of 2019‒04‒01
23 papers chosen by
Sune Karlsson
Örebro universitet

  1. Time series models for realized covariance matrices based on the matrix-F distribution By Jiayuan Zhou; Feiyu Jiang; Ke Zhu; Wai Keung Li
  2. Data cloning estimation for asymmetric stochastic volatility models By Lopes Moreira Da Veiga, María Helena; Marín Díazaraque, Juan Miguel; Zea Bermudez, Patrícia de
  3. Nonparametric Predictive Regressions for Stock Return Prediction By Cheng, T.; Gao, J.; Linton, O.
  4. The finite sample properties of Sparse M-estimators with Pseudo-Observations By Benjamin Poignard; Jean-David Fermanian
  5. Identification of Causal Effects with Multiple Instruments: Problems and Some Solutions By Magne Mogstad; Alexander Torgovitsky; Christopher R. Walters
  6. Identification and estimation of triangular models with a binary treatment By Santiago Pereda Fernández
  7. On the Effect of Imputation on the 2SLS Variance By Helmut Farbmacher; Alexander Kann
  8. Hierarchical Time Varying Estimation of a Multi Factor Asset Pricing Model By Richard T. Baillie; Fabio Calonaci; George Kapetanios
  9. Ensemble Methods for Causal Effects in Panel Data Settings By Susan Athey; Mohsen Bayati; Guido Imbens; Zhaonan Qu
  10. What They Did Not Tell You About Algebraic (Non-)Existence, Mathematical (IR-)Regularity and (Non-)Asymptotic Properties of the Dynamic Conditional Correlation (DCC) Model By McAleer, M.J.
  11. Bayesian Structural VAR Models: A New Approach for Prior Beliefs on Impulse Responses By Martin Bruns; Michele Piffer
  12. The Impact of Jumps and Leverage in Forecasting the Co-Volatility of Oil and Gold Futures By Asai, M.; Gupta, R.; McAleer, M.J.
  13. Changing impact of shocks: a time-varying proxy SVAR approach By Haroon Mumtaz; Katerina Petrova
  14. Idiosyncratic shocks: a new procedure for identifying shocks in a VAR with application to the New Keynesian model By Wickens, Michael R.
  15. Machine Learning Methods Economists Should Know About By Susan Athey; Guido Imbens
  16. Long Memory, Realized Volatility and HAR Models By Richard T. Baillie; Fabio Calonaci; Dooyeon Cho; Seunghwa Rho
  17. Identification of Causal Intensive Margin Effects by Difference-in-Difference Methods By Markus Hersche; Elias Moor
  18. Asymmetric competition, risk, and return distribution By Mundt, Philipp; Oh, Ilfan
  19. "The Democracy Effect: a weights-based identification strategy" By Pedro Dal Bo; Andrew Foster; Kenju Kamei
  20. Measuring Differences in Stochastic Network Structure By Eric Auerbach
  21. Identification and Estimation of a Partially Linear Regression Model using Network Data By Eric Auerbach
  22. Does Obamacare Care? A Fuzzy Difference-in-discontinuities Approach By Hector Galindo Silva; Nibene Habib Somé; Guy Tchuente
  23. A regression discontinuity design for categorical ordered running variables with an application to central bank purchases of corporate bonds By Fan Li; Andrea Mercatanti; Taneli Mäkinen; Andrea Silvestrini

  1. By: Jiayuan Zhou; Feiyu Jiang; Ke Zhu; Wai Keung Li
    Abstract: We propose a new Conditional BEKK matrix-F (CBF) model for time-varying realized covariance (RCOV) matrices. This CBF model is capable of capturing heavy-tailed RCOV, an important stylized fact that cannot be handled adequately by Wishart-based models. To further mimic the long memory feature of the RCOV, a special CBF model with a conditional heterogeneous autoregressive (HAR) structure is introduced. Moreover, we give a systematic study of the probabilistic properties and statistical inference of the CBF model, including exploring its stationarity, establishing the asymptotics of its maximum likelihood estimator, and giving some new inner-product-based tests for model checking. In order to handle a large-dimensional RCOV matrix, we construct two reduced CBF models: the variance-target CBF model (for a moderate but fixed-dimensional RCOV matrix) and the factor CBF model (for a high-dimensional RCOV matrix). For both reduced models, the asymptotic theory of the estimated parameters is derived. The importance of our entire methodology is illustrated by simulation results and two real examples.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.12077&r=all
  2. By: Lopes Moreira Da Veiga, María Helena; Marín Díazaraque, Juan Miguel; Zea Bermudez, Patrícia de
    Abstract: The paper proposes the use of data cloning (DC) for the estimation of general asymmetric stochastic volatility (ASV) models with flexible distributions for the standardized returns. These models are able to capture the asymmetric volatility, the leptokurtosis and the skewness of the distribution of returns. Data cloning is a general technique to compute maximum likelihood estimators, along with their asymptotic variances, by means of a Markov chain Monte Carlo (MCMC) methodology. The main aim of this paper is to illustrate how easily general ASV models can be estimated and subsequently studied via data cloning. Changes of specifications, priors and sampling error distributions are done with minor modifications of the code. Using an intensive simulation study, the finite sample properties of the estimators of the parameters are evaluated and compared to those of a benchmark estimator that is also user-friendly. The results show that the proposed estimator is computationally efficient and robust, and can be an effective alternative to existing estimation methods for ASV models. Finally, we use data cloning to estimate the parameters of general ASV models and forecast the one-step-ahead volatility of S&P 500 and FTSE-100 daily returns.
    Keywords: Skewed and Heavy-Tailed distributions; Non-Gaussian Nonlinear Time Series Models; Data Cloning; Asymmetric Volatility
    Date: 2019–03–19
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:28214&r=all
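    The data-cloning recipe described in the entry above can be illustrated on a toy model. The following Python sketch is an assumption-laden stand-in (not the paper's ASV specification): it clones an i.i.d. normal sample K times, runs a random-walk Metropolis sampler on the cloned likelihood, and reads off the approximate MLE and its asymptotic variance from the posterior draws.

```python
# Toy illustration of the data-cloning principle (not the ASV model from the
# paper): clone the data K times, run MCMC on the cloned likelihood, and read
# off the MLE from the posterior mean and its asymptotic variance from
# K * posterior variance.  All names below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=2.0, size=200)    # observed data
K = 20                                          # number of clones

def log_post(theta, y, K):
    """Flat prior + K copies of the normal log-likelihood."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    loglik = np.sum(-0.5 * ((y - mu) / sigma) ** 2 - np.log(sigma))
    return K * loglik                           # cloning multiplies the likelihood

# Random-walk Metropolis on the cloned posterior
theta = np.array([0.0, 0.0])
draws = []
for it in range(20000):
    prop = theta + rng.normal(scale=0.05, size=2)
    if np.log(rng.uniform()) < log_post(prop, y, K) - log_post(theta, y, K):
        theta = prop
    if it >= 5000:                              # discard burn-in
        draws.append(theta.copy())
draws = np.array(draws)

mle = draws.mean(axis=0)                        # approximate MLE of (mu, log_sigma)
asy_var = K * draws.var(axis=0)                 # approximate asymptotic variances
print("MLE:", mle, "asymptotic variances:", asy_var)
```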
  3. By: Cheng, T.; Gao, J.; Linton, O.
    Abstract: We propose two new nonparametric predictive models: the multi-step nonparametric predictive regression model and the multi-step additive predictive regression model, in which the predictive variables are locally stationary time series. We define estimation methods and establish the large sample properties of these methods in both the short-horizon and the long-horizon case. We apply our methods to stock return prediction using a number of standard predictors such as the dividend yield. The empirical results show that all of these models can substantially outperform the traditional linear predictive regression model in terms of both in-sample and out-of-sample performance. In addition, we find that these models can always beat the historical mean model in terms of in-sample fitting, and in some cases also in terms of out-of-sample forecasting. We also compare our methods with the linear regression and historical mean methods according to an economic metric. In particular, we show how our methods can be used to deliver a trading strategy that beats the buy-and-hold strategy (and linear regression based alternatives) over our sample period.
    Keywords: Kernel estimator, locally stationary process, series estimator, stock return prediction
    JEL: C14 C22 G17
    Date: 2019–03–25
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1932&r=all
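    As a schematic companion to the entry above, the sketch below runs a plain one-step Nadaraya-Watson kernel regression of next-period returns on a single predictor with a fixed, hand-picked bandwidth; the paper's multi-step, locally stationary estimators and data-driven bandwidths are not reproduced here.

```python
# Hypothetical sketch of a one-step kernel (Nadaraya-Watson) predictive
# regression of next-period returns on a predictor such as the dividend yield;
# it is not the locally stationary multi-step estimator of the paper.
import numpy as np

def nw_predict(x_hist, r_next, x0, h):
    """Kernel-weighted mean of r_{t+1} given x_t = x0, bandwidth h (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x_hist - x0) / h) ** 2)
    return np.sum(w * r_next) / np.sum(w)

# simulated predictor and returns (for illustration only)
rng = np.random.default_rng(1)
x = rng.normal(size=500)
r = 0.2 * np.sin(x[:-1]) + rng.normal(scale=0.5, size=499)   # r_{t+1} depends on x_t

# one-step forecast of the last return from the preceding history
forecast = nw_predict(x[:-2], r[:-1], x0=x[-2], h=0.5)
print("kernel forecast of next-period return:", forecast)
```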
  4. By: Benjamin Poignard (Osaka University, Graduate School of Engineering Science.); Jean-David Fermanian (CREST; ENSAE.)
    Abstract: We provide finite sample properties of general regularized statistical criteria in the presence of pseudo-observations. Under the restricted strong convexity assumption of the unpenalized loss function and regularity conditions on the penalty, we derive non-asymptotic error bounds on the regularized M-estimator that hold with high probability. This penalized framework with pseudo-observations is then applied to the M-estimation of some usual copula-based models. These theoretical results are supported by an empirical study.
    Keywords: Non-convex regularizer; copulas; pseudo-observations; statistical consistency; exponential bounds.
    Date: 2019–01–04
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2019-01&r=all
  5. By: Magne Mogstad; Alexander Torgovitsky; Christopher R. Walters
    Abstract: Empirical researchers often combine multiple instruments for a single treatment using two stage least squares (2SLS). When treatment effects are heterogeneous, a common justification for including multiple instruments is that the 2SLS estimand can still be interpreted as a positively-weighted average of local average treatment effects (LATEs). This justification requires the well-known monotonicity condition. However, we show that with more than one instrument, this condition can only be satisfied if choice behavior is effectively homogeneous. Based on this finding, we consider the use of multiple instruments under a weaker, partial monotonicity condition. This condition is implied by standard choice theory and allows for richer heterogeneity. First, we show that the weaker partial monotonicity condition can still suffice for the 2SLS estimand to be a positively-weighted average of LATEs. We characterize a simple necessary and sufficient condition that empirical researchers can check to ensure positive weights. Second, we develop a general method for using multiple instruments to identify a wide range of causal parameters other than LATEs. The method allows researchers to combine multiple instruments to obtain more informative empirical conclusions than one would obtain by using each instrument separately.
    JEL: C01 C1 C26
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25691&r=all
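    For reference, a minimal 2SLS implementation with several instruments for one binary treatment is sketched below; the simulated design has a homogeneous effect, so it illustrates the estimand only, not the heterogeneity issues that are the subject of the paper.

```python
# Minimal 2SLS with several instruments for one endogenous treatment,
# illustrating the estimand discussed above; variable names are hypothetical.
import numpy as np

def tsls(y, d, Z, X=None):
    """Two-stage least squares: y outcome, d treatment, Z instruments, X exogenous controls."""
    n = len(y)
    X = np.ones((n, 1)) if X is None else np.column_stack([np.ones(n), X])
    W = np.column_stack([Z, X])                      # instruments + controls
    R = np.column_stack([d, X])                      # regressors
    Pw = W @ np.linalg.solve(W.T @ W, W.T)           # projection onto instrument space
    beta = np.linalg.solve(R.T @ Pw @ R, R.T @ Pw @ y)
    return beta                                      # first element: treatment coefficient

rng = np.random.default_rng(2)
n = 2000
Z = rng.normal(size=(n, 3))                          # three instruments
u = rng.normal(size=n)                               # unobserved confounder
d = (Z @ np.array([0.5, 0.3, 0.2]) + u + rng.normal(size=n) > 0).astype(float)
y = 1.0 * d + u + rng.normal(size=n)                 # true effect 1.0 (homogeneous here)
print("2SLS estimate of the treatment effect:", tsls(y, d, Z)[0])
```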
  6. By: Santiago Pereda Fernández (Bank of Italy)
    Abstract: I study the identification and estimation of a nonseparable triangular model with an endogenous binary treatment. Unlike other studies, I do not impose rank invariance or rank similarity on the unobservable of the outcome equation. Instead, I achieve identification using continuous variation of the instrument and a shape restriction on the distribution of the unobservables, which is modeled with a copula. The latter captures the endogeneity of the model and is one of the components of the marginal treatment effect, making it informative about the effects of extending the treatment to untreated individuals. The estimation is a multi-step procedure based on rotated quantile regression. Finally, I use the estimator to revisit the effects of Work First Job Placements on future earnings.
    Keywords: copula, endogeneity, policy analysis, quantile regression, unconditional distributional effects
    JEL: C31 C36
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1210_19&r=all
  7. By: Helmut Farbmacher; Alexander Kann
    Abstract: Endogeneity and missing data are common issues in empirical research. We investigate how both jointly affect inference on causal parameters. Conventional methods to estimate the variance, which treat the imputed data as if it was observed in the first place, are not reliable. We derive the asymptotic variance and propose a heteroskedasticity robust variance estimator for two-stage least squares which accounts for the imputation. Monte Carlo simulations support our theoretical findings.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.11004&r=all
  8. By: Richard T. Baillie (Michigan State University, USA, King’s College London & Rimini Center for Economic Analysis, Italy); Fabio Calonaci (Queen Mary University of London); George Kapetanios (King’s College London)
    Abstract: This paper presents a new hierarchical methodology for estimating multi factor dynamic asset pricing models. The approach is loosely based on the sequential approach of Fama and MacBeth (1973). However, the hierarchical method uses very flexible bandwidth selection methods in kernel weighted regressions, which can emphasize local, or recent, data and information to derive the most appropriate estimates of risk premia and factor loadings at each point in time. The choice of bandwidths and weighting schemes is achieved by cross validation. This leads to consistent estimators of the risk premia and factor loadings. Also, out-of-sample forecasting for stocks and two large portfolios indicates that the hierarchical method leads to statistically significant improvements in forecast RMSE.
    Keywords: Asset pricing model, Fama-MacBeth model, estimation of beta, kernel weighted regressions, cross validation, time-varying parameter regressions
    JEL: C22 F31 G01 G15
    Date: 2019–01–07
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:879&r=all
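    A bare-bones stand-in for the kernel-weighted regressions described above: time-varying factor loadings are obtained by weighted least squares with a Gaussian kernel in time and a hand-picked bandwidth (the paper selects bandwidths and weighting schemes by cross validation, which is omitted here).

```python
# Schematic kernel-weighted (locally time-varying) regression of an excess
# return on risk factors; a stand-in for the hierarchical procedure described
# above, with a Gaussian kernel in time and a hand-picked bandwidth.
import numpy as np

def tv_betas(y, F, h):
    """Kernel-weighted OLS at each date t: returns a T x (k+1) matrix of loadings."""
    T, k = F.shape
    X = np.column_stack([np.ones(T), F])
    betas = np.empty((T, k + 1))
    times = np.arange(T)
    for t in range(T):
        w = np.exp(-0.5 * ((times - t) / (h * T)) ** 2)   # weights centred at t
        Xw = X * w[:, None]
        betas[t] = np.linalg.solve(Xw.T @ X, Xw.T @ y)    # weighted least squares
    return betas

rng = np.random.default_rng(3)
T = 600
F = rng.normal(size=(T, 3))                               # three factors
true_beta = 0.5 + 0.5 * np.sin(np.linspace(0, 3, T))      # slowly moving loading on factor 1
y = true_beta * F[:, 0] + 0.2 * F[:, 1] + rng.normal(scale=0.5, size=T)
betas = tv_betas(y, F, h=0.05)
print("estimated loading on factor 1 at mid-sample:", betas[T // 2, 1])
```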
  9. By: Susan Athey; Mohsen Bayati; Guido Imbens; Zhaonan Qu
    Abstract: In many prediction problems researchers have found that combinations of prediction methods (“ensembles”) perform better than individual methods. A simple example is random forests, which combines predictions from many regression trees. A striking, and substantially more complex, example is the Netflix Prize competition, where the winning entry combined predictions using a wide variety of conceptually very different models. In macroeconomic forecasting researchers have often found that averaging predictions from different models leads to more accurate forecasts. In this paper we apply these ideas to synthetic control type problems in panel data settings. In this setting a number of conceptually quite different methods have been developed, with some assuming correlations between units that are stable over time, others assuming stable time series patterns common to all units, and others using factor models. With data on state-level GDP for 270 quarters, we focus on three basic approaches to predicting missing values, one from each of these strands of the literature. Rather than try to test the different models against each other and find a true model, we focus on combining predictions based on each of the separate models using ensemble methods. For the ensemble predictor we focus on a weighted average of the three individual methods, with non-negative weights determined through out-of-sample cross-validation.
    JEL: C01 C14
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:25675&r=all
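    The ensemble step described above can be sketched as a small constrained optimization: non-negative weights over three candidate predictors are chosen to minimize squared error on held-out data. The sum-to-one constraint and the simulated "method" columns below are illustrative assumptions, not the paper's synthetic-control predictors.

```python
# Sketch of the ensemble step: choose non-negative weights (summing to one,
# an assumption here) over three candidate predictors by minimising
# out-of-sample squared error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
truth = rng.normal(size=200)                          # held-out target values
preds = np.column_stack([truth + rng.normal(scale=s, size=200) for s in (0.3, 0.6, 1.0)])

def cv_loss(w):
    return np.mean((truth - preds @ w) ** 2)          # held-out mean squared error

w0 = np.ones(3) / 3
res = minimize(cv_loss, w0, method="SLSQP",
               bounds=[(0, 1)] * 3,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print("ensemble weights:", np.round(res.x, 3))
```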
  10. By: McAleer, M.J.
    Abstract: In order to hedge efficiently, persistently high negative covariances or, equivalently, correlations between risky assets and the hedging instruments are needed to mitigate financial risk and subsequent losses. If there is more than one hedging instrument, multivariate covariances and correlations will have to be calculated. As optimal hedge ratios are unlikely to remain constant using high frequency data, it is essential to specify dynamic time-varying models of covariances and correlations. These values can be determined either analytically or numerically on the basis of highly advanced computer simulations. Analytical developments are occasionally promulgated for multivariate conditional volatility models. The primary purpose of the paper is to analyse purported analytical developments for the only multivariate dynamic conditional correlation model to have been developed to date, namely Engle’s (2002) widely-used Dynamic Conditional Correlation (DCC) model. Dynamic models are not straightforward (or even possible) to translate in terms of the algebraic existence, underlying stochastic processes, specification, mathematical regularity conditions, and asymptotic properties of consistency and asymptotic normality, or the lack thereof. The paper presents a critical analysis, discussion, evaluation and presentation of caveats relating to the DCC model, with an emphasis on the numerous dos and don’ts in implementing the DCC and related models in practice.
    Keywords: Hedging, covariances, correlations, existence, mathematical regularity, invertibility, likelihood function, statistical asymptotic properties, caveats, practical implementation
    JEL: C22 C32 C51 C52 C58 C62 G32
    Date: 2019–03–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:115611&r=all
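    For readers unfamiliar with the model under discussion, the textbook DCC(1,1) correlation recursion is written out below for given standardized residuals and scalar parameters; this sketch shows the recursion only and ignores the estimation, existence and regularity issues that the paper is about.

```python
# The DCC(1,1) correlation recursion discussed above (Engle, 2002), written out
# for GARCH-standardised residuals eps and scalar parameters a and b.
import numpy as np

def dcc_correlations(eps, a, b):
    """eps: T x k matrix of standardised residuals; returns T x k x k correlation matrices."""
    T, k = eps.shape
    Q_bar = np.cov(eps, rowvar=False)                 # unconditional target
    Q = Q_bar.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)                     # rescale Q_t to a correlation matrix
        Q = (1 - a - b) * Q_bar + a * np.outer(eps[t], eps[t]) + b * Q
    return R

rng = np.random.default_rng(5)
eps = rng.normal(size=(1000, 2))
R = dcc_correlations(eps, a=0.05, b=0.90)
print("last conditional correlation:", R[-1, 0, 1])
```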
  11. By: Martin Bruns; Michele Piffer
    Abstract: Structural VAR models are frequently identified using sign restrictions on contemporaneous impulse responses. We develop a methodology that can handle a set of prior distributions that is much larger than the one currently allowed for by traditional methods. We then develop an importance sampler that explores the posterior distribution just as conveniently as with traditional approaches. This makes the existing trade-off between careful prior selection and tractable posterior sampling disappear. We use this framework to combine sign restrictions with information on the volatility of the variables in the model, and show that this sharpens posterior inference. Applying the methodology to the oil market, we find that supply shocks have a strong role in driving the dynamics of the price of oil and in explaining the drop in oil production during the Gulf war.
    Keywords: Sign restrictions, Bayesian inference, oil market
    JEL: C32 C11 E50 H62
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1796&r=all
  12. By: Asai, M.; Gupta, R.; McAleer, M.J.
    Abstract: The paper investigates the impact of jumps in forecasting co-volatility in the presence of leverage effects. We modify the jump-robust covariance estimator of Koike (2016), such that the estimated matrix is positive definite. Using this approach, we can disentangle the estimates of the integrated co-volatility matrix and jump variations from the quadratic covariation matrix. Empirical results for daily crude oil and gold futures show that the co-jumps of the two futures have significant impacts on future co-volatility, but that the impact is negligible at weekly and monthly forecasting horizons.
    Keywords: Commodity Markets, Co-volatility, Forecasting, Jump, Leverage Effects, Realized Covariance, Threshold Estimation.
    JEL: C32 C33 C58 Q02
    Date: 2019–03–01
    URL: http://d.repec.org/n?u=RePEc:ems:eureir:115614&r=all
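    A generic threshold ("truncation") decomposition of realized covariance into continuous and jump parts is sketched below as intuition for the entry above; it is not the Koike (2016) estimator, nor the positive-definite modification used in the paper.

```python
# Generic threshold realized-covariance sketch: intraday returns whose
# magnitude exceeds a cutoff are attributed to jumps, the rest to continuous
# co-volatility.  The cutoff rule and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n = 390                                               # intraday returns for two assets
r = rng.normal(scale=0.001, size=(n, 2))
r[100] += np.array([0.02, 0.015])                     # inject one co-jump

rcov = r.T @ r                                        # realized covariance (quadratic covariation)
thresh = 4 * np.std(r, axis=0)                        # crude per-asset truncation level
keep = np.all(np.abs(r) <= thresh, axis=1)
cont_cov = r[keep].T @ r[keep]                        # continuous (truncated) part
jump_cov = rcov - cont_cov                            # jump variation

print("co-jump contribution to covariance:", jump_cov[0, 1])
```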
  13. By: Haroon Mumtaz (Queen Mary University of London); Katerina Petrova (University of St. Andrews)
    Abstract: In this paper we extend the Bayesian Proxy VAR to incorporate time variation in the parameters. A Gibbs sampling algorithm is provided to approximate the posterior distributions of the model's parameters. Using the proposed algorithm, we estimate the time-varying effects of taxation shocks in the US and show that there is limited evidence for a structural change in the tax multiplier.
    Keywords: Time-Varying parameters, Stochastic volatility, Proxy VAR, tax shocks
    JEL: C2 C11 E3
    Date: 2018–11–07
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:875&r=all
  14. By: Wickens, Michael R.
    Abstract: A key issue in VAR analysis is how best to identify economic shocks. The paper discusses the problems that the standard methods pose and proposes a new type of shock. Named an idiosyncratic shock, it is designed to identify the component in each VAR residual associated with the corresponding VAR variable. The procedure is applied to a calibrated New Keynesian model and to a VAR based on the same variables and using US data. The resulting impulse response functions are compared with those from standard procedures.
    Keywords: Macroeconomic Shocks; New Keynesian Model; VAR analysis
    JEL: C32 E32
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:cpr:ceprdp:13613&r=all
  15. By: Susan Athey; Guido Imbens
    Abstract: We discuss the relevance of the recent Machine Learning (ML) literature for economics and econometrics. First we discuss the differences in goals, methods and settings between the ML literature and the traditional econometrics and statistics literatures. Then we discuss some specific methods from the machine learning literature that we view as important for empirical researchers in economics. These include supervised learning methods for regression and classification, unsupervised learning methods, as well as matrix completion methods. Finally, we highlight newly developed methods at the intersection of ML and econometrics, methods that typically perform better than either off-the-shelf ML or more traditional econometric methods when applied to particular classes of problems, problems that include causal inference for average treatment effects, optimal policy estimation, and estimation of the counterfactual effect of price changes in consumer choice models.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.10075&r=all
  16. By: Richard T. Baillie (Michigan State University, USA, Kings College, University of London, UK & Rimini Center for Economic Analysis, Italy); Fabio Calonaci (Queen Mary University of London); Dooyeon Cho (Sungkyunkwan University, Republic of Korea); Seunghwa Rho (Emory University, USA)
    Abstract: The presence of long memory in Realized Volatility (RV) is a widespread stylized fact. The origins of long memory in RV have been attributed to jumps, structural breaks, non-linearities, or pure long memory. An important development has been the Heterogeneous Autoregressive (HAR) model and its extensions. This paper assesses the separate roles of fractionally integrated long memory models, extended HAR models and time-varying parameter HAR models. We find that the long memory parameter is often important in addition to the HAR components.
    Keywords: Long memory, Restricted ARFIMA, Realized volatility, HAR model, Time varying parameters
    JEL: C22 C31
    Date: 2019–01–08
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:881&r=all
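    The HAR regression referred to above has a simple closed form: realized volatility is regressed on its lagged daily value and its weekly (5-day) and monthly (22-day) averages. A plain-OLS sketch on simulated data follows; the series itself is a placeholder.

```python
# Standard HAR-RV regression: RV_t on lagged daily, weekly and monthly
# average RV, estimated by OLS on a simulated persistent positive series.
import numpy as np

def har_ols(rv):
    """OLS of RV_t on lagged daily, weekly (5-day) and monthly (22-day) average RV."""
    T = len(rv)
    rows, y = [], []
    for t in range(22, T):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    X, y = np.array(rows), np.array(y)
    return np.linalg.lstsq(X, y, rcond=None)[0]       # (const, daily, weekly, monthly)

rng = np.random.default_rng(7)
x = np.zeros(1500)
for t in range(1, 1500):
    x[t] = 0.95 * x[t - 1] + rng.normal(scale=0.2)
rv = np.exp(x)                                        # persistent, positive stand-in for RV
print("HAR coefficients:", np.round(har_ols(rv), 3))
```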
  17. By: Markus Hersche (ETH Zurich, Switzerland); Elias Moor (ETH Zurich, Switzerland)
    Abstract: This paper discusses identification of causal intensive margin effects. The causal intensive margin effect is defined as the treatment effect on the outcome of individuals with a positive outcome irrespective of whether they are treated or not (always-takers or participants). A potential selection problem arises when conditioning on positive outcomes, even if treatment is randomly assigned. We propose to use difference-in-difference methods - conditional on positive outcomes - to estimate causal intensive margin effects. We derive sufficient conditions under which the difference-in-difference methods identify the causal intensive margin effect in a setting with random treatment.
    Keywords: Intensive margin effect, difference-in-difference, corner solution models, potential outcomes, policy evaluation
    JEL: C21 C24 C18
    Date: 2018–11
    URL: http://d.repec.org/n?u=RePEc:eth:wpswif:18-302&r=all
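    A minimal sketch of the estimator family discussed above: a two-period difference-in-differences computed only on units with positive outcomes in both periods. The data layout and variable names are illustrative assumptions.

```python
# Two-period difference-in-differences restricted to observations with
# positive outcomes in both periods (the "intensive margin" sample).
import numpy as np
import pandas as pd

def did_intensive(df):
    """df columns: treated (0/1), y_pre, y_post; keep units with y>0 in both periods."""
    pos = df[(df.y_pre > 0) & (df.y_post > 0)]
    delta = pos.y_post - pos.y_pre
    return delta[pos.treated == 1].mean() - delta[pos.treated == 0].mean()

rng = np.random.default_rng(8)
n = 5000
treated = rng.integers(0, 2, n)
y_pre = np.maximum(0, rng.normal(2, 1, n))
y_post = np.maximum(0, y_pre + 0.5 * treated + rng.normal(0, 1, n))
df = pd.DataFrame({"treated": treated, "y_pre": y_pre, "y_post": y_post})
print("intensive-margin DiD estimate:", did_intensive(df))
```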
  18. By: Mundt, Philipp; Oh, Ilfan
    Abstract: We propose a parsimonious statistical model of firm competition where structural differences in the strength of competitive pressure and the magnitude of return fluctuations above and below the system-wide benchmark translate into a skewed Subbotin or asymmetric exponential power (AEP) distribution of returns to capital. Empirical evidence from US data illustrates that the AEP distribution compares favorably to popular alternative models such as the symmetric or asymmetric Laplace density in terms of goodness of fit when entry and exit dynamics of markets are taken into account.
    Keywords: return on capital,maximum entropy,asymmetric Subbotin distribution
    JEL: C16 D21 L10 E10 C12
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:zbw:bamber:145&r=all
  19. By: Pedro Dal Bo; Andrew Foster; Kenju Kamei
    Abstract: "Dal Bó, Foster and Putterman (2010) show experimentally that the effect of a policy may be greater when it is democratically selected than when it is exogenously imposed. In this paper we propose a new and simpler identification strategy to measure this democracy effect. We derive the distribution of the statistic of the democracy effect, and apply the new strategy to the data from Dal Bó, Foster and Putterman (2010) and data from a new real-effort experiment in which subjects’ payoffs do not depend on the effort of others. The new identification strategy is based on calculating the average behavior under democracy by weighting the behavior of each type of voter by its prevalence in the whole population (and not conditional on the vote outcome). We show that use of these weights eliminates selection effects under certain conditions. Application of this method to the data in Dal Bó, Foster and Putterman (2010) confirms the presence of the democracy effect in that experiment, but no such effect is found for the real-effort experiment."
    Date: 2019
    URL: http://d.repec.org/n?u=RePEc:bro:econwp:2019-4&r=all
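    The weighting idea described above can be shown in a few lines: the average behaviour under democracy is computed by weighting each voter type's behaviour by its share in the whole population rather than by its share conditional on the vote outcome. All numbers below are hypothetical placeholders.

```python
# Illustration of the weights-based identification idea: voter types are
# weighted by their prevalence in the whole population, not by their share
# among groups where the policy won the vote.  Numbers are hypothetical.
import pandas as pd

cells = pd.DataFrame({
    "voter_type": ["voted_for", "voted_against"],
    "pop_share":  [0.55, 0.45],   # prevalence in the whole subject population
    "behaviour":  [0.72, 0.58],   # average behaviour under the democratically chosen policy
})
democracy_avg = (cells["pop_share"] * cells["behaviour"]).sum()

exogenous_avg = 0.60              # hypothetical average when the policy is imposed exogenously
print("democracy effect estimate:", round(democracy_avg - exogenous_avg, 3))
```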
  20. By: Eric Auerbach
    Abstract: How can one determine whether the realized differences between two stochastic networks are statistically significant? This paper considers a two-sample goodness-of-fit testing problem for network data in which the null hypothesis is that the corresponding entries of the networks' adjacency matrices are identically distributed. It first outlines a randomization test for the null hypothesis that controls size in finite samples based on any network statistic. It then focuses on two particular network statistics that produce tests powerful against a large class of alternative hypotheses. The statistics are based on the magnitude of the difference between the networks' adjacency matrices as measured by the two-two and infinity-one operator norms. The power properties of the tests are examined analytically, in simulation, and through an empirical demonstration. A key finding is that while the test based on the infinity-one norm requires relatively new tools from the semidefinite programming literature to implement, it can be substantially more powerful than that based on the two-two norm for the kinds of sparse and degree-heterogeneous networks common in economics.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.11117&r=all
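    A sketch of the randomization test described above, using only the two-two (spectral) norm statistic; the infinity-one-norm statistic, which requires semidefinite programming tools, is omitted here.

```python
# Two-sample randomisation test for networks: under the null that corresponding
# entries are identically distributed, swapping corresponding (i,j) entries of
# the two adjacency matrices leaves the distribution of the statistic unchanged.
import numpy as np

def spectral_stat(A, B):
    return np.linalg.norm(A - B, ord=2)                # largest singular value of A - B

def randomization_test(A, B, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    obs = spectral_stat(A, B)
    iu = np.triu_indices(n, k=1)
    count = 0
    for _ in range(n_perm):
        swap = rng.integers(0, 2, size=len(iu[0])).astype(bool)
        A2, B2 = A.copy(), B.copy()
        a, b = A[iu][swap], B[iu][swap]
        A2[iu[0][swap], iu[1][swap]] = b
        B2[iu[0][swap], iu[1][swap]] = a
        A2 = np.triu(A2, 1) + np.triu(A2, 1).T          # keep matrices symmetric
        B2 = np.triu(B2, 1) + np.triu(B2, 1).T
        count += spectral_stat(A2, B2) >= obs
    return count / n_perm                               # randomisation p-value

rng = np.random.default_rng(9)
n = 60
A = (rng.uniform(size=(n, n)) < 0.10).astype(float)
B = (rng.uniform(size=(n, n)) < 0.18).astype(float)     # denser network
A = np.triu(A, 1) + np.triu(A, 1).T
B = np.triu(B, 1) + np.triu(B, 1).T
print("p-value:", randomization_test(A, B))
```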
  21. By: Eric Auerbach
    Abstract: I study a regression model in which one covariate is an unknown function of a latent driver of link formation in a network. Rather than specify or fit a parametric network formation model, I introduce a new method based on matching pairs of agents with similar columns of the squared adjacency matrix, the ijth entry of which contains the number of other agents linked to both agents i and j. The intuition behind this approach is that for a large class of network formation models the columns of this matrix characterize all of the identifiable information about individual linking behavior. In the paper, I first describe the model and formalize this intuition. I then introduce estimators for the parameters of the regression model and characterize their large sample properties.
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1903.09679&r=all
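    The matching step described above can be sketched as follows: agents are paired by similarity of their columns of the squared adjacency matrix (common-neighbour counts), and the regression is run on within-pair differences so that the unknown network term roughly differences out. The link-formation design and the distance scaling below are illustrative simplifications.

```python
# Pair agents by similar columns of the squared adjacency matrix, then run the
# regression on within-pair differences.  The network model is a placeholder.
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(10)
n = 300
latent = rng.uniform(size=n)                                   # latent driver of link formation
P = np.minimum(latent[:, None], latent[None, :])               # hypothetical link probabilities
A = (rng.uniform(size=(n, n)) < P).astype(float)
A = np.triu(A, 1) + np.triu(A, 1).T                            # undirected, no self-links

x = rng.normal(size=n)
y = 2.0 * x + np.sin(3 * latent) + rng.normal(scale=0.3, size=n)   # outcome with network term

S = (A @ A) / n                                                # scaled common-neighbour counts
dist = cdist(S, S)                                             # distances between columns of A @ A
np.fill_diagonal(dist, np.inf)
match = dist.argmin(axis=1)                                    # nearest "network neighbour"

dy = y - y[match]
dx = x - x[match]
beta_hat = np.sum(dx * dy) / np.sum(dx * dx)                   # pairwise-differenced slope
print("estimated regression coefficient:", beta_hat)
```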
  22. By: Hector Galindo Silva; Nibene Habib Somé; Guy Tchuente
    Abstract: This paper explores the use of fuzzy regression-discontinuity design in the context where multiple treatments are applied at the threshold. It derives the conditions for identification of the effects of one of the treatments. The identification result shows that, under the strong assumption that the change in the probability of treatment at the cutoff is equal across treatments, a difference-in-discontinuities estimator identifies the treatment effect of interest. Point identification of the treatment effect using fuzzy difference-in-discontinuities is impossible if the changes in the treatment probabilities are not equal across treatments. Estimable bounds on the treatment effect and a modification of the fuzzy difference-in-discontinuities estimator are proposed under milder assumptions. These results suggest some caution when applying before-and-after methods in the presence of fuzzy discontinuities. Using data from the National Health Interview Survey (NHIS), we apply this new identification strategy to evaluate the causal effect of the Affordable Care Act (ACA) on older Americans’ health care access and utilization. Our results suggest that the implementation of the Affordable Care Act has (1) led to a 5% increase in the hospitalization rate of elderly Americans, (2) increased the probability of delaying care for cost reasons by 3.6%, and (3) exacerbated cost-related barriers to follow-up and continuity of care (7% more elderly could not afford prescriptions, 7% more could not see a specialist, and 5.5% more could not afford a follow-up visit) as a result of the ACA.
    Keywords: Fuzzy Difference-in-Discontinuities, Identification, Regression-Discontinuity Design, Affordable Care Act
    JEL: C13 I12 I13 I18
    Date: 2019–02–15
    URL: http://d.repec.org/n?u=RePEc:col:000416:017211&r=all
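    A stylized fuzzy difference-in-discontinuities calculation is sketched below: the change (post versus pre policy) in the outcome jump at the cutoff divided by the corresponding change in the treatment jump, with local means in a fixed window standing in for local polynomial estimation. All variable names and the single-treatment design are illustrative assumptions.

```python
# Fuzzy difference-in-discontinuities sketch: ratio of the post-minus-pre
# change in the outcome discontinuity to the change in the treatment
# discontinuity at the cutoff, using local means in a +/- h window.
import numpy as np
import pandas as pd

def disc(df, col, cutoff, h):
    """Jump in the mean of `col` at the cutoff, estimated from a +/- h window."""
    above = df[(df.running >= cutoff) & (df.running < cutoff + h)][col].mean()
    below = df[(df.running < cutoff) & (df.running >= cutoff - h)][col].mean()
    return above - below

def fuzzy_diff_in_disc(df, cutoff=65, h=2):
    dy = disc(df[df.post == 1], "y", cutoff, h) - disc(df[df.post == 0], "y", cutoff, h)
    dd = disc(df[df.post == 1], "d", cutoff, h) - disc(df[df.post == 0], "d", cutoff, h)
    return dy / dd

rng = np.random.default_rng(11)
n = 20000
age = rng.uniform(55, 75, n)
post = rng.integers(0, 2, n)
d = ((age >= 65) & (post == 1) & (rng.uniform(size=n) < 0.6)).astype(float)  # fuzzy take-up
y = 0.3 * d + 0.01 * age + rng.normal(size=n)
df = pd.DataFrame({"running": age, "post": post, "d": d, "y": y})
print("fuzzy difference-in-discontinuities estimate:", fuzzy_diff_in_disc(df))
```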
  23. By: Fan Li (Duke University); Andrea Mercatanti (Bank of Italy); Taneli Mäkinen (Bank of Italy); Andrea Silvestrini (Bank of Italy)
    Abstract: We propose a regression discontinuity design which can be employed when assignment to a treatment is determined by an ordinal variable. The proposal first requires an ordered probit model for the ordinal running variable to be estimated. The estimated probability of being assigned to a treatment is then adopted as a latent continuous running variable and used to identify a covariate-balanced subsample around the threshold. Assuming the local unconfoundedness of the treatment in the subsample, an estimate of the effect of the programme is obtained by employing a weighted estimator of the average treatment effect. We apply our methodology to estimate the causal effect of the corporate sector purchase programme of the European Central Bank on bond spreads.
    Keywords: program evaluation, regression discontinuity design, asset purchase programs
    JEL: C21 G18
    Date: 2019–03
    URL: http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1213_19&r=all
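    The two steps described above are sketched below under simplifying assumptions: (i) an ordered probit of the ordinal running variable on covariates, fitted by maximum likelihood, delivers each unit's estimated probability of falling on the treated side of the threshold; (ii) within a window of that probability, a simple inverse-probability-weighted contrast stands in for the paper's covariate-balanced weighted estimator.

```python
# Ordered-probit-based RDD sketch: the estimated probability of being above the
# treatment threshold serves as a continuous running variable, and a weighted
# contrast is computed near the threshold under local unconfoundedness.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(12)
n, k = 3000, 2
X = rng.normal(size=(n, k))                                     # covariates
latent = X @ np.array([1.0, -0.5]) + rng.normal(size=n)
rating = np.digitize(latent, bins=[-1.5, -0.5, 0.5, 1.5])       # ordinal running variable, 0..4
treated = (rating >= 3).astype(float)                           # assignment rule at the threshold
y = 0.4 * treated + X[:, 0] + rng.normal(size=n)

# Step (i): ordered probit by maximum likelihood; cutpoints kept increasing
def negll(params):
    beta, c0, log_steps = params[:k], params[k], params[k + 1:]
    cuts = c0 + np.concatenate([[0.0], np.cumsum(np.exp(log_steps))])
    xb = X @ beta
    upper = np.append(cuts, np.inf)[rating] - xb
    lower = np.append(-np.inf, cuts)[rating] - xb
    return -np.sum(np.log(np.clip(norm.cdf(upper) - norm.cdf(lower), 1e-12, None)))

res = minimize(negll, np.zeros(k + 4), method="BFGS")
beta_hat = res.x[:k]
cuts_hat = res.x[k] + np.concatenate([[0.0], np.cumsum(np.exp(res.x[k + 1:]))])
p_treat = 1.0 - norm.cdf(cuts_hat[2] - X @ beta_hat)            # P(rating >= 3 | covariates)

# Step (ii): inverse-probability-weighted contrast in a window around the threshold
window = (p_treat > 0.3) & (p_treat < 0.7)
w1 = treated[window] / p_treat[window]
w0 = (1 - treated[window]) / (1 - p_treat[window])
ate = np.average(y[window], weights=w1) - np.average(y[window], weights=w0)
print("weighted treatment-effect estimate near the threshold:", ate)
```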

This nep-ecm issue is ©2019 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.