nep-ecm New Economics Papers
on Econometrics
Issue of 2020‒10‒26
thirty papers chosen by
Sune Karlsson
Örebro universitet

  1. Time-Varying Instrumental Variable Estimation By Liudas Giraitis; George Kapetanios; Massimiliano Marcellino
  2. Heteroscedasticity test of high-frequency data with jumps and microstructure noise By Qiang Liu; Zhi Liu; Chuanhai Zhang
  3. Encompassing Tests for Value at Risk and Expected Shortfall Multi-Step Forecasts based on Inference on the Boundary By Timo Dimitriadis; Xiaochun Liu; Julie Schnaitmann
  4. Statistical Inference for the Tangency Portfolio in High Dimension By Karlsson, Sune; Mazur, Stepan; Muhinyuza, Stanislas
  5. Fixed Effects Binary Choice Models with Three or More Periods By Laurent Davezies; Xavier D'Haultfoeuille; Martin Mugnier
  6. Noise-Induced Randomization in Regression Discontinuity Designs By Dean Eckles; Nikolaos Ignatiadis; Stefan Wager; Han Wu
  7. Inference for Large-Scale Linear Systems with Known Coefficients By Zheng Fang; Andres Santos; Azeem M. Shaikh; Alexander Torgovitsky
  8. Semiparametric Testing with Highly Persistent Predictors By Bas Werker; Bo Zhou
  9. Using Survey Information for Improving the Density Nowcasting of US GDP with a Focus on Predictive Performance during Covid-19 Pandemic By Cem Cakmakli; Hamza Demircan
  10. Nonparametric Bounds on Treatment Effects with Imperfect Instruments By Ban, Kyunghoon; Kedagni, Desire
  11. Ordinal-response models for irregularly spaced transactions: A forecasting exercise By Dimitrakopoulos, Stefanos; Tsionas, Mike G.; Aknouche, Abdelhakim
  12. Recent Developments on Factor Models and its Applications in Econometric Learning By Jianqing Fan; Kunpeng Li; Yuan Liao
  13. Realized Volatility Forecasting Based on Dynamic Quantile Model Averaging By Zongwu Cai; Chaoqun Ma; Xianhua Mi
  14. Bounding Average Returns to Schooling using Unconditional Moment Restrictions By Kedagni, Desire; Li, Lixiong; Mourifie, Ismael
  15. A Panel Data Model with Generalized Higher-Order Network Effects By Badi H. Baltagi; Sophia Ding; Peter H. Egger
  16. Edgeworth Expansions for Multivariate Random Sums By Javed, Farrukh; Loperfido, Nicola; Mazur, Stepan
  17. How to remove the testing bias in CoV-2 statistics By Klaus Wälde
  18. Multivariate cointegration and temporal aggregation: some further simulation results By Jesus Otero; Theodore Panagiotidis; Georgios Papapanagiotou
  19. Ranking-based variable selection for high-dimensional data By Baranowski, Rafal; Chen, Yining; Fryzlewicz, Piotr
  20. Comment on Gouriéroux, Monfort, Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA Models By Bernd Funovits
  21. A Class of Time-Varying Vector Moving Average (infinity) Models By Yayi Yan; Jiti Gao; Bin Peng
  22. A Generalised Stochastic Volatility in Mean VAR. An Updated Algorithm By Haroon Mumtaz
  23. Spillovers of Program Benefits with Mismeasured Networks By Lina Zhang
  24. Further results on the estimation of dynamic panel logit models with fixed effects By Hugo Kruiniger
  25. Testing homogeneity in dynamic discrete games in finite samples By Federico A. Bugni; Jackson Bunting; Takuya Ura
  26. Data Science: A Primer for Economists By Gomez-Ruano, Gerardo
  27. Identification and Estimation of A Rational Inattention Discrete Choice Model with Bayesian Persuasion By Moyu Liao
  28. Manipulation-Robust Regression Discontinuity Design By Takuya Ishihara; Masayuki Sawada
  29. On the Existence of Conditional Maximum Likelihood Estimates of the Binary Logit Model with Fixed Effects By Martin Mugnier
  30. Hot Spots, Cold Feet, and Warm Glow: Identifying Spatial Heterogeneity in Willingness to Pay By Dennis Guignet; Christopher Moore; Haoluan Wang

  1. By: Liudas Giraitis (Queen Mary University of London); George Kapetanios (King's College London); Massimiliano Marcellino (Bocconi University)
    Abstract: We develop nonparametric instrumental variable estimation and inferential theory for econometric models with possibly endogenous regressors whose coefficients can vary over time, either deterministically or stochastically, together with time-varying and uniform versions of the standard Hausman exogeneity test. After deriving the asymptotic properties of the proposed procedures, we assess their finite sample performance by means of a set of Monte Carlo experiments, and illustrate their application with an empirical example on the Phillips curve.
    Keywords: Instrumental variables, time-varying parameters, endogeneity, Hausman test, non-parametric methods, Phillips curve.
    JEL: C14 C26 C51
    Date: 2020–08–17
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:911&r=all
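    A minimal numpy sketch of the kind of kernel-weighted IV estimation described above, for a scalar just-identified model y_t = beta_t * x_t + u_t with instrument z_t; the Gaussian kernel and the bandwidth rule h*T are illustrative assumptions, not the paper's exact estimator:

        import numpy as np

        def tv_iv(y, x, z, h=0.2):
            # Kernel-weighted IV estimate of beta_t at each t:
            # beta_t = sum_s w_s z_s y_s / sum_s w_s z_s x_s,
            # with Gaussian weights w_s centred at t and bandwidth h*T.
            T = len(y)
            s = np.arange(T)
            betas = np.empty(T)
            for t in range(T):
                w = np.exp(-0.5 * ((s - t) / (h * T)) ** 2)
                betas[t] = np.sum(w * z * y) / np.sum(w * z * x)
            return betas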
  2. By: Qiang Liu; Zhi Liu; Chuanhai Zhang
    Abstract: In this paper, we are interested in testing whether the volatility process is constant over a given time span, using high-frequency data in the presence of jumps and microstructure noise. Based on estimators of integrated volatility and spot volatility, we propose a nonparametric way to depict the discrepancy between local variation and global variation. We show that our proposed test statistic converges to a standard normal distribution if the volatility is constant, and diverges to infinity otherwise. Simulation studies verify the theoretical results and show a good finite sample performance of the test procedure. We also apply our test procedure to real high-frequency financial data. We observe that in almost half of the days tested, the assumption of constant volatility within a day is violated, largely because stock prices during the opening and closing periods are highly volatile and account for a relatively large proportion of intraday variation.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.07659&r=all
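    As a rough illustration of the local-versus-global comparison described above, the following sketch splits a day of returns into blocks and compares block-level realized variance with the day's average; it ignores jumps and microstructure noise, which the paper treats formally, and the choice of 13 blocks is arbitrary:

        import numpy as np

        def variance_ratios(returns, n_blocks=13):
            # Per-observation realized variance within each block, divided by
            # the global per-observation realized variance; ratios far from 1
            # point to intraday heteroscedasticity.
            returns = np.asarray(returns)
            blocks = np.array_split(returns, n_blocks)
            local = np.array([np.mean(b ** 2) for b in blocks])
            return local / np.mean(returns ** 2)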
  3. By: Timo Dimitriadis; Xiaochun Liu; Julie Schnaitmann
    Abstract: We propose forecast encompassing tests for the Expected Shortfall (ES) jointly with the Value at Risk (VaR) based on flexible link (or combination) functions. Our setup allows testing encompassing for convex forecast combinations and for link functions which preclude crossings of the combined VaR and ES forecasts. As the tests based on these link functions involve parameters which are on the boundary of the parameter space under the null hypothesis, we derive and base our tests on nonstandard asymptotic theory on the boundary. Our simulation study shows that the encompassing tests based on our new link functions outperform tests based on unrestricted linear link functions for one-step and multi-step forecasts. We further illustrate the potential of the proposed tests in a real data analysis for forecasting VaR and ES of the S&P 500 index.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.07341&r=all
  4. By: Karlsson, Sune (Örebro University School of Business); Mazur, Stepan (Örebro University School of Business); Muhinyuza, Stanislas (Department of Mathematics, Stockholm University)
    Abstract: In this paper, we study the distributional properties of the tangency portfolio (TP) weights assuming a normal distribution of the logarithmic returns. We derive a stochastic representation of the TP weights that fully describes their distribution. Under a high-dimensional asymptotic regime, i.e. the dimension of the portfolio, k, and the sample size, n, approach infinity such that k/n → c ∈ (0, 1), we deliver the asymptotic distribution of the TP weights. Moreover, we consider tests about the elements of the TP and derive the asymptotic distribution of the test statistic under the null and alternative hypotheses. In a simulation study, we compare the asymptotic distribution of the TP weights with the exact finite sample density. We also compare the high-dimensional asymptotic test with an exact small sample test. We document a good performance of the asymptotic approximations except for small sample sizes combined with c close to one. In an empirical study, we analyze the TP weights in portfolios containing stocks from the S&P 500 index.
    Keywords: Tangency portfolio; high-dimensional asymptotics; hypothesis testing
    JEL: C12 C13 G11
    Date: 2020–10–09
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2020_010&r=all
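    The object under study is the plug-in estimator of the tangency portfolio weights, proportional to S^{-1}(m - rf*1); a minimal sketch, with the normalization to weights summing to one as an assumption:

        import numpy as np

        def tangency_weights(returns, rf=0.0):
            # returns: (n, k) matrix of return observations.
            m = returns.mean(axis=0)            # sample mean vector
            S = np.cov(returns, rowvar=False)   # sample covariance matrix
            raw = np.linalg.solve(S, m - rf)    # S^{-1}(m - rf * 1)
            return raw / raw.sum()              # normalize weights to sum to one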
  5. By: Laurent Davezies; Xavier D'Haultfoeuille; Martin Mugnier
    Abstract: We consider fixed effects binary choice models with a fixed number of periods T and without a large support condition on the regressors. If the time-varying unobserved terms are i.i.d. with known distribution F, Chamberlain (2010) shows that the common slope parameter is point-identified if and only if F is logistic. However, his proof only considers T=2. We show that the result does not actually generalize to T>2: the common slope parameter and some parameters of the distribution of the shocks can be identified when F belongs to a family including the logit distribution. Identification is based on a conditional moment restriction. We give necessary and sufficient conditions on the covariates for this restriction to identify the parameters. In addition, we show that under mild conditions, the corresponding GMM estimator reaches the semiparametric efficiency bound when T=3.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.08108&r=all
  6. By: Dean Eckles; Nikolaos Ignatiadis; Stefan Wager; Han Wu
    Abstract: Regression discontinuity designs are used to estimate causal effects in settings where treatment is determined by whether an observed running variable crosses a pre-specified threshold. While the resulting sampling design is sometimes described as akin to a locally randomized experiment in a neighborhood of the threshold, standard formal analyses do not make reference to probabilistic treatment assignment and instead identify treatment effects via continuity arguments. Here we propose a new approach to identification, estimation, and inference in regression discontinuity designs that exploits measurement error in the running variable. Under an assumption that the measurement error is exogenous, we show how to consistently estimate causal effects using a class of linear estimators that weight treated and control units so as to balance a latent variable of which the running variable is a noisy measure. We find this approach to facilitate identification of both familiar estimands from the literature, as well as policy-relevant estimands that correspond to the effects of realistic changes to the existing treatment assignment rule. We demonstrate the method with a study of retention of HIV patients and evaluate its performance using simulated data and a regression discontinuity design artificially constructed from test scores in early childhood.
    Date: 2020–04
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2004.09458&r=all
  7. By: Zheng Fang; Andres Santos; Azeem M. Shaikh; Alexander Torgovitsky
    Abstract: This paper considers the problem of testing whether there exists a non-negative solution to a possibly under-determined system of linear equations with known coefficients. This hypothesis testing problem arises naturally in a number of settings, including random coefficient, treatment effect, and discrete choice models, as well as a class of linear programming problems. As a first contribution, we obtain a novel geometric characterization of the null hypothesis in terms of identified parameters satisfying an infinite set of inequality restrictions. Using this characterization, we devise a test that requires solving only linear programs for its implementation, and thus remains computationally feasible in the high-dimensional applications that motivate our analysis. The asymptotic size of the proposed test is shown to equal at most the nominal level uniformly over a large class of distributions that permits the number of linear equations to grow with the sample size.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.08568&r=all
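    The null hypothesis, existence of a non-negative solution to Ax = b, can be checked in the noise-free case with a single feasibility linear program; a sketch using scipy (the paper's actual contribution is a test that additionally accounts for sampling error):

        import numpy as np
        from scipy.optimize import linprog

        def nonneg_solution_exists(A, b):
            # Feasibility LP: minimize 0 subject to A x = b, x >= 0.
            # status 0 means a feasible point was found; 2 means infeasible.
            n = A.shape[1]
            res = linprog(c=np.zeros(n), A_eq=A, b_eq=b,
                          bounds=[(0, None)] * n, method="highs")
            return res.status == 0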
  8. By: Bas Werker; Bo Zhou
    Abstract: We address the issue of semiparametric efficiency in the bivariate regression problem with a highly persistent predictor, where the joint distribution of the innovations is regarded as an infinite-dimensional nuisance parameter. Using a structural representation of the limit experiment and exploiting invariance relationships therein, we construct invariant point-optimal tests for the regression coefficient of interest. This approach naturally leads to a family of feasible tests based on the component-wise ranks of the innovations that can gain considerable power relative to existing tests under non-Gaussian innovation distributions, while behaving equivalently under Gaussianity. When an i.i.d. assumption on the innovations is appropriate for the data at hand, our tests exploit the available efficiency gains. Moreover, we show by simulation that our test remains well behaved under some forms of conditional heteroskedasticity.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.08291&r=all
  9. By: Cem Cakmakli (Koc University, Istanbul, Turkey); Hamza Demircan (Central Bank of the Republic of Turkey, Istanbul, Turkey)
    Abstract: We provide a methodology that efficiently combines the statistical models of nowcasting with survey information to improve the (density) nowcasting of US real GDP. Specifically, we use the conventional dynamic factor model together with a stochastic volatility component as the baseline statistical model. We augment the model with information from survey expectations by aligning the first and second moments of the predictive distribution implied by this baseline model with those extracted from the survey information at various horizons. Results indicate that survey information carries valuable information beyond the baseline model for nowcasting GDP. While the mean survey predictions deliver valuable information during extreme events such as the Covid-19 pandemic, the variation in the survey participants’ predictions, often used as a measure of ‘ambiguity’, conveys crucial information beyond the mean of those predictions for capturing the tail behavior of the GDP distribution.
    Keywords: Dynamic factor model; Stochastic volatility; Survey of Professional Forecasters; Disagreement; Predictive density evaluation; Bayesian inference.
    JEL: C32 C38 C53 E32 E37
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:koc:wpaper:2016&r=all
  10. By: Ban, Kyunghoon; Kedagni, Desire
    Abstract: This paper extends the identification results in Nevo and Rosen (2012) to nonparametric models. We derive nonparametric bounds on the average treatment effect when an imperfect instrument is available. As in Nevo and Rosen (2012), we assume that the correlation between the imperfect instrument and the unobserved latent variables has the same sign as the correlation between the endogenous variable and the latent variables. We show that the monotone treatment selection and monotone instrumental variable restrictions, introduced by Manski and Pepper (2000, 2009), jointly imply this assumption. We introduce the concept of comonotone instrumental variable, which also satisfies this assumption. Moreover, we show how the assumption that the imperfect instrument is less endogenous than the treatment variable can help tighten the bounds. We also use the monotone treatment response assumption to get tighter bounds. The identified set can be written in the form of intersection bounds, which is more conducive to inference. We illustrate our methodology using the National Longitudinal Survey of Young Men data to estimate returns to schooling.
    Date: 2020–10–12
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:202010120700001113&r=all
  11. By: Dimitrakopoulos, Stefanos; Tsionas, Mike G.; Aknouche, Abdelhakim
    Abstract: We propose a new model for transaction data that accounts jointly for the time duration between transactions and for the discreteness of the intraday stock price changes. Duration is assumed to follow a stochastic conditional duration model, while price discreteness is captured by an autoregressive moving average ordinal-response model with stochastic volatility and time-varying parameters. The proposed model also allows for endogeneity of the trade durations as well as for leverage and in-mean effects. In a purely Bayesian framework we conduct a forecasting exercise using multiple high-frequency transaction data sets and show that the proposed model produces better point and density forecasts than competing models.
    Keywords: Ordinal-response models, irregularly spaced data, stochastic conditional duration, time varying ARMA-SV model, Bayesian MCMC, model confidence set.
    JEL: C1 C11 C15 C4 C41 C5 C51 C53 C58
    Date: 2020–10–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:103250&r=all
  12. By: Jianqing Fan; Kunpeng Li; Yuan Liao
    Abstract: This paper provides a selective survey of recent developments in factor models and their applications in statistical learning. We focus on the perspective of the low-rank structure of factor models and, in particular, draw attention to estimating the model from the low-rank recovery point of view. The survey consists of three main parts: the first reviews new factor estimators based on modern techniques for recovering low-rank structures of high-dimensional models. The second discusses statistical inference for several factor-augmented models and applications in econometric learning models. The final part summarizes new developments dealing with unbalanced panels from the matrix completion perspective.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.10103&r=all
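    The workhorse behind much of this literature is the principal-components estimator of an r-factor model; a self-contained sketch via the SVD, with the usual F'F/n = I normalization as a convention:

        import numpy as np

        def pc_factors(X, r):
            # X: (n, p) panel; returns factors F (n, r) and loadings L (p, r)
            # such that X is approximated by F @ L.T after demeaning.
            X = X - X.mean(axis=0)
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            F = np.sqrt(X.shape[0]) * U[:, :r]               # normalized factors
            L = (Vt[:r].T * s[:r]) / np.sqrt(X.shape[0])     # loadings
            return F, L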
  13. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Chaoqun Ma (School of Business, Hunan University, Changsha, Hunan 410082, China); Xianhua Mi (School of Business, Hunan University, Changsha, Hunan 410082, China)
    Abstract: Heterogeneity, volatility persistence, leverage effects and fat right tails are the most documented stylized features of realized volatility (RV), and they introduce substantial difficulties into econometric modeling that relies on rigid distributional assumptions. To accommodate these features without making such assumptions, we study the quantile forecasting of RV by proposing five novel dynamic model averaging strategies designed to combine individual quantile models, termed dynamic quantile model averaging (DQMA). The empirical results of analyzing high-frequency price data of the S&P 500 index clearly indicate that the stylized facts of RV can be captured by different quantiles, with stronger effects at high-level quantiles. Therefore, DQMA can not only reduce the risk of model uncertainty but also generate more accurate and robust out-of-sample quantile forecasts than those of individual heterogeneous autoregressive quantile models.
    Keywords: Dynamic model averaging; Model uncertainty; Fat tails; Heterogeneity; Quantile regression; Realized volatility; Time-varying parameters.
    JEL: C12 C13 C14 C23
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202016&r=all
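    One plausible way to combine quantile models dynamically, offered here as an assumption rather than as one of the paper's five strategies, is to update model weights with exponentially discounted pinball losses:

        import numpy as np

        def pinball(y, q, tau):
            # Standard quantile (pinball) loss at level tau; use it to fill
            # losses[t, m] = pinball(y[t], forecast[t, m], tau) below.
            return np.maximum(tau * (y - q), (tau - 1) * (y - q))

        def dma_weights(losses, delta=0.95):
            # losses: (T, M) pinball losses of M candidate quantile models.
            # Forgetting-factor update in the spirit of dynamic model averaging.
            T, M = losses.shape
            w = np.full(M, 1.0 / M)
            for t in range(T):
                w = w ** delta * np.exp(-losses[t])   # forget, then reweight
                w /= w.sum()
            return w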
  14. By: Kedagni, Desire; Li, Lixiong; Mourifie, Ismael
    Abstract: In the last 20 years, the bounding approach for the average treatment effect (ATE) has been developing on the theoretical side; empirical work, however, has lagged far behind theory in this area. One main reason is that, in practice, traditional bounding methods fall into two extreme cases: (i) on the one hand, the bounds are too wide to be informative, which happens, in general, when the instrumental variable (IV) has little variation; (ii) on the other hand, the bounds cross, in which case the researcher learns nothing about the parameter of interest other than that the IV restrictions are rejected. This usually happens when the IV has a rich support and the IV restriction imposed in the model (full, quantile or mean independence) is too stringent, as illustrated in Ginther (2000). In this paper, we provide sharp bounds on the ATE using only a finite set of unconditional moment restrictions, which is a weaker version of mean independence. We revisit Ginther's (2000) return to schooling application using our bounding approach and derive informative bounds on the average returns to schooling in the US.
    Date: 2018–12–29
    URL: http://d.repec.org/n?u=RePEc:isu:genstf:201812290800001086&r=all
  15. By: Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, 426 Eggers Hall, Syracuse, NY 13244); Sophia Ding (ETH Zurich); Peter H. Egger (ETH Zurich and CEPR)
    Abstract: Many data situations require the consideration of network effects among the cross-sectional units of observation. In this paper, we present a generalized panel model which accounts for two features: (i) three types of network effects on the right-hand side of the model, namely through the weighted dependent variable, weighted exogenous variables, and weighted error components, and (ii) higher-order network effects, due to ex-ante unknown network-decay functions or the presence of multiplex (or multi-layer) networks, among all of those. We outline the model and the basic assumptions, and present simulation results.
    Keywords: Spatial and Network Interdependence, Panel Data, Higher-Order Network Effects
    JEL: C23 C33 C34
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:max:cprwps:233&r=all
  16. By: Javed, Farrukh (Örebro University School of Business); Loperfido, Nicola (Università degli Studi di Urbino "Carlo Bo"); Mazur, Stepan (Örebro University School of Business)
    Abstract: The sum of a random number of independent and identically distributed random vectors has a distribution which is not analytically tractable, in the general case. The problem has been addressed by means of asymptotic approximations embedding the number of summands in a stochastically increasing sequence. Another approach relies on fitting flexible and tractable parametric, multivariate distributions, as for example finite mixtures. In this paper we investigate both approaches within the framework of Edgeworth expansions. We derive a general formula for the fourth-order cumulants of the random sum of independent and identically distributed random vectors and show that the above-mentioned asymptotic approach does not necessarily lead to valid asymptotic normal approximations. We address the problem by means of Edgeworth expansions. Both theoretical and empirical results suggest that mixtures of two multivariate normal distributions with proportional covariance matrices satisfactorily fit data generated from random sums where the counting random variable and the random summands are Poisson and multivariate skew-normal, respectively.
    Keywords: Edgeworth expansion; Fourth cumulant; Random sum; Skew-normal
    JEL: C10
    Date: 2020–10–07
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2020_009&r=all
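    The object of study is straightforward to simulate, which is useful for eyeballing how far a random sum is from normality; a sketch with a Poisson count and multivariate normal summands (the paper's summands are skew-normal, which numpy does not provide directly):

        import numpy as np

        def random_sums(n_draws, lam, mean, cov, seed=None):
            # Draw S = X_1 + ... + X_N with N ~ Poisson(lam) and X_i i.i.d.
            # multivariate normal; S = 0 whenever N = 0.
            rng = np.random.default_rng(seed)
            mean = np.asarray(mean)
            out = np.zeros((n_draws, mean.size))
            counts = rng.poisson(lam, size=n_draws)
            for i, n in enumerate(counts):
                if n > 0:
                    out[i] = rng.multivariate_normal(mean, cov, size=n).sum(axis=0)
            return out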
  17. By: Klaus Wälde (Johannes Gutenberg University)
    Abstract: BACKGROUND. Public health measures and private behaviour are based on reported numbers of SARS-CoV-2 infections. Some argue that testing influences the confirmed number of infections. OBJECTIVES/METHODS. Do time series on reported infections and the number of tests allow one to draw conclusions about actual infection numbers? A SIR model is presented where the true numbers of susceptible, infectious and removed individuals are unobserved. Testing is also modelled. RESULTS. Official confirmed infection numbers are likely to be biased and cannot be compared over time. The bias occurs because the reasons for testing differ (e.g. testing by symptoms, representative testing, or testing of travellers). The paper illustrates the bias and works out the effect of the number of tests on the number of reported cases. The paper also shows that the positive rate (the ratio of positive tests to the total number of tests) is uninformative in the presence of non-representative testing. CONCLUSIONS. A severity index for epidemics is proposed that is comparable over time. This index is based on Covid-19 cases and can be obtained if the reason for testing is known.
    Keywords: Covid-19, number of tests, reported number of CoV-2 infections, (correcting the) bias, SIR model, unbiased epidemiological severity index
    Date: 2020–10–09
    URL: http://d.repec.org/n?u=RePEc:jgu:wpaper:2021&r=all
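    A toy discrete-time SIR with symptom-based testing illustrates the bias discussed above: reported cases move with the testing regime even when true infections do not. This is a stylized sketch, not the paper's model, and all parameter values below are made up:

        import numpy as np

        def sir_with_testing(beta=0.3, gamma=0.1, i0=1e-4, days=120,
                             tests_per_capita=None, p_sympt=0.6):
            S, I = 1.0 - i0, i0
            true_new, reported = [], []
            if tests_per_capita is None:
                tests_per_capita = np.linspace(0.001, 0.01, days)  # testing ramps up
            for t in range(days):
                new = beta * S * I
                S, I = S - new, I + new - gamma * I
                true_new.append(new)
                # crude non-representative testing: positives capped by tests run
                reported.append(min(tests_per_capita[t], I * p_sympt))
            return np.array(true_new), np.array(reported)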
  18. By: Jesus Otero (Universidad del Rosario); Theodore Panagiotidis (Department of Economics, University of Macedonia); Georgios Papapanagiotou (Department of Economics, University of Macedonia)
    Abstract: We perform Monte Carlo simulations to study the effect of increasing the frequency of observations and the data span on the Johansen (1988, 1995) maximum likelihood cointegration testing approach, as well as on the bootstrap and wild bootstrap implementations of the method developed by Cavaliere et al. (2012, 2014). Considering systems with three and four variables, we find that when both the data span and the frequency vary, the power of the tests depends more on the sample length. We illustrate our findings by investigating the existence of long-run equilibrium relationships among four indicator prices of coffee.
    Keywords: Monte Carlo, Span, Power, Cointegration, Coffee prices.
    JEL: C13
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:mcd:mcddps:2020_05&r=all
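    The simulations revolve around the Johansen trace test; a wrapper of the kind one would loop over spans and sampling frequencies, assuming statsmodels is available:

        import numpy as np
        from statsmodels.tsa.vector_ar.vecm import coint_johansen

        def johansen_rank(data, k_ar_diff=1):
            # Estimated cointegration rank: number of trace statistics that
            # exceed their 5% critical values (column 1 of the table).
            res = coint_johansen(data, det_order=0, k_ar_diff=k_ar_diff)
            return int(np.sum(res.trace_stat > res.trace_stat_crit_vals[:, 1]))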
  19. By: Baranowski, Rafal; Chen, Yining; Fryzlewicz, Piotr
    Abstract: We propose a ranking-based variable selection (RBVS) technique that identifies important variables influencing the response in high-dimensional data. RBVS uses subsampling to identify the covariates that appear nonspuriously at the top of a chosen variable ranking. We study the conditions under which such a set is unique, and show that it can be recovered successfully from the data by our procedure. Unlike many existing high-dimensional variable selection techniques, among all relevant variables, RBVS distinguishes between important and unimportant variables, and aims to recover only the important ones. Moreover, RBVS does not require model restrictions on the relationship between the response and the covariates, and, thus, is widely applicable in both parametric and nonparametric contexts. Lastly, we illustrate the good practical performance of the proposed technique by means of a comparative simulation study. The RBVS algorithm is implemented in rbvs, a publicly available R package.
    Keywords: variable screening; subset selection; bootstrap; stability selection.
    JEL: C1
    Date: 2020–07–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:90233&r=all
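    A bare-bones version of the RBVS idea, rank covariates on random subsamples and record how often each lands near the top, might look as follows; the marginal-correlation ranking and the top-10 cutoff are arbitrary choices here, and the rbvs R package implements the actual procedure:

        import numpy as np

        def top_ranking_frequency(X, y, n_sub=100, frac=0.5, k_top=10, seed=None):
            # Frequency with which each covariate appears in the top-k of the
            # |correlation| ranking across random subsamples.
            rng = np.random.default_rng(seed)
            n, p = X.shape
            freq = np.zeros(p)
            for _ in range(n_sub):
                idx = rng.choice(n, size=int(frac * n), replace=False)
                corr = np.abs(np.corrcoef(X[idx].T, y[idx])[-1, :-1])
                freq[np.argsort(-corr)[:k_top]] += 1
            return freq / n_sub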
  20. By: Bernd Funovits
    Abstract: This comment points out a serious flaw in the article "Gouriéroux, Monfort, Renne (2019): Identification and Estimation in Non-Fundamental Structural VARMA Models" with regard to mirroring complex-valued roots with Blaschke polynomial matrices. Moreover, it discusses the (non-)feasibility of the proposed method (even if the handling of Blaschke transformations were not prohibitive) for cross-sectional dimensions greater than two and vector moving average (VMA) polynomial matrices of degree greater than one.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.02711&r=all
  21. By: Yayi Yan; Jiti Gao; Bin Peng
    Abstract: Multivariate time series analyses are widely encountered in practical studies, e.g., modelling policy transmission mechanisms and measuring connectedness between economic agents. To better capture the dynamics, this paper proposes a class of multivariate dynamic models with time-varying coefficients, which have a general time-varying vector moving average (VMA) representation, and nest, for instance, time-varying vector autoregression (VAR), time-varying vector autoregressive moving average (VARMA), and so forth as special cases. The paper then develops a unified estimation method for the unknown quantities before an asymptotic theory for the proposed estimators is established. In the empirical study, we investigate the transmission mechanism of monetary policy using U.S. data, and uncover a fall in the volatilities of exogenous shocks. In addition, we find that (i) monetary policy shocks have less influence on inflation before and during the so-called Great Moderation, (ii) inflation is more anchored recently, and (iii) the long-run level of inflation is below, but quite close to, the Federal Reserve's target of two percent after the beginning of the Great Moderation period.
    Keywords: multivariate time series model, nonparametric kernel estimation, trending stationarity
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2020-39&r=all
  22. By: Haroon Mumtaz (Queen Mary University of London)
    Abstract: In this note we present an updated algorithm to estimate the VAR with stochastic volatility proposed in Mumtaz (2018). The model is re-written so that some of the Metropolis-Hastings steps are avoided.
    Keywords: VAR, Stochastic volatility in mean, error covariance
    JEL: C3 C11 E3
    Date: 2020–07–05
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:908&r=all
  23. By: Lina Zhang
    Abstract: In studies of program evaluation under network interference, correctly measuring spillovers of the intervention is crucial for making appropriate policy recommendations. However, increasing empirical evidence has shown that network links are often measured with error. This paper explores the identification and estimation of treatment and spillover effects when the network is mismeasured. I propose a novel method to nonparametrically point-identify the treatment and spillover effects when two network observations are available. The method can deal with a large network with missing or misreported links and possesses several attractive features: (i) it allows heterogeneous treatment and spillover effects; (ii) it does not rely on modelling network formation or its misclassification probabilities; and (iii) it accommodates samples that are correlated in overlapping ways. A semiparametric estimation approach is proposed, and the analysis is applied to study the spillover effects of an insurance information program on insurance adoption decisions.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.09614&r=all
  24. By: Hugo Kruiniger
    Abstract: Kitazawa (2013, 2016) showed that the common parameters in the panel logit AR(1) model with strictly exogenous covariates and fixed effects are estimable at the root-n rate using the Generalized Method of Moments. Honoré and Weidner (2020) extended his results in various directions: they found additional moment conditions for the logit AR(1) model and also considered estimation of logit AR(p) models with p>1. In this note we prove a conjecture in their paper and show that 2^T − 2T of their moment functions for the logit AR(1) model are linearly independent and span the set of valid moment functions, which is a (2^T − 2T)-dimensional linear subspace of the 2^T-dimensional vector space of real-valued functions over the outcomes y ∈ {0,1}^T.
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.03382&r=all
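    For concreteness, the dimension of the space of valid moment functions stated above evaluates, for short panels, to:

        \[
          2^{T} - 2T \;=\;
          \begin{cases}
            2^{3} - 6 = 2,   & T = 3,\\
            2^{4} - 8 = 8,   & T = 4,\\
            2^{5} - 10 = 22, & T = 5.
          \end{cases}
        \]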
  25. By: Federico A. Bugni; Jackson Bunting; Takuya Ura
    Abstract: The literature on dynamic discrete games often assumes that the conditional choice probabilities and the state transition probabilities are homogeneous across markets and over time. We refer to this as the "homogeneity assumption" in dynamic discrete games. This assumption enables empirical studies to estimate the game's structural parameters by pooling data from multiple markets and many time periods. In this paper, we propose a hypothesis test to evaluate whether the homogeneity assumption holds in the data. Our hypothesis test is the result of an approximate randomization test, implemented via a Markov chain Monte Carlo (MCMC) algorithm. We show that our hypothesis test becomes valid as the (user-defined) number of MCMC draws diverges, for any fixed number of markets, time periods, and players. We apply our test to the empirical study of the U.S. Portland cement industry in Ryan (2012).
    Date: 2020–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2010.02297&r=all
  26. By: Gomez-Ruano, Gerardo
    Abstract: Recent years have seen an explosion in the demand for data science skills. In this paper, I introduce the reader to the term, point out the technological jumps that allowed the rise of its methods, and give an overview of the most common ones. I close by pointing out the strengths and weaknesses of the corresponding tools, as well as their complementarities with economic analysis.
    Keywords: Data Science; Statistics; Quantitative Methods; Labor Market; Technological Change; Numerical Methods; Econometric Methods
    JEL: A12 C01 C13 C14 C45 C55 C87 C88 C99 J24 O31 Y20
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:102928&r=all
  27. By: Moyu Liao
    Abstract: This paper studies the semi-parametric identification and estimation of a rational inattention model with Bayesian persuasion. The identification requires the observation of a cross-section of market-level outcomes. The empirical content of the model can be characterized by three moment conditions. A two-step estimation procedure is proposed to avoid computational complexity in the structural model. In the empirical application, I study the persuasion effect of Fox News in the 2000 presidential election. Welfare analysis shows that persuasion will not influence voters with a high school education but will generate higher dispersion in the welfare of voters with a partial college education and decrease the dispersion in the welfare of voters with a bachelor's degree.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.08045&r=all
  28. By: Takuya Ishihara; Masayuki Sawada
    Abstract: Regression discontinuity designs (RDDs) may not deliver reliable results if units manipulate their running variables. It is commonly believed that imprecise manipulations are harmless and that diagnostic tests detect precise manipulations. However, we demonstrate that RDDs may fail to point-identify treatment effects in the presence of imprecise manipulation, and that not all harmful manipulations are detectable. To formalize these claims, we propose a class of RDDs with harmless or detectable manipulations over locally randomized running variables as manipulation-robust RDDs. The conditions for manipulation-robust RDDs may be intuitively verified using the institutional background. We demonstrate the verification process in case studies of applications that use the McCrary (2008) density test. The restrictions of manipulation-robust RDDs generate partial identification results that are robust to possible manipulation. We apply the partial identification result to a controversy regarding the incumbency margin study of U.S. House of Representatives elections. The results confirm the robustness of the original conclusion of Lee (2008).
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.07551&r=all
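    A crude frequency version of the density check that motivates these designs, much simpler than the McCrary (2008) estimator used in the case studies, compares observation counts just left and right of the cutoff; the bandwidth h is an arbitrary choice here:

        import numpy as np

        def density_jump_ratio(running, cutoff, h):
            # Ratio of counts in [cutoff, cutoff+h) to [cutoff-h, cutoff);
            # values far from 1 hint at bunching of the running variable.
            running = np.asarray(running)
            left = np.sum((running >= cutoff - h) & (running < cutoff))
            right = np.sum((running >= cutoff) & (running < cutoff + h))
            return right / max(left, 1)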
  29. By: Martin Mugnier
    Abstract: By exploiting McFadden (1974)'s results on conditional logit estimation, we show that there exists a one-to-one mapping between the existence and uniqueness of conditional maximum likelihood estimates of the binary logit model with fixed effects and the spatial configuration of the data points. Our results extend those in Albert and Anderson (1984) for the cross-sectional case and can be used to build a simple algorithm that detects spurious estimates in finite samples. Importantly, we exhibit an instance of artificial data for which Stata's clogit command returns spurious estimates.
    Date: 2020–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2009.09998&r=all
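    The cross-sectional benchmark the paper extends is Albert and Anderson's (1984) separation condition; a feasibility-LP sketch that checks for complete separation (under which the logit MLE diverges), not the fixed-effects mapping derived in the paper:

        import numpy as np
        from scipy.optimize import linprog

        def completely_separated(X, y):
            # Complete separation: some b with (2*y_i - 1) * x_i'b > 0 for all i,
            # which after rescaling b is equivalent to >= 1 for all i.
            s = (2 * np.asarray(y) - 1)[:, None] * np.asarray(X)
            n, p = s.shape
            res = linprog(c=np.zeros(p), A_ub=-s, b_ub=-np.ones(n),
                          bounds=[(None, None)] * p, method="highs")
            return res.status == 0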
  30. By: Dennis Guignet; Christopher Moore; Haoluan Wang
    Abstract: We propose a novel extension of existing semi-parametric approaches to examine spatial patterns of willingness to pay (WTP) and status quo effects, including tests for global spatial autocorrelation, spatial interpolation techniques, and local hotspot analysis. We are the first to formally account for the fact that observed WTP values are estimates, and to incorporate the statistical precision of those estimates into our spatial analyses. We demonstrate our two-step methodology using data from a stated preference survey that elicited values for improvements in water quality in the Chesapeake Bay and lakes in the surrounding watershed. Our methodology offers a flexible way to identify potential spatial patterns of welfare impacts, with the ultimate goal of facilitating more accurate benefit-cost and distributional analyses, both in terms of defining the appropriate extent of the market and in interpolating values within that market.
    Keywords: Bayesian; hotspot analysis; semi-parametric; spatial heterogeneity; stated preference; water quality
    JEL: C11 C14 Q51 Q53
    Date: 2020–03
    URL: http://d.repec.org/n?u=RePEc:nev:wpaper:wp202001&r=all

This nep-ecm issue is ©2020 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.