nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒08‒28
34 papers chosen by
Sune Karlsson, Örebro universitet


  1. Bootstrapping Nonstationary Autoregressive Processes with Predictive Regression Models By Christis Katsouris
  2. Asymptotic equivalence of Principal Component and Quasi Maximum Likelihood estimators in Large Approximate Factor Models By Matteo Barigozzi
  3. Tuning-free testing of factor regression against factor-augmented sparse alternatives By Jad Beyhum; Jonas Striaukas
  4. Synthetic Control Methods by Density Matching under Implicit Endogeneity By Masahiro Kato; Akari Ohda; Masaaki Imaizumi; Kenichiro McAlinn
  5. Predictability Tests Robust against Parameter Instability By Christis Katsouris
  6. On the Efficiency of Finely Stratified Experiments By Yuehao Bai; Jizhou Liu; Azeem M. Shaikh; Max Tabord-Meehan
  7. Fast, Order-Invariant Bayesian Inference in VARs using the Eigendecomposition of the Error Covariance Matrix By Ping Wu; Gary Koop
  8. Dynamic Regression Discontinuity: A Within-Design Approach By Francesco Ruggieri
  9. Large sample properties of GMM estimators under second-order identification By Hugo Kruiniger
  10. Identification Robust Inference for the Risk Premium in Term Structure Models By Frank Kleibergen; Lingwei Kong
  11. Inference in Experiments with Matched Pairs and Imperfect Compliance By Yuehao Bai; Hongchang Guo; Azeem M. Shaikh; Max Tabord-Meehan
  12. One-step nonparametric instrumental regression using smoothing splines By Jad Beyhum; Elia Lapenta; Pascal Lavergne
  13. Latent Gaussian dynamic factor modeling and forecasting for multivariate count time series By Younghoon Kim; Zachary F. Fisher; Vladas Pipiras
  14. The Yule-Frisch-Waugh-Lovell Theorem for Linear Instrumental Variables Estimation By Deepankar Basu
  15. BayesMultiMode: Bayesian Mode Inference in R By Nalan Basturk; Jamie Cross; Peter de Knijff; Lennart Hoogerheide; Paul Labonne; Herman K van Dijk
  16. Weak (Proxy) Factors Robust Hansen-Jagannathan Distance For Linear Asset Pricing Models By Lingwei Kong
  17. Generalized Autoregressive Gamma Processes By Bruno Feunou
  18. Source Condition Double Robust Inference on Functionals of Inverse Problems By Andrew Bennett; Nathan Kallus; Xiaojie Mao; Whitney Newey; Vasilis Syrgkanis; Masatoshi Uehara
  19. Functional Differencing in Networks By Stéphane Bonhomme; Kevin Dano
  20. Using Probabilistic Stated Preference Analyses to Understand Actual Choices By Romuald Meango
  21. Real-Time Detection of Local No-Arbitrage Violations By Torben G. Andersen; Viktor Todorov; Bo Zhou
  22. Assessing the Impact of Policy Changes on a Nowcast By Mr. Sam Ouliaris; Ms. Celine Rochon
  23. Scenario Sampling for Large Supermodular Games By Bryan S. Graham; Andrin Pelican
  24. Exact simulation of Poisson-Dirichlet distribution and generalised gamma process By Dassios, Angelos; Zhang, Junyi
  25. Estimation of an Order Book Dependent Hawkes Process for Large Datasets By Luca Mucciante; Alessio Sancetta
  26. Group-Heterogeneous Changes-in-Changes and Distributional Synthetic Controls By Songnian Chen; Junlong Feng
  27. Quantal Response Equilibrium with a Continuum of Types: Characterization and Nonparametric Identification By Evan Friedman; Duarte Gonçalves
  28. Scenario Sampling for Large Supermodular Games By Bryan S. Graham; Andrin Pelican
  29. Bayesian Forecasting of Stock Returns on the JSE using Simultaneous Graphical Dynamic Linear Models By Nelson Kyakutwika; Bruce Bartlett
  30. Reconciling the Theory of Factor Sequences By Philipp Gersing; Christoph Rust; Manfred Deistler
  31. Random Walk Forecasts of Stationary Processes Have Low Bias By Kurt Graden Lunsford; Kenneth D. West
  32. How to Construct Monthly VAR Proxies Based on Daily Futures Market Surprises By Lutz Kilian
  33. A Non-Parametric Estimation of Productivity with Idiosyncratic and Aggregate Shocks: The Role of Research and Development (R&D) and Corporate Tax By Bournakis, Ioannis; Tsionas, Mike G.
  34. Meta-Analysis of Social Science Research: A Practitioner’s Guide By Irsova, Zuzana; Doucouliagos, Hristos; Havranek, Tomas; Stanley, T. D.

  1. By: Christis Katsouris
    Abstract: We establish the asymptotic validity of the bootstrap-based IVX estimator proposed by Phillips and Magdalinos (2009) for the predictive regression model parameter based on a local-to-unity specification of the autoregressive coefficient which covers both nearly nonstationary and nearly stationary processes. A mixed Gaussian limit distribution is obtained for the bootstrap-based IVX estimator. The statistical validity of the theoretical results is illustrated by Monte Carlo experiments for various statistical inference problems.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.14463&r=ecm
  2. By: Matteo Barigozzi
    Abstract: We prove that in an approximate factor model for an $n$-dimensional vector of stationary time series the factor loadings estimated via Principal Components are asymptotically equivalent, as $n\to\infty$, to those estimated by Quasi Maximum Likelihood. Both estimators are, in turn, also asymptotically equivalent, as $n\to\infty$, to the unfeasible Ordinary Least Squares estimator we would have if the factors were observed. We also show that the usual sandwich form of the asymptotic covariance matrix of the Quasi Maximum Likelihood estimator is asymptotically equivalent to the simpler asymptotic covariance matrix of the unfeasible Ordinary Least Squares. This provides a simple way to estimate asymptotic confidence intervals for the Quasi Maximum Likelihood estimator without the need to estimate the Hessian and Fisher information matrices, whose expressions are very complex. All our results hold in the general case in which the idiosyncratic components are cross-sectionally heteroskedastic as well as serially and cross-sectionally weakly correlated.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.09864&r=ecm
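    [Editor's illustration — not the authors' code.] The asymptotic equivalence above can be previewed numerically in a one-factor toy model: Principal Components loadings (leading eigenvector of the sample covariance, scaled by sqrt(n)) line up almost perfectly with the unfeasible OLS loadings obtained by regressing each series on the true factor. All names and sizes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 500                      # cross-section size, sample length
lam = rng.normal(size=n)             # true loadings, one factor
f = rng.normal(size=T)               # true factor
X = np.outer(f, lam) + 0.5 * rng.normal(size=(T, n))  # T x n panel

# Principal Components: loadings = leading eigenvector of the
# n x n sample covariance, scaled by sqrt(n)
vals, vecs = np.linalg.eigh(X.T @ X / T)
lam_pc = vecs[:, -1] * np.sqrt(n)
if lam_pc @ lam < 0:                 # resolve the sign indeterminacy
    lam_pc = -lam_pc

# Unfeasible OLS: regress each series on the (here, observed) true factor
lam_ols = X.T @ f / (f @ f)

print(np.corrcoef(lam_pc, lam_ols)[0, 1])  # very close to 1
```

    With n = 100 and T = 500 the two loading vectors are nearly collinear, consistent with the equivalence the paper proves as n grows.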
  3. By: Jad Beyhum; Jonas Striaukas
    Abstract: This study introduces a bootstrap test of the validity of factor regression within a high-dimensional factor-augmented sparse regression model that integrates factor and sparse regression techniques. The test provides a means to assess the suitability of the classical (dense) factor regression model compared to alternative (sparse plus dense) factor-augmented sparse regression models. Our proposed test does not require tuning parameters, eliminates the need to estimate covariance matrices, and offers simplicity in implementation. The validity of the test is theoretically established under time-series dependence. Through simulation experiments, we demonstrate the favorable finite sample performance of our procedure. Moreover, using the FRED-MD dataset, we apply the test and reject the adequacy of the classical factor regression model when the dependent variable is inflation but not when it is industrial production. These findings offer insights into selecting appropriate models for high-dimensional datasets.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.13364&r=ecm
  4. By: Masahiro Kato; Akari Ohda; Masaaki Imaizumi; Kenichiro McAlinn
    Abstract: Synthetic control methods (SCMs) have become a crucial tool for causal inference in comparative case studies. The fundamental idea of SCMs is to estimate counterfactual outcomes for a treated unit by using a weighted sum of observed outcomes from untreated units. The accuracy of the synthetic control (SC) is critical for estimating the causal effect, and hence, the estimation of SC weights has been the focus of much research. In this paper, we first point out that existing SCMs suffer from an implicit endogeneity problem, which is the correlation between the outcomes of untreated units and the error term in the model of a counterfactual outcome. We show that this problem yields a bias in the causal effect estimator. We then propose a novel SCM based on density matching, assuming that the density of outcomes of the treated unit can be approximated by a weighted average of the densities of untreated units (i.e., a mixture model). Based on this assumption, we estimate SC weights by matching moments of treated outcomes and the weighted sum of moments of untreated outcomes. Our proposed method has three advantages over existing methods. First, our estimator is asymptotically unbiased under the assumption of the mixture model. Second, due to the asymptotic unbiasedness, we can reduce the mean squared error for counterfactual prediction. Third, our method generates full densities of the treatment effect, not only expected values, which broadens the applicability of SCMs. We provide experimental results to demonstrate the effectiveness of our proposed method.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.11127&r=ecm
  5. By: Christis Katsouris
    Abstract: We consider Wald type statistics designed for joint predictability and structural break testing based on the instrumentation method of Phillips and Magdalinos (2009). We show that under the assumption of nonstationary predictors: (i) the tests based on the OLS estimators converge to a nonstandard limiting distribution which depends on the nuisance coefficient of persistence; and (ii) the tests based on the IVX estimators can filter out the persistence under certain parameter restrictions due to the supremum functional. These results contribute to the literature on joint predictability and parameter instability testing by providing analytically tractable asymptotic theory when taking nonstationary regressors into account. We compare the finite-sample size and power performance of the Wald tests under both estimators via extensive Monte Carlo experiments. Critical values are computed using standard bootstrap inference methodologies. We illustrate the usefulness of the proposed framework to test for predictability under the presence of parameter instability by examining the stock market predictability puzzle for the US equity premium.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.15151&r=ecm
  6. By: Yuehao Bai; Jizhou Liu; Azeem M. Shaikh; Max Tabord-Meehan
    Abstract: This paper studies the efficient estimation of a large class of treatment effect parameters that arise in the analysis of experiments. Here, efficiency is understood to be with respect to a broad class of treatment assignment schemes for which the marginal probability that any unit is assigned to treatment equals a pre-specified value, e.g., one half. Importantly, we do not require that treatment status is assigned in an i.i.d. fashion, thereby accommodating complicated treatment assignment schemes that are used in practice, such as stratified block randomization and matched pairs. The class of parameters considered are those that can be expressed as the solution to a restriction on the expectation of a known function of the observed data, including possibly the pre-specified value for the marginal probability of treatment assignment. We show that this class of parameters includes, among other things, average treatment effects, quantile treatment effects, local average treatment effects as well as the counterparts to these quantities in experiments in which the unit is itself a cluster. In this setting, we establish two results. First, we derive a lower bound on the asymptotic variance of estimators of the parameter of interest in the form of a convolution theorem. Second, we show that the naïve method of moments estimator achieves this bound on the asymptotic variance quite generally if treatment is assigned using a "finely stratified" design. By a "finely stratified" design, we mean experiments in which units are divided into groups of a fixed size and a proportion within each group is assigned to treatment uniformly at random so that it respects the restriction on the marginal probability of treatment assignment. In this sense, "finely stratified" experiments lead to efficient estimators of treatment effect parameters "by design" rather than through ex post covariate adjustment.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.15181&r=ecm
  7. By: Ping Wu (Department of Economics, University of Strathclyde); Gary Koop (Department of Economics, University of Strathclyde)
    Abstract: Bayesian inference in Vector Autoregressions (VARs) involves manipulating large matrices which appear in the posterior (or conditional posterior) of the VAR coefficients. For large VARs, the computational time involved with these manipulations becomes so large as to make empirical work impractical. In response to this, many researchers transform their VARs so as to allow for Bayesian estimation to proceed one equation at a time. This leads to a massive reduction in the computational burden. This transformation involves taking the Cholesky decomposition for the error covariance matrix. However, this strategy implies that posterior inference depends on the order the variables enter the VAR. In this paper we develop an alternative transformation, based on the eigendecomposition, which does not lead to order dependence. Beginning with an inverse-Wishart prior on the error covariance matrix, we derive and discuss the properties of the prior it implies on the eigenmatrix and eigenvalues. We then show how an extension of the prior on the eigenmatrix can allow for greater flexibility while maintaining many of the benefits of conjugacy. We exploit this flexibility in order to extend the prior on the eigenvalues to allow for stochastic volatility. The properties of the eigendecomposition approach are investigated in a macroeconomic forecasting exercise involving VARs with 20 variables.
    Keywords: Eigendecomposition, order invariance, large vector autoregression
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:str:wpaper:2310&r=ecm
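    [Editor's illustration — not the authors' code.] The order-dependence contrast at the heart of this paper can be seen in a few lines of numpy: permuting the variables changes the Cholesky factor of the error covariance in a nontrivial way, whereas the eigenvalues are invariant and the eigenmatrix rows simply permute along with the variables.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
Sigma = A @ A.T + 4 * np.eye(4)     # a (hypothetical) error covariance matrix

P = np.eye(4)[[2, 0, 3, 1]]         # permutation matrix: reorder the variables
Sigma_p = P @ Sigma @ P.T           # covariance after reordering

# Cholesky: the triangular factor depends on the variable ordering
L1 = np.linalg.cholesky(Sigma)
L2 = np.linalg.cholesky(Sigma_p)

# Eigendecomposition: eigenvalues are unchanged, and the eigenmatrix
# rows are permuted exactly as the variables were
w1, V1 = np.linalg.eigh(Sigma)
w2, V2 = np.linalg.eigh(Sigma_p)

print(np.allclose(w1, w2))                       # True: eigenvalues invariant
print(np.allclose(np.abs(P @ V1), np.abs(V2)))   # True: rows just permuted
```

    The absolute values handle the sign indeterminacy of eigenvectors; with distinct eigenvalues the columns agree up to sign.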
  8. By: Francesco Ruggieri
    Abstract: I propose a novel argument to point identify economically interpretable intertemporal treatment effects in dynamic regression discontinuity designs (RDDs). Specifically, I develop a dynamic potential outcomes model and specialize two assumptions of the difference-in-differences literature, the no anticipation and common trends restrictions, to point identify cutoff-specific impulse responses. The estimand associated with each target parameter can be expressed as the sum of two static RDD outcome contrasts, thereby allowing for estimation via standard local polynomial tools. I leverage a limited path independence assumption to reduce the dimensionality of the problem.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.14203&r=ecm
  9. By: Hugo Kruiniger
    Abstract: Dovonon and Hall (Journal of Econometrics, 2018) proposed a limiting distribution theory for GMM estimators for a p-dimensional globally identified parameter vector φ when local identification conditions fail at first order but hold at second order. They assumed that the first-order underidentification is due to the expected Jacobian having rank p−1 at the true value φ₀, i.e., having a rank deficiency of one. After reparametrizing the model such that the last column of the Jacobian vanishes, they showed that the GMM estimator of the first p−1 parameters converges at rate T^{-1/2} and the GMM estimator of the remaining parameter, φ_p, converges at rate T^{-1/4}. They also provided a limiting distribution of T^{1/4}(φ_p − φ_{0, p}) subject to a (non-transparent) condition which they claimed to be not restrictive in general. However, as we show in this paper, their condition is in fact only satisfied when φ is overidentified, and the limiting distribution of T^{1/4}(φ_p − φ_{0, p}), which is non-standard, depends on whether φ is exactly identified or overidentified. In particular, the limiting distributions of the sign of T^{1/4}(φ_p − φ_{0, p}) for the cases of exact and overidentification, respectively, are different and are obtained by using expansions of the GMM objective function of different orders. Unsurprisingly, we find that the limiting distribution theories of Dovonon and Hall (2018) for Indirect Inference (II) estimation under two different scenarios with second-order identification, where the target function is a GMM estimator of the auxiliary parameter vector, are incomplete for similar reasons. We discuss how our results for GMM estimation can be used to complete both theories and how they can be used to obtain the limiting distributions of the II estimators in the case of exact identification under either scenario.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.13475&r=ecm
  10. By: Frank Kleibergen; Lingwei Kong
    Abstract: We propose identification robust statistics for testing hypotheses on the risk premia in dynamic affine term structure models. We do so using the moment equation specification proposed for these models in Adrian et al. (2013). We extend the subset (factor) Anderson-Rubin test from Guggenberger et al. (2012) to models with multiple dynamic factors and time-varying risk prices. Unlike projection-based tests, it provides a computationally tractable manner to conduct identification robust tests on a larger number of parameters. We analyze the potential identification issues arising in empirical studies. Statistical inference based on the three-stage estimator from Adrian et al. (2013) requires knowledge of the factors' quality and is misleading without full-rank betas or with sampling errors of a size comparable to the loadings. Empirical applications show that some factors, though potentially weak, may drive the time variation of risk prices, and weak identification issues are more prominent in multi-factor models.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.12628&r=ecm
  11. By: Yuehao Bai; Hongchang Guo; Azeem M. Shaikh; Max Tabord-Meehan
    Abstract: This paper studies inference for the local average treatment effect in randomized controlled trials with imperfect compliance where treatment status is determined according to "matched pairs." By "matched pairs," we mean that units are sampled i.i.d. from the population of interest, paired according to observed, baseline covariates and finally, within each pair, one unit is selected at random for treatment. Under weak assumptions governing the quality of the pairings, we first derive the limiting behavior of the usual Wald (i.e., two-stage least squares) estimator of the local average treatment effect. We show further that the conventional heteroskedasticity-robust estimator of its limiting variance is generally conservative in that its limit in probability is (typically strictly) larger than the limiting variance. We therefore provide an alternative estimator of the limiting variance that is consistent for the desired quantity. Finally, we consider the use of additional observed, baseline covariates not used in pairing units to increase the precision with which we can estimate the local average treatment effect. To this end, we derive the limiting behavior of a two-stage least squares estimator of the local average treatment effect which includes the additional covariates in addition to pair fixed effects, and show that the limiting variance is always less than or equal to that of the Wald estimator. To complete our analysis, we provide a consistent estimator of this limiting variance. A simulation study confirms the practical relevance of our theoretical results. We use our results to revisit a prominent experiment studying the effect of macroinsurance on microenterprise in Egypt.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.13094&r=ecm
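    [Editor's illustration — not the authors' code.] The "matched pairs" assignment described in the abstract — pair units on a baseline covariate, then randomize one unit per pair into treatment — can be sketched as follows (all names hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10                          # even number of units
x = rng.normal(size=n)          # baseline covariate used for pairing

# Pair units with adjacent covariate values, then randomize within pairs
order = np.argsort(x)
pairs = order.reshape(-1, 2)    # each row is one matched pair
treat = np.zeros(n, dtype=int)
for a, b in pairs:
    if rng.random() < 0.5:      # coin flip within each pair
        treat[a] = 1
    else:
        treat[b] = 1

print(treat.sum())              # exactly n/2 treated: one per pair
```

    By construction every pair contributes exactly one treated and one control unit, which is what the paper's assumptions on pairing quality refer to.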
  12. By: Jad Beyhum; Elia Lapenta; Pascal Lavergne
    Abstract: We extend nonparametric regression smoothing splines to a context where there is endogeneity and instrumental variables are available. Unlike popular existing estimators, the resulting estimator is one-step and relies on a unique regularization parameter. We derive uniform rates of convergence for the estimator and its first derivative. We also address the issue of imposing monotonicity in estimation. Simulations confirm the good performance of our estimator compared to two-step procedures. Our method yields economically sensible results when used to estimate Engel curves.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.14867&r=ecm
  13. By: Younghoon Kim; Zachary F. Fisher; Vladas Pipiras
    Abstract: This work considers estimation and forecasting in a multivariate count time series model based on a copula-type transformation of a Gaussian dynamic factor model. The estimation is based on second-order properties of the count and underlying Gaussian models and applies to the case where the model dimension is larger than the sample length. In addition, novel cross-validation schemes are suggested for model selection. The forecasting is carried out through a particle-based sequential Monte Carlo, leveraging Kalman filtering techniques. A simulation study and an application are also considered.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.10454&r=ecm
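    [Editor's illustration — not the authors' code.] The kind of data-generating process studied here can be sketched in a few lines, assuming scipy is available: a latent Gaussian AR(1) factor drives several series with unit-variance margins, which are then mapped through a copula-type transform into Poisson counts with series-specific means. All parameter values are hypothetical.

```python
import numpy as np
from scipy.stats import norm, poisson

rng = np.random.default_rng(3)
T, d, phi = 300, 5, 0.7

# Latent Gaussian AR(1) factor with (approximately) unit variance
f = np.zeros(T)
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal(scale=np.sqrt(1 - phi**2))

# Latent Gaussian panel: factor loading plus idiosyncratic noise,
# scaled so each margin is approximately N(0, 1)
loadings = rng.uniform(0.3, 0.9, size=d)
Z = np.outer(f, loadings) + rng.normal(size=(T, d)) * np.sqrt(1 - loadings**2)

# Copula-type transform: Gaussian CDF, then Poisson quantile function
lam = np.array([2.0, 5.0, 1.0, 3.0, 8.0])   # series-specific count means
counts = poisson.ppf(norm.cdf(Z), lam).astype(int)
print(counts.shape)   # (300, 5)
```

    Estimation in the paper then works backwards from second-order properties of `counts` to the latent Gaussian model; the sketch only shows the forward direction.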
  14. By: Deepankar Basu
    Abstract: I present the Yule-Frisch-Waugh-Lovell theorem for linear instrumental variables estimation of a multiple regression model that is either exactly or overidentified. I show that with linear instrumental variables estimation: (a) coefficients on endogenous variables are identical in full and partial (or residualized) regressions; (b) residual vectors are identical for full and partial regressions; and (c) estimated covariance matrices of the coefficient vectors from full and partial regressions are equal (up to a degree of freedom correction) if the estimator of the error vector is a function only of the residual vectors. While estimation of the full model uses the full set of instrumental variables, estimation of the partial model uses the residualized version of the same set of instrumental variables, with residualization carried out with respect to the set of exogenous variables. I also trace the historical and analytical development of the theorem and suggest that it be renamed as the Yule-Frisch-Waugh-Lovell (YFWL) theorem to recognize the pioneering contribution of the statistician G. Udny Yule in its development.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.12731&r=ecm
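    [Editor's illustration — not the author's code.] Claim (a) is easy to verify numerically for a just-identified model with one endogenous regressor: the full IV estimate and the IV estimate after residualizing the outcome, the endogenous regressor, and the instrument on the exogenous variables coincide exactly. Variable names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
W = np.column_stack([np.ones(n), rng.normal(size=n)])   # exogenous controls
z1 = rng.normal(size=n)                                 # instrument
u = rng.normal(size=n)                                  # structural error
x1 = 0.8 * z1 + W @ [0.5, -0.3] + 0.6 * u + 0.4 * rng.normal(size=n)
y = 1.5 * x1 + W @ [1.0, 2.0] + u

def resid(a, B):
    """Residuals from OLS of a on the columns of B."""
    return a - B @ np.linalg.lstsq(B, a, rcond=None)[0]

# Full just-identified IV: regressors X = [x1, W], instruments Z = [z1, W]
X = np.column_stack([x1, W])
Z = np.column_stack([z1, W])
beta_full = np.linalg.solve(Z.T @ X, Z.T @ y)

# Partial regression: residualize y, x1 and z1 on W, then simple IV
y_t, x_t, z_t = resid(y, W), resid(x1, W), resid(z1, W)
beta_partial = (z_t @ y_t) / (z_t @ x_t)

print(np.isclose(beta_full[0], beta_partial))  # True: identical coefficient
```

    The equality holds to floating-point precision, not just asymptotically, which is exactly what the YFWL theorem asserts.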
  15. By: Nalan Basturk (University of Maastricht); Jamie Cross (Melbourne Business School); Peter de Knijff (Leiden University); Lennart Hoogerheide (Vrije Universiteit Amsterdam); Paul Labonne (BI Norwegian Business School); Herman K van Dijk (Erasmus University Rotterdam)
    Abstract: Multimodal empirical distributions arise in many fields like Astrophysics, Bioinformatics, Climatology and Economics due to the heterogeneity of the underlying populations. Mixture processes are a popular tool for accurate approximation of such distributions and implied mode detection. Using Bayesian mixture models and methods, BayesMultiMode estimates posterior probabilities of the number of modes, their locations and uncertainty, yielding a powerful tool for mode inference. The approach works in two stages. First, a flexible mixture with an unknown number of components is estimated using a Bayesian MCMC method due to Malsiner-Walli, Frühwirth-Schnatter, and Grün (2016). Second, suitable detection algorithms are employed to estimate modes for continuous and discrete probability distributions. Given these mode estimates, posterior probabilities for the number of modes, their locations and uncertainties are constructed. BayesMultiMode supports a range of mixture processes, complementing and extending existing software for mixture modeling. The mode detection algorithms implemented in BayesMultiMode also support MCMC draws for mixture estimation generated with external software. For illustration, the package uses both continuous and discrete empirical distributions from the four fields listed above, yielding credible detection of multiple modes with substantial posterior probability in cases where frequentist tests fail to reject the null hypothesis of unimodality.
    Keywords: multimodality, mixture distributions, Bayesian estimation, sparse finite mixtures, R
    JEL: C11 C63 C87 C88
    Date: 2023–07–24
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20230041&r=ecm
  16. By: Lingwei Kong
    Abstract: The Hansen-Jagannathan (HJ) distance statistic is one of the dominant measures of model misspecification. However, the conventional HJ specification test procedure has poor finite-sample performance, and we show that it can be size distorted even in large samples when (proxy) factors exhibit small correlations with asset returns. In other words, applied researchers are likely to falsely reject a model even when it is correctly specified. We provide two alternatives to the HJ statistic and two corresponding novel procedures for model specification tests, which are robust against the presence of weak (proxy) factors, and we also offer a novel robust risk premia estimator. Simulation exercises support our theory. Our empirical application documents the unreliability of the traditional HJ test, which may produce counter-intuitive results when comparing nested models by rejecting a four-factor model but not the reduced three-factor model. At the same time, our proposed methods are practically more appealing and show support for a four-factor model for the Fama-French portfolios.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.14499&r=ecm
  17. By: Bruno Feunou
    Abstract: We introduce generalized autoregressive gamma (GARG) processes, a class of autoregressive and moving-average processes that extends the class of existing autoregressive gamma (ARG) processes in one important dimension: each conditional moment dynamic is driven by a different and identifiable moving average of the variable of interest. The paper provides ergodicity conditions for GARG processes and derives closed-form conditional and unconditional moments. The paper also presents estimation and inference methods, illustrated by an application to European option pricing where the daily realized variance follows a GARG dynamic. Our results show that using GARG processes reduces pricing errors by substantially more than using ARG processes does.
    Keywords: Econometric and statistical methods; Asset pricing
    JEL: C58 G12
    Date: 2023–08
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:23-40&r=ecm
  18. By: Andrew Bennett; Nathan Kallus; Xiaojie Mao; Whitney Newey; Vasilis Syrgkanis; Masatoshi Uehara
    Abstract: We consider estimation of parameters defined as linear functionals of solutions to linear inverse problems. Any such parameter admits a doubly robust representation that depends on the solution to a dual linear inverse problem, where the dual solution can be thought of as a generalization of the inverse propensity function. We provide the first source condition double robust inference method that ensures asymptotic normality around the parameter of interest as long as either the primal or the dual inverse problem is sufficiently well-posed, without knowledge of which inverse problem is the more well-posed one. Our result is enabled by novel guarantees for iterated Tikhonov regularized adversarial estimators for linear inverse problems, over general hypothesis spaces, which are of independent interest.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.13793&r=ecm
  19. By: Stéphane Bonhomme; Kevin Dano
    Abstract: Economic interactions often occur in networks where heterogeneous agents (such as workers or firms) sort and produce. However, most existing estimation approaches either require the network to be dense, which is at odds with many empirical networks, or they require restricting the form of heterogeneity and the network formation process. We show how the functional differencing approach introduced by Bonhomme (2012) in the context of panel data can be applied in network settings to derive moment restrictions on model parameters and average effects. Those restrictions are valid irrespective of the form of heterogeneity, and they hold in both dense and sparse networks. We illustrate the analysis with linear and nonlinear models of matched employer-employee data, in the spirit of the model introduced by Abowd, Kramarz, and Margolis (1999).
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.11484&r=ecm
  20. By: Romuald Meango
    Abstract: Can stated preferences help in counterfactual analyses of actual choice? This research proposes a novel approach for researchers who have access to both stated choices in hypothetical scenarios and actual choices. The key idea is to use probabilistic stated choices to identify the distribution of individual unobserved heterogeneity, even in the presence of measurement error. If this unobserved heterogeneity is the source of endogeneity, the researcher can correct for its influence in a demand function estimation using actual choices, and recover causal effects. Estimation is possible with an off-the-shelf Group Fixed Effects estimator.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.13966&r=ecm
  21. By: Torben G. Andersen; Viktor Todorov; Bo Zhou
    Abstract: This paper focuses on the task of detecting local episodes involving violation of the standard Itô semimartingale assumption for financial asset prices in real time that might induce arbitrage opportunities. Our proposed detectors, defined as stopping rules, are applied sequentially to continually incoming high-frequency data. We show that they are asymptotically exponentially distributed in the absence of Itô semimartingale violations. On the other hand, when a violation occurs, we can achieve immediate detection under infill asymptotics. A Monte Carlo study demonstrates that the asymptotic results provide a good approximation to the finite-sample behavior of the sequential detectors. An empirical application to S&P 500 index futures data corroborates the effectiveness of our detectors in swiftly identifying the emergence of an extreme return persistence episode in real time.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.10872&r=ecm
  22. By: Mr. Sam Ouliaris; Ms. Celine Rochon
    Abstract: Nowcasting enables policymakers to obtain forecasts of key macroeconomic indicators using higher frequency data, resulting in more timely information to guide proposed policy changes. A significant shortcoming of nowcasting estimators is their “reduced-form” nature, which means they cannot be used to assess the impact of policy changes, for example, on the baseline nowcast of real GDP. This paper outlines two separate methodologies to address this problem. The first is a partial equilibrium approach that uses an existing baseline nowcasting regression and single-equation forecasting models for the high-frequency data in that regression. The second approach uses a non-parametric structural VAR estimator recently introduced in Ouliaris and Pagan (2022) that imposes minimal identifying restrictions on the data to estimate the impact of structural shocks. Each approach is illustrated using a country-specific example.
    Keywords: Nowcasting; high frequency indicators; impulse responses; structural models
    Date: 2023–07–28
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2023/153&r=ecm
  23. By: Bryan S. Graham; Andrin Pelican
    Abstract: This paper introduces a simulation algorithm for evaluating the log-likelihood function of a large supermodular binary-action game. Covered examples include (certain types of) peer effect, technology adoption, strategic network formation, and multi-market entry games. More generally, the algorithm facilitates simulated maximum likelihood (SML) estimation of games with large numbers of players, T, and/or many binary actions per player, M (e.g., games with tens of thousands of strategic actions, TM=O(10⁴)). In such cases the likelihood of the observed pure strategy combination is typically (i) very small and (ii) a TM-fold integral whose region of integration has a complicated geometry. Direct numerical integration, as well as accept-reject Monte Carlo integration, are computationally impractical in such settings. In contrast, we introduce a novel importance sampling algorithm which allows for accurate likelihood simulation with modest numbers of simulation draws.
    JEL: C15 C31 C55 C7
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31511&r=ecm
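The core idea behind the Graham–Pelican abstract above — that naive Monte Carlo fails for very small likelihoods, while a well-chosen proposal distribution with likelihood-ratio reweighting succeeds — can be illustrated with a minimal, generic importance-sampling sketch. This is not the paper's algorithm; it is the textbook version of the technique on a hypothetical rare-event problem (a Gaussian tail probability), with all numerical choices (threshold, sample size) illustrative.

```python
import math
import random

random.seed(1)

# Target: p = P(Z > c) for Z ~ N(0, 1), a very small probability for large c.
c = 4.0
p_true = 0.5 * math.erfc(c / math.sqrt(2))  # exact tail probability, ~3.2e-5

# Naive Monte Carlo would almost never hit the event {Z > c}.
# Importance sampling: draw from a proposal N(c, 1) centred on the rare region
# and reweight each draw by the likelihood ratio phi(x) / phi(x - c),
# which simplifies to exp(-c*x + c^2/2).
n = 20000
total = 0.0
for _ in range(n):
    x = random.gauss(c, 1.0)
    if x > c:
        total += math.exp(-c * x + 0.5 * c * c)
p_hat = total / n
```

With the proposal centred on the rare region, a few thousand draws give a usable estimate of a probability that plain simulation would essentially never observe.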
  24. By: Dassios, Angelos; Zhang, Junyi
    Abstract: Let J1 > J2 > ⋯ be the ranked jumps of a gamma process τ_α on the time interval [0, α], such that τ_α = Σ_{k=1}^∞ Jk. In this paper, we design an algorithm that samples from the random vector (J1, …, JN, Σ_{k=N+1}^∞ Jk). Our algorithm provides an analogue to the well-established inverse Lévy measure (ILM) algorithm by replacing the numerical inversion of the exponential integral with an acceptance-rejection step. This research is motivated by the construction of the Dirichlet process prior in Bayesian nonparametric statistics. The prior assigns a weight to each atom according to a GEM distribution, and the simulation algorithm enables us to sample from the N largest random weights of the prior. We then extend the simulation algorithm to a generalised gamma process. The simulation problem for inhomogeneous processes is also considered. Numerical implementations are provided to illustrate the effectiveness of our algorithms.
    Keywords: 60J25; 62F15; 62G05; exact simulation; gamma process; generalised gamma process; Lévy process; Poisson-Dirichlet distribution
    JEL: C1
    Date: 2023–06–10
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119755&r=ecm
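For context on the Dassios–Zhang abstract above: the GEM weights it mentions arise from the standard stick-breaking construction of the Dirichlet process. The sketch below shows that textbook construction only — note that stick-breaking produces weights in size-biased order, not ranked order, which is precisely why sampling the N *largest* weights (the paper's contribution) requires a different algorithm.

```python
import random

def gem_weights(alpha, n_atoms, rng):
    """Stick-breaking construction of GEM(alpha) weights:
    W_k = V_k * prod_{j<k} (1 - V_j), with V_j ~ Beta(1, alpha) i.i.d."""
    weights, stick = [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)
        weights.append(stick * v)   # break off a fraction v of the remaining stick
        stick *= 1.0 - v            # what is left of the stick
    return weights

rng = random.Random(2)
w = gem_weights(alpha=5.0, n_atoms=50, rng=rng)
```

The first 50 weights sum to slightly less than one; the remainder is the mass of the infinitely many un-broken atoms.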
  25. By: Luca Mucciante; Alessio Sancetta
    Abstract: A point process for event arrivals in high frequency trading is presented. The intensity is the product of a Hawkes process and high dimensional functions of covariates derived from the order book. Conditions for stationarity of the process are stated. An algorithm is presented to estimate the model even in the presence of billions of data points, possibly mapping covariates into a high dimensional space. The large sample size can be common for high frequency data applications using multiple liquid instruments. Convergence of the algorithm is shown, consistency results under weak conditions are established, and a test statistic to assess out-of-sample performance of different model specifications is suggested. The methodology is applied to the study of four stocks that trade on the New York Stock Exchange (NYSE). The out-of-sample testing procedure suggests that capturing the nonlinearity of the order book information adds value to the self exciting nature of high frequency trading events.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.09077&r=ecm
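The self-exciting Hawkes component in the Mucciante–Sancetta abstract above has a standard simulation recipe, Ogata's thinning algorithm, sketched below for the univariate exponential-kernel case. This is the generic building block only, not the paper's covariate-augmented model; the parameter values are hypothetical, chosen so that alpha/beta < 1 (the usual stationarity condition).

```python
import math
import random

def simulate_hawkes(mu, alpha, beta, horizon, rng):
    """Simulate a univariate Hawkes process with intensity
        lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm."""
    events, t = [], 0.0
    while True:
        # Between events the intensity only decays, so its current value
        # bounds it from above until the next accepted point.
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)        # candidate from the bounding process
        if t > horizon:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() <= lam_t / lam_bar:  # thin: accept with prob lambda(t)/lam_bar
            events.append(t)

rng = random.Random(3)
times = simulate_hawkes(mu=0.5, alpha=0.8, beta=1.2, horizon=100.0, rng=rng)
```

Accepted events raise the intensity, so events cluster — the "self exciting nature" the abstract refers to.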
  26. By: Songnian Chen; Junlong Feng
    Abstract: We develop new methods for changes-in-changes and distributional synthetic controls when there exists group level heterogeneity. For changes-in-changes, we allow individuals to belong to a large number of heterogeneous groups. The new method extends the changes-in-changes method in Athey and Imbens (2006) by finding appropriate subgroups within the control groups which share similar group level unobserved characteristics to the treatment groups. For distributional synthetic control, we show that the appropriate synthetic control needs to be constructed using units in potentially different time periods in which they have comparable group level heterogeneity to the treatment group, instead of units that are only in the same time period as in Gunsilius (2023). Implementation and data requirements for these new methods are briefly discussed.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.15313&r=ecm
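The baseline that the Chen–Feng abstract above extends is the Athey–Imbens changes-in-changes transform, which maps an outcome through the control group's period-0 CDF and the inverse of its period-1 CDF. A minimal empirical-CDF version of that standard transform is sketched below (it does not implement the paper's subgroup-matching extension; the data are hypothetical).

```python
import bisect
import math

def cic_counterfactual(y_treat_pre, y_ctrl_pre, y_ctrl_post):
    """Athey-Imbens changes-in-changes transform: map each treated pre-period
    outcome y through y -> F_post^{-1}(F_pre(y)), where F_pre and F_post are
    empirical CDFs of the control group in the two periods."""
    pre, post = sorted(y_ctrl_pre), sorted(y_ctrl_post)
    n, m = len(pre), len(post)
    out = []
    for y in y_treat_pre:
        u = bisect.bisect_right(pre, y) / n      # empirical F_pre(y)
        idx = max(math.ceil(u * m) - 1, 0)       # smallest index with F_post >= u
        out.append(post[min(idx, m - 1)])
    return out

# If the control distribution simply shifts by +2 between periods,
# the treated counterfactuals shift by +2 as well.
base = list(range(1, 11))
shifted = [x + 2 for x in base]
cf = cic_counterfactual([3, 5, 7], base, shifted)  # -> [5, 7, 9]
```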
  27. By: Evan Friedman; Duarte Gonçalves
    Abstract: Quantal response equilibrium (QRE), a statistical generalization of Nash equilibrium, is a standard benchmark in the analysis of experimental data. Despite its influence, nonparametric characterizations and tests of QRE are unavailable beyond the case of finite games. We address this gap by completely characterizing the set of QRE in a class of binary-action games with a continuum of types. Our characterization provides sharp predictions in settings such as global games, the volunteer's dilemma, and the compromise game. Further, we leverage our results to develop nonparametric tests of QRE. As an empirical application, we revisit the experimental data from Carrillo and Palfrey (2009) on the compromise game.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.08011&r=ecm
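To make the QRE concept in the Friedman–Gonçalves abstract above concrete, here is a hypothetical numerical example in the simplest finite setting: the symmetric logit QRE of a 2-player volunteer's dilemma (one of the games the abstract mentions), found by fixed-point iteration. All payoff numbers and the precision parameter are illustrative, and this finite example does not reproduce the paper's continuum-of-types characterization.

```python
import math

def logit_qre_volunteer(v, c, lam, iters=500):
    """Symmetric logit QRE of a 2-player volunteer's dilemma:
    volunteering pays v - c for sure; abstaining pays v only if the other
    player volunteers (probability p). The equilibrium p solves
        p = sigma(lam * ((v - c) - v * p)),
    iterated here to a fixed point (a contraction when lam * v / 4 < 1)."""
    p = 0.5
    for _ in range(iters):
        delta = (v - c) - v * p   # expected payoff advantage of volunteering
        p = 1.0 / (1.0 + math.exp(-lam * delta))
    return p

p = logit_qre_volunteer(v=1.0, c=0.4, lam=2.0)
```

As lam grows, choices become more payoff-sensitive and the QRE approaches a Nash equilibrium; at lam = 0 behaviour is uniformly random.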
  28. By: Bryan S. Graham; Andrin Pelican
    Abstract: This paper introduces a simulation algorithm for evaluating the log-likelihood function of a large supermodular binary-action game. Covered examples include (certain types of) peer effect, technology adoption, strategic network formation, and multi-market entry games. More generally, the algorithm facilitates simulated maximum likelihood (SML) estimation of games with large numbers of players, $T$, and/or many binary actions per player, $M$ (e.g., games with tens of thousands of strategic actions, $TM=O(10^4)$). In such cases the likelihood of the observed pure strategy combination is typically (i) very small and (ii) a $TM$-fold integral whose region of integration has a complicated geometry. Direct numerical integration, as well as accept-reject Monte Carlo integration, are computationally impractical in such settings. In contrast, we introduce a novel importance sampling algorithm which allows for accurate likelihood simulation with modest numbers of simulation draws.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.11857&r=ecm
  29. By: Nelson Kyakutwika; Bruce Bartlett
    Abstract: Cross-series dependencies are crucial in obtaining accurate forecasts when forecasting a multivariate time series. Simultaneous Graphical Dynamic Linear Models (SGDLMs) are Bayesian models that elegantly capture cross-series dependencies. This study forecasts returns of a 40-dimensional time series of stock data from the Johannesburg Stock Exchange (JSE) using SGDLMs. The SGDLM approach involves constructing a customised dynamic linear model (DLM) for each univariate time series. At each time point, the DLMs are recoupled using importance sampling and decoupled using mean-field variational Bayes. Our results suggest that SGDLMs forecast stock data on the JSE accurately and respond to market gyrations effectively.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.08665&r=ecm
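The per-series building block of the SGDLM approach in the Kyakutwika–Bartlett abstract above is a univariate dynamic linear model. Below is a minimal sketch of the simplest such DLM, the local level model, with its forward (Kalman) filtering recursion — not the customised DLMs or the recouple/decouple steps of the paper, and all variances are hypothetical.

```python
def local_level_filter(ys, w, v, m0=0.0, c0=1e6):
    """Forward (Kalman) filter for the local level DLM:
        y_t     = theta_t + nu_t,        nu_t ~ N(0, v)
        theta_t = theta_{t-1} + omega_t, omega_t ~ N(0, w)
    Returns the filtered means m_t = E[theta_t | y_1..y_t]."""
    m, c = m0, c0                # diffuse prior on the initial level
    means = []
    for y in ys:
        r = c + w                # prior variance of theta_t
        q = r + v                # one-step forecast variance of y_t
        k = r / q                # adaptive (Kalman) gain
        m = m + k * (y - m)      # update mean with the one-step forecast error
        c = r * (1.0 - k)        # posterior variance
        means.append(m)
    return means

# On a constant series the filtered level locks onto the constant.
means = local_level_filter([5.0] * 50, w=0.01, v=1.0)
```

An SGDLM runs one such filter per series, then couples the series through importance sampling and decouples them with mean-field variational Bayes at each time point.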
  30. By: Philipp Gersing; Christoph Rust; Manfred Deistler
    Abstract: Factor sequences are stochastic double sequences $(y_{it}: i \in \mathbb N, t \in \mathbb Z)$ indexed in time and cross-section which have a so-called factor structure. The name was coined by Forni et al. (2001), who introduced dynamic factor sequences. We show the difference between dynamic factor sequences and static factor sequences, the most common workhorse model of econometric factor analysis building on Chamberlain and Rothschild (1983), Stock and Watson (2002) and Bai and Ng (2002). The difference consists in what we call the weak common component, which is spanned by a potentially infinite number of weak factors. Ignoring the weak common component can have substantial consequences for applications of factor models in structural analysis and forecasting. We also show that the dynamic common component of a dynamic factor sequence is causally subordinated to the output under general conditions. As a consequence, only the dynamic common component can be interpreted as the projection on the common structural shocks of the economy, whereas the static common component models the contemporaneous co-movement.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.10067&r=ecm
  31. By: Kurt Graden Lunsford; Kenneth D. West
    Abstract: We study the use of a zero mean first difference model to forecast the level of a scalar time series that is stationary in levels. Let bias be the average value of a series of forecast errors. Then the bias of forecasts from a misspecified ARMA model for the first difference of the series will tend to be smaller in magnitude than the bias of forecasts from a correctly specified model for the level of the series. Formally, let P be the number of forecasts. Then the bias from the first difference model has expectation zero and a variance that is O(1/P²), while the variance of the bias from the levels model is generally O(1/P). With a driftless random walk as our first difference model, we confirm this theoretical result with simulations and empirical work: random walk bias is generally one-tenth to one-half that of an appropriately specified model fit to levels.
    Keywords: ARMA Models; Overdifferenced; Prediction; Macroeconomic Time Series; Simulation
    JEL: C22 C53 E37 E47
    Date: 2023–08–03
    URL: http://d.repec.org/n?u=RePEc:fip:fedcwq:96521&r=ecm
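The O(1/P²) variance of the random-walk bias in the Lunsford–West abstract above has a simple mechanical source: no-change forecast errors telescope, so the average error depends only on the series' endpoints. The sketch below illustrates this on a simulated stationary AR(1); the AR coefficient and sample sizes are illustrative, not the paper's designs.

```python
import random

random.seed(4)

# Simulate a stationary AR(1) in levels: y_t = 0.7 * y_{t-1} + e_t.
y = [0.0]
for _ in range(600):
    y.append(0.7 * y[-1] + random.gauss(0, 1))
y = y[100:]  # drop burn-in

# No-change (driftless random walk) forecasts: yhat_{t+1} = y_t.
# The average forecast error telescopes:
#   (1/P) * sum_t (y_{t+1} - y_t) = (y_{T+P} - y_T) / P,
# so the bias is the difference of two bounded endpoints divided by P,
# giving a variance of order 1/P^2.
errors = [y[t + 1] - y[t] for t in range(len(y) - 1)]
P = len(errors)
bias = sum(errors) / P
```

A correctly specified levels model has bias driven by estimation error that averages out only at the usual 1/P rate, which is why the random walk can win on this metric despite being misspecified.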
  32. By: Lutz Kilian
    Abstract: It is common in applied work to estimate responses of macroeconomic aggregates to news shocks derived from surprise changes in daily futures prices around the date of policy announcements. This requires mapping the daily surprises into a monthly shock that may be used as an external instrument in a monthly VAR model or local projection. The standard approach has been to sum these daily surprises over the course of a given month when constructing the monthly proxy variable, ignoring the accounting relationship between daily and average monthly price data. In this paper, I provide a new approach to constructing monthly proxies from daily surprises that takes account of this link and revisit the question of how to use OPEC announcements to identify news shocks in VAR models of the global oil market. The proposed approach calls into question the interpretation of the identified shock as oil supply news and implies quantitatively and qualitatively different estimates of the macroeconomic impact of OPEC announcements.
    Keywords: Proxy VAR; instrumental variables; shock aggregation; time aggregation; identification; OPEC; supply news; storage demand; oil futures; oil price expectations
    JEL: C36 C51 E31 E32 E44 Q43
    Date: 2023–07–31
    URL: http://d.repec.org/n?u=RePEc:fip:feddwp:96517&r=ecm
  33. By: Bournakis, Ioannis; Tsionas, Mike G.
    Abstract: We develop a non-parametric technique to measure Total Factor Productivity (TFP). Our paper has two major novelties in estimating the production function. First, we propose a productivity model with both idiosyncratic firm factors and aggregate shocks within the same framework. Second, we apply Bayesian Markov Chain Monte Carlo (MCMC) estimation techniques to overcome restrictions associated with monotonicity between productivity and variable inputs and moment conditions in identifying input parameters. We implemented our methodology in a group of 4286 manufacturing firms from France, Germany, Italy, and the United Kingdom (2001-2014). The results show that: (i) aggregate shocks matter for firm TFP evolution. The global financial crisis of 2008 caused severe adverse effects on TFP, albeit short in duration; (ii) there is substantial heterogeneity across countries in the way firms react to changes in R&D and taxation. German and U.K. firms are more sensitive to fiscal changes than R&D, while Italian firms are the opposite. R&D and taxation effects are symmetrical for French firms; (iii) the U.K. productivity handicap continued for years after the financial crisis; (iv) industrial clusters promote knowledge diffusion among German and Italian firms.
    Keywords: Total Factor Productivity (TFP), Control Function, Non-parametric Bayesian Estimation, Markov Chain Monte Carlo (MCMC), Research and Development (R&D), Taxation, European firms
    JEL: C11 D24 H21 H25 Q55
    Date: 2023–07–21
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:118100&r=ecm
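The MCMC machinery the Bournakis–Tsionas abstract above relies on can be shown in miniature with the simplest possible case: a random-walk Metropolis sampler for a location parameter with a flat prior, where the posterior mean should match the sample mean. This toy problem is purely illustrative and bears no relation to the paper's production-function model.

```python
import math
import random

random.seed(5)

# Toy data: y_i ~ N(mu, 1) with a flat prior on mu, so the posterior
# is centred on the sample mean.
data = [random.gauss(2.0, 1.0) for _ in range(200)]

def log_post(mu):
    # Log posterior up to a constant: flat prior + Gaussian likelihood.
    return -0.5 * sum((yi - mu) ** 2 for yi in data)

# Random-walk Metropolis: propose mu' = mu + N(0, 0.2), accept with
# probability min(1, post(mu') / post(mu)).
mu, draws = 0.0, []
lp = log_post(mu)
for _ in range(5000):
    prop = mu + random.gauss(0, 0.2)
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:
        mu, lp = prop, lp_prop
    draws.append(mu)

post_mean = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
```

The same accept/reject logic scales to the high-dimensional posteriors of structural productivity models, which is what makes MCMC useful when moment conditions or monotonicity restrictions are hard to impose directly.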
  34. By: Irsova, Zuzana; Doucouliagos, Hristos; Havranek, Tomas; Stanley, T. D.
    Abstract: This paper provides concise, nontechnical, step-by-step guidelines on how to conduct a modern meta-analysis, especially in social sciences. We treat publication bias, p-hacking, and heterogeneity as phenomena meta-analysts must always confront. To this end, we provide concrete methodological recommendations. Meta-analysis methods have advanced notably over the last few years. Yet many meta-analyses still rely on outdated approaches, some ignoring publication bias and systematic heterogeneity. While limitations persist, recently developed techniques allow robust inference even in the face of formidable problems in the underlying empirical literature. The purpose of this paper is to summarize the state of the art in a way accessible to aspiring meta-analysts in any field. We also discuss how meta-analysts can use advances in artificial intelligence to work more efficiently.
    Keywords: meta-analysis, publication bias, p-hacking, artificial intelligence, model uncertainty
    JEL: A14 B49 C83
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:esprep:273719&r=ecm

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.