nep-ecm New Economics Papers
on Econometrics
Issue of 2025–02–17
sixteen papers chosen by
Sune Karlsson, Örebro universitet


  1. IV Estimation of Heterogeneous Spatial Dynamic Panel Models with Interactive Effects By Jia Chen; Guowei Cui; Vasilis Sarafidis; Takashi Yamagata
  2. Robust Multivariate Observation-Driven Filtering for a Common Stochastic Trend: Theory and Application By Francisco Blasques; Janneke van Brummelen; Paolo Gorgi; Siem Jan Koopman
  3. Point-identifying semiparametric sample selection models with no excluded variable By Dongwoo Kim; Young Jun Lee
  4. Simple Inference on a Simplex-Valued Weight By Nathan Canen; Kyungchul Song
  5. Crossing penalised CAViaR By Tibor Szendrei
  6. A Comparison of Bayesian and Frequentist Variable Selection Methods for Estimating Average Treatment Effects in Logistic Regression By Alex H. Martinez; Brian Christensen; Elizabeth F. Sutton; Andrew G. Chapple
  7. Robust Quantile Factor Analysis By Songnian Chen; Junlong Feng
  8. Influence Function: Local Robustness and Efficiency By Ruonan Xu; Xiye Yang
  9. A General Approach to Relaxing Unconfoundedness By Matthew A. Masten; Alexandre Poirier; Muyang Ren
  10. Mitigating Estimation Risk: a Data-Driven Fusion of Experimental and Observational Data By Francisco Blasques; Paolo Gorgi; Siem Jan Koopman; Noah Stegehuis
  11. Quantile VARs and Macroeconomic Risk Forecasting By Stéphane Surprenant
  12. Differentiable, Filter Free Bayesian Estimation of DSGE Models Using Mixture Density Networks By Chris Naubert
  13. Philip G. Wright, directed acyclic graphs, and instrumental variables By Jaap H. Abbring; Victor Chernozhukov; Iván Fernández-Val
  14. Re-examining confidence intervals for ratios of parameters By Zaka Ratsimalahelo
  15. Eco-RETINA: a green flexible algorithm for model building By Capilla, Javier; Alcaraz, Alba; Valarezo, Ángel; García-Hiernaux, Alfredo; Pérez Amaral, Teodosio
  16. The Role of Uncertainty in Forecasting Realized Covariance of US State-Level Stock Returns: A Reverse-MIDAS Approach By Jiawen Luo; Shengjie Fu; Oguzhan Cepni; Rangan Gupta

  1. By: Jia Chen; Guowei Cui; Vasilis Sarafidis; Takashi Yamagata
    Abstract: This paper develops a Mean Group Instrumental Variables (MGIV) estimator for spatial dynamic panel data models with interactive effects, under large N and T asymptotics. Unlike existing approaches that typically impose slope-parameter homogeneity, MGIV accommodates cross-sectional heterogeneity in slope coefficients. The proposed estimator is linear, making it computationally efficient and robust. Furthermore, it avoids the incidental parameters problem, enabling asymptotically valid inferences without requiring bias correction. The Monte Carlo experiments indicate strong finite-sample performance of the MGIV estimator across various sample sizes and parameter configurations. The practical utility of the estimator is illustrated through an application to regional economic growth in Europe. By explicitly incorporating heterogeneity, our approach provides fresh insights into the determinants of regional growth, underscoring the critical roles of spatial and temporal dependencies.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.18467
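    The mean-group logic behind the estimator in entry 1 can be illustrated in a few lines: run an instrumental-variables regression unit by unit and average the unit-specific slopes. The sketch below is a simplified stand-in, not the paper's MGIV estimator for spatial dynamic panels with interactive effects; the simulated data, instrument, and standard-error formula are all illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 100                                        # cross-section units and time periods

def tsls(y, X, Z):
    """Two-stage least squares: instrument the columns of X with Z."""
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first-stage fitted values
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]    # second-stage coefficients

betas = []
for i in range(N):
    z = rng.normal(size=T)                            # instrument for unit i
    u = rng.normal(size=T)                            # error inducing endogeneity
    x = 0.8 * z + u                                   # endogenous regressor
    slope = 1.5 + 0.2 * rng.normal()                  # heterogeneous unit-specific slope
    y = slope * x + u + 0.5 * rng.normal(size=T)
    betas.append(tsls(y, x.reshape(-1, 1), z.reshape(-1, 1))[0])

betas = np.array(betas)
mg_estimate = betas.mean()                            # mean-group average of unit slopes
mg_se = betas.std(ddof=1) / np.sqrt(N)                # simple mean-group standard error
print(f"mean-group estimate: {mg_estimate:.3f} (s.e. {mg_se:.3f})")
```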
  2. By: Francisco Blasques (Vrije Universiteit Amsterdam and Tinbergen Institute); Janneke van Brummelen (Vrije Universiteit Amsterdam and Tinbergen Institute); Paolo Gorgi (Vrije Universiteit Amsterdam and Tinbergen Institute); Siem Jan Koopman (Vrije Universiteit Amsterdam and Tinbergen Institute)
    Abstract: We introduce a nonlinear semi-parametric model that allows for the robust filtering of a common stochastic trend in a multivariate system of cointegrated time series. The observation-driven stochastic trend can be specified using flexible updating mechanisms. The model provides a general approach to obtain an outlier-robust trend-cycle decomposition in a cointegrated multivariate process. A simple two-stage procedure for the estimation of the parameters of the model is proposed. In the first stage, the loadings of the common trend are estimated via ordinary least squares. In the second stage, the other parameters are estimated via Gaussian quasi-maximum likelihood. We formally derive the theory for the consistency of the estimators in both stages and show that the observation-driven stochastic trend can also be consistently estimated. A simulation study illustrates how such a robust methodology can enhance the filtering accuracy of the trend compared with the linear approach considered in previous literature. The practical relevance of the method is shown by means of an application to spot prices of oil-related commodities.
    Keywords: consistency, cycle, non-stationary time series, two-step estimation, vector autoregression
    JEL: C13 C32
    Date: 2024–11–03
    URL: https://d.repec.org/n?u=RePEc:tin:wpaper:20240062
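    A minimal sketch of a two-stage scheme of the kind described in entry 2: stage 1 estimates the loadings on the common trend by OLS, stage 2 estimates the remaining parameters by Gaussian quasi-maximum likelihood with the trend filtered by an observation-driven recursion. The specific updating rule, parameter names, and simulated data below are illustrative assumptions, not the authors' model.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, m = 500, 3
true_lambda = np.array([1.0, 0.7, 1.3])
trend = np.cumsum(rng.normal(scale=0.1, size=T))          # common stochastic trend
Y = trend[:, None] * true_lambda + rng.normal(scale=0.5, size=(T, m))

# Stage 1: loadings by OLS, normalising the first loading to one
lam = np.array([1.0] + [np.linalg.lstsq(Y[:, [0]], Y[:, j], rcond=None)[0][0]
                        for j in range(1, m)])

def neg_quasi_loglik(theta, Y, lam):
    kappa, log_sigma = theta
    sigma2 = np.exp(log_sigma) ** 2
    mu, ll = Y[0, 0], 0.0
    for t in range(Y.shape[0]):
        resid = Y[t] - lam * mu
        ll += -0.5 * np.sum(np.log(2 * np.pi * sigma2) + resid**2 / sigma2)
        mu = mu + kappa * (lam @ resid) / (lam @ lam)      # observation-driven trend update
    return -ll

# Stage 2: Gaussian QML for the update gain and the noise scale
res = minimize(neg_quasi_loglik, x0=[0.1, 0.0], args=(Y, lam), method="Nelder-Mead")
print("loadings:", lam.round(2),
      "kappa:", round(float(res.x[0]), 3),
      "sigma:", round(float(np.exp(res.x[1])), 3))
```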
  3. By: Dongwoo Kim; Young Jun Lee
    Abstract: Sample selection is pervasive in applied economic studies. This paper develops semiparametric selection models that achieve point identification without relying on exclusion restrictions, an assumption long believed necessary for identification in semiparametric selection models. Our identification conditions require at least one continuously distributed covariate and certain nonlinearity in the selection process. We propose a two-step plug-in estimator that is √n-consistent, asymptotically normal, and computationally straightforward (readily available in statistical software), allowing for heteroskedasticity. Our approach provides a middle ground between Lee (2009)’s nonparametric bounds and Honoré and Hu (2020)’s linear selection bounds, while ensuring point identification. Simulation evidence confirms its excellent finite-sample performance. We apply our method to estimate the racial and gender wage disparity using data from the US Current Population Survey. Our estimates tend to lie outside the Honoré and Hu bounds.
    Date: 2025–02–11
    URL: https://d.repec.org/n?u=RePEc:azt:cemmap:07/25
  4. By: Nathan Canen; Kyungchul Song
    Abstract: In many applications, the parameter of interest involves a simplex-valued weight which is identified as a solution to an optimization problem. Examples include synthetic control methods with group-level weights and various methods of model averaging and forecast combination. The simplex constraint on the weight poses a challenge for statistical inference because the constraint may bind. In this paper, we propose a simple method of constructing a confidence set for the weight and prove that the method is asymptotically uniformly valid. The procedure does not require tuning parameters or simulations to compute critical values. The confidence set accommodates both point identification and set identification of the weight. We illustrate the method with an empirical example.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15692
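    Entry 4 concerns inference on weights that solve a simplex-constrained optimization problem. The sketch below shows only the familiar point-estimation step for such a weight (as in synthetic control or forecast combination), using simulated data; it does not implement the paper's confidence-set construction.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T, J = 40, 5                                     # pre-treatment periods, donor units
X = rng.normal(size=(T, J))                      # donor outcomes
w_true = np.array([0.5, 0.5, 0.0, 0.0, 0.0])
y = X @ w_true + rng.normal(scale=0.1, size=T)   # treated unit's outcomes

objective = lambda w: np.sum((y - X @ w) ** 2)
constraints = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}   # weights sum to one
bounds = [(0.0, 1.0)] * J                                        # weights non-negative

res = minimize(objective, x0=np.full(J, 1.0 / J), bounds=bounds,
               constraints=constraints, method="SLSQP")
print("estimated simplex weight:", res.x.round(3))
```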
  5. By: Tibor Szendrei
    Abstract: Dynamic quantiles, or Conditional Autoregressive Value at Risk (CAViaR) models, have been extensively studied at the individual level. However, efforts to estimate multiple dynamic quantiles jointly have been limited. Existing approaches either sequentially estimate fitted quantiles or impose restrictive assumptions on the data generating process. This paper fills this gap by proposing an objective function for the joint estimation of all quantiles, introducing a crossing penalty to guide the process. Monte Carlo experiments and an empirical application on the FTSE100 validate the effectiveness of the method, offering a flexible and robust approach to modelling multiple dynamic quantiles in time-series data.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.10564
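    A minimal sketch of the joint-estimation idea in entry 5: sum pinball (tick) losses over several quantile levels and add a penalty on quantile crossings. For simplicity it uses static linear quantile regressions rather than the paper's dynamic CAViaR recursions, and the penalty weight and functional form are illustrative assumptions.
```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)
y = 0.5 * x + (1.0 + 0.5 * np.abs(x)) * rng.normal(size=n)   # heteroskedastic data
taus = np.array([0.05, 0.25, 0.5, 0.75, 0.95])

def pinball(u, tau):
    return np.mean(np.maximum(tau * u, (tau - 1.0) * u))     # tick loss

def objective(params, penalty=10.0):
    B = params.reshape(len(taus), 2)                          # intercept and slope per quantile
    fitted = B[:, 0][:, None] + B[:, 1][:, None] * x          # (n_quantiles, n_obs)
    loss = sum(pinball(y - fitted[k], taus[k]) for k in range(len(taus)))
    crossings = np.maximum(fitted[:-1] - fitted[1:], 0.0)     # lower quantile above higher one
    return loss + penalty * crossings.mean()

res = minimize(objective, x0=np.zeros(2 * len(taus)), method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print("fitted (intercept, slope) per quantile:\n", res.x.reshape(len(taus), 2).round(2))
```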
  6. By: Alex H. Martinez; Brian Christensen; Elizabeth F. Sutton; Andrew G. Chapple
    Abstract: In many manuscripts, researchers use multivariable logistic regression to adjust for potential confounding variables when estimating the direct effect of a treatment or exposure on a binary outcome. After choosing how variables are entered into that model, researchers can calculate an estimated average treatment effect (ATE), that is, the estimated change in the outcome probability with and without the exposure present. Which potential confounding variables should be included in that logistic regression model is often a concern, and is sometimes determined by variable selection methods. We explore how forward, backward, and stepwise selection of confounding variables estimates the ATE compared with spike-and-slab Bayesian variable selection across 1,000 randomly generated scenarios and various sample sizes. Our large simulation study allows us to draw pseudo-theoretical conclusions about which methods perform best for different sample sizes, outcome rarities, and numbers of confounders. An R package is also described that implements variable selection on the confounding variables only and provides estimates of the ATE. Overall, the results suggest that Bayesian variable selection is more appealing than frequentist variable selection methods for estimating the ATE in smaller samples; differences are minimal in larger samples.
    Keywords: Average Treatment Effect, ATE, Bayesian, Frequentist, Variable Selection
    JEL: C01 C11 C21
    Date: 2025–03–01
    URL: https://d.repec.org/n?u=RePEc:eei:rpaper:eeri_rp_2025_01
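    The ATE quantity discussed in entry 6 can be computed as follows: fit a logistic regression of the binary outcome on treatment and confounders, then average the difference in predicted probabilities with treatment switched on and off for every observation. The sketch below uses simulated data and omits the confounder-selection step that the paper studies.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
X = rng.normal(size=(n, 3))                        # confounders
treat = rng.binomial(1, 0.5, size=n)
logit_p = -0.5 + 1.0 * treat + X @ np.array([0.5, -0.3, 0.2])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

design = sm.add_constant(np.column_stack([treat, X]))
fit = sm.Logit(y, design).fit(disp=0)

d1 = design.copy(); d1[:, 1] = 1.0                 # everyone treated
d0 = design.copy(); d0[:, 1] = 0.0                 # nobody treated
ate = np.mean(fit.predict(d1) - fit.predict(d0))   # ATE on the probability scale
print(f"estimated ATE: {ate:.3f}")
```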
  7. By: Songnian Chen; Junlong Feng
    Abstract: We propose a factor model and an estimator of the factors and loadings that are robust to weak factors. The factors can have an arbitrarily weak influence on the mean or quantile of the outcome variable at most quantile levels; each factor only needs to have a strong impact on the outcome's quantile near one unknown quantile level. The estimator for every factor, loading, and common component is asymptotically normal at the $\sqrt{N}$ or $\sqrt{T}$ rate. It does not require knowledge of whether the factors are weak or how weak they are. We also develop a weak-factor-robust estimator of the number of factors and a consistent selector of factors of any desired strength of influence on the quantile or mean of the outcome variable. Monte Carlo simulations demonstrate the effectiveness of our methods.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15761
  8. By: Ruonan Xu; Xiye Yang
    Abstract: We propose a direct approach to calculating influence functions based on the concept of functional derivatives. The relative simplicity of our direct method is demonstrated through well-known examples. Using influence functions as a key device, we examine the connection and difference between local robustness and efficiency in both joint and sequential identification/estimation procedures. We show that the joint procedure is associated with efficiency, while the sequential procedure is linked to local robustness. Furthermore, we provide conditions that are theoretically verifiable and empirically testable on when efficient and locally robust estimation for the parameter of interest in a semiparametric model can be achieved simultaneously. In addition, we present straightforward conditions for an adaptive procedure in the presence of nuisance parameters.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15307
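    As a small numerical illustration of the influence-function concept in entry 8, one can approximate the Gateaux derivative of a statistic by contaminating the empirical distribution with a point mass and differencing. The sketch below uses the sample mean, whose influence function is known in closed form (x minus the mean), purely as a check; it is not the paper's derivation.
```python
import numpy as np

rng = np.random.default_rng(9)
data = rng.normal(loc=2.0, size=1000)

def statistic(values, weights):
    return np.average(values, weights=weights)           # weighted-mean functional

def influence(x, values, eps=1e-4):
    n = len(values)
    base = statistic(values, np.full(n, 1.0 / n))
    # contaminate the empirical distribution with a point mass of size eps at x
    contaminated = statistic(np.append(values, x),
                             np.append(np.full(n, (1.0 - eps) / n), eps))
    return (contaminated - base) / eps

print(influence(5.0, data), 5.0 - data.mean())            # numerical vs. analytical influence
```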
  9. By: Matthew A. Masten; Alexandre Poirier; Muyang Ren
    Abstract: This paper defines a general class of relaxations of the unconfoundedness assumption. This class includes several previous approaches as special cases, including the marginal sensitivity model of Tan (2006). This class therefore allows us to precisely compare and contrast these previously disparate relaxations. We use this class to derive a variety of new identification results which can be used to assess sensitivity to unconfoundedness. In particular, the prior literature focuses on average parameters, like the average treatment effect (ATE). We move beyond averages by providing sharp bounds for a large class of parameters, including both the quantile treatment effect (QTE) and the distribution of treatment effects (DTE), results which were previously unknown even for the marginal sensitivity model.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15400
  10. By: Francisco Blasques (Vrije Universiteit Amsterdam and Tinbergen Institute); Paolo Gorgi (Vrije Universiteit Amsterdam and Tinbergen Institute); Siem Jan Koopman (Vrije Universiteit Amsterdam and Tinbergen Institute); Noah Stegehuis (Vrije Universiteit Amsterdam and Tinbergen Institute)
    Abstract: The identification of causal effects of marketing campaigns (advertisements, discounts, promotions, loyalty programs) requires the collection of experimental data. Such data sets frequently suffer from limited sample sizes due to time and budget constraints, which can result in imprecise estimators and inconclusive outcomes. At the same time, companies passively accumulate observational data that oftentimes cannot be used to measure causal effects of marketing campaigns because of endogeneity issues. In this paper we show how the estimation uncertainty of causal effects can be reduced by combining the two data sources through a self-regulatory weighting scheme that adapts to the underlying bias and variance. We also introduce an instrument-free exogeneity test designed to assess whether the observational data are significantly endogenous and experimentation is necessary. To demonstrate the effectiveness of our approach, we implement the combined estimator on a real-life data set in which returning customers were awarded a discount. We show how the inconclusive result from the experimental data alone can be sharpened by our weighted estimator, and conclude that the loyalty discount has a notably negative effect on net sales.
    Keywords: endogeneity, data fusion, experimental data, observational data
    JEL: C51 C55 C93
    Date: 2024–11–03
    URL: https://d.repec.org/n?u=RePEc:tin:wpaper:20240066
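    A stylised sketch of the general data-fusion idea in entry 10: weight an unbiased but noisy experimental estimate against a precise but possibly biased observational one according to estimated bias and variance. The specific weight below, which minimises a crude estimate of mean squared error, is an illustrative assumption and not the authors' self-regulatory scheme or exogeneity test.
```python
import numpy as np

def fuse(est_exp, var_exp, est_obs, var_obs):
    """Weight the observational estimate by its estimated MSE relative to the experiment."""
    bias_sq = max((est_obs - est_exp) ** 2 - var_exp - var_obs, 0.0)  # debiased squared gap
    w_obs = var_exp / (var_exp + var_obs + bias_sq)   # shrink toward the experiment when bias is large
    combined = w_obs * est_obs + (1.0 - w_obs) * est_exp
    return combined, w_obs

# Hypothetical numbers: a noisy experiment and a tight but possibly biased observational study
combined, w_obs = fuse(est_exp=-0.05, var_exp=0.04**2, est_obs=-0.12, var_obs=0.01**2)
print(f"combined effect: {combined:.3f}, weight on observational data: {w_obs:.2f}")
```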
  11. By: Stéphane Surprenant
    Abstract: Recent rises in macroeconomic volatility have prompted the introduction of quantile vector autoregression (QVAR) models to forecast macroeconomic risk. This paper provides an extensive evaluation of the predictive performance of QVAR models in a pseudo-out-of-sample experiment spanning 112 monthly US variables over 40 years, with horizons of 1 to 12 months. We compare QVAR with three parametric benchmarks: a Gaussian VAR, a generalized autoregressive conditional heteroskedasticity VAR and a VAR with stochastic volatility. QVAR frequently, significantly and quantitatively improves upon the benchmarks and almost never performs significantly worse. Forecasting improvements are concentrated in the labour market and interest and exchange rates. Augmenting the QVAR model with factors estimated by principal components or quantile factors significantly enhances macroeconomic risk forecasting in some cases, mostly in the labour market. Generally, QVAR and the augmented models perform equally well. We conclude that both are adequate tools for modeling macroeconomic risks.
    Keywords: Econometrics and statistical methods; Business fluctuations and cycles
    JEL: C53 E37 C55
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:bca:bocawp:25-4
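    The core of a quantile VAR, as used in entry 11, can be sketched as equation-by-equation quantile regressions of each variable on lagged values of all variables over a grid of quantile levels. The lag length, quantile grid, and simulated data below are illustrative; the paper's forecasting exercise is far richer.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
T, k = 300, 2
Y = np.zeros((T, k))
A = np.array([[0.5, 0.1], [0.0, 0.4]])
for t in range(1, T):                                   # simulate a small VAR(1)
    Y[t] = Y[t - 1] @ A.T + rng.normal(size=k)

X = sm.add_constant(Y[:-1])                             # one lag of every variable
taus = [0.05, 0.5, 0.95]
forecasts = {}
for eq in range(k):
    for tau in taus:
        fit = sm.QuantReg(Y[1:, eq], X).fit(q=tau)      # quantile regression for this equation
        x_new = sm.add_constant(Y[[-1]], has_constant="add")
        forecasts[(eq, tau)] = float(fit.predict(x_new)[0])

print({key: round(val, 2) for key, val in forecasts.items()})
```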
  12. By: Chris Naubert
    Abstract: I develop a methodology for Bayesian estimation of globally solved, non-linear macroeconomic models. A novel feature of my method is the use of a mixture density network to approximate the distribution of initial states. I use the methodology to estimate a medium-scale, two-agent New Keynesian model with irreversible investment and a zero lower bound on nominal interest rates. Using simulated data, I show that the method is able to recover the “true” parameters when using the mixture density network approximation of the initial state distribution. This contrasts with the case when the initial states are set to their steady-state values.
    Keywords: Business Fluctuations and Cycles; Economic Models
    JEL: C61 C63 E37 E47
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:bca:bocawp:25-3
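    A minimal sketch of a mixture density network of the kind entry 12 uses to approximate the distribution of initial states: a small neural network maps conditioning inputs to the weights, means, and scales of a Gaussian mixture and is trained by maximising the mixture log-likelihood. The architecture and toy data below are illustrative assumptions, not the paper's DSGE setup.
```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    def __init__(self, in_dim=2, n_components=3):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(in_dim, 32), nn.Tanh())
        self.logits = nn.Linear(32, n_components)       # mixture weights (pre-softmax)
        self.means = nn.Linear(32, n_components)
        self.log_scales = nn.Linear(32, n_components)

    def log_prob(self, x, y):
        h = self.hidden(x)
        mix = torch.distributions.Categorical(logits=self.logits(h))
        comp = torch.distributions.Normal(self.means(h), self.log_scales(h).exp())
        return torch.distributions.MixtureSameFamily(mix, comp).log_prob(y)

# Toy training data: a target whose distribution depends on the conditioning input
x = torch.randn(1024, 2)
y = x[:, 0] + 0.5 * torch.randn(1024) * (1 + x[:, 1].abs())

model = MDN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = -model.log_prob(x, y).mean()                 # negative mixture log-likelihood
    loss.backward()
    opt.step()
print("final negative log-likelihood:", float(loss))
```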
  13. By: Jaap H. Abbring; Victor Chernozhukov; Iván Fernández-Val
    Abstract: Wright (1928) deals with the demand and supply of oils and butter. In Appendix B of that book, Philip Wright made several fundamental contributions to causal inference. He introduced a structural equation model of supply and demand, established the identification of supply and demand elasticities via the method of moments and directed acyclic graphs, developed empirical methods for estimating demand elasticities using weather conditions as instruments, and proposed methods for counterfactual analysis of the welfare effects of imposing tariffs and taxes. Moreover, he took all of these methods to data. These ideas were far ahead of, and much more profound than, any contemporary theoretical and empirical developments on causal inference in statistics or econometrics. This editorial presents P. Wright's work in a more modern framework, in a lecture-note format that can be useful for teaching and for linking to contemporary research.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.16395
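    In the spirit of Wright's Appendix B as summarised in entry 13, the sketch below estimates a demand elasticity with a weather (supply-side) instrument via two-stage least squares on simulated market data. The numbers and the regression form are illustrative; Wright's original computations were cast in method-of-moments terms.
```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
weather = rng.normal(size=n)                        # supply shifter, excluded from demand
demand_shock = rng.normal(size=n)                   # unobserved demand shifter
# Equilibrium of a linear supply curve (elasticity 1.0) and demand curve (elasticity -0.7)
log_p = (demand_shock - 0.5 * weather) / (0.7 + 1.0)
log_q = -0.7 * log_p + demand_shock

Z = np.column_stack([np.ones(n), weather])
p_hat = Z @ np.linalg.lstsq(Z, log_p, rcond=None)[0]                       # first stage
beta = np.linalg.lstsq(np.column_stack([np.ones(n), p_hat]), log_q, rcond=None)[0]
ols = np.linalg.lstsq(np.column_stack([np.ones(n), log_p]), log_q, rcond=None)[0]
print(f"IV demand elasticity: {beta[1]:.2f}, biased OLS: {ols[1]:.2f}")
```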
  14. By: Zaka Ratsimalahelo (Université de Franche-Comté, CRESE, UR3190, F-25000 Besançon, France)
    JEL: C12 C13
    Date: 2024–12
    URL: https://d.repec.org/n?u=RePEc:crb:wpaper:2024-20
  15. By: Capilla, Javier (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain).); Alcaraz, Alba (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain).); Valarezo, Ángel (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain).); García-Hiernaux, Alfredo (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain).); Pérez Amaral, Teodosio (Instituto Complutense de Análisis Económico (ICAE), Universidad Complutense de Madrid (Spain).)
    Abstract: Eco-RETINA is an innovative and eco-friendly algorithm explicitly designed for out-of-sample prediction. Functioning as a regression-based flexible approximator, it is linear in parameters but nonlinear in inputs, employing a selective model search to optimize performance. The algorithm adeptly manages multicollinearity while emphasizing speed, accuracy, and environmental sustainability. Its modular and transparent structure facilitates easy interpretation and modification, making it an invaluable tool for researchers in developing explicit models for out-of-sample forecasting. The algorithm generates outputs such as a list of relevant transformed inputs, coefficients, standard deviations, and confidence intervals, enhancing its interpretability.
    Keywords: Eco-RETINA; Out-of-sample prediction.
    JEL: C14 C45 C51 C63
    Date: 2025
    URL: https://d.repec.org/n?u=RePEc:ucm:doicae:2501
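    An illustrative sketch, not the actual Eco-RETINA algorithm, of the general RETINA-style strategy entry 15 describes: build a set of transformed inputs that is linear in parameters but nonlinear in the raw inputs, then pick a parsimonious subset by out-of-sample predictive performance. The transforms and the greedy forward search below are simplifying assumptions.
```python
import itertools
import numpy as np

rng = np.random.default_rng(6)
n, p = 400, 3
X = rng.uniform(0.5, 2.0, size=(n, p))
y = 1.0 + 2.0 * X[:, 0] * X[:, 1] + rng.normal(scale=0.2, size=n)   # nonlinear truth

# Candidate transforms: raw inputs plus pairwise products and ratios
candidates = {f"x{j}": X[:, j] for j in range(p)}
for i, j in itertools.combinations(range(p), 2):
    candidates[f"x{i}*x{j}"] = X[:, i] * X[:, j]
    candidates[f"x{i}/x{j}"] = X[:, i] / X[:, j]

train, valid = slice(0, n // 2), slice(n // 2, n)       # split for out-of-sample selection

def oos_mse(names):
    Z = np.column_stack([np.ones(n)] + [candidates[c] for c in names])
    beta = np.linalg.lstsq(Z[train], y[train], rcond=None)[0]
    return np.mean((y[valid] - Z[valid] @ beta) ** 2)

selected, best = [], oos_mse([])
while True:                                             # greedy forward selection
    trials = {c: oos_mse(selected + [c]) for c in candidates if c not in selected}
    if not trials:
        break
    name, mse = min(trials.items(), key=lambda kv: kv[1])
    if mse >= best:
        break
    selected, best = selected + [name], mse

print("selected transforms:", selected, "validation MSE:", round(best, 4))
```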
  16. By: Jiawen Luo (School of Business Administration, South China University of Technology, Guangzhou 510640); Shengjie Fu (School of Business Administration, South China University of Technology, Guangzhou 510640); Oguzhan Cepni (Ostim Technical University, Ankara, Turkiye; University of Edinburgh Business School, Centre for Business, Climate Change, and Sustainability; Department of Economics, Copenhagen Business School, Denmark); Rangan Gupta (Department of Economics, University of Pretoria, Private Bag X20, Hatfield 0028, South Africa)
    Abstract: In this paper, we construct a set of reverse-Mixed Data Sampling (MIDAS) models to forecast the daily realized covariance matrix of United States (US) state-level stock returns, derived from 5-minute intraday data, by incorporating information on the volatility of weekly economic condition indices, which serve as proxies for economic uncertainty. We decompose the realized covariance matrix into a diagonal variance matrix and a correlation matrix and forecast them separately using a two-step procedure. In particular, the realized variances are forecast by combining the Heterogeneous Autoregressive (HAR) model with the reverse-MIDAS framework, incorporating the low-frequency uncertainty variable as a predictor, while the forecasting of the correlation matrix relies on the scalar MHAR model and the recent log correlation-matrix parameterization of Archakov and Hansen (2021). Our empirical results demonstrate that the forecast models incorporating uncertainty associated with economic conditions outperform the benchmark model in terms of both in-sample fit and out-of-sample forecasting accuracy. Moreover, economic evaluation results suggest that portfolios based on the proposed reverse-MIDAS covariance forecast models generally achieve higher annualized returns and Sharpe ratios, as well as lower portfolio concentrations and short positions.
    Keywords: US state-level stock returns, Covariance matrix, Uncertainty, Reverse-MIDAS, Forecasting
    JEL: C22 C32 C53 D80 G10
    Date: 2025–02
    URL: https://d.repec.org/n?u=RePEc:pre:wpaper:202501
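    A minimal sketch of the HAR building block used in the variance step of entry 16: regress realized variance on its previous daily value and its weekly and monthly averages. The reverse-MIDAS uncertainty term and the Archakov-Hansen correlation-matrix step are omitted, and the data are simulated for illustration.
```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
T = 1000
log_rv = np.zeros(T)
for t in range(1, T):                                # persistent toy realized-variance series
    log_rv[t] = 0.95 * log_rv[t - 1] + rng.normal(scale=0.2)
rv = np.exp(log_rv)

rows, target = [], []
for t in range(22, T):                               # predict RV_t from information up to t-1
    rows.append([rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
    target.append(rv[t])

X = sm.add_constant(np.array(rows))
fit = sm.OLS(np.array(target), X).fit()
print(dict(zip(["const", "daily", "weekly", "monthly"], fit.params.round(3))))
```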

This nep-ecm issue is ©2025 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.