nep-ecm New Economics Papers
on Econometrics
Issue of 2024‒02‒05
eleven papers chosen by
Sune Karlsson, Örebro universitet


  1. Robust Inference for Multiple Predictive Regressions with an Application on Bond Risk Premia By Xiaosai Liao; Xinjue Li; Qingliang Fan
  2. Semiparametric Conditional Mixture Copula Models with Copula Selection By Zongwu Cai; Guannan Liu; Wei Long; Xuelong Luo
  3. A Sparse Kalman Filter: A Non-Recursive Approach By Michal Andrle; Jan Bruha
  4. Efficiency of QMLE for dynamic panel data models with interactive effects By Jushan Bai
  5. Linking Frequentist and Bayesian Change-Point Methods By Ardia, David; Dufays, Arnaud; Ordás Criado, Carlos
  6. Negative Weights are No Concern in Design-Based Specifications By Kirill Borusyak; Peter Hull
  7. Machine Learning Based Panel Data Models By Bingduo Yang; Wei Long; Zongwu Cai
  8. Indirect Inference: a methodological essay on its role and applications By Minford, Patrick; Xu, Yongdeng
  9. Generalized Difference-in-Differences for Ordered Choice Models: Too Many "False Zeros"? By Daniel Gutknecht; Cenchen Liu
  10. Estimation of empirical models for margins of exports with unknown non-linear functional forms: A Kernel-Regularized Least Squares (KRLS) approach. Evidence from eight European countries By Joachim Wagner
  11. Testing Collusion and Cooperation in Binary Choice Games By Erhao Xie

  1. By: Xiaosai Liao; Xinjue Li; Qingliang Fan
    Abstract: We propose a robust hypothesis testing procedure for the predictability of multiple predictors that could be highly persistent. Our method improves on the popular extended instrumental variable (IVX) test (Phillips and Lee, 2013; Kostakis et al., 2015) in that, besides addressing the two bias effects found in Hosseinkouchack and Demetrescu (2021), we identify and correct a variance-enlargement effect. We show that two types of higher-order terms induce these distortions in the test statistic, leading to significant over-rejection in one-sided tests and in multiple predictive regressions. Our improved IVX-based test proceeds in three steps that tackle all of the finite-sample bias and variance issues above. As a result, the test statistic performs well in size control, while its power is comparable with that of the original IVX. Monte Carlo simulations and an empirical study on the predictability of bond risk premia demonstrate the effectiveness of the newly proposed approach.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.01064&r=ecm
  2. By: Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Guannan Liu (School of Economics and WISE, Xiamen University, Xiamen, Fujian 361005, China); Wei Long (Department of Economics, Tulane University, New Orleans, LA 70118, USA); Xuelong Luo (School of Economics and WISE, Xiamen University, Xiamen, Fujian 361005, China)
    Abstract: This study proposes a semiparametric conditional mixture copula model that allows for unspecified functions of a covariate in both the (conditional) marginal distributions and the copula dependence and weight parameters. To estimate this model, we propose a two-step procedure. In the first step, the (conditional) marginal distributions are estimated nonparametrically using the weighted Nadaraya-Watson method. In the second step, we apply a penalized local log-likelihood function to simultaneously estimate the copula parameters and select an appropriate copula model. Furthermore, we propose a test of covariate effects for time series data. We establish the large sample properties of both the penalized and unpenalized estimators under alpha-mixing conditions. Monte Carlo simulations show that the proposed method performs well in selecting and estimating conditional mixture copulas under various model specifications. Finally, we apply the proposed method to investigate the dynamic patterns of dependence among four states' housing markets along the interest rate path.
    Keywords: Conditional Copula; Mixture Copula; Semiparametric Estimation; Copula Selection; SCAD; EM algorithm.
    JEL: C14 C22
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202401&r=ecm
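The first step of the two-step procedure above can be sketched with a plain Nadaraya-Watson estimator of a conditional CDF. This is a minimal illustration, not the authors' code: the function name `nw_conditional_cdf`, the Gaussian kernel, the bandwidth, and the toy data are all assumptions, and the paper's *weighted* Nadaraya-Watson variant adds data-driven weights that are omitted here.

```python
import numpy as np

def nw_conditional_cdf(y, x, y0, x0, h):
    """Nadaraya-Watson estimate of F(y0 | x0) = P(Y <= y0 | X = x0):
    a kernel-weighted average of the indicators 1{y_t <= y0}."""
    k = np.exp(-0.5 * ((x - x0) / h) ** 2)   # Gaussian kernel weights
    return np.sum(k * (y <= y0)) / np.sum(k)

# Toy data where Y = X, so the true F(y0 | x0) jumps from 0 to 1 at y0 = x0.
x = np.linspace(0.0, 1.0, 101)
y = x.copy()
F_lo = nw_conditional_cdf(y, x, 0.1, 0.5, h=0.05)    # well below the jump
F_mid = nw_conditional_cdf(y, x, 0.5, 0.5, h=0.05)   # at the jump
F_hi = nw_conditional_cdf(y, x, 0.9, 0.5, h=0.05)    # well above the jump
```

Evaluating the resulting pseudo-observations `F(y_t | x_t)` at each sample point is what feeds the copula step.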
  3. By: Michal Andrle; Jan Bruha
    Abstract: We propose an algorithm to estimate unobserved states and shocks in a state-space model under sparsity constraints. Many economic models have a linear state-space form - for example, linearized DSGE models, VARs, time-varying VARs, and dynamic factor models. Under the conventional Kalman filter, which is essentially a recursive OLS algorithm, all estimated shocks are non-zero. However, the true shocks are often zero for multiple periods, and non-zero estimates are due to noisy data or ill-conditioning of the model. We show applications where sparsity is the natural solution. Sparsity of filtered shocks is achieved by applying an elastic-net penalty to the least-squares problem and improves statistical efficiency. The algorithm can be adapted for non-convex penalties and for estimates robust to outliers.
    Keywords: Kalman filter, regularization, sparsity
    JEL: C32 C52 C53
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:cnb:wpaper:2023/13&r=ecm
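The non-recursive view in this paper treats shock filtering as one penalized least-squares problem. A minimal sketch for an AR(1) state observed with noise: stacking gives y = T e + noise with a lower-triangular T, and an elastic-net penalty on the shocks e yields sparse estimates. The ISTA solver, the tuning values, and the simulated data below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# AR(1) state x_t = rho * x_{t-1} + e_t, observed as y_t = x_t + noise.
# Then y = T e + noise with T[t, s] = rho**(t-s) for s <= t (lower triangular).
rng = np.random.default_rng(0)
n, rho = 80, 0.9
e_true = np.zeros(n)
e_true[[10, 40, 60]] = [2.0, -3.0, 1.5]         # shocks are zero in most periods
T = np.tril(rho ** np.subtract.outer(np.arange(n), np.arange(n)))
y = T @ e_true + 0.1 * rng.standard_normal(n)

def elastic_net_ista(A, b, lam1, lam2, n_iter=5000):
    """Proximal-gradient (ISTA) solver for
    min_e 0.5*||b - A e||^2 + lam1*||e||_1 + 0.5*lam2*||e||^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
    e = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ e - b)                   # gradient of the smooth part
        z = e - step * g
        # soft-thresholding (L1 part), then shrinkage from the ridge part
        e = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)
        e /= 1.0 + step * lam2
    return e

e_hat = elastic_net_ista(T, y, lam1=0.5, lam2=0.1)
```

Unlike the recursive-OLS Kalman filter, which returns non-zero estimates for every period, the L1 part of the penalty sets most filtered shocks exactly to zero.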
  4. By: Jushan Bai
    Abstract: This paper derives the efficiency bound for estimating the parameters of dynamic panel data models in the presence of an increasing number of incidental parameters. We study the efficiency problem by formulating the dynamic panel as a simultaneous equations system, and show that the quasi-maximum likelihood estimator (QMLE) applied to the system achieves the efficiency bound. Comparison of QMLE with fixed effects estimators is made.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.07881&r=ecm
  5. By: Ardia, David; Dufays, Arnaud; Ordás Criado, Carlos
    Abstract: We show that the two-stage minimum description length (MDL) criterion widely used to estimate linear change-point (CP) models corresponds to the marginal likelihood of a Bayesian model with a specific class of prior distributions. This allows results from the frequentist and Bayesian paradigms to be bridged together. Thanks to this link, one can rely on the consistency of the number and locations of the estimated CPs and the computational efficiency of frequentist methods, and obtain a probability of observing a CP at a given time, compute model posterior probabilities, and select or combine CP methods via Bayesian posteriors. Furthermore, we adapt several CP methods to take advantage of the MDL probabilistic representation. Based on simulated data, we show that the adapted CP methods can improve structural break detection compared to state-of-the-art approaches. Finally, we empirically illustrate the usefulness of combining CP detection methods when dealing with long time series and forecasting.
    Keywords: Change-point; Minimum description length; Model selection/combination; Structural change.
    JEL: C11 C12 C22 C32 C52 C53
    Date: 2023–12–20
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:119486&r=ecm
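The flavour of a two-stage MDL criterion can be shown on the simplest case: a single mean shift with Gaussian segments. Description length is the sum of each segment's code length plus parameter and break-location costs; a break is declared only if it shortens the total description. The cost terms below are a stylized textbook version, not the exact criterion of the paper, and `mdl_mean_shift` is a hypothetical name.

```python
import numpy as np

def mdl_mean_shift(y, min_seg=5):
    """MDL-style search for at most one mean shift (sketch).
    Returns (tau, cost); tau is None if no break shortens the description."""
    n = len(y)

    def seg_cost(seg):
        m = len(seg)
        rss = np.sum((seg - seg.mean()) ** 2)
        # Gaussian code length for the data + 0.5*log(m) for the segment mean
        return 0.5 * m * np.log(max(rss / m, 1e-12)) + 0.5 * np.log(m)

    best_tau, best = None, seg_cost(y)          # benchmark: no-break model
    for tau in range(min_seg, n - min_seg + 1):
        cost = seg_cost(y[:tau]) + seg_cost(y[tau:]) + np.log(n)  # + log n for the break location
        if cost < best:
            best, best_tau = cost, tau
    return best_tau, best

rng = np.random.default_rng(3)
y = np.concatenate([np.zeros(50), np.full(50, 3.0)]) + rng.standard_normal(100)
tau, _ = mdl_mean_shift(y)
```

The paper's key point is that such MDL costs can be read as negative log marginal likelihoods under a specific prior, which is what licenses posterior probabilities over break locations.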
  6. By: Kirill Borusyak; Peter Hull
    Abstract: Recent work shows that popular partially-linear regression specifications can put negative weights on some treatment effects, potentially producing incorrectly-signed estimands. We counter by showing that negative weights are no problem in design-based specifications, in which low-dimensional controls span the conditional expectation of the treatment. Specifically, the estimands of such specifications are convex averages of causal effects with “ex-ante” weights that average the potentially negative “ex-post” weights across possible treatment realizations. This result extends to design-based instrumental variable estimands under a first-stage monotonicity condition, and applies to “formula” treatments and instruments such as shift-share instruments.
    JEL: C21 C26
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32017&r=ecm
  7. By: Bingduo Yang (School of Finance, Guangdong University of Finance and Economics, Guangzhou 510320, China); Wei Long (Department of Economics, Tulane University, New Orleans, LA 70118, USA); Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA)
    Abstract: We examine nonparametric panel data regression models with fixed effects and cross-sectional dependence through a diverse collection of machine learning techniques. We add cross-sectional averages and time averages as regressors to the model to account for unobserved common factors and fixed effects, respectively. Additionally, we utilize the debiased machine learning method of Chernozhukov et al. (2018) to estimate the parametric coefficients, followed by the nonparametric component. We comprehensively investigate three commonly used machine learning techniques - LASSO, random forests, and neural networks - in finite samples. Simulation results demonstrate the effectiveness of our proposed method across different combinations of the number of cross-sectional units, time dimension sample size, and the number of regressors, irrespective of the presence of fixed effects and cross-sectional dependence. In the empirical part, we employ the proposed machine learning-based panel data model to estimate the total factor productivity (TFP) of public companies in mainland China and find that the proposed machine learning methods are comparable to other competitive methods.
    Keywords: Machine learning; panel data model; cross-sectional dependence; debiased machine learning.
    JEL: C12 C22
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:kan:wpaper:202402&r=ecm
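The augmentation step described above is mechanical and easy to sketch: append cross-sectional averages (to proxy common factors) and time averages (to proxy fixed effects) to the regressor set before handing the stacked design to any learner. The panel dimensions and array layout below are illustrative assumptions; the estimation step itself (LASSO, forests, nets, debiasing) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, p = 20, 30, 3                      # units, periods, regressors (illustrative sizes)
X = rng.standard_normal((N, T, p))       # balanced panel of regressors

cs_avg = X.mean(axis=0, keepdims=True)   # cross-sectional averages: proxy common factors
t_avg = X.mean(axis=1, keepdims=True)    # time averages: proxy unit fixed effects
X_aug = np.concatenate(
    [X, np.broadcast_to(cs_avg, X.shape), np.broadcast_to(t_avg, X.shape)], axis=-1
)                                        # shape (N, T, 3p)
Z = X_aug.reshape(N * T, 3 * p)          # stacked design, ready for any ML learner
```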
  8. By: Minford, Patrick (Cardiff Business School); Xu, Yongdeng (Cardiff Business School)
    Abstract: In this short paper we review the intellectual history of indirect inference as a methodology, tracing its progress from an informal method for evaluating early representative-agent models to a formal test of DSGE models of the economy, and we consider the issues that arise in carrying out these tests. We note that indirect inference is asymptotically equivalent to FIML, i.e. in large samples, and that in small samples it is superior to FIML both in lowering bias and in achieving good power. In application, its power needs to be evaluated by Monte Carlo experiment for the particular context. Structural models need to be defined in terms of their scope of application, and auxiliary models chosen suitably to test their applicability within this scope. Power can be set too high by using too many auxiliary model features to match, and pushed too low by using too few. Exceptionally large shocks, such as wars and crises, may also limit a model's applicability by causing unusual behaviour that the model cannot capture. If so, these need to be excluded so that the model is evaluated for the 'normal times' in which it is applicable.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:cdf:wpaper:2024/1&r=ecm
  9. By: Daniel Gutknecht; Cenchen Liu
    Abstract: In this paper, we develop a generalized Difference-in-Differences model for discrete, ordered outcomes, building upon elements from a continuous Changes-in-Changes model. We focus on outcomes derived from self-reported survey data eliciting socially undesirable, illegal, or stigmatized behaviors like tax evasion, substance abuse, or domestic violence, where too many "false zeros", or more broadly, underreporting are likely. We provide characterizations for distributional parallel trends, a concept central to our approach, within a general threshold-crossing model framework. In cases where outcomes are assumed to be reported correctly, we propose a framework for identifying and estimating treatment effects across the entire distribution. This framework is then extended to modeling underreported outcomes, allowing the reporting decision to depend on treatment status. A simulation study documents the finite sample performance of the estimators. Applying our methodology, we investigate the impact of recreational marijuana legalization for adults in several U.S. states on the short-term consumption behavior of 8th-grade high-school students. The results indicate small, but significant increases in consumption probabilities at each level. These effects are further amplified upon accounting for misreporting.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.00618&r=ecm
  10. By: Joachim Wagner (Leuphana Universität Lüneburg, Institut für Volkswirtschaftslehre and Kiel Centre for Globalization)
    Abstract: Empirical models for intensive or extensive margins of trade that relate measures of exports to firm characteristics are usually estimated by variants of (generalized) linear models. The firm characteristics that explain these export margins typically enter the empirical model in linear form, sometimes augmented by quadratic terms, higher-order polynomials, or interaction terms to capture or test for non-linear relationships. If such non-linear relationships matter but are ignored in the specification of the empirical model, the results are biased. Researchers, however, can never be sure that all possible non-linear relationships are taken care of in their chosen specifications. This note is the first to use the Kernel-Regularized Least Squares (KRLS) estimator to deal with this issue in empirical models for margins of exports. KRLS is a machine learning method that learns the functional form from the data. Empirical examples show that it is easy to apply and works well. It is therefore a useful addition to the toolbox of empirical trade economists.
    Keywords: Margins of exports, empirical models, non-linear relationships, kernel-regularized least squares, krls
    JEL: F14
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:lue:wpaper:424&r=ecm
  11. By: Erhao Xie
    Abstract: This paper studies the testable implication of players’ collusive or cooperative behaviours in a binary choice game with complete information. In this paper, these behaviours are defined as players coordinating their actions to maximize the weighted sum of their payoffs. I show that this collusive model is observationally equivalent to an equilibrium model that imposes two restrictions. The first restriction is on each player’s strategic effect and the second one requires a particular equilibrium selection mechanism. Under the equilibrium condition, these joint restrictions are simple to test using tools in the literature on empirical games. This test, as suggested by the observational equivalence result, is the same as testing collusive and cooperative behaviours. I illustrate the implementation of this test by revisiting the entry game between Walmart and Kmart studied by Jia (2008). Under the equilibrium condition, Jia’s original estimates are consistent with the first restriction on the strategic effects, serving as a warning sign of potential collusion. This paper tests and rejects the second restriction on the equilibrium selection mechanism. Thus, the empirical evidence suggests that Walmart and Kmart did not collude on their entry decisions.
    Keywords: Econometric and statistical methods; Market structure and pricing
    JEL: C57 L13
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:23-58&r=ecm

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.