on Econometrics |
By: | Kunyang Song; Feiyu Jiang; Ke Zhu |
Abstract: | We provide a new estimation method for conditional moment models via the martingale difference divergence (MDD). Our MDD-based estimation method is formed in the framework of a continuum of unconditional moment restrictions. Unlike existing estimation methods in this framework, the MDD-based method adopts a non-integrable weighting function, which can extract more information from the unconditional moment restrictions than an integrable weighting function and thereby enhance estimation efficiency. Because MDD is shift-invariant, our MDD-based estimation method cannot identify intercept parameters. To overcome this identification issue, we further provide a two-step estimation procedure for models with intercept parameters. Under regularity conditions, we establish the asymptotics of the proposed estimators, which are not only easy to implement with analytic asymptotic variances, but also applicable to time series data with an unspecified form of conditional heteroskedasticity. Finally, we illustrate the usefulness of the proposed estimators with simulations and two real examples. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.11092&r=ecm |
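The abstract above turns on two properties of MDD: its sample version is a simple double sum over pairwise distances, and it is shift-invariant, which is exactly why intercepts are not identified. A minimal sketch of the standard sample MDD statistic (the function name `mdd_sq` and the toy data are illustrative, not from the paper):

```python
import numpy as np

def mdd_sq(y, x):
    """Sample martingale difference divergence of y given x:
    MDD^2 = -(1/n^2) * sum_{i,j} (y_i - ybar)(y_j - ybar) * |x_i - x_j|."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    yc = y - y.mean()                    # centring makes the statistic shift-invariant
    d = np.abs(x[:, None] - x[None, :])  # pairwise distances |x_i - x_j| (1-D case)
    return -(np.outer(yc, yc) * d).mean()

x = np.array([0.0, 1.0, 2.0, 3.0])
y = x.copy()                             # y is mean-dependent on x
print(mdd_sq(y, x))                      # strictly positive under dependence
print(mdd_sq(y + 7.0, x) - mdd_sq(y, x)) # exactly 0: intercept shifts are invisible
```

The second printed value illustrates the identification issue the paper's two-step procedure is designed to fix: adding any constant to y leaves the statistic unchanged.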
By: | Martin Huber |
Abstract: | We demonstrate and discuss the testability of the common trend assumption imposed in Difference-in-Differences (DiD) estimation in panel data when not relying on multiple pre-treatment periods for running placebo tests. Our testing approach involves two steps: (i) constructing a control group of non-treated units whose pre-treatment outcome distribution matches that of treated units, and (ii) verifying whether this control group and the original non-treated group share the same time trend in average outcomes. Testing is motivated by the fact that in several (but not all) panel data models, a common trend violation across treatment groups implies and is implied by a common trend violation across pre-treatment outcomes. For this reason, the test verifies a sufficient, but (depending on the model) not necessary, condition for DiD-based identification. We investigate the finite-sample performance of a testing procedure based on double machine learning, which permits controlling for covariates in a data-driven manner, in a simulation study, and also apply the procedure to labor market data from the National Supported Work Demonstration. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.16961&r=ecm |
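The two-step logic above can be sketched in a deliberately simplified form: match on a single scalar pre-treatment outcome by nearest neighbour (rather than matching the whole distribution with double machine learning, as the paper does) and compare trends. The function `common_trend_gap` and the toy data are assumptions for illustration only:

```python
import numpy as np

def common_trend_gap(y_pre_treat, y_pre_ctrl, y_post_ctrl):
    """Step (i): match each treated unit to the non-treated unit with the
    closest pre-treatment outcome.  Step (ii): compare the average outcome
    trend of the matched controls with that of all non-treated units.
    A gap near zero is consistent with the testable implication."""
    y_pre_treat = np.asarray(y_pre_treat, float)
    y_pre_ctrl = np.asarray(y_pre_ctrl, float)
    y_post_ctrl = np.asarray(y_post_ctrl, float)
    # nearest-neighbour matching on the single pre-treatment outcome
    idx = np.abs(y_pre_ctrl[None, :] - y_pre_treat[:, None]).argmin(axis=1)
    trend_matched = (y_post_ctrl[idx] - y_pre_ctrl[idx]).mean()
    trend_all = (y_post_ctrl - y_pre_ctrl).mean()
    return trend_matched - trend_all

# all controls share the same +1.0 time trend, so the gap should be zero
pre_c = np.array([0.0, 1.0, 2.0, 3.0])
post_c = pre_c + 1.0
print(common_trend_gap([0.1, 2.9], pre_c, post_c))  # 0.0
```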
By: | Kevin Huynh |
Abstract: | Model averaging methods have become an increasingly popular tool for improving predictions and dealing with model uncertainty, especially in Bayesian settings. Recently, frequentist model averaging methods such as information theoretic and least squares model averaging have emerged. This work focuses on the issue of covariate uncertainty where managing the computational resources is key: The model space grows exponentially with the number of covariates such that averaged models must often be approximated. Weighted-average least squares (WALS), first introduced for (generalized) linear models in the econometric literature, combines Bayesian and frequentist aspects and additionally employs a semiorthogonal transformation of the regressors to reduce the computational burden. This paper extends WALS for generalized linear models to the negative binomial (NB) regression model for overdispersed count data. A simulation experiment and an empirical application using data on doctor visits were conducted to compare the predictive power of WALS for NB regression to traditional estimators. The results show that WALS for NB improves on the maximum likelihood estimator in sparse situations and is competitive with lasso while being computationally more efficient. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.11324&r=ecm |
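The computational point in the abstract, that the model space grows as 2^k in the number of covariates, is what WALS's semiorthogonal transformation avoids. The sketch below is not WALS (no transformation, no negative binomial likelihood): it is a generic exponential enumeration of Gaussian submodels with AIC-type weights, purely to illustrate the scale of the problem; the function name `aic_model_average` is an assumption:

```python
import numpy as np
from itertools import combinations, chain

def aic_model_average(X, y):
    """Average predictions over all 2^k covariate subsets (intercept always
    included) using exponential-AIC weights.  Illustrative only: WALS avoids
    this exponential enumeration via a semiorthogonal transformation."""
    n, k = X.shape
    subsets = chain.from_iterable(combinations(range(k), r) for r in range(k + 1))
    preds, aics = [], []
    for s in subsets:
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in s])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        sigma2 = max(resid @ resid / n, 1e-12)
        aics.append(n * np.log(sigma2) + 2 * (len(s) + 1))
        preds.append(Z @ beta)
    aics = np.array(aics)
    w = np.exp(-0.5 * (aics - aics.min()))  # smoothed-AIC weights
    w /= w.sum()
    return sum(wi * p for wi, p in zip(w, preds)), w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
y = X[:, 0] + rng.normal(size=50)
pred, w = aic_model_average(X, y)
print(len(w))    # 4 models already for k = 2 covariates
```

With k = 30 covariates this enumeration would need over a billion fits, which is the motivation for WALS's linear-in-k construction.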
By: | Jinyong Hahn; Guido Kuersteiner; Andres Santos; Wavid Willigrod |
Abstract: | This paper studies the testability of identifying restrictions commonly employed to assign a causal interpretation to two stage least squares (TSLS) estimators based on Bartik instruments. For homogeneous effects models applied to short panels, our analysis yields testable implications previously noted in the literature for the two major available identification strategies. We propose overidentification tests for these restrictions that remain valid in high dimensional regimes and are robust to heteroskedasticity and clustering. We further show that homogeneous effect models in short panels, and their corresponding overidentification tests, are of central importance by establishing that: (i) In heterogeneous effects models, interpreting TSLS as a positively weighted average of treatment effects can impose implausible assumptions on the distribution of the data; and (ii) Alternative identification strategies relying on long panels can prove uninformative in short panel applications. We highlight the empirical relevance of our results by examining the viability of Bartik instruments for identifying the effect of rising Chinese import competition on US local labor markets. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.17049&r=ecm |
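For readers who want the baseline the paper builds on: a textbook TSLS estimator with the classical Sargan overidentification statistic, sketched below on simulated data. This is not the paper's high-dimensional, heteroskedasticity- and cluster-robust test, and the function `tsls` and the simulated design are assumptions for illustration:

```python
import numpy as np

def tsls(y, X, Z):
    """Two-stage least squares: beta = (X'Pz X)^{-1} X'Pz y with Pz the
    projection onto the instrument space, plus the classical Sargan
    overidentification statistic J = n * u'Pz u / u'u."""
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
    XPz = X.T @ Pz
    beta = np.linalg.solve(XPz @ X, XPz @ y)
    u = y - X @ beta
    uhat = Pz @ u
    J = len(y) * (uhat @ uhat) / (u @ u)
    return beta, J

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=(n, 2))                  # two instruments, one regressor
x = Z @ np.array([1.0, 1.0]) + rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)
beta, J = tsls(y, x[:, None], Z)
print(beta)                                  # close to the true coefficient 2
```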
By: | Yi Lu; Jianguo Wang; Huihua Xie |
Abstract: | This paper develops a generalized framework for identifying causal impacts in a reduced-form manner under kinked settings when agents can manipulate their choices around the threshold. Causal estimation in a bunching framework was initially developed by Diamond and Persson (2017) for notched settings, yet many empirical applications of bunching designs involve kinked settings. We propose a model-free causal estimator for kinked settings with sharp bunching and then extend it to scenarios with diffuse bunching, misreporting, optimization frictions, and heterogeneity. The estimation method is mostly non-parametric and accounts for the interior response under kinked settings. Applying the proposed approach, we estimate how medical subsidies affect outpatient behaviors in China. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.09117&r=ecm |
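The starting point of any bunching design, including the causal extensions described above, is the excess-mass calculation: fit a smooth counterfactual density to bins outside the bunching window and measure the spike inside it. A minimal textbook-style sketch (the function `excess_mass` and the toy histogram are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

def excess_mass(bin_centers, counts, window, degree=3):
    """Fit a polynomial counterfactual to bins outside the bunching window
    and measure the excess mass inside it."""
    bin_centers = np.asarray(bin_centers, float)
    counts = np.asarray(counts, float)
    lo, hi = window
    inside = (bin_centers >= lo) & (bin_centers <= hi)
    coefs = np.polyfit(bin_centers[~inside], counts[~inside], degree)
    counterfactual = np.polyval(coefs, bin_centers[inside])
    return (counts[inside] - counterfactual).sum()

# counts follow an exact quadratic except for 50 extra units bunched at 0
grid = np.arange(-5.0, 6.0)            # bins at -5, ..., 5
base = 100.0 - grid**2
obs = base.copy()
obs[grid == 0.0] += 50.0
print(excess_mass(grid, obs, window=(-0.5, 0.5), degree=2))  # 50.0
```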
By: | Douglas Kiarelly Godoy de Araujo |
Abstract: | Synthetic control methods are a data-driven way to calculate counterfactuals from control individuals for the estimation of treatment effects in many settings of empirical importance. In canonical implementations, the counterfactual is a linear weighted combination of donor pool units, and the key methodological steps of donor pool selection and covariate comparison between the treated entity and its synthetic control depend on some degree of subjective judgment. Thus current methods may not perform best in settings with large datasets, or when the best synthetic control is obtained by a nonlinear combination of donor pool individuals. This paper proposes "machine controls": synthetic controls based on automated donor pool selection through clustering algorithms, supervised learning for flexible non-linear weighting of control entities, and manifold learning to confirm numerically whether the synthetic control indeed resembles the target unit. The machine controls method is demonstrated on the effect of the 2017 labour deregulation on worker productivity in Brazil. Contrary to policymaker expectations at the time of enactment of the reform, there is no discernible effect on worker productivity. This result points to the deep challenges in raising the level of productivity, and with it, economic welfare. |
Keywords: | causal inference, synthetic controls, machine learning, labour reforms, productivity |
JEL: | B41 C32 C54 E24 J50 J83 O47 |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:bis:biswps:1181&r=ecm |
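The canonical linear, convex weighting that "machine controls" generalises can be written in closed form for the simplest case of two donors. The sketch below is that degenerate case only (the function `two_donor_weight` and the toy paths are assumptions), useful as a contrast with the paper's nonlinear, supervised-learning weighting:

```python
import numpy as np

def two_donor_weight(y_treated_pre, donor1_pre, donor2_pre):
    """Closed-form synthetic control weight w on donor 1 (and 1-w on donor 2)
    minimising the pre-treatment fit ||y - (w*d1 + (1-w)*d2)||^2, clipped to
    [0, 1] as in canonical convex implementations."""
    y = np.asarray(y_treated_pre, float)
    d1 = np.asarray(donor1_pre, float)
    d2 = np.asarray(donor2_pre, float)
    g = d1 - d2
    w = (y - d2) @ g / (g @ g)
    return float(np.clip(w, 0.0, 1.0))

d1 = np.array([1.0, 2.0, 3.0])
d2 = np.array([3.0, 2.0, 1.0])
y = 0.3 * d1 + 0.7 * d2                # treated pre-path is a convex combination
print(two_donor_weight(y, d1, d2))     # 0.3
```

With many donors this becomes a simplex-constrained least-squares problem; the paper replaces the linear combination altogether with supervised learning.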
By: | Marín Díazaraque, Juan Miguel; Romero, Eva; Lopes Moreira Da Veiga, María Helena |
Abstract: | In this paper, we propose a novel asymmetric stochastic volatility model that uses a heterogeneous autoregressive process to capture the persistence and decay of volatility asymmetry over time, which is different from traditional approaches. We analyze the properties of the model in terms of volatility asymmetry and propagation using a recently introduced concept in the field and find that the new model can generate both volatility asymmetry and propagation effects. We also introduce Data Cloning for parameter estimation, which provides robustness and computational efficiency compared to conventional techniques. Our empirical analysis shows that the new proposal outperforms a recent competitor in terms of in-sample fit and out-of-sample volatility prediction across different financial return series, making it a more effective tool for capturing the dynamics of volatility asymmetry in financial markets. |
Keywords: | Data cloning; Propagation; Stochastic volatility; Volatility asymmetry |
Date: | 2024–05–07 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:43887&r=ecm |
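The heterogeneous autoregressive (HAR) structure the abstract refers to aggregates volatility over daily, weekly (5-day), and monthly (22-day) horizons. The sketch below builds only that standard HAR design matrix on toy data; it is not the paper's asymmetric stochastic volatility model or its Data Cloning estimation, and the function name `har_design` is an assumption:

```python
import numpy as np

def har_design(rv):
    """Heterogeneous autoregressive (HAR) design: regress rv_t on the
    previous value and the trailing 5-day and 22-day averages."""
    rv = np.asarray(rv, float)
    rows, targets = [], []
    for t in range(22, len(rv)):
        day = rv[t - 1]
        week = rv[t - 5:t].mean()
        month = rv[t - 22:t].mean()
        rows.append([1.0, day, week, month])
        targets.append(rv[t])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(1)
rv = np.abs(rng.normal(size=300))          # stand-in volatility series
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)                                # intercept, daily, weekly, monthly
```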
By: | Aknouche, Abdelhakim; Rabehi, Nadia |
Abstract: | This work proposes a class of seasonal autoregressive integrated moving average models whose period is an independent and identically distributed random process valued in a finite set. The causality, invertibility, and autocovariance shape of the model are first revealed. Then, the estimation of the parameters, namely the model coefficients, the innovation variance, the probability distribution of the period, and the (unobserved) sample path of the period, is carried out using the expectation-maximization algorithm. In particular, a procedure for random elimination of seasonality is proposed. An application of the methodology to the annual Wolfer sunspot numbers is provided. |
Keywords: | Seasonal ARIMA models, irregular seasonality, random period, non-integer period, SARIMAR model, EM algorithm. |
JEL: | C13 C18 C52 |
Date: | 2024–04–19 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:120758&r=ecm |
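The defining feature of the model class above is a seasonal period redrawn i.i.d. from a finite set. The sketch below only illustrates that data-generating idea with a deterministic seed (the function `simulate_random_period` and the sine pattern are assumptions); the paper's actual contribution, EM estimation of the unobserved period path, is not attempted here:

```python
import random
import math

def simulate_random_period(n, periods=(4, 6), probs=(0.5, 0.5), seed=0):
    """Simulate a seasonal pattern whose period is redrawn i.i.d. from a
    finite set at the start of each cycle."""
    rng = random.Random(seed)
    series, drawn = [], []
    while len(series) < n:
        p = rng.choices(periods, weights=probs, k=1)[0]
        drawn.append(p)
        for t in range(p):
            series.append(math.sin(2 * math.pi * t / p))
    return series[:n], drawn

y, periods_used = simulate_random_period(50)
print(set(periods_used))    # subset of {4, 6}
```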
By: | Henrik Sigstad |
Abstract: | How robust are analyses based on marginal treatment effects (MTE) to violations of Imbens and Angrist (1994) monotonicity? In this note, I present weaker forms of monotonicity under which popular MTE-based estimands still identify the parameters of interest. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.03235&r=ecm |
By: | Jose Ignacio Hernandez; Niek Mouter; Sander van Cranenburgh |
Abstract: | Random utility maximisation (RUM) models are one of the cornerstones of discrete choice modelling. However, specifying the utility function of RUM models is not straightforward and has a considerable impact on the resulting interpretable outcomes and welfare measures. In this paper, we propose a new discrete choice model based on artificial neural networks (ANNs), named "Alternative-Specific and Shared weights Neural Network (ASS-NN)", which strikes a balance between flexible utility approximation from the data and consistency with two assumptions: RUM theory and fungibility of money (i.e., "one euro is one euro"). Therefore, the ASS-NN can derive economically consistent outcomes, such as marginal utilities or willingness to pay, without explicitly specifying the utility functional form. Using a Monte Carlo experiment and empirical data from the Swissmetro dataset, we show that ASS-NN outperforms (in terms of goodness of fit) conventional multinomial logit (MNL) models under different utility specifications. Furthermore, we show how the ASS-NN can be used to derive marginal utilities and willingness-to-pay measures. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.13198&r=ecm |
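The MNL benchmark and the welfare quantities mentioned above can be sketched in a few lines: softmax choice probabilities from linear utilities, and willingness to pay as a ratio of marginal utilities. The coefficients and attributes below are made-up illustrations; the point of the ASS-NN is precisely to obtain such quantities without fixing this linear functional form:

```python
import numpy as np

def mnl_probabilities(V):
    """Softmax choice probabilities for a vector of alternative utilities."""
    e = np.exp(V - V.max())    # subtract max for numerical stability
    return e / e.sum()

# linear utility V = b_time * time + b_cost * cost; willingness to pay for a
# one-unit time saving is the ratio of marginal utilities
b_time, b_cost = -1.5, -0.3
wtp = b_time / b_cost          # 5.0 money units per time unit
V = np.array([b_time * 2.0 + b_cost * 10.0,   # alternative 1: fast, expensive
              b_time * 1.0 + b_cost * 14.0])  # alternative 2: faster, pricier
p = mnl_probabilities(V)
print(p.sum(), wtp)            # probabilities sum to 1; WTP = 5.0
```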
By: | Krzysztof Drachal (Faculty of Economic Sciences, University of Warsaw) |
Abstract: | This study examines the application of Bayesian Symbolic Regression (BSR) to in-sample modelling of various commodity spot prices. BSR is a novel method that shows promising potential as a forecasting tool and also offers capabilities for handling variable (feature) selection challenges in econometric modelling. The presented research focuses on the suitable selection of the initial parameters for BSR in the context of modelling commodity spot prices. Properly specifying the set of operators (functions) is a general challenge for (conventional) symbolic regression. Here, the analysis is primarily focused on specific time series, making the presented considerations especially tailored to time series representing commodity markets. The analysis aims to assess the ability of BSR to fit the observed data effectively; out-of-sample forecasting performance is deferred to investigations elsewhere. The main objective is to analyse how the selection of initial parameters impacts the accuracy of the BSR model. Previously reported simulations were based on synthetic data, whereas here real-world data from commodity markets are used. The outcomes can be useful for researchers and practitioners interested in econometric and financial applications of BSR. (Research funded by the grant of the National Science Centre, Poland, under the contract number DEC-2018/31/B/HS4/02021.) |
Keywords: | Bayesian symbolic regression, Commodities, Genetic algorithms, Modelling, Symbolic regression, Time-series |
JEL: | C32 C53 Q02 |
URL: | http://d.repec.org/n?u=RePEc:sek:iefpro:14116014&r=ecm |
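The operator-set specification problem mentioned above can be made concrete with a degenerate, search-free stand-in for symbolic regression: score a small fixed library of candidate expressions by squared error and keep the best. Real BSR samples expression trees over the operator set; the function `best_symbolic_fit` and the three-expression library are illustrative assumptions:

```python
import math

def best_symbolic_fit(xs, ys, candidates):
    """Return the name of the candidate expression with the lowest sum of
    squared errors on the data.  A degenerate stand-in: actual (Bayesian)
    symbolic regression searches a space of expression trees built from the
    chosen operator set rather than a fixed list."""
    def sse(f):
        return sum((f(x) - y) ** 2 for x, y in zip(xs, ys))
    return min(candidates, key=lambda item: sse(item[1]))[0]

library = [
    ("x", lambda x: x),
    ("x**2", lambda x: x ** 2),
    ("sin(x)", lambda x: math.sin(x)),
]
xs = [0.0, 1.0, 2.0, 3.0]
ys = [x ** 2 for x in xs]
print(best_symbolic_fit(xs, ys, library))   # x**2
```

The choice of `library` here plays the role of the operator-set choice studied in the paper: if the true functional form is not expressible with the chosen operators, no amount of search recovers it.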
By: | Tobias Fissler; Fangda Liu; Ruodu Wang; Linxiao Wei |
Abstract: | Tail risk measures are fully determined by the distribution of the underlying loss beyond its quantile at a certain level, with Value-at-Risk and Expected Shortfall being prime examples. They are induced by law-based risk measures, called their generators, evaluated on the tail distribution. This paper establishes joint identifiability and elicitability results of tail risk measures together with the corresponding quantile, provided that their generators are identifiable and elicitable, respectively. As an example, we establish the joint identifiability and elicitability of the tail expectile together with the quantile. The corresponding consistent scores constitute a novel class of weighted scores, nesting the known class of scores of Fissler and Ziegel for the Expected Shortfall together with the quantile. For statistical purposes, our results pave the way to easier model fitting for tail risk measures via regression and the generalized method of moments, but also model comparison and model validation in terms of established backtesting procedures. |
Date: | 2024–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2404.14136&r=ecm |
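Elicitability of the quantile, the building block the abstract's joint results extend, means the quantile minimises a consistent score, the classic pinball loss. A minimal sketch (the grid-search recovery below is illustrative; the paper's contribution is joint scores for a tail functional together with the quantile, nesting the Fissler-Ziegel class):

```python
import numpy as np

def pinball_loss(q, sample, alpha):
    """Consistent scoring function for the alpha-quantile: the quantile
    minimises this score in expectation, which is what makes it elicitable."""
    sample = np.asarray(sample, float)
    u = sample - q
    return np.mean(np.where(u >= 0, alpha * u, (alpha - 1) * u))

sample = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
grid = np.linspace(0.0, 6.0, 601)
losses = [pinball_loss(q, sample, 0.8) for q in grid]
qhat = grid[int(np.argmin(losses))]
print(qhat)    # a minimiser lies in [4, 5], the empirical 0.8-quantile set
```

For risk management this is the basis of quantile (VaR) backtesting by score comparison; the paper extends the same logic to tail risk measures such as Expected Shortfall and the tail expectile, jointly with the quantile.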