on Econometrics |
By: | Yiyan Huang; Cheuk Hang Leung; Xing Yan; Qi Wu; Shumin Ma; Zhiri Yuan; Dongdong Wang; Zhixiang Huang |
Abstract: | Many practical decision-making problems in economics and healthcare seek to estimate the average treatment effect (ATE) from observational data. Double/Debiased Machine Learning (DML) is one of the prevalent methods for estimating the ATE in observational studies. However, DML estimators can suffer from an error-compounding issue and can even produce extreme estimates when the propensity scores are misspecified or very close to 0 or 1. Previous studies have worked around this issue with empirical tricks such as propensity score trimming, yet no existing work resolves it from a theoretical standpoint. In this paper, we propose a Robust Causal Learning (RCL) method to offset the deficiencies of the DML estimators. Theoretically, the RCL estimators i) are as consistent and doubly robust as the DML estimators, and ii) are free of the error-compounding issue. Empirically, comprehensive experiments show that i) the RCL estimators give more stable estimates of the causal parameters than the DML estimators, and ii) the RCL estimators outperform traditional estimators and their variants when different machine learning models are applied to both simulation and benchmark datasets. (A schematic sketch of the propensity-score instability follows this entry.) |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.01805&r= |
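The instability the abstract describes is easy to reproduce with the standard doubly robust (AIPW) score on which DML estimators are built. Below is a minimal sketch under an invented data-generating process, not the authors' RCL method; the misspecified propensity model, the outcome regressions taken as known, and all names (p_hat, aipw_ate) are assumptions of this illustration, and the clipping step is the trimming trick the abstract mentions.

# Generic illustration (not the authors' RCL estimator): the AIPW/DML score
# divides by estimated propensity scores, so errors compound when the scores
# are misspecified or close to 0/1; trimming (clipping) is the usual fix.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-3 * x))             # true propensity, extreme in the tails
d = rng.binomial(1, p_true)                   # treatment indicator
y = 2.0 * d + x + rng.normal(size=n)          # outcome; true ATE = 2

mu1, mu0 = 2.0 + x, x                         # outcome regressions (taken as known)
p_hat = 1 / (1 + np.exp(-5 * x))              # deliberately misspecified propensity

def aipw_ate(p_hat, trim=0.0):
    """Doubly robust ATE estimate; trim > 0 clips propensities away from 0/1."""
    p = np.clip(p_hat, trim, 1 - trim)
    score = (mu1 - mu0
             + d * (y - mu1) / p
             - (1 - d) * (y - mu0) / (1 - p))
    return score.mean()

print("no trimming:  ", aipw_ate(p_hat))        # tail ratios inflate the variance
print("trimmed at 1%:", aipw_ate(p_hat, 0.01))  # the empirical fix from the abstract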
By: | Prüser, Jan; Blagov, Boris |
Abstract: | We propose a prior for VAR models that exploits the panel structure of macroeconomic time series while also providing shrinkage towards zero to address overfitting concerns. The prior is flexible in that it detects shared dynamics of individual variables across endogenously determined groups of countries. We demonstrate the usefulness of our approach via a Monte Carlo study and use our model to capture the hidden homogeneities and heterogeneities of the euro area member states. Combining pairwise pooling with zero shrinkage delivers sharper parameter inference that improves point and density forecasts over specifications with only zero shrinkage or only pooling, and helps with structural analysis by lowering estimation uncertainty. (A stylized rendering of the prior follows this entry.) |
Keywords: | BVAR, shrinkage, forecasting, structural analysis |
JEL: | C11 C32 C53 E37 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:zbw:rwirep:960&r= |
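A stylized rendering of the pooling-plus-shrinkage idea, assuming a simple conditionally Gaussian form (the paper's hierarchical prior and its endogenous group-membership machinery are richer than this): country i's VAR coefficient vector \theta_i is shrunk both toward zero and toward the coefficients of countries in the same group,

  p(\theta_i \mid \{\theta_j\}_{j\neq i}, \gamma) \;\propto\;
  \exp\!\Big(-\frac{\lVert\theta_i\rVert^2}{2\lambda^2}\Big)
  \prod_{j\neq i}\exp\!\Big(-\gamma_{ij}\,\frac{\lVert\theta_i-\theta_j\rVert^2}{2\tau^2}\Big),
  \qquad \gamma_{ij}\in\{0,1\},

where \gamma_{ij}=1 pools countries i and j, \lambda governs shrinkage toward zero, and \tau governs the strength of pairwise pooling.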
By: | Jinyuan Chang; Qing Jiang; Xiaofeng Shao |
Abstract: | In this paper, we consider testing the martingale difference hypothesis for high-dimensional time series. Our test is built on the sum of squares of the element-wise max-norm of the proposed matrix-valued nonlinear dependence measure at different lags. To conduct inference, we approximate the null distribution of our test statistic by Gaussian approximation and provide a simulation-based approach to generate critical values. The asymptotic behavior of the test statistic under the alternative is also studied. Our approach is nonparametric, as the null hypothesis only assumes that the time series concerned is a martingale difference sequence, without specifying any parametric form for its conditional moments. As an advantage of the Gaussian approximation, our test is robust to cross-series dependence of unknown magnitude. To the best of our knowledge, this is the first valid test for the martingale difference hypothesis that not only allows for large dimension but also captures nonlinear serial dependence. The practical usefulness of our test is illustrated via simulation and a real data analysis. The test is implemented in a user-friendly R function. (A stylized version of the statistic follows this entry.) |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.04770&r= |
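The shape of such a statistic can be conveyed in a few lines. The sketch below is a stylized stand-in, not the authors' exact dependence measure, and the choice of tanh as the nonlinear transform is arbitrary; in the paper, critical values come from a Gaussian approximation with a simulation step, which is omitted here.

# Stylized max-norm MDH statistic: squared element-wise max-norm of lag-k
# cross-covariances between the series and a nonlinear transform of its lag,
# summed over lags. Larger values signal departures from the null.
import numpy as np

def mdh_stat(y, max_lag=5, phi=np.tanh):
    """y: (T, p) array of a high-dimensional time series."""
    yc = y - y.mean(axis=0)
    T = yc.shape[0]
    stat = 0.0
    for k in range(1, max_lag + 1):
        cov = yc[k:].T @ phi(yc[:-k]) / (T - k)   # p x p measure at lag k
        stat += np.abs(cov).max() ** 2            # squared max-norm at lag k
    return stat

rng = np.random.default_rng(1)
print(mdh_stat(rng.normal(size=(500, 20))))       # i.i.d. noise satisfies the null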
By: | Giovanni Angelini (Università di Bologna); Luca Fanelli (Università di Bologna); Marco M. Sorge (Università di Salerno, University of Göttingen, and CSEF-DISES) |
Abstract: | Recently developed models of the business cycle exhibit a recursive timing structure, which enforces delayed propagation of the exogenous shocks driving short-run dynamics. We propose a simple empirical strategy to test for the relevance of timing restrictions and the ensuing shock transmission delays in general DSGE environments. Based on a bootstrap maximum likelihood estimator, our approach mitigates the over-rejection concerns that typically arise from conventional tests of non-linear hypotheses exploiting first-order asymptotic approximations. We showcase the empirical usefulness of the testing procedure by means of numerical simulations of a workhorse model of the monetary transmission mechanism. (A generic sketch of a bootstrap likelihood-ratio test follows this entry.) |
Keywords: | DSGE models; Timing restrictions; Transmission delays. |
JEL: | C1 C3 E3 E5 |
URL: | http://d.repec.org/n?u=RePEc:sef:csefwp:653&r= |
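The logic of the procedure can be sketched generically. The interface below (fit_unrestricted, fit_restricted, simulate) is invented for illustration, and the sketch is a plain parametric-bootstrap likelihood-ratio test rather than the authors' DSGE implementation: the point is that refitting on data simulated under the restricted estimate yields finite-sample critical values in place of first-order asymptotic ones.

# Generic parametric-bootstrap LR test (schematic, assumed interface).
import numpy as np

def bootstrap_lr_test(data, fit_unrestricted, fit_restricted, simulate, B=499, seed=0):
    """fit_* return (loglik, params); simulate(params, rng) draws a dataset."""
    ll1, _ = fit_unrestricted(data)
    ll0, theta0 = fit_restricted(data)
    lr_obs = 2 * (ll1 - ll0)
    rng = np.random.default_rng(seed)
    lr_boot = [2 * (fit_unrestricted(d_b)[0] - fit_restricted(d_b)[0])
               for d_b in (simulate(theta0, rng) for _ in range(B))]
    pval = (1 + sum(lr >= lr_obs for lr in lr_boot)) / (B + 1)
    return lr_obs, pval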
By: | Sebastian Letmathe (Paderborn University) |
Abstract: | This paper focuses on data-driven selection of the smoothing parameter in P-splines for time series with short-range dependence. Well-known asymptotic results for P-splines are first adapted to the current context. A fully automatic iterative plug-in (IPI) algorithm for P-splines is investigated in a comprehensive simulation study. The practical relevance of the IPI is shown by application to economic time series. Moreover, it is illustrated that the IPI can be used for automatic selection of the smoothing parameter of the Hodrick-Prescott filter. Furthermore, a P-spline Log-ACD model is proposed and applied to average daily trade duration data. Smoothing parameter selection is carried out via the proposed IPI algorithm, which performs very well in this context too. (The fixed-point structure of the IPI is sketched after this entry.) |
Keywords: | P-Splines for time series, selection of the smoothing parameter, iterative plug-in, Hodrick-Prescott filter |
JEL: | C14 C51 |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:pdn:ciepap:152&r= |
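At its core, an iterative plug-in rule is a fixed-point search in the smoothing parameter. In the skeleton below the update callable stands in for the plug-in formula derived from the asymptotic results; the formula itself is the paper's contribution and is not reproduced here.

# Fixed-point skeleton of an iterative plug-in (IPI) rule.
def ipi(lam0, update, tol=1e-6, max_iter=50):
    """Iterate the plug-in update until the smoothing parameter settles."""
    lam = lam0
    for _ in range(max_iter):
        lam_new = update(lam)     # plug-in formula evaluated at the pilot fit
        if abs(lam_new - lam) <= tol * max(abs(lam), 1e-12):
            return lam_new
        lam = lam_new
    return lam                    # last iterate if no convergence within max_iter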
By: | Matteo Iacopini; Aubrey Poon; Luca Rossini; Dan Zhu |
Abstract: | Timely characterizations of risks in economic and financial systems play an essential role in both economic policy and private sector decisions. However, the informational content of low-frequency variables and the results from conditional mean models provide only limited evidence with which to investigate this problem. We propose a novel mixed-frequency quantile vector autoregression (MF-QVAR) model to address this issue. Inspired by the univariate Bayesian quantile regression literature, the multivariate asymmetric Laplace distribution is exploited under the Bayesian framework to form the likelihood. A data augmentation approach coupled with a precision sampler efficiently estimates the missing low-frequency variables at higher frequencies under the state-space representation. The proposed methods allow us to nowcast conditional quantiles for multiple variables of interest and to derive quantile-related risk measures at high frequency, thus enabling timely policy interventions. The main application of the model is to nowcast conditional quantiles of US GDP, which is closely related to the quantification of Value-at-Risk and Expected Shortfall. (The asymmetric Laplace identity behind this approach is sketched after this entry.) |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.01910&r= |
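The link between the asymmetric Laplace likelihood and quantile regression that the abstract invokes rests on a standard identity, shown here in its univariate form (the paper works with the multivariate generalization):

  p(y \mid \mu, \tau) = \tau(1-\tau)\, e^{-\rho_\tau(y-\mu)},
  \qquad \rho_\tau(u) = u\big(\tau - \mathbf{1}\{u<0\}\big),

so maximizing this likelihood in \mu minimizes the check loss \sum_t \rho_\tau(y_t-\mu_t), i.e., it reproduces quantile regression at level \tau.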
By: | Kelsey Jack; Kathryn McDermott; Anja Sautmann |
Abstract: | Multiple price lists are a convenient tool to elicit willingness to pay (WTP) in surveys and experiments, but choice patterns such as “multiple switching” and “never switching” indicate high error rates. Existing measurement approaches often do not provide accurate standard errors and cannot correct for bias due to framing and order effects. We propose to combine a randomization approach with a random-effects latent utility model to detect bias and account for error. Data from a choice experiment in South Africa show that significant order effects exist which, if uncorrected, would lead to distorted conclusions about subjects’ preferences. We provide templates to create a multiple price list survey instrument in SurveyCTO and analyze the resulting data using our proposed methods. (A toy illustration of multiple switching follows this entry.) |
JEL: | C91 C93 D46 O12 Q51 |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:30433&r= |
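A toy simulation shows why noisy latent utility generates the multiple switching the abstract refers to (invented numbers, not the authors' random-effects estimator): a respondent with a single latent WTP can still cross the price grid several times once logistic choice noise is added.

# Toy multiple price list: latent WTP plus choice noise yields multiple switching.
import numpy as np

rng = np.random.default_rng(2)
prices = np.linspace(1.0, 10.0, 10)           # the listed prices
wtp = 5.0                                     # respondent's latent willingness to pay
noise = rng.logistic(scale=1.5, size=prices.size)
buy = (wtp - prices + noise > 0).astype(int)  # "buy" decision at each price
print(buy)
print("switch points:", int(np.abs(np.diff(buy)).sum()))  # > 1: multiple switcher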
By: | Sylvain Catherine; Mehran Ebrahimian; David Sraer; David Thesmar |
Abstract: | Robustness checks, such as adding controls or sample splits, are a standard feature of reduced-form empirical research. Because of the computational cost of re-estimating alternative models, they are much less common in structural research using simulation-based methods. We propose a simple methodology to bypass this computational cost. Our approach is based on estimating a flexible approximation of the relation between moments and parameters. It provides a computationally cheap way to run the potentially large number of structural estimations required for such robustness checks. We demonstrate the validity and usefulness of this methodology in the context of two standard applications in economics and finance: (1) dynamic corporate finance and (2) portfolio choice over the life cycle. (A sketch of the surrogate step follows this entry.) |
JEL: | C51 C52 G0 |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:30443&r= |
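The mechanics can be sketched for a scalar parameter with a polynomial surrogate (a deliberate simplification; the function names below are this sketch's, and the paper's approximation is more flexible): simulate moments once on a grid, fit a cheap map from parameter to moments, then run as many minimum-distance estimations against the surrogate as the robustness checks require.

# Surrogate-based re-estimation, schematic version.
import numpy as np

def build_surrogate(simulate_moments, theta_grid, degree=3):
    """Fit theta -> moments with a polynomial from one expensive sweep."""
    M = np.array([simulate_moments(t) for t in theta_grid])  # (grid, n_moments)
    X = np.vander(theta_grid, degree + 1)
    coef, *_ = np.linalg.lstsq(X, M, rcond=None)
    return lambda t: (np.vander(np.atleast_1d(t), degree + 1) @ coef)[0]

def reestimate(surrogate, target_moments, candidates):
    """Cheap minimum-distance step, one per robustness variant."""
    dist = [np.sum((surrogate(t) - target_moments) ** 2) for t in candidates]
    return candidates[int(np.argmin(dist))]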
By: | James Younker |
Abstract: | Forecast combinations, also known as ensemble models, routinely require practitioners to select a model from a massive number of potential candidates. Ten explanatory variables can be grouped into 2^1078 forecast combinations, and the number of possibilities increases further to 2^(1078+2^1078) if we allow for forecast combinations of forecast combinations. This paper derives a calculation of the effective degrees of freedom of a forecast combination under a set of general conditions for linear models. It also supports this calculation with simulations. The result allows users to perform several other computations, including the F-test and various information criteria. These computations are particularly useful when there are too many candidate models to evaluate out of sample. Furthermore, computing effective degrees of freedom shows that the complexity cost of a forecast combination is driven by the parameters in the weighting scheme and the weighted average of parameters in the auxiliary models, as opposed to the number of auxiliary models. This identification of complexity cost contributions can help practitioners make informed choices about forecast combination design. (A schematic check of the hat-matrix trace result follows this entry.) |
Keywords: | Econometric and statistical methods |
JEL: | C01 C02 C1 C13 C5 C50 C51 C52 C53 |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocadp:22-19&r= |
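For fixed weights and linear auxiliary models the complexity claim can be checked directly: the combination's hat matrix is the weighted sum of the models' hat matrices, so its trace, the effective degrees of freedom, equals the weighted average of the models' parameter counts rather than growing with the number of models. The check below uses invented data and fixed weights; estimating the weights contributes the additional terms the paper accounts for.

# Trace of a fixed-weight combination's hat matrix = weighted average of
# the auxiliary models' parameter counts (weights summing to one).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))

def hat(Xs):
    return Xs @ np.linalg.solve(Xs.T @ Xs, Xs.T)

H1, H2 = hat(X[:, :3]), hat(X[:, :7])    # OLS models with 3 and 7 regressors
w = 0.4
H = w * H1 + (1 - w) * H2                # hat matrix of the combination
print(np.trace(H), w * 3 + (1 - w) * 7)  # both 5.4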
By: | Thomas-Agnan, Christine; Margaretic, Paula; Laurent, Thibault |
Abstract: | We extend the impact decomposition proposed by LeSage and Thomas-Agnan (2015) in the spatial interaction model to a more general framework, where the sets of origins and destinations can be different, and where the relevant attributes characterizing the origins do not coincide with those of the destinations. These extensions result in three flow data configurations which we study extensively: the square, the rectangular, and the non-cartesian cases. We propose numerical simplifications to compute the impacts, avoiding the inversion of a large filter matrix. These simplifications considerably reduce computation time; they can also be useful for prediction. Furthermore, we define local measures for the intra, origin, destination and network effects. Interestingly, these local measures can be aggregated at different levels of analysis. Finally, we illustrate our methodology in a case study using remittance flows all over the world. (A generic filter-inversion shortcut is sketched after this entry.) |
Keywords: | Impact decomposition; local effects; spatial interaction autoregressive models; non-cartesian flow data |
JEL: | C13 C31 C46 C51 C65 |
Date: | 2022–09–13 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:127301&r= |
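One generic way to avoid inverting a large spatial filter, shown here as an illustration of the kind of shortcut involved rather than the authors' exact simplification, is the power-series expansion of (I - rho*W)^{-1} applied to a vector by repeated multiplication, which converges whenever |rho| times the spectral radius of W is below one.

# Apply (I - rho*W)^{-1} x  ~  sum_{q=0}^{Q} rho^q W^q x  without inversion.
import numpy as np

def filter_apply(W, rho, x, Q=50):
    """Truncated power series; never forms or inverts the N x N filter."""
    out, term = x.copy(), x.copy()
    for _ in range(Q):
        term = rho * (W @ term)          # next power-series term
        out += term
    return out

# Sanity check against the exact solve on a small row-normalized W.
rng = np.random.default_rng(4)
W = rng.random((6, 6)); np.fill_diagonal(W, 0); W /= W.sum(axis=1, keepdims=True)
x = rng.normal(size=6)
print(np.allclose(filter_apply(W, 0.4, x),
                  np.linalg.solve(np.eye(6) - 0.4 * W, x)))   # True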
By: | Sebastian Letmathe (Paderborn University); Yuanhua Feng (Paderborn University) |
Abstract: | This paper proposes a new IPI (iterative plug-in) rule for optimal smoothing in penalised splines with truncated polynomials. The IPI is based on a closed-form approximation to the optimal smoothing parameter. In contrast to a DPI (direct plug-in) approach, the current algorithm is fully automatic and self-contained. Our proposal is a fixed-point search procedure, and the resulting smoothing parameter is (theoretically) independent of the initial value. Like the DPI, the IPI rule can be employed as a refining stage to improve the quality of other selection methods, e.g. Mallows' Cp, cross-validation or residual maximum likelihood. Some numerical features of P-splines as well as the performance of the IPI algorithm are examined in detail through a simulation study. Our results reveal that the proposal works very well. The practical relevance of the IPI is illustrated with several data examples. (The truncated-polynomial setup is written out after this entry.) |
Keywords: | P-Splines, smoothing parameter, iterative plug-in, simulation |
JEL: | C14 C51 |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:pdn:ciepap:151&r= |
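For reference, the textbook truncated-polynomial penalised spline behind the title (the paper's asymptotic analysis is built on this setup):

  f(x) = \sum_{j=0}^{p} \beta_j x^j + \sum_{k=1}^{K} b_k (x-\kappa_k)_+^p,
  \qquad
  \min_{\beta,b}\ \sum_{i=1}^{n}\big(y_i - f(x_i)\big)^2 + \lambda \sum_{k=1}^{K} b_k^2,

where \kappa_1 < \dots < \kappa_K are the knots and \lambda is the smoothing parameter the IPI rule selects.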
By: | Lena Sasal; Tanujit Chakraborty; Abdenour Hadid |
Abstract: | Deep learning with transformers has recently achieved considerable success in many vital areas such as natural language processing, computer vision, anomaly detection, and recommendation systems, among many others. Among the merits of transformers, the ability to capture long-range temporal dependencies and interactions is desirable for time series forecasting, which has led to progress in various time series applications. In this paper, we build a transformer model for non-stationary time series. The problem is challenging yet crucially important. We present a novel framework for univariate time series representation learning based on a wavelet-based transformer encoder architecture, which we call W-Transformer. The proposed W-Transformer applies a maximal overlap discrete wavelet transformation (MODWT) to the time series data and builds local transformers on the decomposed datasets to capture the nonstationarity and long-range nonlinear dependencies in the time series. Evaluating our framework on several publicly available benchmark time series datasets from various domains and with diverse characteristics, we demonstrate that it performs, on average, significantly better than the baseline forecasters for short-term and long-term forecasting, even for datasets that consist of only a few hundred training samples. (The decomposition step is sketched after this entry.) |
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.03945&r= |
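The decomposition step of the pipeline can be sketched as follows. PyWavelets has no MODWT, so the closely related undecimated SWT is used as a stand-in; the per-component forecaster is left abstract, and aggregation by summation is an assumption of this sketch (exact additive reconstruction holds for a MODWT multiresolution analysis, only approximately for raw SWT coefficients).

# Wavelet decomposition feeding per-component forecasters, schematic version.
import numpy as np
import pywt

def decompose(y, wavelet="db4", level=3):
    """Split a series into one smooth plus `level` detail components."""
    y = np.asarray(y, dtype=float)
    n = len(y) - len(y) % 2**level       # SWT needs length divisible by 2^level
    coeffs = pywt.swt(y[:n], wavelet, level=level)
    smooth = coeffs[0][0]                 # coarsest approximation coefficients
    details = [cD for _, cD in coeffs]    # detail coefficients, coarse to fine
    return [smooth] + details

def forecast(y, fit_predict, h=12):
    """fit_predict(component, h) -> h-step-ahead forecast array."""
    return sum(fit_predict(c, h) for c in decompose(y))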
By: | Alban Moura |
Abstract: | Hamilton (2018) argues that one should never use the Hodrick-Prescott (HP) filter, given its drawbacks and the existence of a better alternative. This comment shows that the main drawback Hamilton finds in the HP filter, the presence of filter-induced dynamics in the estimate of the cyclical component, is also a key feature of the alternative filter Hamilton proposes. As with the HP filter, the Hamilton filter applied to a random walk extracts a cyclical component that is highly predictable, that can predict other variables, and whose properties reflect the filter as much as the underlying data-generating process. In addition, the Hamilton trend lags the data by construction, and there is some arbitrariness in the choice of a key parameter defining the filter. Therefore, a more balanced assessment is that the HP and Hamilton filters provide different ways to look at the data, with neither being clearly superior from a practical perspective. (Both filters are written out in code after this entry.) |
Keywords: | HP filter; Hamilton filter; business cycles; detrending; filtering. |
JEL: | B41 C22 E32 |
Date: | 2022–08 |
URL: | http://d.repec.org/n?u=RePEc:bcl:bclwop:bclwp162&r= |
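Both filters under comparison fit in a few lines. The HP filter comes from statsmodels; the Hamilton regression filter is written out with the quarterly defaults h=8 and p=4 (hamilton_filter is this sketch's name, not a library function). Running both on a simulated random walk reproduces the comment's point: the extracted cycles are strongly autocorrelated, i.e., filter-induced.

# HP vs. Hamilton filter on a random walk.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

def hamilton_filter(y, h=8, p=4):
    """Cycle = residual of regressing y_{t+h} on a constant and p lags of y."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    X = np.column_stack([np.ones(T - h - p + 1)]
                        + [y[p - 1 - j : T - h - j] for j in range(p)])
    target = y[p - 1 + h:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return target - X @ beta

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(size=300))          # a pure random walk
cycle_hp, trend_hp = hpfilter(y, lamb=1600)
cycle_ham = hamilton_filter(y)
print(np.corrcoef(cycle_hp[1:], cycle_hp[:-1])[0, 1])    # high autocorrelation
print(np.corrcoef(cycle_ham[1:], cycle_ham[:-1])[0, 1])  # high autocorrelation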