NEP: New Economics Papers on Econometrics
By: | Chen, Liang; Dolado, Juan José; Gonzalo, Jesús; Pan, Haozi |
Abstract: | This paper studies the estimation of characteristic-based quantile factor models where the factor loadings are unknown functions of observed individual characteristics while the idiosyncratic error terms are subject to conditional quantile restrictions. We propose a three-stage estimation procedure that is easily implementable in practice and has desirable properties. The convergence rates, the limiting distributions of the estimated factors and loading functions, and a consistent selection criterion for the number of factors at each quantile are derived under general conditions. The proposed estimation methodology is shown to work satisfactorily when: (i) the idiosyncratic errors have heavy tails, (ii) the time dimension of the panel dataset is not large, and (iii) the number of factors exceeds the number of characteristics. Finite sample simulations and an empirical application aimed at estimating the loading functions of the daily returns of a large panel of S&P500 index securities help illustrate these properties.
Keywords: | Quantile Factor Models; Nonparametric Quantile Regression; Principal Component Analysis |
Date: | 2023–04–14 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:37095&r=ecm |
By: | Takahiro Ito (Graduate School of International Cooperation Studies, Kobe University) |
Abstract: | This study develops a novel distribution-free maximum likelihood estimator and formulates it for linear and binary choice models. The estimator is consistent and asymptotically normally distributed (at the rate of $n^{-1/2}$). Monte Carlo simulation results show that the estimator is strongly consistent and efficient. For the binary model, when the linear combination of regressors is leptokurtic, the efficiency loss of having no distribution assumption is virtually nonexistent, and the estimator is always superior to the probit and other semiparametric estimators. The results further show that the estimator performs exceedingly well in the presence of a typical perfect prediction problem.
Keywords: | semiparametric estimator, distribution-free maximum likelihood estimation, Monte Carlo Resampling with Replacement, binary choice model, perfect prediction problem |
URL: | http://d.repec.org/n?u=RePEc:kcs:wpaper:40&r=ecm |
By: | Abhineet Agarwal; Anish Agarwal; Suhas Vijaykumar |
Abstract: | We consider a setting with $N$ heterogeneous units and $p$ interventions. Our goal is to learn unit-specific potential outcomes for any combination of these $p$ interventions, i.e., $N \times 2^p$ causal parameters. Choosing combinations of interventions is a problem that naturally arises in many applications such as factorial design experiments, recommendation engines (e.g., showing a set of movies that maximizes engagement for users), combination therapies in medicine, selecting important features for ML models, etc. Running $N \times 2^p$ experiments to estimate the various parameters is infeasible as $N$ and $p$ grow. Further, with observational data there is likely confounding, i.e., whether or not a unit is seen under a combination is correlated with its potential outcome under that combination. To address these challenges, we propose a novel model that imposes latent structure across both units and combinations. We assume latent similarity across units (i.e., the potential outcomes matrix is rank $r$) and regularity in how combinations interact (i.e., the coefficients in the Fourier expansion of the potential outcomes are $s$-sparse). We establish identification for all causal parameters despite unobserved confounding. We propose an estimation procedure, Synthetic Combinations, and establish finite-sample consistency under precise conditions on the observation pattern. Our results imply that Synthetic Combinations consistently estimates unit-specific potential outcomes given $\text{poly}(r) \times (N + s^2p)$ observations. In comparison, previous methods that do not exploit structure across both units and combinations have sample complexity scaling as $\min(N \times s^2p, \ \ r \times (N + 2^p))$. We use Synthetic Combinations to propose a data-efficient experimental design mechanism for combinatorial causal inference. We corroborate our theoretical findings with numerical simulations.
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.14226&r=ecm |
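The sparse-Fourier-expansion assumption behind Synthetic Combinations can be illustrated with a toy sketch over binary intervention combinations. The number of interventions $p$, the basis construction, and the nonzero coefficients below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from itertools import chain, combinations

p = 4  # number of binary interventions; 2^p = 16 combinations

# All subsets S of {0, ..., p-1}, indexing the Fourier (Walsh) basis
subsets = list(chain.from_iterable(combinations(range(p), k) for k in range(p + 1)))

def parity_features(a):
    """Fourier (Walsh) basis over {0,1}^p: chi_S(a) = prod_{i in S} (2*a_i - 1)."""
    signs = 2 * np.asarray(a) - 1
    return np.array([np.prod(signs[list(S)]) for S in subsets])

# An s-sparse coefficient vector: only a few subsets of interventions matter
theta = np.zeros(len(subsets))
theta[0], theta[1], theta[5] = 1.0, 0.5, -0.3  # hypothetical nonzero terms

# The potential outcome of any combination is a short Fourier sum
all_combos = [[int(b) for b in np.binary_repr(i, width=p)] for i in range(2 ** p)]
X = np.array([parity_features(a) for a in all_combos])
y = X @ theta

# The parity basis is orthogonal (X^T X = 2^p * I), so coefficients
# are recovered exactly from the full table of outcomes
theta_hat = X.T @ y / 2 ** p
```

Sparsity is what makes the problem tractable: with only $s$ nonzero coefficients, far fewer than $2^p$ observed combinations can suffice for recovery, which is the regularity the paper exploits.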
By: | Mathias Silva (Aix-Marseille Univ, CNRS, AMSE, Marseille, France.) |
Abstract: | Recent empirical analyses of income distributions are often limited by the exclusive availability of data in a grouped format. This data format is made particularly restrictive by a lack of information on the underlying grouping mechanism and sampling variability of the grouped-data statistics it contains. These restrictions often result in the unavailability of an analytical parametric likelihood function exploiting all information available in the grouped data. Building on recent methods for inference on parametric income distributions for this type of data, this paper explores a new Approximate Bayesian Computation (ABC) approach. ABC overcomes the restrictions posed by grouped data for Bayesian inference through a non-parametric approximation of the likelihood function exploiting simulated data from the income distribution model. Empirical applications of the proposed ABC method to both simulated data and the World Bank's PovCalNet data illustrate the performance and suitability of the method for the typical formats of grouped data on incomes.
Keywords: | Grouped data, Bayesian inference, Generalized Lorenz curve, GB2 |
JEL: | C11 C18 C63 |
Date: | 2023–04 |
URL: | http://d.repec.org/n?u=RePEc:aim:wpaimx:2310&r=ecm |
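A minimal rejection-ABC sketch in the spirit of the abstract, using decile income shares as the grouped-data summary. The lognormal income model, the uniform prior, and the tolerance are illustrative assumptions, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(42)

def decile_shares(incomes):
    """Grouped-data summary: share of total income held by each decile."""
    s = np.sort(incomes)
    groups = np.array_split(s, 10)
    totals = np.array([g.sum() for g in groups])
    return totals / totals.sum()

# "Observed" grouped statistics from a lognormal(0, 0.8) population
observed = decile_shares(rng.lognormal(0.0, 0.8, size=5000))

# ABC rejection: draw the shape parameter from the prior, simulate
# grouped data, and keep draws whose summaries are close to the observed ones
accepted = []
for _ in range(2000):
    sigma = rng.uniform(0.1, 2.0)              # prior on the shape parameter
    sim = decile_shares(rng.lognormal(0.0, sigma, size=5000))
    if np.linalg.norm(sim - observed) < 0.03:  # tolerance on the summary distance
        accepted.append(sigma)

posterior_mean = np.mean(accepted)  # approximate posterior mean of sigma
```

The accepted draws form a non-parametric approximation to the posterior, sidestepping the unavailable grouped-data likelihood; tightening the tolerance trades acceptance rate for approximation accuracy.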
By: | Zhexiao Lin; Fang Han |
Abstract: | While researchers commonly use the bootstrap for statistical inference, many have observed that the standard bootstrap, in general, does not work for Chatterjee's rank correlation. In this paper, we prove this inconsistency under an additional independence assumption, and complement our theory with simulation evidence for general settings. Chatterjee's rank correlation thus falls into a category of statistics that are asymptotically normal but bootstrap inconsistent. Valid inferential methods in this case are Chatterjee's original proposal (for testing independence) and Lin and Han (2022)'s analytic asymptotic variance estimator (for more general purposes).
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.14088&r=ecm |
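For readers unfamiliar with the statistic, a minimal sketch of Chatterjee's rank correlation $\xi_n$ (the no-ties formula); the asymptotic variance estimator of Lin and Han (2022) is not reproduced here:

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank correlation xi_n, assuming no ties in x or y."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(x)
    order = np.argsort(x)                      # sort pairs by x
    r = np.argsort(np.argsort(y[order])) + 1   # ranks of y in that order
    # xi_n = 1 - 3 * sum |r_{i+1} - r_i| / (n^2 - 1)
    return 1 - 3 * np.sum(np.abs(np.diff(r))) / (n ** 2 - 1)
```

For a perfectly monotone relationship the consecutive ranks differ by exactly 1, giving $\xi_n = 1 - 3/(n+1) \to 1$; under independence $\xi_n$ concentrates near 0. It is this asymptotically normal statistic whose naive bootstrap the paper shows to be inconsistent.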
By: | Gael M. Martin; David T. Frazier; Ruben Loaiza-Maya; Florian Huber; Gary Koop; John Maheu; Didier Nibbering; Anastasios Panagiotelis |
Abstract: | The Bayesian statistical paradigm provides a principled and coherent approach to probabilistic forecasting. Uncertainty about all unknowns that characterize any forecasting problem -- model, parameters, latent states -- is factored into the forecast distribution, with forecasts conditioned only on what is known or observed. Allied with the elegance of the method, Bayesian forecasting is now underpinned by the burgeoning field of Bayesian computation, which enables Bayesian forecasts to be produced for virtually any problem, no matter how large or complex. The current state of play in Bayesian forecasting is the subject of this review. The aim is to provide readers with an overview of modern approaches to the field, set in some historical context. Whilst our primary focus is on applications in the fields of economics and finance, and their allied disciplines, sufficient general details about implementation are provided to aid and inform all investigators.
Keywords: | Bayesian prediction, macroeconomics, finance, marketing, electricity demand, Bayesian computational methods, loss-based Bayesian prediction |
JEL: | C01 C11 C53 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2023-1&r=ecm |
By: | Dan Ben-Moshe |
Abstract: | This paper proposes a novel approach for identifying coefficients in an earnings dynamics model with arbitrarily dependent contemporaneous income shocks. Traditional methods relying on second moments fail to identify these coefficients, emphasizing the need for non-Gaussianity assumptions that capture information from higher moments. Our results contribute to the literature on earnings dynamics by allowing, for example, the permanent income shock from a job change to be linked to the contemporaneous transitory income shock from a relocation bonus.
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2303.08460&r=ecm |
By: | Lukas Hack; Klodiana Istrefi; Matthias Meier |
Abstract: | We propose a novel identification design to estimate the causal effects of systematic monetary policy on the propagation of macroeconomic shocks. The design combines (i) a time-varying measure of systematic monetary policy based on the historical composition of hawks and doves in the Federal Open Market Committee (FOMC) with (ii) an instrument that leverages the mechanical FOMC rotation of voting rights. We apply our design to study the effects of government spending shocks. We find fiscal multipliers between two and three when the FOMC is dovish and below zero when it is hawkish. Narrative evidence from historical FOMC records corroborates our findings.
Keywords: | Systematic monetary policy, FOMC, rotation, government spending |
JEL: | E32 E52 E62 E63 H56 |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:bon:boncrc:crctr224_2023_408&r=ecm |
By: | Antoine Didisheim (Swiss Finance Institute, UNIL); Shikun Ke (Yale School of Management); Bryan T. Kelly (Yale SOM; AQR Capital Management, LLC; National Bureau of Economic Research (NBER)); Semyon Malamud (Ecole Polytechnique Federale de Lausanne; Centre for Economic Policy Research (CEPR); Swiss Finance Institute) |
Abstract: | We theoretically characterize the behavior of machine learning asset pricing models. We prove that expected out-of-sample model performance—in terms of SDF Sharpe ratio and average pricing errors—is improving in model parameterization (or “complexity”). Our results predict that the best asset pricing models (in terms of expected out-of-sample performance) have an extremely large number of factors (more than the number of training observations or base assets). Our empirical findings verify the theoretically predicted “virtue of complexity” in the cross-section of stock returns and find that the best model combines tens of thousands of factors. We also derive the feasible Hansen-Jagannathan (HJ) bound: the maximal Sharpe ratio achievable by a feasible portfolio strategy. The infeasible HJ bound massively overstates the achievable maximal Sharpe ratio due to a complexity wedge that we characterize.
Keywords: | Portfolio choice, asset pricing tests, optimization, expected returns, predictability |
JEL: | C3 C58 C61 G11 G12 G14 |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2319&r=ecm |