on Econometrics
By: | Fu Ouyang; Thomas Tao Yang |
Abstract: | This paper proposes a new method for estimating high-dimensional binary choice models. The model we consider is semiparametric, placing no distributional assumptions on the error term, allowing for heteroskedastic errors, and permitting endogenous regressors. Our proposed approaches extend the special regressor estimator originally proposed by Lewbel (2000). This estimator becomes impractical in high-dimensional settings due to the curse of dimensionality associated with high-dimensional conditional density estimation. To overcome this challenge, we introduce an innovative data-driven dimension reduction method for nonparametric kernel estimators, which constitutes the main innovation of this work. The method combines distance covariance-based screening with cross-validation (CV) procedures, rendering the special regressor estimation feasible in high dimensions. Using the new feasible conditional density estimator, we address the variable and moment (instrumental variable) selection problems for these models. We apply penalized least squares (LS) and Generalized Method of Moments (GMM) estimators with a smoothly clipped absolute deviation (SCAD) penalty. A comprehensive analysis of the oracle and asymptotic properties of these estimators is provided. Monte Carlo simulations are employed to demonstrate the effectiveness of our proposed procedures in finite sample scenarios. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.07067&r=ecm |
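The screening step named above can be sketched in a few lines: sample distance covariance measures possibly nonlinear dependence between each candidate regressor and the outcome, and only the highest-ranked regressors enter the subsequent kernel estimation. This is an illustrative toy, not the paper's procedure; the data-generating process, the fixed `keep` cutoff, and the plain double-centering estimator are assumptions for the sketch (the paper pairs screening with cross-validation).

```python
import numpy as np

def dcov_sq(x, y):
    """Squared sample distance covariance between two 1-D samples."""
    x = np.asarray(x, float).reshape(-1, 1)
    y = np.asarray(y, float).reshape(-1, 1)
    a = np.abs(x - x.T)                    # pairwise distance matrices
    b = np.abs(y - y.T)
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # double centering
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

def screen(X, y, keep):
    """Rank the columns of X by distance covariance with y; keep the top ones."""
    stats = np.array([dcov_sq(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(stats)[::-1][:keep]

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
# binary outcome driven (nonlinearly) by the first two regressors only
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=n) > 0).astype(float)
selected = screen(X, y, keep=5)
print(sorted(selected))
```

In the paper's setting the cutoff would be chosen by cross-validation rather than fixed in advance.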
By: | Christis Katsouris |
Abstract: | This paper develops an asymptotic distribution theory for a two-stage instrumentation approach in quantile predictive regressions when both generated covariates and persistent predictors are used. The generated covariates are obtained from an auxiliary quantile regression model, and our main interest is robust estimation and inference for the primary quantile predictive regression in which this generated covariate is added to the set of nonstationary regressors. We find that the proposed doubly IVX estimator is robust to the degree of persistence of the predictors, regardless of the presence of a generated regressor obtained from the first-stage procedure. The asymptotic properties of the two-stage IVX estimator, such as mixed Gaussianity, are established, and the asymptotic covariance matrix is adjusted to account for the first-step estimation error. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.08218&r=ecm |
By: | Haoge Chang |
Abstract: | This paper considers the estimation of treatment effects in randomized experiments with complex experimental designs, including cases with interference between units. We develop a design-based estimation theory for arbitrary experimental designs. Our theory facilitates the analysis of many design–estimator pairs that researchers commonly employ in practice and provides procedures to consistently estimate asymptotic variance bounds. We propose new classes of estimators with favorable asymptotic properties from a design-based point of view. In addition, we propose a scalar measure of experimental complexity which can be linked to the design-based variance of the estimators. We demonstrate the performance of our estimators using simulated datasets based on an actual network experiment studying the effect of social networks on insurance adoption. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.06891&r=ecm |
By: | Yingying Dong; Ying-Ying Lee |
Abstract: | Many empirical applications estimate causal effects of a continuous endogenous variable (treatment) using a binary instrument. Estimation is typically done through linear two-stage least squares (2SLS). This approach requires a change in the mean of the treatment, and a causal interpretation requires LATE-type monotonicity in the first stage. An alternative approach is to exploit distributional changes in the treatment, where the first-stage restriction is treatment rank similarity. We propose causal estimands that are doubly robust in that they are valid under either of these two restrictions. We apply the doubly robust estimation to estimate the impacts of sleep on well-being. Our results corroborate the usual 2SLS estimates. |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2310.18504&r=ecm |
By: | Silvia Goncalves; Michael W. McCracken; Yongxu Yao |
Abstract: | In this paper we develop a block bootstrap approach to out-of-sample inference when real-time data are used to produce forecasts. In particular, we establish its first-order asymptotic validity for West (1996)-type tests of predictive ability in the presence of regular data revisions. This allows the user to conduct asymptotically valid inference without having to estimate the asymptotic variances derived in Clark and McCracken's (2009) extension of West (1996) when data are subject to revision. Monte Carlo experiments indicate that the bootstrap provides satisfactory finite-sample size and power even with modest sample sizes. We conclude with an application to inflation forecasting that adapts the results in Ang et al. (2007) to the presence of real-time data. |
Keywords: | real-time data; bootstrap; prediction; inference |
JEL: | C53 C12 C52 |
Date: | 2023–11–30 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedlwp:97409&r=ecm |
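The block bootstrap idea referenced above can be illustrated on a series of loss differentials between two competing forecasts: resampled blocks preserve short-run dependence, and centering the resamples imposes the null of equal predictive ability. The simulated series, block length, and simple t-type statistic below are placeholder assumptions, not the West-type statistics or real-time revision structure treated in the paper.

```python
import numpy as np

def moving_block_bootstrap(series, block_len, rng):
    """Draw one moving-block bootstrap resample of a time series."""
    n = len(series)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    blocks = [series[s:s + block_len] for s in starts]
    return np.concatenate(blocks)[:n]       # trim to the original length

rng = np.random.default_rng(1)
# simulated loss differentials between two forecasting models (placeholder data)
d = rng.normal(loc=0.0, scale=1.0, size=120)
t_stat = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

boot_t = []
for _ in range(999):
    db = moving_block_bootstrap(d, block_len=6, rng=rng)
    db = db - d.mean()                      # center: impose the null of equal accuracy
    boot_t.append(np.sqrt(len(db)) * db.mean() / db.std(ddof=1))
p_value = np.mean(np.abs(boot_t) >= abs(t_stat))
print(round(p_value, 3))
```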
By: | Yingjie Feng |
Abstract: | This paper studies optimal estimation of large-dimensional nonlinear factor models. The key challenge is that the observed variables are possibly nonlinear functions of some latent variables where the functional forms are left unspecified. A local principal component analysis method is proposed to estimate the factor structure and recover information on latent variables and latent functions, which combines $K$-nearest neighbors matching and principal component analysis. Large-sample properties are established, including a sharp bound on the matching discrepancy of nearest neighbors, sup-norm error bounds for estimated local factors and factor loadings, and the uniform convergence rate of the factor structure estimator. Under mild conditions our estimator of the latent factor structure can achieve the optimal rate of uniform convergence for nonparametric regression. The method is illustrated with a Monte Carlo experiment and an empirical application studying the effect of tax cuts on economic growth. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.07243&r=ecm |
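The two ingredients of the local principal component method described above (K-nearest-neighbor matching followed by PCA on the local subsample) can be sketched as follows. The nonlinear data-generating process, neighborhood size, and factor count are hypothetical choices for illustration, not the paper's tuning rules.

```python
import numpy as np

def local_factors(X, i, k, r):
    """Estimate r local factors around unit i from its k nearest neighbors.

    X is (n, p): n observations of p variables. Neighbors are matched in the
    observed data; PCA is then run on the centered local subsample.
    """
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[:k]            # includes unit i itself (distance 0)
    Z = X[nbrs] - X[nbrs].mean(0)           # center the local sample
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
    factors = U[:, :r] * S[:r]              # local factor estimates
    loadings = Vt[:r].T                     # local loadings
    return nbrs, factors, loadings

rng = np.random.default_rng(2)
f = rng.normal(size=(300, 1))               # scalar latent factor
X = np.column_stack([f[:, 0], f[:, 0] ** 2, np.sin(f[:, 0])])
X += 0.05 * rng.normal(size=X.shape)        # noisy nonlinear measurements
nbrs, fac, lam = local_factors(X, i=0, k=30, r=1)
print(fac.shape, lam.shape)
```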
By: | Jonathan Chassot; Michael Creel |
Abstract: | We propose a method to estimate model parameters using temporal convolutional networks (TCNs). By training the TCN on simulated data, we learn the mapping from sample data to the model parameters that were used to generate this data. This mapping can then be used to define exactly identifying moment conditions for the method of simulated moments (MSM) in a purely data-driven manner, relieving the researcher of the need to specify and select moment conditions. Using several test models, we show by example that this proposal can outperform the maximum likelihood estimator, according to several metrics, for small and moderate sample sizes, and that this result is not simply due to bias correction. To illustrate our proposed method, we apply it to estimate a jump-diffusion model for a financial series. |
Keywords: | temporal convolutional networks, method of simulated moments, jump-diffusion stochastic volatility |
JEL: | C15 C45 C53 C58 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:1412&r=ecm |
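The MSM logic above can be illustrated with the trained network replaced by a hand-picked summary statistic: the estimator picks the parameter value whose simulated statistic matches the observed one. The AR(1) model, the autocorrelation statistic, and the grid search below are stand-in assumptions for the sketch; the paper instead learns the data-to-parameter mapping with a TCN.

```python
import numpy as np

def simulate(theta, n, rng):
    """Toy data-generating process: an AR(1) with persistence theta."""
    e = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + e[t]
    return y

def stat(y):
    """Summary statistic standing in for the trained TCN mapping:
    here simply the first-order sample autocorrelation."""
    return np.corrcoef(y[:-1], y[1:])[0, 1]

rng = np.random.default_rng(5)
y_obs = simulate(0.6, 500, rng)             # "observed" data, true theta = 0.6
target = stat(y_obs)

# MSM step: choose theta whose average simulated statistic matches the target;
# re-seeding per replication gives common random draws across theta values
grid = np.arange(0.0, 0.95, 0.05)
losses = []
for th in grid:
    sims = [stat(simulate(th, 500, np.random.default_rng(6 + s))) for s in range(5)]
    losses.append((np.mean(sims) - target) ** 2)
theta_hat = grid[int(np.argmin(losses))]
print(round(theta_hat, 2))
```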
By: | Daniel Ober-Reynolds |
Abstract: | Many causal parameters depend on a moment of the joint distribution of potential outcomes. Such parameters are especially relevant in policy evaluation settings, where noncompliance is common and accommodated through the model of Imbens & Angrist (1994). This paper shows that the sharp identified set for these parameters is an interval with endpoints characterized by the value of optimal transport problems. Sample analogue estimators are proposed based on the dual problem of optimal transport. These estimators are root-n consistent and converge in distribution under mild assumptions. Inference procedures based on the bootstrap are straightforward and computationally convenient. The ideas and estimators are demonstrated in an application revisiting the National Supported Work Demonstration job training program. I find suggestive evidence that workers who would see below average earnings without treatment tend to see above average benefits from treatment. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.09435&r=ecm |
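The optimal transport characterization above has a particularly simple form in one special case, which makes for a compact illustration: for the moment E[Y1*Y0] with equal-size samples of the two marginals, the extremal couplings are the comonotone and antimonotone rearrangements (a discrete optimal transport result for supermodular costs). This sketch abstracts from compliance types and the instrument, and the simulated samples are placeholders.

```python
import numpy as np

def cov_moment_bounds(y1, y0):
    """Sharp bounds on E[Y1*Y0] given only the two (equal-size) marginal samples.

    For the supermodular cost y1*y0, the optimal couplings sort both samples:
    matching high with high gives the upper bound (comonotone coupling), and
    high with low gives the lower bound (antimonotone coupling).
    """
    a = np.sort(np.asarray(y1, float))
    b = np.sort(np.asarray(y0, float))
    upper = np.mean(a * b)                  # comonotone rearrangement
    lower = np.mean(a * b[::-1])            # antimonotone rearrangement
    return lower, upper

rng = np.random.default_rng(3)
y1 = rng.normal(1.0, 1.0, size=500)         # treated outcomes (placeholder)
y0 = rng.normal(0.0, 1.0, size=500)         # control outcomes (placeholder)
lo, hi = cov_moment_bounds(y1, y0)
print(lo <= hi)
```

Any admissible coupling of the two empirical marginals yields a value inside [lo, hi]; for example, the independent coupling gives the product of the sample means.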
By: | Daido Kido |
Abstract: | Policymakers often desire a statistical treatment rule (STR) that determines, from available data, a treatment assignment rule to be deployed in a future population. With true knowledge of the data-generating process, the average treatment effect (ATE) is the key quantity characterizing the optimal treatment rule. Unfortunately, the ATE is often only partially identified rather than point identified. Presuming partial identification of the ATE, this study conducts a local asymptotic analysis and develops the locally asymptotically minimax (LAM) STR. The analysis assumes only directional differentiability, not full differentiability, of the boundary functions of the identification region of the ATE. Accordingly, the study shows that the LAM STR differs from the plug-in STR. A simulation study also demonstrates that the LAM STR outperforms the plug-in STR. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.08958&r=ecm |
By: | Tony Chernis; Niko Hauzenberger; Florian Huber; Gary Koop; James Mitchell |
Abstract: | Bayesian predictive synthesis (BPS) provides a method for combining multiple predictive distributions based on agent/expert opinion analysis theory and encompasses a range of existing density forecast pooling methods. The key ingredient in BPS is a ``synthesis'' function. This is typically specified parametrically as a dynamic linear regression. In this paper, we develop a nonparametric treatment of the synthesis function using regression trees. We show the advantages of our tree-based approach in two macroeconomic forecasting applications. The first uses density forecasts for GDP growth from the euro area's Survey of Professional Forecasters. The second combines density forecasts of US inflation produced by many regression models involving different predictors. Both applications demonstrate the benefits -- in terms of improved forecast accuracy and interpretability -- of modeling the synthesis function nonparametrically. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.12671&r=ecm |
By: | Léopold Simar (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain, Voie du Roman Pays 20, B1348 Louvain-la-Neuve, Belgium); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia); Shirong Zhao (School of Finance, Dongbei University of Finance and Economics, Dalian, Liaoning 116025) |
Abstract: | The statistical framework for the Malmquist productivity index (MPI) is now well-developed, which emphasizes the importance of developing such a framework for its alternatives. We try to fill this gap in the literature for another popular measure, the Hicks–Moorsteen Productivity Index (HMPI). Unlike the MPI, the HMPI has a total factor productivity interpretation, measuring productivity as the ratio of aggregated outputs to aggregated inputs, and has other useful advantages over the MPI. In this work, we develop a novel framework for statistical inference for the HMPI in various contexts: when its components are known and when they are replaced with nonparametric envelopment estimators. This is done for a particular firm's HMPI as well as for the simple mean (unweighted) HMPI and the aggregate (weighted) HMPI. Our results further enrich the recent theoretical developments of nonparametric envelopment estimators for various efficiency and productivity measures. We examine the performance of these theoretical results for the unweighted and weighted means of the HMPI using Monte Carlo simulations and also provide an empirical illustration. |
Keywords: | Hicks–Moorsteen Productivity Index, Data Envelopment Analysis, Aggregate, Central Limit Theorem |
JEL: | C12 C14 C43 C67 |
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:190&r=ecm |
By: | Andrea Renzetti |
Abstract: | Time-Varying Parameter Vector Autoregressive (TVP-VAR) models are frequently used in economics to capture evolving relationships among macroeconomic variables. However, TVP-VARs tend to overfit the data, resulting in inaccurate forecasts and imprecise estimates of typical objects of interest such as impulse response functions. This paper introduces a Theory Coherent Time-Varying Parameter Vector Autoregressive Model (TC-TVP-VAR), which leverages an arbitrary theoretical framework derived from an underlying economic theory to form a prior for the time-varying parameters. This "theory coherent" shrinkage prior significantly improves inference precision and forecast accuracy over the standard TVP-VAR. Furthermore, the TC-TVP-VAR can be used to perform indirect posterior inference on the deep parameters of the underlying economic theory. The paper reveals that using the classical 3-equation New Keynesian block to form a prior for the TVP-VAR substantially enhances forecast accuracy for output growth and the inflation rate in a standard model of monetary policy. Additionally, the paper shows that the TC-TVP-VAR can be used to address inferential challenges during the Zero Lower Bound period. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.11858&r=ecm |
By: | Timo Kuosmanen; Sheng Dai |
Abstract: | Modeling joint production has proved a vexing problem. This paper develops a radial convex nonparametric least squares (CNLS) approach to estimate the input distance function with multiple outputs. We document the correct input distance function transformation and prove that the necessary orthogonality conditions can be satisfied in radial CNLS. A Monte Carlo study compares the finite-sample performance of radial CNLS with other deterministic and stochastic frontier approaches for input distance function estimation. We apply our novel approach to Finnish electricity distribution network regulation and empirically confirm that the input isoquants become more curved. In addition, we introduce a weight restriction in radial CNLS to mitigate potential overfitting and improve out-of-sample performance in energy regulation. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.11637&r=ecm |
By: | Jirong Zhuang; Deng Ding; Weiguo Lu; Xuan Wu; Gangnan Yuan |
Abstract: | Machine learning methods such as Gaussian process regression (GPR) have been widely used in recent years for pricing American options. GPR is a candidate method for estimating the continuation value of an option in the regression-based Monte Carlo method. However, it has drawbacks, such as high computational cost and unreliability in high-dimensional cases. In this paper, we apply deep kernel learning with variational inference to the regression-based Monte Carlo method to overcome these drawbacks in high-dimensional American option pricing, and we test its performance under geometric Brownian motion and Merton's jump-diffusion models. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.07211&r=ecm |
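For context, the regression-based Monte Carlo method the abstract refers to can be sketched in its classic form (Longstaff-Schwartz least-squares Monte Carlo), with an ordinary quadratic polynomial regression standing in where GPR or deep kernels would be substituted. The one-dimensional put and its parameters below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def american_put_lsmc(S0, K, r, sigma, T, steps, paths, seed=4):
    """Least-squares Monte Carlo price of an American put under geometric
    Brownian motion; a quadratic polynomial approximates the continuation
    value (the step a GPR or deep-kernel regressor would replace)."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    # simulate GBM paths at times dt, 2*dt, ..., T
    z = rng.normal(size=(paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)       # value if held to expiry
    for t in range(steps - 2, -1, -1):           # backward induction
        payoff *= np.exp(-r * dt)                # discount one step back
        itm = K - S[:, t] > 0                    # regress on in-the-money paths
        if itm.sum() > 3:
            X = np.vander(S[itm, t], 3)          # quadratic basis [s^2, s, 1]
            coef = np.linalg.lstsq(X, payoff[itm], rcond=None)[0]
            cont = X @ coef                      # estimated continuation value
            exercise = (K - S[itm, t]) > cont
            payoff[itm] = np.where(exercise, K - S[itm, t], payoff[itm])
    return np.exp(-r * dt) * payoff.mean()       # discount to time 0

price = american_put_lsmc(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0,
                          steps=50, paths=20000)
print(round(price, 2))
```

In high dimensions the polynomial basis grows quickly, which is exactly the regime where kernel-based continuation-value regressions become attractive.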
By: | Federico A. Bugni; Yulong Wang |
Abstract: | This paper considers inference in first-price and second-price sealed-bid auctions with a large number of symmetric bidders having independent private values. Given the abundance of bidders in each auction, we propose an asymptotic framework in which the number of bidders diverges while the number of auctions remains fixed. This framework allows us to perform asymptotically exact inference on key model features using only transaction price data. Specifically, we examine inference on the expected utility of the auction winner, the expected revenue of the seller, and the tail properties of the valuation distribution. Simulations confirm the accuracy of our inference methods in finite samples. Finally, we also apply them to Hong Kong car license auction data. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.09972&r=ecm |