New Economics Papers on Econometrics |
By: | Bin Chen; Yuefeng Han; Qiyang Yu |
Abstract: | High-dimensional tensor-valued data have recently gained attention from researchers in economics and finance. We consider the estimation and inference of high-dimensional tensor factor models, where each dimension of the tensor diverges. Our focus is on a factor model that admits CP-type tensor decomposition, which allows for non-orthogonal loading vectors. Based on the contemporary covariance matrix, we propose an iterative simultaneous projection estimation method. Our estimator is robust to weak dependence among factors and weak correlation across different dimensions in the idiosyncratic shocks. We establish an inferential theory, demonstrating both consistency and asymptotic normality under relaxed assumptions. Within a unified framework, we consider two eigenvalue ratio-based estimators for the number of factors in a tensor factor model and justify their consistency. Through a simulation study and two empirical applications featuring sorted portfolios and international trade flows, we illustrate the advantages of our proposed estimator over existing methodologies in the literature. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.17278&r= |
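The eigenvalue-ratio idea behind factor-number estimators such as the two considered above can be sketched in a few lines. This is a generic illustration on a vector factor model, not the authors' tensor-specific procedure; the simulation design and all parameter choices (`rmax`, loading and noise scales) are made up for the example:

```python
import numpy as np

def eigenvalue_ratio_estimator(X, rmax=8):
    """Estimate the number of factors as the location of the largest
    drop in consecutive eigenvalue ratios of the sample covariance.
    X : (T, N) array of observations."""
    S = np.cov(X, rowvar=False)                    # N x N sample covariance
    evals = np.sort(np.linalg.eigvalsh(S))[::-1]   # descending eigenvalues
    ratios = evals[:rmax] / evals[1:rmax + 1]
    return int(np.argmax(ratios)) + 1              # 1-based factor count

# Simulate a 2-factor model: X_t = Lambda f_t + e_t
rng = np.random.default_rng(0)
T, N, r = 500, 30, 2
Lam = rng.normal(size=(N, r))
F = rng.normal(size=(T, r))
X = F @ Lam.T + 0.5 * rng.normal(size=(T, N))
print(eigenvalue_ratio_estimator(X))
```

The estimator picks the largest spectral gap; consistency under weak factors and cross-dimensional correlation requires the conditions established in the paper.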
By: | Bertille Antoine; Otilia Boldea; Niccolo Zaccaria |
Abstract: | We consider estimation and inference in a linear model with endogenous regressors where the parameters of interest change across two samples. If the first-stage is common, we show how to use this information to obtain more efficient two-sample GMM estimators than the standard split-sample GMM, even in the presence of near-weak instruments. We also propose two tests to detect change points in the parameters of interest, depending on whether the first-stage is common or not. We derive the limiting distribution of these tests and show that they have non-trivial power even under weaker and possibly time-varying identification patterns. The finite sample properties of our proposed estimators and testing procedures are illustrated in a series of Monte-Carlo experiments, and in an application to the open-economy New Keynesian Phillips curve. Our empirical analysis using US data provides strong support for a New Keynesian Phillips curve with incomplete pass-through and reveals important time variation in the relationship between inflation and exchange rate pass-through. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.17056&r= |
By: | Ruofan Yu; Rong Chen; Han Xiao; Yuefeng Han |
Abstract: | Matrix time series, which consist of matrix-valued data observed over time, are prevalent in various fields such as economics, finance, and engineering. Such matrix time series data are often observed in high dimensions. Matrix factor models are employed to reduce the dimensionality of such data, but they lack the capability to make predictions without specified dynamics in the latent factor process. To address this issue, we propose a two-component dynamic matrix factor model that extends the standard matrix factor model by incorporating a matrix autoregressive structure for the low-dimensional latent factor process. This two-component model injects prediction capability into the matrix factor model and provides deeper insights into the dynamics of high-dimensional matrix time series. We present estimation procedures for the model and their theoretical properties, assess the procedures empirically via simulations, and provide a case study of New York City taxi data, demonstrating the performance and usefulness of the model. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.05624&r= |
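A minimal sketch of the two-component structure described above: a matrix autoregression drives a low-dimensional latent factor matrix, which is observed through row and column loadings. The dimensions, coefficient matrices, and noise scales are illustrative, and the paper's estimation procedure is not shown; the one-step forecast assumes the true loadings and dynamics are known:

```python
import numpy as np

# Latent matrix-AR factor process: F_t = A F_{t-1} B' + E_t
rng = np.random.default_rng(10)
A = np.array([[0.5, 0.1], [0.0, 0.4]])
B = np.array([[0.6, 0.0], [0.2, 0.3]])
F = np.zeros((200, 2, 2))
for t in range(1, 200):
    F[t] = A @ F[t-1] @ B.T + 0.1 * rng.normal(size=(2, 2))

# Observed matrices via row/column loadings: X_t = R F_t C' + noise
R = rng.normal(size=(10, 2))
C = rng.normal(size=(8, 2))
X = np.einsum('pr,trs,qs->tpq', R, F, C) + 0.1 * rng.normal(size=(200, 10, 8))

# With known loadings and dynamics, the one-step forecast is
# X-hat_{T+1} = R (A F_T B') C'
X_fore = R @ (A @ F[-1] @ B.T) @ C.T
print(X_fore.shape)
```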
By: | Benjamin Poignard; Manabu Asai |
Abstract: | Building on the ability of the factor decomposition to break the curse of dimensionality inherent in multivariate volatility processes, we develop a factor model-based multivariate stochastic volatility (fMSV) framework that relies on two viewpoints: a sparse approximate factor model and a sparse factor loading matrix. We propose a two-stage estimation procedure for the fMSV model: the first stage obtains the estimators of the factor model, and the second stage estimates the MSV part using the estimated common factor variables. We derive the asymptotic properties of the estimators. Simulation experiments are performed to assess the forecasting performance of the covariance matrices. The empirical analysis based on vectors of asset returns illustrates that the fMSV models outperform competing conditional covariance models in forecasting. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.19033&r= |
By: | Diego Fresoli; Pilar Poncela; Esther Ruiz |
Abstract: | In this paper, we propose a computationally simple estimator of the asymptotic covariance matrix of the Principal Components (PC) factors that is valid in the presence of cross-correlated idiosyncratic components. The proposed estimator of the asymptotic Mean Square Error (MSE) of the PC factors is based on adaptive thresholding of the sample covariances of the idiosyncratic residuals, with the threshold based on their individual variances. We compare the finite sample performance of confidence regions for the PC factors obtained using the proposed asymptotic MSE with those of available extant asymptotic and bootstrap regions, and show that the former beats all alternative procedures for a wide variety of idiosyncratic cross-correlation structures. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.06883&r= |
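The adaptive-thresholding step described above can be sketched generically: off-diagonal sample covariances of the residuals are set to zero unless they exceed an entry-specific threshold scaled by the individual variances. The threshold constant `c` and the simulation design are illustrative, not the paper's calibration:

```python
import numpy as np

def adaptive_threshold_cov(resid, c=1.0):
    """Adaptive thresholding of the idiosyncratic residual covariance.
    Off-diagonal entries are kept only if they exceed an entry-specific
    threshold scaled by the individual variances.
    resid : (T, N) matrix of idiosyncratic residuals."""
    T, N = resid.shape
    S = resid.T @ resid / T                    # sample covariance
    d = np.sqrt(np.outer(np.diag(S), np.diag(S)))
    thresh = c * d * np.sqrt(np.log(N) / T)    # entry-specific threshold
    S_thr = np.where(np.abs(S) >= thresh, S, 0.0)
    np.fill_diagonal(S_thr, np.diag(S))        # never threshold the variances
    return S_thr

rng = np.random.default_rng(1)
e = rng.normal(size=(400, 20))                 # truly uncorrelated residuals
S_hat = adaptive_threshold_cov(e, c=2.0)
off = S_hat - np.diag(np.diag(S_hat))
print(np.mean(off == 0))                       # most off-diagonals zeroed
```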
By: | Atsushi Inoue; Lutz Kilian |
Abstract: | Several recent studies have expressed concern that the Haar prior typically employed in estimating sign-identified VAR models is driving the prior about the structural impulse responses and hence their posterior. In this paper, we provide evidence that the quantitative importance of the Haar prior for posterior inference has been overstated. How sensitive posterior inference is to the Haar prior depends on the width of the identified set of a given impulse response. We demonstrate that this width depends not only on how much the identified set is narrowed by the identifying restrictions imposed on the model, but also on the data through the reduced-form model parameters. Hence, the role of the Haar prior can only be assessed on a case-by-case basis. We show by example that, when the identification is sufficiently tight, posterior inference based on a Gaussian-inverse Wishart-Haar prior provides a reasonably accurate approximation. |
Keywords: | Bayesian VAR; impulse response; sign restrictions; set-identification; Haar prior |
JEL: | C22 C32 C52 E31 |
Date: | 2024–07–09 |
URL: | https://d.repec.org/n?u=RePEc:fip:feddwp:98532&r= |
By: | Marie-Christine Düker; David S. Matteson; Ruey S. Tsay; Ines Wilms |
Abstract: | Vector AutoRegressive Moving Average (VARMA) models form a powerful and general model class for analyzing dynamics among multiple time series. While VARMA models encompass the Vector AutoRegressive (VAR) models, their popularity in empirical applications is dominated by the latter. Can this phenomenon be explained fully by the simplicity of VAR models? Perhaps many users of VAR models have not fully appreciated what VARMA models can provide. The goal of this review is to provide a comprehensive resource for researchers and practitioners seeking insights into the advantages and capabilities of VARMA models. We start by reviewing the identification challenges inherent to VARMA models, covering classical and modern identification schemes, and we continue along the same lines regarding estimation, specification, and diagnosis of VARMA models. We then highlight the practical utility of VARMA models in terms of Granger causality analysis, forecasting, and structural analysis, as well as recent advances and extensions of VARMA models that further facilitate their adoption in practice. Finally, we discuss some interesting future research directions where VARMA models can fulfill their potential in applications as compared to their VAR subclass. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.19702&r= |
By: | Battulga Gankhuu |
Abstract: | The conditional matrix variate Student $t$ distribution was introduced by Battulga (2024a). In this paper, we propose a new version of the conditional matrix variate Student $t$ distribution. The paper provides EM algorithms that estimate the parameters of the conditional matrix variate Student $t$ distributions, covering both general cases and special cases with a Minnesota prior. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.10837&r= |
By: | Mario Martinoli; Raffaello Seri; Fulvio Corsi |
Abstract: | Linking the statistics and machine learning literatures, we provide new general results on the convergence of stochastic approximation schemes and inexact Newton methods. Building on these results, we put forward a new optimization scheme that we call the generalized inexact Newton method (GINM). We select $P$ points $\mathcal{P}_{i}\left(\boldsymbol{\theta}^{\left(i\right)}\right)=\left\{ \boldsymbol{\theta}_{1}, \dots, \boldsymbol{\theta}_{P}\right\} $ of the parameter space in a neighborhood of $\boldsymbol{\theta}^{\left(i\right)}$ and compute the objective function through a (polynomial) regression. Then, we estimate the parameter(s) $\boldsymbol{\theta}$ using inexact Newton methods. We extensively discuss the theoretical and computational aspects of the GINM. The results apply to both deterministic and stochastic approximation schemes, and are particularly effective when the objective function to be optimized is highly irregular and/or the stochastic equicontinuity hypothesis is violated. Examples are common in dynamic discrete choice models and complex simulation models characterized by nonlinearities and high levels of heterogeneity. The theory is supported by extensive Monte Carlo experiments. |
Keywords: | Optimization, stochastic approximation, Newton-Raphson methods, asymptotic convergence, M-estimation, stochastic equicontinuity |
Date: | 2024–07–23 |
URL: | https://d.repec.org/n?u=RePEc:ssa:lemwps:2024/18&r= |
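A stylized one-dimensional sketch of the regression-plus-Newton idea: evaluate a noisy objective at $P$ points near the current iterate, fit a quadratic by least squares, and take a Newton step on the fitted surrogate. This is a toy version under made-up settings (scalar parameter, quadratic fit, fixed sampling radius), not the GINM as developed in the paper:

```python
import numpy as np

def ginm_step(f, theta, radius=0.5, P=50, rng=None):
    """One regression-based inexact Newton step on a noisy scalar objective:
    sample P points near theta, fit a local quadratic by least squares,
    and take a Newton step using the fitted gradient and curvature."""
    rng = rng or np.random.default_rng()
    pts = theta + radius * rng.uniform(-1, 1, size=P)
    vals = np.array([f(p) for p in pts])
    d = pts - theta
    # quadratic surrogate: f(p) ~ c0 + c1*(p-theta) + c2*(p-theta)^2
    A = np.column_stack([np.ones(P), d, d**2])
    c0, c1, c2 = np.linalg.lstsq(A, vals, rcond=None)[0]
    if c2 <= 0:                        # no positive curvature: stay put
        return theta
    return theta - c1 / (2 * c2)       # Newton step on the surrogate

# Noisy, irregular objective with a minimum near 2.0
rng = np.random.default_rng(8)
f = lambda p: (p - 2.0)**2 + 0.05 * np.sin(40 * p) + 0.01 * rng.normal()
theta = 0.0
for _ in range(10):
    theta = ginm_step(f, theta, rng=rng)
print(round(theta, 2))
```

The regression averages out both the simulation noise and the high-frequency wiggles that would defeat a finite-difference Newton method.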
By: | Michael Sekatchev; Zhengxiang Zhou |
Abstract: | In this project, we explore the Kalman filter's performance for estimating asset prices. We begin by introducing a stochastic mean-reverting process, the Ornstein-Uhlenbeck (OU) model. We then discuss the Kalman filter in detail and its application to this model. After demonstrating the Kalman filter on a simulated OU process and discussing maximum likelihood estimation (MLE) of the model parameters, we apply the Kalman filter with the OU process and trailing parameter estimation to real stock market data. We finish by proposing a simple day-trading algorithm using the Kalman filter with the OU process and backtesting its performance on Apple's stock price. We then move to the Heston model, a combination of geometric Brownian motion and the OU process. Maximum likelihood estimation is commonly used for Heston model parameter estimation, but it results in very complex forms. Here we propose an alternative and simpler way of estimating the parameters, the method of moments (MOM). After deriving these estimators, we again apply the method to real stock data to assess its performance. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.06745&r= |
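The Kalman filter for a noisily observed OU process reduces to a scalar predict/update recursion once the process is discretized to an AR(1). A minimal sketch with made-up parameters (not the paper's trading setup or data):

```python
import numpy as np

def kalman_ou(y, theta, mu, sigma, obs_var, dt=1.0):
    """Scalar Kalman filter for a noisily observed OU process
    dX = theta*(mu - X) dt + sigma dW,  y_t = X_t + obs noise."""
    a = np.exp(-theta * dt)                   # exact AR(1) coefficient
    q = sigma**2 * (1 - a**2) / (2 * theta)   # exact transition variance
    x, P = mu, sigma**2 / (2 * theta)         # stationary initial values
    xs = []
    for yt in y:
        # predict
        x_pred = mu + a * (x - mu)
        P_pred = a**2 * P + q
        # update
        K = P_pred / (P_pred + obs_var)       # Kalman gain
        x = x_pred + K * (yt - x_pred)
        P = (1 - K) * P_pred
        xs.append(x)
    return np.array(xs)

# Filter a simulated OU path observed with noise
rng = np.random.default_rng(2)
theta, mu, sigma, T = 0.5, 1.0, 0.3, 300
a = np.exp(-theta)
q = sigma**2 * (1 - a**2) / (2 * theta)
x = np.empty(T); x[0] = mu
for t in range(1, T):
    x[t] = mu + a * (x[t-1] - mu) + np.sqrt(q) * rng.normal()
y = x + 0.2 * rng.normal(size=T)
x_hat = kalman_ou(y, theta, mu, sigma, obs_var=0.04)
print(np.mean((x_hat - x)**2) < np.mean((y - x)**2))  # filter beats raw obs
```

In practice the parameters `theta`, `mu`, `sigma` would come from MLE on a trailing window, as the abstract describes.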
By: | Attar, Itay (Ben Gurion University); Cohen-Zada, Danny (Ben Gurion University); Elder, Todd E. (Michigan State University) |
Abstract: | Instrumental variables estimators typically must satisfy monotonicity conditions to be interpretable as capturing local average treatment effects. Building on previous research that suggests monotonicity is unlikely to hold in the context of school entrance age effects, we develop an approach for identifying the magnitude of the resulting bias. We also assess the impact on monotonicity bias of bandwidth selection in regression discontinuity (RD) designs, finding that "full sample" instrumental variables estimators may outperform RD in many cases. We argue that our approaches are applicable more broadly to numerous settings in which monotonicity is likely to fail. |
Keywords: | monotonicity, selection, entrance age, regression discontinuity, instrumental variable |
JEL: | C21 C26 C1 I2 I28 |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:iza:izadps:dp17088&r= |
By: | Agnes Norris Keiller; Áureo de Paula; John Van Reenen |
Abstract: | Standard methods for estimating production functions in the Olley and Pakes (1996) tradition require assumptions on input choices. We introduce a new method that exploits (increasingly available) data on a firm's expectations of its future output and inputs, allowing us to obtain consistent production function parameter estimates while relaxing these input demand assumptions. In contrast to dynamic panel methods, our proposed estimator can be implemented on very short panels (including a single cross-section), and Monte Carlo simulations show it outperforms alternative estimators when firms' material input choices are subject to optimization error. Implementing a range of production function estimators on UK data, we find our proposed estimator yields results that are either similar to or more credible than commonly-used alternatives. These differences are larger in industries where material inputs appear harder to optimize. We show that TFP implied by our proposed estimator is more strongly associated with future jobs growth than existing methods, suggesting that failing to adequately account for input endogeneity may underestimate the degree of dynamic reallocation in the economy. |
Date: | 2024–07–11 |
URL: | https://d.repec.org/n?u=RePEc:azt:cemmap:15/24&r= |
By: | Federico Gatta; Fabrizio Lillo; Piero Mazzarisi |
Abstract: | In financial risk management, Value at Risk (VaR) is widely used to estimate potential portfolio losses. VaR's limitation is its inability to account for the magnitude of losses beyond a certain threshold. Expected Shortfall (ES) addresses this by providing the conditional expectation of such exceedances, offering a more comprehensive measure of tail risk. Despite its benefits, ES is not elicitable on its own, complicating its direct estimation. However, joint elicitability with VaR allows for their combined estimation. Building on this, we propose a new methodology named Conditional Autoregressive Expected Shortfall (CAESar), inspired by the CAViaR model. CAESar handles dynamic patterns flexibly and includes heteroskedastic effects for both VaR and ES, with no distributional assumption on price returns. CAESar involves a three-step process: estimating VaR via CAViaR regression, formulating ES in an autoregressive manner, and jointly estimating VaR and ES while ensuring a monotonicity constraint to avoid crossing quantiles. By employing various backtesting procedures, we show the effectiveness of CAESar through extensive simulations and empirical testing on daily financial data. Our results demonstrate that CAESar outperforms existing regression methods in terms of forecasting performance, making it a robust tool for financial risk management. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.06619&r= |
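The first step of the three-step process above, CAViaR-type VaR estimation, can be sketched with the symmetric absolute value (SAV) recursion fitted by minimizing the pinball (quantile) loss. This illustrates only that step, not the joint VaR-ES estimation that defines CAESar; the starting values, optimizer, and simulated return series are all illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def caviar_sav(params, r, tau=0.05):
    """Symmetric-absolute-value CAViaR recursion for the tau-quantile:
    q_t = b0 + b1*q_{t-1} + b2*|r_{t-1}| (q_t < 0 for a loss quantile)."""
    b0, b1, b2 = params
    q = np.empty_like(r)
    q[0] = np.quantile(r[:50], tau)           # empirical start-up value
    for t in range(1, len(r)):
        q[t] = b0 + b1 * q[t-1] + b2 * abs(r[t-1])
    return q

def pinball_loss(params, r, tau=0.05):
    u = r - caviar_sav(params, r, tau)
    return np.mean(u * (tau - (u < 0)))       # standard quantile loss

rng = np.random.default_rng(3)
r = rng.standard_t(df=5, size=1000) * 0.01    # fat-tailed "returns"
res = minimize(pinball_loss, x0=[-0.001, 0.8, -0.2], args=(r,),
               method="Nelder-Mead")
q_hat = caviar_sav(res.x, r)
print(round(np.mean(r < q_hat), 3))           # hit rate near tau = 0.05
```

CAESar then formulates ES autoregressively on top of the fitted VaR path, with a monotonicity constraint keeping ES below VaR.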
By: | Rami V. Tabri; Mathew J. Elias |
Abstract: | This paper lays the groundwork for a unifying approach to stochastic dominance testing under survey nonresponse that integrates the partial identification approach to incomplete data and design-based inference for complex survey data. We propose a novel inference procedure for restricted $s$th-order stochastic dominance, tailored to accommodate a broad spectrum of nonresponse assumptions. The method uses pseudo-empirical likelihood to formulate the test statistic and compares it to a critical value from the chi-squared distribution with one degree of freedom. We detail the procedure's asymptotic properties under both null and alternative hypotheses, establishing its uniform validity under the null and consistency against various alternatives. Using the Household, Income and Labour Dynamics in Australia survey, we demonstrate the procedure's utility in a sensitivity analysis of temporal poverty comparisons among Australian households. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.15702&r= |
By: | Victor Chernozhukov; Iván Fernández-Val; Jonas Meier; Aico van Vuuren; Francis Vella |
Abstract: | Rank-rank regressions are widely used in economic research to evaluate phenomena such as intergenerational income persistence or mobility. However, when covariates are incorporated to capture between-group persistence, the resulting coefficients can be difficult to interpret as such. We propose the conditional rank-rank regression, which uses conditional ranks instead of unconditional ranks, to measure average within-group income persistence. This property is analogous to that of the unconditional rank-rank regression that measures the overall income persistence. The difference between conditional and unconditional rank-rank regression coefficients therefore can measure between-group persistence. We develop a flexible estimation approach using distribution regression and establish a theoretical framework for large sample inference. An empirical study on intergenerational income mobility in Switzerland demonstrates the advantages of this approach. The study reveals stronger intergenerational persistence between fathers and sons compared to fathers and daughters, with the within-group persistence explaining 62% of the overall income persistence for sons and 52% for daughters. Families of small size or with highly educated fathers exhibit greater persistence in passing on their economic status. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.06387&r= |
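The unconditional rank-rank regression underlying the discussion above is simple to compute: rank both generations' incomes and regress child rank on parent rank. A minimal sketch on simulated data (the persistence parameter and income distributions are made up; the paper's conditional version replaces these with conditional ranks via distribution regression):

```python
import numpy as np
from scipy.stats import rankdata

def rank_rank_slope(parent, child):
    """Unconditional rank-rank regression slope: child income rank
    regressed on parent income rank (both scaled to [0, 1])."""
    rp = rankdata(parent) / len(parent)
    rc = rankdata(child) / len(child)
    return np.cov(rp, rc)[0, 1] / np.var(rp, ddof=1)

rng = np.random.default_rng(4)
n = 5000
parent = rng.lognormal(mean=10, sigma=0.8, size=n)
child = 0.4 * np.log(parent) + rng.normal(scale=0.5, size=n)  # persistence
print(round(rank_rank_slope(parent, child), 2))
```

Because ranks are uniform in both generations, the slope equals the Spearman rank correlation and is invariant to monotone transformations of income.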
By: | YoonHaeng Hur; Tengyuan Liang |
Abstract: | We introduce a new convexified matching method for missing value imputation and individualized inference inspired by computational optimal transport. Our method integrates favorable features from mainstream imputation approaches: optimal matching, regression imputation, and synthetic control. We impute counterfactual outcomes based on convex combinations of observed outcomes, defined based on an optimal coupling between the treated and control data sets. The optimal coupling problem is considered a convex relaxation to the combinatorial optimal matching problem. We estimate granular-level individual treatment effects while maintaining a desirable aggregate-level summary by properly constraining the coupling. We construct transparent, individual confidence intervals for the estimated counterfactual outcomes. We devise fast iterative entropic-regularized algorithms to solve the optimal coupling problem that scales favorably when the number of units to match is large. Entropic regularization plays a crucial role in both inference and computation; it helps control the width of the individual confidence intervals and design fast optimization algorithms. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.05372&r= |
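The entropic-regularized optimal coupling at the core of the method above can be computed with standard Sinkhorn iterations. This is a generic sketch with uniform marginals and a squared-distance cost on made-up treated/control outcomes, without the aggregate-level constraints or confidence intervals the paper develops:

```python
import numpy as np

def sinkhorn_coupling(C, eps=0.5, n_iter=500):
    """Entropic-regularized optimal coupling between two uniform
    marginals given a cost matrix C, via Sinkhorn iterations."""
    n, m = C.shape
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)   # uniform marginals
    K = np.exp(-C / eps)                          # Gibbs kernel
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]            # coupling matrix

# Couple treated and control units by squared distance in outcomes
rng = np.random.default_rng(5)
treated = rng.uniform(size=8)
control = rng.uniform(size=12)
C = (treated[:, None] - control[None, :])**2
P = sinkhorn_coupling(C)

# Impute counterfactuals as convex combinations of control outcomes
imputed = (P / P.sum(axis=1, keepdims=True)) @ control
print(imputed.shape)
```

Larger `eps` spreads each treated unit's weight over more controls (wider intervals, faster convergence); smaller `eps` approaches hard matching.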
By: | Joshua C. C. Chan; Davide Pettenuzzo; Aubrey Poon; Dan Zhu |
Abstract: | Conditional forecasts, i.e., projections of a set of variables of interest on the future paths of some other variables, are used routinely by empirical macroeconomists in a number of applied settings. In spite of this, the existing algorithms used to generate conditional forecasts tend to be very computationally intensive, especially when working with large Vector Autoregressions or when multiple linear equality and inequality constraints are imposed at once. We introduce a novel precision-based sampler that is fast, scales well, and yields conditional forecasts under linear equality and inequality constraints. We show in a simulation study that the proposed method produces forecasts that are identical to those from the existing algorithms but in a fraction of the time. We then illustrate the performance of our method in a large Bayesian Vector Autoregression where we simultaneously impose a mix of linear equality and inequality constraints on the future trajectories of key US macroeconomic indicators over the 2020--2022 period. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.02262&r= |
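The equality-constraint case of conditional forecasting reduces to drawing from a Gaussian conditioned on linear restrictions. A dense-matrix sketch of that textbook step (the paper's sampler additionally handles inequality constraints and exploits sparse precision structure for speed; the dimensions and constraint here are made up):

```python
import numpy as np

def conditional_gaussian_draw(mu, Sigma, R, r, rng):
    """Draw x ~ N(mu, Sigma) conditional on exact linear constraints
    R x = r, using the standard conditioning formulas."""
    RS = R @ Sigma
    M = RS @ R.T
    mean = mu + RS.T @ np.linalg.solve(M, r - R @ mu)
    cov = Sigma - RS.T @ np.linalg.solve(M, RS)
    # draw via eigendecomposition (cov is singular along the constraint)
    w, V = np.linalg.eigh(cov)
    w = np.clip(w, 0, None)
    return mean + V @ (np.sqrt(w) * rng.normal(size=len(mu)))

rng = np.random.default_rng(9)
mu = np.zeros(4)
Sigma = 0.5 * np.eye(4) + 0.5                  # equicorrelated forecast errors
R = np.array([[1.0, 1.0, 0.0, 0.0]])           # constrain x1 + x2 = 1
x = conditional_gaussian_draw(mu, Sigma, R, np.array([1.0]), rng)
print(round(x[0] + x[1], 6))
```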
By: | Thorsten Drautzburg (Federal Reserve Bank of Philadelphia); Jesus Fernandez-Villaverde (University of Pennsylvania, NBER, and CEPR); Pablo Guerron-Quintana (Boston College and ESPOL); Dick Oosthuizen (University of Pennsylvania) |
Abstract: | We propose a new tool to filter non-linear dynamic models that does not require the researcher to specify the model fully and can be implemented without solving the model. If two conditions are satisfied, we can use a flexible statistical model and a known measurement equation to back out the hidden states of the dynamic model. The first condition is that the state is sufficiently volatile or persistent to be recoverable. The second condition requires the possibly non-linear measurement to be sufficiently smooth and to map uniquely to the state absent measurement error. We illustrate the method through various simulation studies and an empirical application to a sudden stops model applied to Mexican data. |
Keywords: | filtering, limited information, non-linear model, dynamic equilibrium model, sudden stops |
JEL: | C32 C53 E37 E44 O11 |
Date: | 2024–07–19 |
URL: | https://d.repec.org/n?u=RePEc:pen:papers:24-016&r= |
By: | Elnura Baiaman kyzy (HIAS, Hitotsubashi University, Japan); Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies, GRIPS, Japan; Rimini Centre for Economic Analysis) |
Abstract: | This paper proposes a novel Laplace-based solution to nonlinear DSGE models that has a closed-form likelihood. We implicitly use a nonlinear approximation to the policy function that is invertible with respect to the shocks, implying that in the approximation the shocks can be recovered uniquely from some of the control variables. Using perturbation methods and a Lagrange inversion formula, we calculate the derivatives of the likelihood and construct the Laplace-based solution. In contrast with previous likelihood-based approaches, the method used here requires neither the introduction of linear shocks nor simulation to evaluate the likelihood. Using US data, we estimate linear and nonlinear variants of a well-known neoclassical growth model with and without time-varying variances. We find that a nonlinear heteroscedastic model has much better empirical performance. Furthermore, our models allow us to ascertain that the monetary policy shock accounts for 95% of the time variation in economic uncertainty. |
Keywords: | Economic Uncertainty, Time-Varying Volatility, Risk-Premium, Higher-Order Approximation |
JEL: | E0 C63 |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:rim:rimwps:24-11&r= |
By: | Anil K. Bera; Yannis Bilias |
Abstract: | Rao (1948) introduced the score test statistic as an alternative to the likelihood ratio and Wald test statistics. In spite of the optimality properties of the score statistic shown in Rao and Poti (1946), the Rao score (RS) test remained unnoticed for almost 20 years. Today, the RS test is part of the "Holy Trinity" of hypothesis testing and has found its place in statistics and econometrics textbooks and related software. Reviewing the history of the RS test, we note that remarkable test statistics proposed in the literature earlier or around the time of Rao (1948), mostly from intuition, such as the Pearson (1900) goodness-of-fit test, the Moran (1948) I test for spatial dependence, and the Durbin and Watson (1950) test for serial correlation, can be given an RS test interpretation. At the same time, recent developments in robust hypothesis testing under certain forms of misspecification make the RS test an active area of research in statistics and econometrics. From our brief account of the history of the RS test, we conclude that its impact in science goes far beyond its calendar starting point, with promising research activities for many years to come. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.19956&r= |
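The defining feature of the RS test, that only the null model needs to be evaluated, is easy to see in a textbook example: testing the mean of i.i.d. Poisson data. The example below is a standard illustration, not taken from the paper:

```python
import numpy as np
from scipy.stats import chi2

def rao_score_test_poisson(x, lam0):
    """Rao score (RS) test of H0: lambda = lam0 for i.i.d. Poisson data.
    Everything is evaluated at the null value; no unrestricted MLE is
    needed. RS = U(lam0)^2 / I(lam0) ~ chi2(1) under H0."""
    n = len(x)
    U = (x.sum() - n * lam0) / lam0    # score at the null
    I = n / lam0                       # Fisher information at the null
    rs = U**2 / I
    return rs, chi2.sf(rs, df=1)       # statistic and p-value

rng = np.random.default_rng(6)
x = rng.poisson(lam=3.0, size=200)
rs0, p0 = rao_score_test_poisson(x, lam0=3.0)   # true null
rs1, p1 = rao_score_test_poisson(x, lam0=4.0)   # false null
print(round(p0, 3), round(p1, 6))
```

Here RS simplifies to $n(\bar{x}-\lambda_0)^2/\lambda_0$, and the comparison against the $\chi^2_1$ critical value mirrors the "Holy Trinity" asymptotic equivalence.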
By: | Borko Stosic; Tatijana Stosic |
Abstract: | In this work we address the multifractal detrended cross-correlation analysis method, which has been subject to some controversy since its inception almost two decades ago. To this end, we propose several new options to deal with negative cross-covariance between two time series, which may serve to construct a more robust view of the multifractal spectrum of the series. We compare these novel options with the proposals already existing in the literature, and we provide fast code in C, R, and Python for both the new and the existing proposals. We test the different algorithms on synthetic series with an exact analytical solution, as well as on daily price series of ethanol and sugar in Brazil from 2010 to 2023. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.19406&r= |
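The quantity at the heart of the controversy is the box-wise detrended cross-covariance, which, unlike the auto-covariance in ordinary MFDFA, can be negative. A minimal sketch of the fluctuation computation at a single box size (linear detrending, no $q$-order aggregation; the simulated series are illustrative):

```python
import numpy as np

def dcca_fluctuation(x, y, s):
    """Detrended cross-covariance fluctuation F^2(s) at box size s.
    In the multifractal (q-order) extension, negative box-wise
    cross-covariances make |.|^(q/2) conventions ambiguous, which is
    the issue the paper's proposals address."""
    X = np.cumsum(x - x.mean())          # integrated profiles
    Y = np.cumsum(y - y.mean())
    t = np.arange(s)
    covs = []
    for b in range(len(x) // s):
        xs, ys = X[b*s:(b+1)*s], Y[b*s:(b+1)*s]
        # residuals from linear detrending within the box
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        covs.append(np.mean(rx * ry))    # may be negative for some boxes
    return np.mean(covs)

rng = np.random.default_rng(7)
z = rng.normal(size=2000)
x = z + 0.3 * rng.normal(size=2000)      # two noisy copies of a
y = z + 0.3 * rng.normal(size=2000)      # common signal
print(dcca_fluctuation(x, y, s=50) > 0)  # positively cross-correlated here
```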
By: | Anita Behme |
Abstract: | We introduce generalizations of the COGARCH model of Klüppelberg et al. from 2004 and the volatility and price model of Barndorff-Nielsen and Shephard from 2001 to a Markov-switching environment. These generalizations allow for exogeneous jumps of the volatility at times of a regime switch. Both models are studied within the framework of Markov-modulated generalized Ornstein-Uhlenbeck processes which allows to derive conditions for stationarity, formulas for moments, as well as the autocovariance structure of volatility and price process. It turns out that both models inherit various properties of the original models and therefore are able to capture basic stylized facts of financial time-series such as uncorrelated log-returns, correlated squared log-returns and non-existence of higher moments in the COGARCH case. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.05866&r= |