nep-ecm New Economics Papers
on Econometrics
Issue of 2023‒08‒21
twenty-six papers chosen by
Sune Karlsson, Örebro universitet


  1. Testing for the Markov property in time series via deep conditional generative learning By Shi, Chengchun
  2. Identification in Multiple Treatment Models under Discrete Variation By Vishal Kamat; Samuel Norris; Matthew Pecenco
  3. How Much Should We Trust Regional-Exposure Designs? By Jeremy Majerovitz; Karthik Sastry
  4. Sparsified Simultaneous Confidence Intervals for High-Dimensional Linear Models By Xiaorui Zhu; Yichen Qin; Peng Wang
  5. "Johansen Test with Fourier-Type Smooth Nonlinear Trends in Cointegrating Relations" By Takamitsu Kurita; Mototsugu Shintani
  6. Gaussian semiparametric estimation of two-dimensional intrinsically stationary random fields By Yoshihiro Yajima; Yasumasa Matsuda
  7. Instrumental Variable Estimation with Many Instruments Using Elastic-Net IV By Alena Skolkova
  8. Choice Models and Permutation Invariance By Amandeep Singh; Ye Liu; Hema Yoganarasimhan
  9. Model Averaging with Ridge Regularization By Alena Skolkova
  10. Random Subspace Local Projections By Viet Hoang Dinh; Didier Nibbering; Benjamin Wong
  11. Panel Data Nowcasting: The Case of Price-Earnings Ratios By Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas
  12. Information-Theoretic Time-Varying Density Modeling By Bram van Os
  13. Permutation tests on returns to scale and common production frontiers in nonparametric models By Anders Rønn-Nielsen; Dorte Kronborg; Mette Asmild
  14. Periodic Integration and Seasonal Unit Roots By del Barrio Castro, Tomás; Osborn, Denise R.
  15. Stationarity with Occasionally Binding Constraints By James A. Duffy; Sophocles Mavroeidis; Sam Wycherley
  16. Bayesian Mode Inference for Discrete Distributions in Economics and Finance By Jamie Cross; Lennart Hoogerheide; Paul Labonne; Herman K. van Dijk
  17. Quantile and expectile copula-based hidden Markov regression models for the analysis of the cryptocurrency market By Beatrice Foroni; Luca Merlo; Lea Petrella
  18. Difference-in-differences with Economic Factors and the Case of Housing Returns By Jiyuan Huang; Per Östberg
  19. Share-ratio interpretations of compositional regression models By Dargel, Lukas; Thomas-Agnan, Christine
  20. The link between multiplicative competitive interaction models and compositional data regression with a total By Dargel, Lukas; Thomas-Agnan, Christine
  21. "Generalized Extreme Value Approximation to the CUMSUMQ Test for Constant Unconditional Variance in Heavy-Tailed Time Series". By Josep Lluís Carrion-i-Silvestre; Andreu Sansó
  22. Sparse Modeling Under Grouped Heterogeneity with an Application to Asset Pricing By Lin William Cong; Guanhao Feng; Jingyu He; Junye Li
  23. Supervised Dynamic PCA: Linear Dynamic Forecasting with Many Predictors By Zhaoxing Gao; Ruey S. Tsay
  24. Estimating the roughness exponent of stochastic volatility from discrete observations of the realized variance By Xiyue Han; Alexander Schied
  25. Equivalences between ad hoc strategies and meta-analytic models for dependent effect sizes By Pustejovsky, James E; Chen, Man
  26. Robust Impulse Responses using External Instruments: the Role of Information By Davide Brignone; Alessandro Franconi; Marco Mazzali

  1. By: Shi, Chengchun
    Abstract: The Markov property is widely imposed in the analysis of time series data. Correspondingly, testing the Markov property, and relatedly, inferring the order of a Markov model, are of paramount importance. In this article, we propose a nonparametric test for the Markov property in high-dimensional time series via deep conditional generative learning. We also apply the test sequentially to determine the order of the Markov model. We show that the test controls the type-I error asymptotically and has power approaching one. Our proposal makes novel contributions in several ways. We utilize and extend state-of-the-art deep generative learning to estimate the conditional density functions, and establish a sharp upper bound on the approximation error of the estimators. We derive a doubly robust test statistic, which employs nonparametric estimation but achieves a parametric convergence rate. We further adopt sample splitting and cross-fitting to minimize the conditions required to ensure the consistency of the test. We demonstrate the efficacy of the test through both simulations and three data applications.
    Keywords: deep conditional generative learning; high-dimensional time series; hypothesis testing; Markov property; mixture density network
    JEL: C1
    Date: 2023–06–23
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:119352&r=ecm
  2. By: Vishal Kamat; Samuel Norris; Matthew Pecenco
    Abstract: We develop a method to learn about treatment effects in multiple treatment models with discrete-valued instruments. We allow selection into treatment to be governed by a general class of threshold crossing models that permits multidimensional unobserved heterogeneity. Under a semi-parametric restriction on the distribution of unobserved heterogeneity, we show how a sequence of linear programs can be used to compute sharp bounds for a number of treatment effect parameters when the marginal treatment response functions underlying them remain nonparametric or are additionally parameterized.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06174&r=ecm
  3. By: Jeremy Majerovitz; Karthik Sastry
    Abstract: Many prominent studies in macroeconomics, labor, and trade use panel data on regions to identify the local effects of aggregate shocks. These studies construct regional-exposure instruments as an observed aggregate shock times an observed regional exposure to that shock. We argue that the most economically plausible source of identification in these settings is uncorrelatedness of observed and unobserved aggregate shocks. Even when the regression estimator is consistent, we show that inference is complicated by cross-regional residual correlations induced by unobserved aggregate shocks. We suggest two-way clustering, two-way heteroskedasticity- and autocorrelation-consistent standard errors, and randomization inference as options to solve this inference problem. We also develop a feasible optimal instrument to improve efficiency. In an application to the estimation of regional fiscal multipliers, we show that the standard practice of clustering by region generates confidence intervals that are too small. When we construct confidence intervals with robust methods, we can no longer reject multipliers close to zero at the 95% level. The feasible optimal instrument more than doubles statistical power; however, we still cannot reject low multipliers. Our results underscore that the precision promised by regional data may disappear with correct inference.
    Keywords: applied econometrics; regional data; shift-share instruments
    JEL: C12 C18 C21 C23 C26 F16 R12
    Date: 2023–07–27
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:96540&r=ecm
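The two-way clustering the authors recommend can be sketched with the Cameron–Gelbach–Miller decomposition: the two-way variance equals the sum of the two one-way cluster-robust variances minus the variance clustered on their intersection. The numpy sketch below uses a synthetic regional panel; the design and all names are illustrative, not the paper's.

```python
import numpy as np

def cluster_vcov(X, resid, clusters):
    """One-way cluster-robust variance: (X'X)^-1 (sum_g s_g s_g') (X'X)^-1."""
    XtX_inv = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        s = X[clusters == g].T @ resid[clusters == g]  # within-cluster score sum
        meat += np.outer(s, s)
    return XtX_inv @ meat @ XtX_inv

def twoway_cluster_vcov(X, resid, c1, c2):
    """Cameron-Gelbach-Miller: V = V_{c1} + V_{c2} - V_{c1 x c2}."""
    inter = np.array([f"{a}|{b}" for a, b in zip(c1, c2)])
    return (cluster_vcov(X, resid, c1) + cluster_vcov(X, resid, c2)
            - cluster_vcov(X, resid, inter))

# toy regional panel: R regions observed over T periods
rng = np.random.default_rng(0)
R, T = 20, 10
region, period = np.repeat(np.arange(R), T), np.tile(np.arange(T), R)
x = rng.normal(size=R * T)
y = 0.5 * x + rng.normal(size=R * T)
X = np.column_stack([np.ones(R * T), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
V = twoway_cluster_vcov(X, resid, region, period)
se = np.sqrt(np.abs(np.diag(V)))  # abs guards against small-sample non-PSD corner cases
```

In small samples the CGM difference can fail to be positive semi-definite; eigenvalue truncation is a common remedy, omitted here.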
  4. By: Xiaorui Zhu; Yichen Qin; Peng Wang
    Abstract: Statistical inference on high-dimensional regression coefficients is challenging because the uncertainty introduced by the model selection procedure is hard to account for. A critical question remains unsettled: is it possible, and if so how, to embed the inference on the model into the simultaneous inference on the coefficients? To this end, we propose a notion of simultaneous confidence intervals called the sparsified simultaneous confidence intervals. Our intervals are sparse in the sense that some of the intervals' upper and lower bounds are shrunken to zero (i.e., $[0, 0]$), indicating the unimportance of the corresponding covariates. These covariates should be excluded from the final model. The rest of the intervals, either containing zero (e.g., $[-1, 1]$ or $[0, 1]$) or not containing zero (e.g., $[2, 3]$), indicate the plausible and significant covariates, respectively. The proposed method can be coupled with various selection procedures, making it ideal for comparing their uncertainty. For the proposed method, we establish desirable asymptotic properties, develop intuitive graphical tools for visualization, and justify its superior performance through simulation and real data analysis.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07574&r=ecm
  5. By: Takamitsu Kurita (Faculty of Economics, Kyoto Sangyo University); Mototsugu Shintani (Faculty of Economics, The University of Tokyo)
    Abstract: We develop methodology for testing cointegrating rank in vector autoregressive (VAR) models in the presence of Fourier-type smooth nonlinear deterministic trends in cointegrating relations. The limiting distribution of log-likelihood ratio test statistics is derived and approximated limit quantiles are tabulated. A sequential procedure to select cointegrating rank is evaluated by Monte Carlo simulations. Our empirical application to economic data also demonstrates the usefulness of the proposed methodology in a practical context.
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2023cf1216&r=ecm
  6. By: Yoshihiro Yajima; Yasumasa Matsuda
    Abstract: We consider Gaussian semiparametric estimation (GSE) for two-dimensional intrinsically stationary random fields (ISRFs) observed on a regular grid and derive its asymptotic properties. Originally, GSE was proposed to estimate long memory time series models in a semiparametric way, in either stationary or nonstationary cases. We extend GSE from time series to anisotropic ISRFs observed on a two-dimensional lattice, which include isotropic fractional Brownian fields (FBFs) as special cases; these have been employed to describe many physical spatial behaviours. The GSE extended to ISRFs is consistent and has a limiting normal distribution whose variance is independent of any unknown parameters as the sample size goes to infinity, under conditions we specify in this paper. We conduct a computational simulation to compare its performance with that of an alternative estimator defined on the spatial domain.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:toh:dssraa:136&r=ecm
  7. By: Alena Skolkova
    Abstract: Instrumental variables (IV) are commonly applied for identification of treatment effects and subsequent policy evaluation. The use of many informative instruments improves the estimation accuracy. However, dealing with high-dimensional sets of instrumental variables of unknown strength may be complicated and requires model selection or regularization of the first-stage regression. Currently, lasso is established as one of the most popular regularization techniques relying on the assumption of approximate sparsity. I investigate the relative performance of the lasso and elastic-net estimators for fitting the first stage as part of IV estimation. As elastic-net includes a ridge-type penalty in addition to a lasso-type penalty, it generally improves upon lasso in finite samples when correlations among the instrumental variables are not negligible. I show that IV estimators based on the lasso and elastic-net first-stage estimates can be asymptotically equivalent. Via a Monte Carlo study I demonstrate the robustness of the sample-split elastic-net IV estimator to deviations from approximate sparsity, and to correlation among possibly high-dimensional instruments. Finally, I provide an empirical example that demonstrates potential improvement in estimation accuracy gained by the use of IV estimators based on elastic-net.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp759&r=ecm
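The first-stage idea — regularize the regression of the endogenous variable on many instruments, then use the fitted values as the instrument — can be sketched with a textbook coordinate-descent elastic net. Everything below (the data-generating design, tuning values, variable names) is an illustrative assumption, not the author's estimator, and the sample splitting the paper recommends is omitted for brevity.

```python
import numpy as np

def elastic_net(X, y, alpha=0.1, l1_ratio=0.5, n_iter=200):
    """Plain coordinate descent for (1/2n)||y - Xb||^2
    + alpha * (l1_ratio * ||b||_1 + 0.5 * (1 - l1_ratio) * ||b||^2)."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]          # partial residual
            rho = X[:, j] @ r_j / n
            z = col_sq[j] + alpha * (1 - l1_ratio)
            b[j] = np.sign(rho) * max(abs(rho) - alpha * l1_ratio, 0) / z
    return b

# toy IV design: many correlated instruments Z, one endogenous regressor x
rng = np.random.default_rng(1)
n, p = 500, 30
Z = rng.normal(size=(n, p)) + rng.normal(size=(n, 1))    # common factor -> correlated columns
u = rng.normal(size=n)                                   # structural error
x = Z[:, :5] @ np.full(5, 0.4) + u + rng.normal(size=n)  # only 5 instruments matter
y = 1.0 * x + u                                          # true effect = 1.0

b1 = elastic_net(Z, x)               # regularized first stage (columns not standardized, for brevity)
x_hat = Z @ b1                       # fitted values serve as the instrument
beta_iv = (x_hat @ y) / (x_hat @ x)  # second-stage IV estimate
```

OLS of y on x is biased upward here because x contains u; the fitted-instrument estimate should land near the true coefficient of 1.0.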
  8. By: Amandeep Singh; Ye Liu; Hema Yoganarasimhan
    Abstract: Choice Modeling is at the core of many economics, operations, and marketing problems. In this paper, we propose a fundamental characterization of choice functions that encompasses a wide variety of extant choice models. We demonstrate how nonparametric estimators like neural nets can easily approximate such functionals and overcome the curse of dimensionality that is inherent in the non-parametric estimation of choice functions. We demonstrate through extensive simulations that our proposed functionals can flexibly capture underlying consumer behavior in a completely data-driven fashion and outperform traditional parametric models. As demand settings often exhibit endogenous features, we extend our framework to incorporate estimation under endogenous features. Further, we also describe a formal inference procedure to construct valid confidence intervals on objects of interest like price elasticity. Finally, to assess the practical applicability of our estimator, we utilize a real-world dataset from S. Berry, Levinsohn, and Pakes (1995). Our empirical analysis confirms that the estimator generates realistic and comparable own- and cross-price elasticities that are consistent with the observations reported in the existing literature.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07090&r=ecm
  9. By: Alena Skolkova
    Abstract: Model averaging is an increasingly popular alternative to model selection. Ridge regression serves a similar purpose to model averaging, i.e. the minimization of mean squared error through shrinkage, though in a different way. In this paper, we propose the ridge-regularized modifications of Mallows model averaging (Hansen, 2007, Econometrica, 75) and heteroskedasticity-robust Mallows model averaging (Liu & Okui, 2013, The Econometrics Journal, 16) to leverage the capabilities of averaging and ridge regularization simultaneously. Via a simulation study, we examine the finite-sample improvements obtained by replacing least squares with ridge regression. Ridge-based model averaging is especially useful when one deals with sets of moderately to highly correlated predictors because the underlying ridge regression accommodates correlated predictors without blowing up estimation variance. A toy theoretical example shows that the relative reduction of mean squared error is increasing with the strength of the correlation. We also demonstrate the superiority of the ridge-regularized modifications via empirical examples focused on wages and economic growth.
    Keywords: linear regression, shrinkage, model averaging, ridge regression, Mallows criterion
    JEL: C21 C52
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:cer:papers:wp758&r=ecm
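The combination of Mallows-type averaging with ridge-fitted candidate models can be illustrated with two nested candidates, choosing the averaging weight on a grid to minimize a penalized residual sum of squares. This is a stylized numpy sketch: the design, the grid search, and the rough plug-in error variance are my assumptions, not the paper's implementation.

```python
import numpy as np

def ridge_hat(X, lam):
    """Ridge hat matrix H = X (X'X + lam I)^{-1} X'."""
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

rng = np.random.default_rng(2)
n = 200
X = rng.normal(size=(n, 8))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)  # highly correlated pair of predictors
y = X[:, :4] @ np.array([1.0, 0.5, 0.3, 0.2]) + rng.normal(size=n)

# two nested candidate models, each fitted by ridge instead of least squares
H1 = ridge_hat(X[:, :4], lam=1.0)
H2 = ridge_hat(X, lam=1.0)
sigma2 = np.sum((y - H2 @ y) ** 2) / (n - 8)  # rough plug-in error variance from the largest model

# Mallows-type criterion: penalized RSS of the averaged fit, minimized over the weight grid
grid = np.linspace(0, 1, 101)
crits = []
for w in grid:
    Hw = w * H1 + (1 - w) * H2
    crits.append(np.sum((y - Hw @ y) ** 2) + 2 * sigma2 * np.trace(Hw))
w_star = grid[int(np.argmin(crits))]
y_avg = (w_star * H1 + (1 - w_star) * H2) @ y  # averaged fitted values
```

With more than two candidates, the weight search becomes a quadratic program over the simplex rather than a one-dimensional grid.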
  10. By: Viet Hoang Dinh; Didier Nibbering; Benjamin Wong
    Abstract: We show how random subspace methods can be adapted to estimating local projections with many controls. Random subspace methods have their roots in the machine learning literature and are implemented by averaging over regressions estimated over different combinations of subsets of these controls. We document three key results: (i) Our approach can successfully recover the impulse response function in a Monte Carlo exercise where we simulate data from a real business cycle model with fiscal foresight. (ii) Our results suggest that random subspace methods are more accurate than factor models if the underlying large data set has a factor structure similar to typical macroeconomic data sets such as FRED-MD. (iii) Our approach leads to differences in the estimated impulse response functions relative to standard methods when applied to two widely-studied empirical applications.
    Keywords: Local Projections, Random Subspace, Impulse Response Functions, Large Data Sets
    JEL: C22 E32
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:een:camaaa:2023-34&r=ecm
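The averaging-over-subsets idea can be sketched directly: estimate the local projection many times, each with a random subset of the controls, and average the shock coefficients. The toy design below (an AR(1) outcome, an observed shock, irrelevant controls) is mine, not the authors'.

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 300, 40
shock = rng.normal(size=T)
controls = rng.normal(size=(T, K))          # large control set (here irrelevant noise)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.5 * shock[t] + 0.1 * rng.normal()

def lp_coef(h, cols):
    """Local projection of y_{t+h} on shock_t and a subset of the controls."""
    Xh = np.column_stack([np.ones(T - h), shock[:T - h], controls[:T - h][:, cols]])
    return np.linalg.lstsq(Xh, y[h:], rcond=None)[0][1]

def rs_lp(h, n_draws=100, subset_size=5):
    """Random subspace LP: average the shock coefficient over random control subsets."""
    draws = [lp_coef(h, rng.choice(K, size=subset_size, replace=False))
             for _ in range(n_draws)]
    return float(np.mean(draws))

irf = [rs_lp(h) for h in range(6)]  # averaged impulse response at horizons 0..5
```

In this design the true response at horizon h is 0.5 * 0.8^h, so the averaged estimates should start near 0.5 and decay.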
  11. By: Andrii Babii; Ryan T. Ball; Eric Ghysels; Jonas Striaukas
    Abstract: The paper uses structured machine learning regressions for nowcasting with panel data consisting of series sampled at different frequencies. Motivated by the problem of predicting corporate earnings for a large cross-section of firms with macroeconomic, financial, and news time series sampled at different frequencies, we focus on the sparse-group LASSO regularization which can take advantage of the mixed frequency time series panel data structures. Our empirical results show the superior performance of our machine learning panel data regression models over analysts' predictions, forecast combinations, firm-specific time series regression models, and standard machine learning methods.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.02673&r=ecm
  12. By: Bram van Os (Erasmus University Rotterdam)
    Abstract: We present a comprehensive framework for constructing dynamic density models by combining optimization with concepts from information theory. Specifically, we propose to recursively update a time-varying conditional density by maximizing the log-likelihood contribution of the latest observation subject to a Kullback-Leibler divergence (KLD) regularization centered at the one-step ahead predicted density. The resulting Relative Entropy Adaptive Density (READY) update has attractive optimality properties, is reparametrization invariant and can be viewed as an intuitive regularized estimator of the pseudo-true density. Popular existing models, such as the ARMA(1, 1) and GARCH(1, 1), can be retrieved as special cases. Furthermore, we show that standard score-driven models with inverse Fisher scaling can be derived as convenient local approximations of the READY update. Empirical usefulness is illustrated by the modeling of employment growth and asset volatility.
    Date: 2023–06–29
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20230037&r=ecm
  13. By: Anders Rønn-Nielsen (Center for Statistics, Department of Finance, Copenhagen Business School); Dorte Kronborg (Center for Statistics, Department of Finance, Copenhagen Business School); Mette Asmild (Department of Food and Resource Economics, University of Copenhagen)
    Abstract: Permutation techniques, where one recomputes the test statistic over permutations of the data, have a long history in statistics and have become increasingly useful as the availability of computational power has increased. Until now, no permutation tests have been available for examining returns-to-scale assumptions, or for testing common production possibility sets, when analysing productivity. We develop three novel tests based on permutations of the observations. The first is a test for constant returns to scale. The other two are, respectively, tests for frontier differences and for whether the production possibility sets are nested. All tests are based on data envelopment analysis (DEA) estimates of efficiencies and are easily implementable. We show that our suggested permutations of the observations satisfy the necessary randomisation assumptions, and thereby that the sizes of the proposed tests are controlled. The advantages of permutation tests are that they are reliable even for relatively small samples and that their size can generally be controlled upwards. We further add a lower bound showing that the proposed tests are very close to being exact. Finally, we show that our tests are consistent and illustrate the rate of convergence in simulation studies.
    Keywords: Permutation tests, Returns to scale, Comparison of production frontiers, Data envelopment analysis (DEA), Size, Consistency.
    JEL: C12 C13 C15 C18 D24 C67
    Date: 2022–08
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2022_05&r=ecm
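The permutation principle behind these tests — recompute the statistic over relabelings of the observations — can be illustrated with a generic two-sample example. The paper's actual tests permute DEA efficiency estimates; here a simple difference in means stands in for that statistic, purely for illustration.

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sample permutation test: recompute |mean(a) - mean(b)|
    over random relabelings of the pooled sample."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([a, b])
    n_a = len(a)
    observed = abs(a.mean() - b.mean())
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(perm[:n_a].mean() - perm[n_a:].mean()) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # finite-sample valid p-value

rng = np.random.default_rng(4)
same = permutation_test(rng.normal(0, 1, 40), rng.normal(0, 1, 40))  # no group difference
diff = permutation_test(rng.normal(0, 1, 40), rng.normal(1, 1, 40))  # shifted second group
```

The `(count + 1) / (n_perm + 1)` convention is what makes the randomized p-value valid at any sample size, which is the small-sample reliability the abstract emphasizes.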
  14. By: del Barrio Castro, Tomás; Osborn, Denise R.
    Abstract: Seasonality is pervasive across a wide range of economic time series and it substantially complicates the analysis of unit root non-stationarity in such series. This paper reviews recent contributions to the literature on non-stationary seasonal processes, focussing on periodically integrated (PI) and seasonally integrated (SI) processes. Whereas an SI process captures seasonal non-stationarity essentially through an annual lag, a PI process has (a restricted form of) seasonally-varying autoregressive coefficients. The fundamental properties of both types of process are compared, noting in particular that a simple SI process observed S times a year has S unit roots, in contrast to the single unit root of a PI process. Indeed, for S > 2 and even (such as processes observed quarterly or monthly), an SI process has a pair of complex-valued unit roots at each seasonal frequency except the Nyquist frequency, where a single real root applies. Consequently, recent literature concerned with testing the unit roots implied by SI processes employs complex-valued unit root processes, and these are discussed in some detail. A key feature of the discussion is to show how the demodulator operator can be used to convert a unit root process at a seasonal frequency to a conventional zero-frequency unit root process, thereby enabling the well-known properties of the latter to be exploited. Further, circulant matrices are introduced and it is shown how they are employed in theoretical analyses to capture the repetitive nature of seasonal processes. Discriminating between SI and PI processes requires care, since testing for unit roots at seasonal frequencies may lead to a PI process (erroneously) appearing to have an SI form, while an application to a monthly US industrial production series illustrates how these types of seasonal non-stationarity can be distinguished in practice. Although univariate processes are discussed, the methods considered in the paper can be used to analyze cointegration, including cointegration across different frequencies.
    Keywords: Periodic Integration, Seasonal Integration, Vector of Seasons, Circulant Matrices, Demodulator Operator, Industrial Production.
    JEL: C32
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:117935&r=ecm
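The demodulator property the survey highlights is easy to verify numerically: if x_t has a complex-valued unit root at frequency w, so that (1 - e^{iw}L) x_t = eps_t, then y_t = e^{-iwt} x_t satisfies y_t - y_{t-1} = e^{-iwt} eps_t, i.e. an ordinary zero-frequency unit root process. A short numpy check (the quarterly frequency w = pi/2 is chosen for concreteness):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 400
w = np.pi / 2                       # seasonal frequency (quarterly)
e = rng.normal(size=T)
x = np.zeros(T, dtype=complex)
for t in range(1, T):
    x[t] = np.exp(1j * w) * x[t - 1] + e[t]   # complex unit root at frequency w

t_idx = np.arange(T)
ydemod = np.exp(-1j * w * t_idx) * x          # apply the demodulator operator
increments = ydemod[1:] - ydemod[:-1]
target = np.exp(-1j * w * t_idx[1:]) * e[1:]
# the demodulated series is a zero-frequency random walk:
# its first differences equal the rotated innovations e^{-iwt} eps_t
ok = np.allclose(increments, target)
```

The one-line algebra: y_t - y_{t-1} = e^{-iwt}(x_t - e^{iw}x_{t-1}) = e^{-iwt} eps_t.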
  15. By: James A. Duffy; Sophocles Mavroeidis; Sam Wycherley
    Abstract: This paper studies a class of multivariate threshold autoregressive models, known as censored and kinked structural vector autoregressions (CKSVAR), which are notably able to accommodate series that are subject to occasionally binding constraints. We develop a set of sufficient conditions for the processes generated by a CKSVAR to be stationary, ergodic, and weakly dependent. Our conditions relate directly to the stability of the deterministic part of the model, and are therefore less conservative than those typically available for general vector threshold autoregressive (VTAR) models. Though our criteria refer to quantities, such as refinements of the joint spectral radius, that cannot feasibly be computed exactly, they can be approximated numerically to a high degree of precision. Our results also permit us to provide a treatment of unit roots and cointegration in the CKSVAR, for the case where the model is configured so as to generate linear cointegration.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06190&r=ecm
  16. By: Jamie Cross (University of Melbourne); Lennart Hoogerheide (Vrije Universiteit Amsterdam); Paul Labonne (Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam)
    Abstract: Detecting heterogeneity within a population is crucial in many economic and financial applications. Econometrically, this requires a credible determination of multimodality in a given data distribution. We propose a straightforward yet effective technique for mode inference in discrete data distributions which involves fitting a mixture of novel shifted-Poisson distributions. The credibility and utility of our proposed approach is demonstrated through empirical investigations on datasets pertaining to loan default risk and inflation expectations.
    Keywords: Bayesian Inference, Mixture Models, Mode Inference, Multimodality, Shifted-Poisson.
    JEL: C11 C25 C81 C82 E00 D00
    Date: 2023–06–29
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20230038&r=ecm
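The fitting step can be sketched with a standard EM algorithm for a two-component Poisson mixture, followed by locating the local maxima of the fitted pmf. This simplification fixes the shifts at zero, unlike the paper's shifted-Poisson components, and the data are synthetic.

```python
import numpy as np
from math import lgamma

def pois_pmf(k, lam):
    """Poisson pmf evaluated on an integer array k."""
    return np.exp(k * np.log(lam) - lam - np.array([lgamma(x + 1) for x in k]))

def em_poisson_mixture(data, n_iter=200):
    """EM for a two-component Poisson mixture (shifts fixed at zero)."""
    lam = np.array([data.mean() * 0.5, data.mean() * 1.5])  # spread initial rates
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior component membership probabilities
        dens = np.column_stack([pi[c] * pois_pmf(data, lam[c]) for c in range(2)])
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and component rates
        pi = resp.mean(axis=0)
        lam = resp.T @ data / resp.sum(axis=0)
    return pi, lam

rng = np.random.default_rng(8)
data = np.concatenate([rng.poisson(2, 300), rng.poisson(12, 300)])  # bimodal counts
pi, lam = em_poisson_mixture(data)

# mode inference: interior local maxima of the fitted mixture pmf
grid = np.arange(0, 30)
pmf = pi[0] * pois_pmf(grid, lam[0]) + pi[1] * pois_pmf(grid, lam[1])
modes = [int(k) for k in grid[1:-1] if pmf[k] > pmf[k - 1] and pmf[k] > pmf[k + 1]]
```

Detecting two well-separated local maxima here is the point-estimate analogue of the multimodality the paper assesses with full Bayesian credibility statements.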
  17. By: Beatrice Foroni; Luca Merlo; Lea Petrella
    Abstract: The role of cryptocurrencies within the financial system has been expanding rapidly in recent years among investors and institutions. It is therefore crucial to investigate this phenomenon and develop statistical methods able to capture the interrelationships among cryptocurrencies, their links with other global systems, and, at the same time, their serial heterogeneity. For these reasons, this paper introduces hidden Markov regression models for jointly estimating quantiles and expectiles of cryptocurrency returns using regime-switching copulas. The proposed approach allows us to focus on extreme returns and describe their temporal evolution by introducing time-dependent coefficients evolving according to a latent Markov chain. Moreover, to model their time-varying dependence structure, we consider elliptical copula functions defined by state-specific parameters. Maximum likelihood estimates are obtained via an Expectation-Maximization algorithm. The empirical analysis investigates the relationship between daily returns of five cryptocurrencies and major world market indices.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06400&r=ecm
  18. By: Jiyuan Huang (University of Zurich; Swiss Finance Institute); Per Östberg (University of Zurich; Swiss Finance Institute)
    Abstract: This paper studies how to incorporate observable factors in difference-in-differences and documents their empirical relevance. We show that, even under random assignment, directly adding factors with unit-specific loadings to the difference-in-differences estimation results in biased estimates. This bias, which we term the “bad time control problem”, arises when the treatment effect covaries with the factor variation. Researchers often control for factor structures by using: (i) unit time trends, (ii) pre-treatment covariates interacted with a time trend, and (iii) group-time dummies. We show that all these methods suffer from the bad time control problem and/or omitted factor bias. We propose two solutions to the bad time control problem. To evaluate the relevance of the factor structure, we study US housing returns. Adding macroeconomic factors shows that factors have additional explanatory power and that estimated factor loadings differ systematically across geographic areas. This results in substantially altered treatment effects.
    Keywords: Difference-in-differences, Factor models, House prices
    JEL: C22 C54 G28 R30
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp2355&r=ecm
  19. By: Dargel, Lukas; Thomas-Agnan, Christine
    Abstract: The interpretation of regression models with compositional vectors as dependent and/or independent variables has been approached from different perspectives. The first approach that appeared in the literature is based on computing the change in the expected value of the dependent variable in coordinate space due to a finite linear change in the independent variable of interest. Considering the fact that these models are non-linear with respect to classical operations of the real space, another approach has been proposed based on infinitesimal increments or derivatives understood in a simplex sense, leading to elasticity or semi-elasticity interpretations in the original space. After briefly reviewing these two points of view and illustrating the second one on a real data set, we show that some functions of elasticities or semi-elasticities are constant throughout the sample observations and are therefore natural parameters for the interpretation of these models. We derive approximations of share ratio variations and link them to these parameters. We use a real data set to illustrate each type of interpretation in detail.
    JEL: C39 C69 C87
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:128262&r=ecm
  20. By: Dargel, Lukas; Thomas-Agnan, Christine
    Abstract: This article sheds light on the relationship between compositional data (CoDa) regression models and multiplicative competitive interaction (MCI) models, which are two approaches for modeling shares. We demonstrate that MCI models are special cases of CoDa models and that a reparameterization links both. Recognizing this relation offers mutual benefits for the CoDa and MCI literature, each with its own rich tradition. The CoDa tradition, with its rigorous mathematical foundation, provides additional theoretical guarantees and mathematical tools that we apply to improve the estimation of MCI models. Simultaneously, the MCI model emerged from almost a century-long tradition in marketing research that may enrich the CoDa literature. One aspect is the grounding of the MCI specification in intuitive assumptions on the behavior of individuals. From this basis, the MCI tradition also provides credible justifications for heteroskedastic error structures -- an idea we develop further and that is relevant to many CoDa models beyond the marketing context. Additionally, MCI models have always been interpreted in terms of elasticities, a method only recently revealed in CoDa. Regarding this interpretation, the change from the MCI to the CoDa perspective leads to a decomposition of the influence of the explanatory variables into contributions from relative and absolute information. This decomposition also opens the door for testing hypotheses about the importance of each information type.
    JEL: C01 C39 C50 M31
    Date: 2023–07–20
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:128267&r=ecm
  21. By: Josep Lluís Carrion-i-Silvestre (AQR-IREA Research Group. Departament d’Econometria, Estadística i Economia Aplicada. Universitat de Barcelona. Av. Diagonal, 690. 08034 Barcelona. Spain.); Andreu Sansó (Department d’Economia Aplicada. Universitat de les Illes Balears and MOTIBO Research Group, Balearic Islands Health Research Institute (Idisba).)
    Abstract: This paper focuses on testing the stability of the unconditional variance when the stochastic processes may have heavy-tailed distributions. Finite sample distributions that depend both on the effective sample size and the tail index are approximated using Extreme Value distributions and summarized using response surfaces. A modification of the Iterative Cumulative Sum of Squares (ICSS) algorithm to detect the presence of multiple structural breaks is suggested, adapting the algorithm to the tail index of the underlying distribution of the process. We apply the algorithm to eighty absolute log-exchange rate returns, finding evidence of (i) infinite variance in about a third of the cases, (ii) finite changing unconditional variance for another third of the time series - totalling about one hundred structural breaks - and (iii) finite constant unconditional variance for the remaining third of the time series.
    Keywords: CUMSUMQ test, Unconditional variance, Multiple structural changes, Heavy tails, Generalized Extreme Value distribution.
    JEL: C12 C22
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:ira:wpaper:202309&r=ecm
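The CUSUM-of-squares statistic at the heart of the test is simple to compute: the maximal deviation of the normalized cumulative sum of squared observations from its uniform benchmark. The numpy sketch below contrasts stable and variance-breaking synthetic series; the Extreme Value critical values, tail-index adjustment, and the full ICSS iteration from the paper are omitted.

```python
import numpy as np

def cusumq(x):
    """CUSUM-of-squares statistic: max_k |C_k / C_T - k / T|,
    where C_k is the cumulative sum of squared observations."""
    T = len(x)
    C = np.cumsum(x ** 2)
    k = np.arange(1, T + 1)
    return np.max(np.abs(C / C[-1] - k / T))

rng = np.random.default_rng(6)
stable = rng.normal(0, 1, 1000)                       # constant unconditional variance
breaking = np.concatenate([rng.normal(0, 1, 500),     # variance triples at mid-sample
                           rng.normal(0, 3, 500)])
s0, s1 = cusumq(stable), cusumq(breaking)             # s1 should far exceed s0
```

Under heavy tails the null distribution of this statistic shifts, which is exactly why the paper replaces standard critical values with tail-index-dependent Extreme Value approximations.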
  22. By: Lin William Cong; Guanhao Feng; Jingyu He; Junye Li
    Abstract: Sparse models, though long preferred and pursued by social scientists, can be ineffective or unstable relative to large models, for example, in economic predictions (Giannone et al., 2021). To achieve sparsity for economic interpretation while exploiting big data for superior empirical performance, we introduce a general framework that jointly clusters observations (via new decision trees) and locally selects variables (with Bayesian priors) for modeling panel data with potential grouped heterogeneity. We derive analytical marginal likelihoods as global split criteria in our Bayesian Clustering Model (BCM), to incorporate economic guidance, address parameter and model uncertainties, and prevent overfitting. We apply BCM to asset pricing and estimate uncommon-factor models for data-driven asset clusters and macroeconomic regimes. We find (i) cross-sectional heterogeneity linked to (non-linear interactions of) return volatility, size, and value, (ii) structural changes in factor relevance predicted by market volatility and valuation, and (iii) MKTRF and SMB as common factors and multiple uncommon factors across characteristics-managed-market-timed clusters. BCM helps explain volatility- or size-related anomalies, exploit within-group tests, and mitigate the “factor zoo” problem. Overall, BCM outperforms benchmark common-factor models in pricing and investments in U.S. equities, e.g., attaining out-of-sample cross-sectional R2s exceeding 25% for multiple clusters and tripling the Sharpe ratio of tangency portfolios built from ME-B/M 5 × 5 portfolios.
    JEL: C11 C38 G11 G12
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31424&r=ecm
  23. By: Zhaoxing Gao; Ruey S. Tsay
    Abstract: This paper proposes a novel dynamic forecasting method using a new supervised Principal Component Analysis (PCA) when a large number of predictors are available. The new supervised PCA provides an effective way to bridge the gap between the predictors and the target variable of interest by scaling and combining the predictors and their lagged values, resulting in effective dynamic forecasts. Unlike the traditional diffusion-index approach, which does not learn the relationships between the predictors and the target variable before conducting PCA, we first re-scale each predictor according to its significance in forecasting the target variable in a dynamic fashion, and a PCA is then applied to the re-scaled and additive panel, which establishes a connection between the predictability of the PCA factors and the target variable. Furthermore, we also propose to use penalized methods such as the LASSO approach to select the significant factors that have superior predictive power over the others. Theoretically, we show that our estimators are consistent and outperform the traditional methods in prediction under some mild conditions. We conduct extensive simulations to verify that the proposed method produces satisfactory forecasting results and outperforms most of the existing methods using the traditional PCA. A real example of predicting U.S. macroeconomic variables using a large number of predictors shows that our method fares better than most existing alternatives. The proposed method thus provides a comprehensive and effective approach for dynamic forecasting in high-dimensional data analysis.
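    A minimal sketch of the supervised-PCA idea described above, under stated assumptions: each predictor is weighted by the absolute slope from a univariate regression on the target (a static stand-in for the paper's dynamic significance scaling, which also involves lagged values), and PCA is then applied to the re-weighted panel. Function and variable names are illustrative:

```python
import numpy as np

def supervised_pca_forecast(X, y, n_factors=2):
    """Hedged sketch of a supervised-PCA fit.

    Assumption: each predictor's 'significance' is proxied by the
    absolute univariate regression slope on y; the paper's dynamic
    weighting and lag handling are not reproduced here.
    """
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    Xc = X - X.mean(0)
    yc = y - y.mean()
    # univariate slopes as supervision weights
    betas = (Xc * yc[:, None]).sum(0) / (Xc ** 2).sum(0)
    Xs = Xc * np.abs(betas)               # re-scaled panel
    # PCA via SVD of the re-scaled panel
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]  # estimated factors
    coef, *_ = np.linalg.lstsq(F, yc, rcond=None)
    fitted = F @ coef + y.mean()
    return fitted, F
```

    In the paper a LASSO step would additionally select among the extracted factors; here the first `n_factors` are used directly.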
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.07689&r=ecm
  24. By: Xiyue Han; Alexander Schied
    Abstract: We consider the problem of estimating the roughness of the volatility in a stochastic volatility model that arises as a nonlinear function of fractional Brownian motion with drift. To this end, we introduce a new estimator that measures the so-called roughness exponent of a continuous trajectory, based on discrete observations of its antiderivative. We provide conditions on the underlying trajectory under which our estimator converges in a strictly pathwise sense. Then we verify that these conditions are satisfied by almost every sample path of fractional Brownian motion (with drift). As a consequence, we obtain strong consistency theorems in the context of a large class of rough volatility models. Numerical simulations show that our estimation procedure performs well after passing to a scale-invariant modification of our estimator.
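    The paper's estimator works on discrete observations of the antiderivative of the trajectory; the sketch below instead uses a generic change-of-frequency estimator applied directly to the path, included only to illustrate what a roughness (Hurst-type) exponent measures. It relies on the fact that for fractional Brownian motion the quadratic variation at twice the sampling step scales like 2^{2H-1}:

```python
import numpy as np

def roughness_exponent(path):
    """Crude scaling estimator of the roughness (Hurst) exponent H.

    NOT the antiderivative-based estimator of the paper: this is a
    generic change-of-frequency sketch. For fBm, quadratic variation
    at step 2h versus step h scales like 2**(2H - 1).
    """
    x = np.asarray(path, float)
    qv1 = np.sum(np.diff(x) ** 2)        # quadratic variation, step h
    qv2 = np.sum(np.diff(x[::2]) ** 2)   # quadratic variation, step 2h
    return 0.5 * (np.log2(qv2 / qv1) + 1.0)
```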
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.02582&r=ecm
  25. By: Pustejovsky, James E; Chen, Man
    Abstract: Meta-analyses of educational research findings frequently involve statistically dependent effect size estimates. Meta-analysts have often addressed dependence issues using ad hoc approaches that involve modifying the data to conform to the assumptions of models for independent effect size estimates, such as aggregating estimates to obtain one summary estimate per study, conducting separate analyses of distinct subgroups of estimates, or combinations thereof. We demonstrate that these ad hoc approaches correspond exactly to certain multivariate models for dependent effect sizes. Specifically, we describe classes of multivariate random effects models that have likelihoods equivalent to those of models for effect sizes that have been averaged by study, classified into subgroups, or both. The equivalence also applies to robust variance estimation methods.
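    The "aggregate to one estimate per study" approach the abstract refers to can be sketched as follows; the equicorrelation value rho = 0.8 is a conventional working assumption, not a figure from the paper, and the function name is illustrative:

```python
import numpy as np

def aggregate_study(effects, variances, rho=0.8):
    """Average k dependent effect sizes from one study.

    The variance of the average is computed under an assumed common
    correlation rho between the sampling errors (rho = 0.8 is a
    conventional working value, not taken from the paper).
    """
    d = np.asarray(effects, float)
    v = np.asarray(variances, float)
    k = d.size
    dbar = d.mean()
    s = np.sqrt(v)
    # sampling covariance matrix under equicorrelation rho
    cov = rho * np.outer(s, s)
    np.fill_diagonal(cov, v)
    vbar = cov.sum() / k ** 2            # Var of the simple average
    return dbar, vbar
```

    The aggregated pair (dbar, vbar) can then be fed to a standard model for independent effect sizes, which is exactly the ad hoc practice the paper shows to be equivalent to a particular multivariate random effects model.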
    Date: 2023–07–27
    URL: http://d.repec.org/n?u=RePEc:osf:metaar:pw54r&r=ecm
  26. By: Davide Brignone; Alessandro Franconi; Marco Mazzali
    Abstract: External-instrument identification leads to biased responses when the shock is non-invertible and measurement error is present. We propose to use this identification strategy in a structural Dynamic Factor Model, which we call Proxy DFM. In a simulation analysis, we show that the Proxy DFM always successfully retrieves the true impulse responses, while the Proxy SVAR systematically fails to do so when the model is misspecified, omits relevant information, or is contaminated by measurement error. In an application to US monetary policy, the Proxy DFM shows that a tightening shock is unequivocally contractionary, with deteriorations in domestic demand, labor, credit, housing, exchange, and financial markets. This holds true for all raw instruments available in the literature. The variance decomposition analysis highlights the importance of monetary policy shocks in explaining economic fluctuations, albeit at different horizons.
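    The external-instrument (proxy) identification step common to Proxy SVARs and the Proxy DFM can be sketched as follows: the covariance between the reduced-form innovations and the instrument is proportional to the impact column of the instrumented shock. This is a generic textbook sketch, not the paper's implementation:

```python
import numpy as np

def proxy_impact_column(u, z):
    """Impact column of one structural shock via an external instrument.

    u : (T, n) reduced-form innovations; z : (T,) instrument correlated
    with the target shock and uncorrelated with the others. Cov(u, z)
    identifies the impact vector up to scale; here it is normalized so
    the first variable responds by one unit. Illustrative names only.
    """
    u = np.asarray(u, float)
    z = np.asarray(z, float)
    cov = (u - u.mean(0)).T @ (z - z.mean()) / len(z)
    return cov / cov[0]                  # unit-effect normalization
```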
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.06145&r=ecm

This nep-ecm issue is ©2023 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.