nep-ecm New Economics Papers
on Econometrics
Issue of 2021‒03‒08
twenty-six papers chosen by
Sune Karlsson
Örebro universitet

  1. Moment tests of independent components By Dante Amengual; Gabriele Fiorentini; Enrique Sentana
  2. Factorisable Multitask Quantile Regression By Chao, Shih-Kang; Härdle, Wolfgang Karl; Yuan, Ming
  3. Revisiting Estimation Methods for Spatial Econometric Interaction Models By Dargel, Lukas
  4. Detecting possibly frequent change-points: Wild Binary Segmentation 2 and steepest-drop model selection By Fryzlewicz, Piotr
  5. Time-varying state correlations in state space models and their estimation via indirect inference By Caterina Schiavoni; Siem Jan Koopman; Franz Palm; Stephan Smeekes; Jan van den Brakel
  6. Robust Estimation of Integrated Volatility By Li, M. Z.; Linton, O.
  7. Cross-Fitting and Averaging for Machine Learning Estimation of Heterogeneous Treatment Effects By Jacob, Daniel
  8. Permutation Tests at Nonparametric Rates By Marinho Bertanha; EunYi Chung
  9. Inference of breakpoints in high-dimensional time series By Chen, Likai; Wang, Weining; Wu, Wei Biao
  10. Debiased Kernel Methods By Rahul Singh
  11. Doubly-Adaptive Thompson Sampling for Multi-Armed and Contextual Bandits By Maria Dimakopoulou; Zhimei Ren; Zhengyuan Zhou
  12. Approximate Bayes factors for unit root testing By Magris Martin; Iosifidis Alexandros
  13. Estimation and Inference by Stochastic Optimization: Three Examples By Jean-Jacques Forneron; Serena Ng
  14. Estimation of Heuristic Switching in Behavioral Macroeconomic Models By Kukacka, Jiri; Sacht, Stephen
  15. Data Analytics Driven Controlling: bridging statistical modeling and managerial intuition By Khowaja, Kainat; Saef, Danial; Sizov, Sergej; Härdle, Wolfgang Karl
  16. Concordance and value information criteria for optimal treatment decision By Shi, Chengchun; Song, R; Lu, W
  17. A multilevel structural equation model for the interrelationships between multiple latent dimensions of childhood socio‐economic circumstances, partnership transitions and mid‐life health By Zhu, Yajing; Steele, Fiona; Moustaki, Irini
  18. The Gender Pay Gap Revisited with Big Data: Do Methodological Choices Matter? By Strittmatter, Anthony; Wunsch, Conny
  19. How wealthy are the rich? By Schulz, Jan; Milaković, Mishael
  20. State Heterogeneity Analysis of Financial Volatility Using High-Frequency Financial Data By Dohyun Chun; Donggyu Kim
  21. Overnight GARCH-Itô Volatility Models By Donggyu Kim; Yazhen Wang
  22. Asset Pricing Using Block-Cholesky GARCH and Time-Varying Betas By Stefano Grassi; Francesco Violante
  23. Monte Carlo Evaluation of Instrumental Variable Estimators (Monte-Carlo-Evaluation von Instrumentenvariablenschätzern) By Auer, Benjamin R.; Rottmann, Horst
  24. Design Flaw of the Synthetic Control Method By Kuosmanen, Timo; Zhou, Xun; Eskelinen, Juha; Malo, Pekka
  25. Nonlinear Impulse Response Function for Dichotomous Models By Quentin LAJAUNIE
  26. Cointegrated Solutions of Unit-Root VARs: An Extended Representation Theorem By Mario Faliva; Maria Grazia Zoia

  1. By: Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Gabriele Fiorentini (Università di Firenze and RCEA); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: We propose simple specification tests for independent component analysis and structural vector autoregressions with non-Gaussian shocks that check the normality of a single shock and the potential cross-sectional dependence among several of them. Our tests compare the integer (product) moments of the shocks in the sample with their population counterparts. Importantly, we explicitly consider the sampling variability resulting from using shocks computed with consistent parameter estimators. We study the finite sample size of our tests in extensive simulation exercises and discuss some bootstrap procedures. We also show that our tests have non-negligible power against a variety of empirically plausible alternatives.
    Keywords: Covariance, co-skewness, co-kurtosis, finite normal mixtures, normality tests, pseudo maximum likelihood estimators, structural vector autoregressions.
    JEL: C32 C46 C52
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2021_2102&r=all
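    The abstract describes tests that compare sample product moments of estimated shocks with their population counterparts under normality and independence. A minimal sketch of that idea follows; the shock matrix, the particular moments used (skewness, excess kurtosis, pairwise co-kurtosis) and the plain t-type scaling are illustrative assumptions, and the sketch ignores the parameter-estimation uncertainty that the paper explicitly accounts for.
      import numpy as np

      def moment_checks(eps):
          # eps: (T x K) matrix of estimated shocks; standardise each column first
          T, K = eps.shape
          z = (eps - eps.mean(0)) / eps.std(0, ddof=1)
          moments = {}
          for i in range(K):
              moments[f"skew_{i}"] = z[:, i] ** 3           # population value 0 under normality
              moments[f"exkurt_{i}"] = z[:, i] ** 4 - 3     # population value 0 under normality
              for j in range(i + 1, K):
                  moments[f"cokurt_{i}{j}"] = z[:, i] ** 2 * z[:, j] ** 2 - 1  # 0 under independence
          # each series has mean zero under the null; report a naive t-type statistic per moment
          return {k: np.sqrt(T) * v.mean() / v.std(ddof=1) for k, v in moments.items()}

      rng = np.random.default_rng(0)
      print(moment_checks(rng.standard_normal((500, 3))))   # all statistics should be moderate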
  2. By: Chao, Shih-Kang; Härdle, Wolfgang Karl; Yuan, Ming
    Abstract: A multivariate quantile regression model with a factor structure is proposed to study data with many responses of interest. The factor structure is allowed to vary with the quantile levels, which makes our framework more flexible than the classical factor models. The model is estimated with nuclear norm regularization in order to accommodate the high dimensionality of data, but the incurred optimization problem can only be efficiently solved in an approximate manner by off-the-shelf optimization methods. Such a scenario is often seen when the empirical risk is non-smooth or the numerical procedure involves expensive subroutines such as singular value decomposition. To ensure that the approximate estimator accurately estimates the model, non-asymptotic bounds on the error of the approximate estimator are established. For implementation, a numerical procedure that provably marginalizes the approximation error is proposed. The merits of our model and the proposed numerical procedures are demonstrated through Monte Carlo experiments and an application to finance involving a large pool of asset returns.
    Keywords: Factor model,quantile regression,non-asymptotic analysis,multivariate regression,nuclear norm regularization
    JEL: C13 C38 C61 G17
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020004&r=all
  3. By: Dargel, Lukas
    Abstract: Taking advantage of a generalization of the matrix formulation introduced by LeSage and Pace (2008), this article presents improvements in the computational performance and flexibility of three estimators of spatial econometric interaction models. By generalizing computational techniques for the evaluation of the likelihood function and also for the Hessian matrix the maximum likelihood estimator (MLE) achieves computation times that are not much longer than those of an ordinary least-squares (OLS) regression. The restructured likelihood also improves the performance of the Bayesian Markov chain Monte Carlo (MCMC) estimator considerably. Finally, the spatial two-stage least-squares (S2SLS) estimator presented in this article is the first one that exploits the efficiency gains of the matrix formulation. In addition to the computational improvements of the three estimation methods this article presents a new solution to the issue of defining the feasible parameter space that makes it possible to verify the consistency of the spatial econometric interaction model with a minimal computational burden. All of these developments indicate that the spatial econometric alternative to the traditional gravity model has become an increasingly mature option and should eventually be considered a standard modeling approach for origin-destination flow problems.
    Keywords: Origin-destination flows; Cross-sectional dependence; Maximum likelihood; Two-stage least-squares; Bayesian Markov chain Monte Carlo
    JEL: C01 C21 C63
    Date: 2021–02–24
    URL: http://d.repec.org/n?u=RePEc:tse:wpaper:125334&r=all
  4. By: Fryzlewicz, Piotr
    Abstract: Many existing procedures for detecting multiple change-points in data sequences fail in frequent-change-point scenarios. This article proposes a new change-point detection methodology designed to work well in both infrequent and frequent change-point settings. It is made up of two ingredients: one is “Wild Binary Segmentation 2” (WBS2), a recursive algorithm for producing what we call a ‘complete’ solution path to the change-point detection problem, i.e. a sequence of estimated nested models containing 0, …, T-1 change-points, where T is the data length. The other ingredient is a new model selection procedure, referred to as “Steepest Drop to Low Levels” (SDLL). The SDLL criterion acts on the WBS2 solution path, and, unlike many existing model selection procedures for change-point problems, it is not penalty-based, and only uses thresholding as a certain discrete secondary check. The resulting WBS2.SDLL procedure, combining both ingredients, is shown to be consistent, and to significantly outperform the competition in the frequent change-point scenarios tested. WBS2.SDLL is fast, easy to code and does not require the choice of a window or span parameter.
    Keywords: segmentation; break detection; jump detection; randomized algorithms; adaptive algorithms; multiscale methods; EP/L014246/1
    JEL: C1
    Date: 2020–12–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:103430&r=all
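    For orientation, the sketch below implements plain binary segmentation with the standard CUSUM contrast for changes in mean; WBS2 instead recursively draws random subintervals to build the complete solution path, and SDLL replaces the fixed threshold used here, so this is only a simplified stand-in, not the paper's procedure.
      import numpy as np

      def cusum_split(x):
          # location and size of the largest CUSUM contrast between left/right sample means
          n = len(x)
          s = np.cumsum(x)
          b = np.arange(1, n)
          stat = np.sqrt((n - b) / (n * b)) * s[b - 1] - np.sqrt(b / (n * (n - b))) * (s[-1] - s[b - 1])
          k = np.abs(stat).argmax()
          return k + 1, np.abs(stat[k])

      def binary_segmentation(x, threshold, start=0):
          # recursively split while the contrast exceeds the (fixed) threshold
          if len(x) < 2:
              return []
          b, stat = cusum_split(x)
          if stat < threshold:
              return []
          return (binary_segmentation(x[:b], threshold, start) + [start + b]
                  + binary_segmentation(x[b:], threshold, start + b))

      rng = np.random.default_rng(1)
      y = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100), rng.normal(0, 1, 100)])
      print(binary_segmentation(y, threshold=np.sqrt(2 * np.log(len(y)))))   # roughly [100, 200]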
  5. By: Caterina Schiavoni (Maastricht University); Siem Jan Koopman (Vrije Universiteit Amsterdam); Franz Palm (Maastricht University); Stephan Smeekes (Maastricht University); Jan van den Brakel (Maastricht University)
    Abstract: Statistics Netherlands uses a state space model to estimate Dutch unemployment from monthly labour force survey (LFS) series. More accurate estimates of this variable can be obtained by including auxiliary information in the model, such as the univariate administrative series of claimant counts. Legislative changes and economic crises may affect the relation between the survey-based and auxiliary series. This time-changing relationship is captured by a time-varying correlation parameter in the covariance matrix of the transition equation’s error terms. We treat the latter parameter as a state variable, which makes the state space model nonlinear and therefore renders its estimation by Kalman filtering and maximum likelihood infeasible. We therefore propose an indirect inference approach, employing cubic splines for the auxiliary model, to estimate the static parameters of the model, and a bootstrap filter method to estimate the time-varying correlation together with the other state variables. A Monte Carlo simulation study shows that the proposed methodology correctly estimates both the time-constant parameters and the state vector of the model. Empirically, we find that the financial crisis of 2008 triggered a deeper and more prolonged deviation between the survey-based and claimant counts series than the legislative change in 2015. Promptly detecting such changes, as our proposed method does, results in more realistic real-time unemployment estimates.
    Keywords: bootstrap filter, cubic splines, indirect inference, nonlinear state space, time-varying parameter, unemployment
    JEL: J64 C22 C32
    Date: 2021–02–24
    URL: http://d.repec.org/n?u=RePEc:tin:wpaper:20210020&r=all
  6. By: Li, M. Z.; Linton, O.
    Abstract: We introduce a new method to estimate the integrated volatility (IV) based on noisy high-frequency data. Our method employs the ReMeDI approach introduced by Li and Linton (2021a) to estimate the moments of the microstructure noise and thereby eliminate their influence, and the pre-averaging method to target the volatility parameter. The method is robust: it can be applied when the efficient price exhibits stochastic volatility and jumps, the observation times are random and endogenous, and the noise process is nonstationary, autocorrelated and dependent on the efficient price. We derive the limit distribution for the proposed estimators under infill asymptotics in a general setting. Our simulation and empirical studies demonstrate the robustness, accuracy and computational efficiency of our estimators compared to several alternatives recently proposed in the literature.
    Date: 2021–02–24
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:2115&r=all
  7. By: Jacob, Daniel
    Abstract: We investigate the finite sample performance of sample splitting, cross-fitting and averaging for the estimation of the conditional average treatment effect. Recently proposed methods, so-called meta-learners, make use of machine learning to estimate different nuisance functions and hence allow for fewer restrictions on the underlying structure of the data. To limit a potential overfitting bias that may result when using machine learning methods, cross-fitting estimators have been proposed. This includes the splitting of the data into different folds to reduce bias and averaging over folds to restore efficiency. To the best of our knowledge, it is not yet clear how exactly the data should be split and averaged. We employ a Monte Carlo study with different data generation processes and consider twelve different estimators that vary in sample-splitting, cross-fitting and averaging procedures. We investigate the performance of each estimator independently on four different meta-learners: the doubly-robust-learner, R-learner, T-learner and X-learner. We find that the performance of all meta-learners heavily depends on the procedure of splitting and averaging. The best performance in terms of mean squared error (MSE) among the sample split estimators can be achieved when applying cross-fitting plus taking the median over multiple different sample-splitting iterations. Some meta-learners exhibit a high variance when the lasso is included in the ML methods. Excluding the lasso decreases the variance and leads to robust and at least competitive results.
    Keywords: causal inference,sample splitting,cross-fitting,sample averaging,machine learning,simulation study
    JEL: C01 C14 C31 C63
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020014&r=all
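    As a rough illustration of one of the combinations discussed above (and not of the paper's twelve estimators), the sketch below cross-fits doubly-robust (AIPW-style) scores and then takes the median of the fitted CATE function over repeated sample-splitting iterations; the nuisance learners, fold counts, clipping and seeds are arbitrary choices, and each fold is assumed to contain both treated and control units.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
      from sklearn.model_selection import KFold

      def dr_scores(X, D, Y, n_folds=2, seed=0):
          # cross-fitted AIPW pseudo-outcomes: nuisances are fit on the training fold only
          psi = np.zeros(len(Y))
          for tr, te in KFold(n_folds, shuffle=True, random_state=seed).split(X):
              e = RandomForestClassifier(random_state=seed).fit(X[tr], D[tr]).predict_proba(X[te])[:, 1]
              e = np.clip(e, 0.01, 0.99)
              m1 = RandomForestRegressor(random_state=seed).fit(X[tr][D[tr] == 1], Y[tr][D[tr] == 1]).predict(X[te])
              m0 = RandomForestRegressor(random_state=seed).fit(X[tr][D[tr] == 0], Y[tr][D[tr] == 0]).predict(X[te])
              psi[te] = m1 - m0 + D[te] * (Y[te] - m1) / e - (1 - D[te]) * (Y[te] - m0) / (1 - e)
          return psi

      def cate_median_over_splits(X, D, Y, n_splits=5):
          # refit the CATE on the pseudo-outcomes for each splitting iteration, then take the median
          fits = [RandomForestRegressor(random_state=s).fit(X, dr_scores(X, D, Y, seed=s)).predict(X)
                  for s in range(n_splits)]
          return np.median(np.stack(fits), axis=0)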
  8. By: Marinho Bertanha; EunYi Chung
    Abstract: Classical two-sample permutation tests for equality of distributions have exact size in finite samples, but they fail to control size for testing equality of parameters that summarize each distribution. This paper proposes permutation tests for equality of parameters that are estimated at root-n or slower rates. Our general framework applies to both parametric and nonparametric models, with two samples or one sample split into two subsamples. Our tests have correct size asymptotically while preserving exact size in finite samples when distributions are equal. They have no loss in local-asymptotic power compared to tests that use asymptotic critical values. We propose confidence sets with correct coverage in large samples that also have exact coverage in finite samples if distributions are equal up to a transformation. We apply our theory to four commonly-used hypothesis tests of nonparametric functions evaluated at a point. Lastly, simulations show good finite sample properties of our tests.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.13638&r=all
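    A generic two-sample permutation test of equal means is sketched below purely to recall the mechanics of permuting the pooled sample; the paper's contribution concerns tests and confidence sets for parameters estimated at root-n or slower rates, which this toy version does not capture.
      import numpy as np

      def permutation_pvalue(x, y, n_perm=2000, seed=0):
          # statistic: absolute difference in sample means, recomputed on permuted group labels
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          observed = abs(x.mean() - y.mean())
          exceed = 0
          for _ in range(n_perm):
              rng.shuffle(pooled)
              exceed += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= observed
          return (1 + exceed) / (1 + n_perm)

      rng = np.random.default_rng(0)
      print(permutation_pvalue(rng.normal(0, 1, 80), rng.normal(0.6, 1, 80)))   # small p-value expected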
  9. By: Chen, Likai; Wang, Weining; Wu, Wei Biao
    Abstract: For multiple change-points detection of high-dimensional time series, we provide asymptotic theory concerning the consistency and the asymptotic distribution of the breakpoint statistics and estimated break sizes. The theory backs up a simple two-step procedure for detecting and estimating multiple change-points. The proposed two-step procedure involves the maximum of a MOSUM (moving sum) type statistic in the first step and a CUSUM (cumulative sum) refinement step on an aggregated time series in the second step. Thus, for a fixed time-point, we can capture both the biggest break across different coordinates and, via aggregation, simultaneous breaks over multiple coordinates. Extending the existing high-dimensional Gaussian approximation theorem to dependent data with jumps, the theory allows us to characterize the size and power of our multiple change-point test asymptotically. Moreover, we can make inferences on the breakpoint estimates when the break sizes are small. Our theoretical setup incorporates both weak temporal and strong or weak cross-sectional dependence and is suitable for heavy-tailed innovations. A robust long-run covariance matrix estimation is proposed, which can be of independent interest. An application on detecting structural changes of the U.S. unemployment rate is considered to illustrate the usefulness of our method.
    Keywords: multiple change points detection,temporal and cross-sectional dependence,Gaussian approximation,inference of break locations
    JEL: C00
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020019&r=all
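    The sketch below mimics only the flavour of the first (MOSUM) step: a moving-sum contrast per coordinate of a (T x p) panel, maximised over coordinates and time points. The window length and the crude per-coordinate variance scaling are placeholders; the paper's statistic, its aggregation step and the robust long-run covariance estimation are substantially more involved.
      import numpy as np

      def max_mosum(X, window=30):
          # X: (T x p) panel; contrast of left/right window means, scaled by a naive std estimate
          T, p = X.shape
          sd = X.std(0, ddof=1)
          stats = np.zeros((T - 2 * window, p))
          for t in range(window, T - window):
              left = X[t - window:t].mean(0)
              right = X[t:t + window].mean(0)
              stats[t - window] = np.sqrt(window / 2) * np.abs(right - left) / sd
          t_star = stats.max(axis=1).argmax() + window   # candidate break date
          return t_star, stats.max()

      rng = np.random.default_rng(2)
      X = rng.standard_normal((300, 20))
      X[150:, :3] += 1.0                                  # break at t = 150 in three coordinates
      print(max_mosum(X))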
  10. By: Rahul Singh
    Abstract: I propose a practical procedure based on bias correction and sample splitting to calculate confidence intervals for functionals of generic kernel methods, i.e. nonparametric estimators learned in a reproducing kernel Hilbert space (RKHS). For example, an analyst may desire confidence intervals for functionals of kernel ridge regression or kernel instrumental variable regression. The framework encompasses (i) evaluations over discrete domains, (ii) treatment effects of discrete treatments, and (iii) incremental treatment effects of continuous treatments. For the target quantity, whether it is (i)-(iii), I prove pointwise root-n consistency, Gaussian approximation, and semiparametric efficiency by finite sample arguments. I show that the classic assumptions of RKHS learning theory also imply inference.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.11076&r=all
  11. By: Maria Dimakopoulou; Zhimei Ren; Zhengyuan Zhou
    Abstract: To balance exploration and exploitation, multi-armed bandit algorithms need to conduct inference on the true mean reward of each arm in every time step using the data collected so far. However, the history of arms and rewards observed up to that time step is adaptively collected and there are known challenges in conducting inference with non-iid data. In particular, sample averages, which play a prominent role in traditional upper confidence bound algorithms and traditional Thompson sampling algorithms, are neither unbiased nor asymptotically normal. We propose a variant of a Thompson sampling based algorithm that leverages recent advances in the causal inference literature and adaptively re-weights the terms of a doubly robust estimator of the true mean reward of each arm -- hence its name doubly-adaptive Thompson sampling. The regret of the proposed algorithm matches the optimal (minimax) regret rate, and we evaluate it empirically in a semi-synthetic experiment based on data from a randomized controlled trial of a web service: the proposed doubly-adaptive Thompson sampling has superior empirical performance to existing baselines in terms of cumulative regret and statistical power in identifying the best arm. Further, we extend this approach to contextual bandits, where there are more sources of bias present apart from the adaptive data collection -- such as the mismatch between the true data generating process and the reward model assumptions or the unequal representation of certain regions of the context space in initial stages of learning -- and propose the linear contextual doubly-adaptive Thompson sampling and the non-parametric contextual doubly-adaptive Thompson sampling extensions of our approach.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.13202&r=all
  12. By: Magris Martin; Iosifidis Alexandros
    Abstract: This paper introduces a feasible and practical Bayesian method for unit root testing in financial time series. We propose a convenient approximation of the Bayes factor in terms of the Bayesian Information Criterion as a straightforward and effective strategy for testing the unit root hypothesis. Our approximate approach relies on few assumptions, is of general applicability, and preserves a satisfactory error rate. Among its advantages, it does not require the prior distribution on the model's parameters to be specified. Our simulation study and empirical application on real exchange rates show close agreement between the suggested simple approach and both Bayesian and non-Bayesian alternatives.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.10048&r=all
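    A back-of-the-envelope version of the BIC-based Bayes factor approximation, BF_01 ≈ exp{(BIC_1 - BIC_0)/2}, applied to a unit-root null in a driftless AR(1) is sketched below; the paper's exact specification (deterministic terms, lag augmentation, error-rate analysis) may differ.
      import numpy as np

      def approx_bf_unit_root(y):
          # H1: Delta y_t = (rho - 1) y_{t-1} + e_t with rho free; H0: rho = 1, i.e. Delta y_t = e_t
          dy, ylag = np.diff(y), y[:-1]
          n = len(dy)
          b = (ylag @ dy) / (ylag @ ylag)
          rss1 = ((dy - b * ylag) ** 2).sum()
          rss0 = (dy ** 2).sum()
          bic1 = n * np.log(rss1 / n) + np.log(n)   # one extra free parameter under H1
          bic0 = n * np.log(rss0 / n)
          return np.exp((bic1 - bic0) / 2)          # values above 1 favour the unit-root null

      rng = np.random.default_rng(3)
      walk = np.cumsum(rng.standard_normal(500))
      stationary = rng.standard_normal(500)         # white noise, rho = 0
      print(approx_bf_unit_root(walk), approx_bf_unit_root(stationary))   # > 1 vs. << 1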
  13. By: Jean-Jacques Forneron; Serena Ng
    Abstract: This paper illustrates two algorithms designed in Forneron & Ng (2020): the resampled Newton-Raphson (rNR) and resampled quasi-Newton (rqN) algorithms which speed-up estimation and bootstrap inference for structural models. An empirical application to BLP shows that computation time decreases from nearly 5 hours with the standard bootstrap to just over 1 hour with rNR, and only 15 minutes using rqN. A first Monte-Carlo exercise illustrates the accuracy of the method for estimation and inference in a probit IV regression. A second exercise additionally illustrates statistical efficiency gains relative to standard estimation for simulation-based estimation using a dynamic panel regression example.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.10443&r=all
  14. By: Kukacka, Jiri; Sacht, Stephen
    Abstract: This paper offers a simulation-based method for the estimation of heuristic switching in nonlinear macroeconomic models. Heuristic switching is an important modeling feature since it rests on simple decision rules of boundedly rational heterogeneous agents. The simulation study shows that the proposed simulated maximum likelihood method identifies the behavioral effects that remain hidden from standard econometric approaches. In the empirical application, we estimate the structural and behavioral parameters of the US economy. In particular, we are able to reliably identify the intensity of choice that governs the model's nonlinear dynamics.
    Keywords: Behavioral Heuristics,Heuristic Switching Model,Intensity of Choice,Simulated Maximum Likelihood
    JEL: C53 D83 E12 E32
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:zbw:cauewp:202101&r=all
  15. By: Khowaja, Kainat; Saef, Danial; Sizov, Sergej; Härdle, Wolfgang Karl
    Abstract: Strategic planning in a corporate environment is often based on experience and intuition, although internal data is usually available and can be a valuable source of information. Predicting merger & acquisition (M&A) events is at the heart of strategic management, yet not sufficiently motivated by data analytics driven controlling. One of the main obstacles in using e.g. count data time series for M&A seems to be the fact that the intensity of M&A is time varying at least in certain business sectors, e.g. communications. We propose a new automatic procedure to bridge this obstacle using novel statistical methods. The proposed approach allows for a selection of adaptive windows in count data sets by detecting significant changes in the intensity of events. We test the efficacy of the proposed method on a simulated count data set and put it into action on various M&A data sets. It is robust to aberrant behaviour and generates accurate forecasts for the evaluated business sectors. It also provides guidance for an a-priori selection of fixed windows for forecasting. Furthermore, it can be generalized to other business lines, e.g. for managing supply chains, sales forecasts, or call center arrivals, thus giving managers new ways for incorporating statistical modeling in strategic planning decisions.
    JEL: C00
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:irtgdp:2020026&r=all
  16. By: Shi, Chengchun; Song, R; Lu, W
    Abstract: Personalized medicine is a medical procedure that receives considerable scientific and commercial attention. The goal of personalized medicine is to assign the optimal treatment regime for each individual patient, according to his/her personal prognostic information. When there are a large number of pretreatment variables, it is crucial to identify those important variables that are necessary for treatment decision making. In this paper, we study two information criteria: the concordance and value information criteria, for variable selection in optimal treatment decision making. We consider both fixed-p and high-dimensional settings, and show our information criteria are consistent in model/tuning parameter selection. We further apply our information criteria to four estimation approaches, including robust learning, concordance-assisted learning, penalized A-learning, and sparse concordance-assisted learning, and demonstrate the empirical performance of our methods by simulations.
    Keywords: concordance and value information criteria; optimal treatment regime; tuning parameter selection; variable selection
    JEL: C1
    Date: 2021–02–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:102105&r=all
  17. By: Zhu, Yajing; Steele, Fiona; Moustaki, Irini
    Abstract: We propose a multilevel structural equation model to investigate the interrelationships between childhood socio-economic circumstances, partnership formation and stability, and mid-life health, using data from the 1958 British birth cohort. The structural equation model comprises latent class models that characterize the patterns of change in four dimensions of childhood socio-economic circumstances and a joint regression model that relates these categorical latent variables to partnership transitions in adulthood and mid-life health, while allowing for informative dropout. The model can be extended to handle multiple outcomes of mixed types and at different levels in a hierarchical data structure.
    Keywords: event history analysis; non-ignorable dropout; latent variable model; multilevel model; 3-step approach
    JEL: C1 N0
    Date: 2020–06–01
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:103104&r=all
  18. By: Strittmatter, Anthony (University of St. Gallen); Wunsch, Conny (University of Basel)
    Abstract: The vast majority of existing studies that estimate the average unexplained gender pay gap use unnecessarily restrictive linear versions of the Blinder-Oaxaca decomposition. Using a notably rich and large data set of 1.7 million employees in Switzerland, we investigate how the methodological improvements made possible by such big data affect estimates of the unexplained gender pay gap. We study the sensitivity of the estimates with regard to i) the availability of observationally comparable men and women, ii) model flexibility when controlling for wage determinants, and iii) the choice of different parametric and semi-parametric estimators, including variants that make use of machine learning methods. We find that these three factors matter greatly. Blinder-Oaxaca estimates of the unexplained gender pay gap decline by up to 39% when we enforce comparability between men and women and use a more flexible specification of the wage equation. Semi-parametric matching yields estimates that, when compared with the Blinder-Oaxaca estimates, are up to 50% smaller and also less sensitive to the way wage determinants are included.
    Keywords: gender inequality, gender pay gap, common support, model specification, matching estimator, machine learning
    JEL: J31 C21
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp14128&r=all
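    For reference, the textbook linear Blinder-Oaxaca decomposition that the abstract treats as the restrictive baseline is sketched below, with the male coefficients as the (arbitrary) reference group; variable names are illustrative.
      import numpy as np

      def blinder_oaxaca(X_m, y_m, X_f, y_f):
          # group-specific OLS of log wages on characteristics, with intercepts added
          Xm = np.column_stack([np.ones(len(y_m)), X_m])
          Xf = np.column_stack([np.ones(len(y_f)), X_f])
          beta_m = np.linalg.lstsq(Xm, y_m, rcond=None)[0]
          beta_f = np.linalg.lstsq(Xf, y_f, rcond=None)[0]
          gap = y_m.mean() - y_f.mean()
          explained = (Xm.mean(0) - Xf.mean(0)) @ beta_m    # differences in characteristics
          unexplained = Xf.mean(0) @ (beta_m - beta_f)      # the "unexplained" pay gap
          return gap, explained, unexplained                # gap = explained + unexplained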
  19. By: Schulz, Jan; Milaković, Mishael
    Abstract: Underreporting and undersampling biases in top tail wealth, although widely acknowledged, have not been statistically quantified so far, essentially because they are not readily observable. Here we exploit the functional form of power law-like regimes in top tail wealth to derive analytical expressions for these biases, and employ German microdata from a popular survey and rich list to illustrate that tiny differences in non-response rates lead to tail wealth estimates that differ by an order of magnitude, in our case ranging from one to nine trillion euros. Underreporting seriously compounds the problem, and we find that the estimation of totals in scale-free systems oftentimes tends to be spurious. Our findings also suggest that recent debates on the existence of scale- or type-dependence in returns to wealth are ill-posed because the available data cannot discriminate between scale- or type-dependence on the one hand, and statistical biases on the other. Yet both economic theory and mathematical formalism indicate that sampling and reporting biases are more plausible explanations for the observed data than scale- or type-dependence.
    Keywords: Wealth inequality,stochastic growth,differential non-response,Hill estimator,tail index bias
    JEL: C46 C81 D31
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:bamber:166&r=all
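    The keywords mention the Hill estimator of the power-law tail index; a standard implementation is sketched below for reference, with the number of top order statistics k as an illustrative tuning choice (the paper is about the biases that afflict such estimates, which this sketch does not correct).
      import numpy as np

      def hill_estimator(wealth, k):
          # tail index from the k largest observations: alpha = 1 / mean(log(X_(i) / X_(k+1)))
          x = np.sort(np.asarray(wealth, dtype=float))[::-1]
          return 1.0 / (np.log(x[:k]) - np.log(x[k])).mean()

      rng = np.random.default_rng(4)
      sample = rng.pareto(1.5, 100_000) + 1.0       # exact Pareto tail with true index 1.5
      print(hill_estimator(sample, k=1000))         # should be close to 1.5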
  20. By: Dohyun Chun; Donggyu Kim
    Abstract: Recently, to account for low-frequency market dynamics, several volatility models, employing high-frequency financial data, have been developed. However, in financial markets, we often observe that financial volatility processes depend on economic states, so they have a state heterogeneous structure. In this paper, to study state heterogeneous market dynamics based on high-frequency data, we introduce a novel volatility model based on a continuous Itô diffusion process whose intraday instantaneous volatility process evolves depending on the exogenous state variable, as well as its integrated volatility. We call it the state heterogeneous GARCH-Itô (SG-Itô) model. We suggest a quasi-likelihood estimation procedure with the realized volatility proxy and establish its asymptotic behaviors. Moreover, to test the low-frequency state heterogeneity, we develop a Wald test-type hypothesis testing procedure. The results of empirical studies suggest the existence of leverage, investor attention, market illiquidity, stock market comovement, and post-holiday effects in S&P 500 index volatility.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.13404&r=all
  21. By: Donggyu Kim; Yazhen Wang
    Abstract: Various parametric volatility models for financial data have been developed to incorporate high-frequency realized volatilities and better capture market dynamics. However, because high-frequency trading data are not available during the close-to-open period, the volatility models often ignore volatility information over the close-to-open period and thus may suffer from loss of important information relevant to market dynamics. In this paper, to account for whole-day market dynamics, we propose an overnight volatility model based on Itô diffusions to accommodate two different instantaneous volatility processes for the open-to-close and close-to-open periods. We develop a weighted least squares method to estimate model parameters for two different periods and investigate its asymptotic properties. We conduct a simulation study to check the finite sample performance of the proposed model and method. Finally, we apply the proposed approaches to real trading data.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.13467&r=all
  22. By: Stefano Grassi (University of Rome 'Tor Vergata', Department of Economics and Finance, Facoltà di Economia, and CREATES); Francesco Violante (CREST, GENES, ENSAE Paris, Institut Polytechnique de Paris, and CREATES)
    Abstract: Starting from the Cholesky-GARCH model, recently proposed by Darolles, Francq, and Laurent (2018), the paper introduces the Block-Cholesky GARCH (BC-GARCH). This new model adapts in a natural way to the asset pricing framework. After deriving conditions for stationarity, uniform invertibility and beta tracking, we investigate the finite sample properties of a variety of maximum likelihood estimators suited for the BC-GARCH by means of an extensive Monte Carlo experiment. We illustrate the usefulness of the BC-GARCH in two empirical applications. The first tests for the presence of beta spillovers in a bivariate system in the context of the Fama and French (1993) three-factor framework. The second empirical application consists of a large-scale exercise exploring the cross-sectional variation of expected returns for 40 industry portfolios.
    Keywords: Cholesky decomposition, Multivariate GARCH, Asset Pricing, Time-Varying Beta, Two-Pass Regression
    JEL: C12 C22 C58 G12 G13
    Date: 2021–03–01
    URL: http://d.repec.org/n?u=RePEc:aah:create:2021-05&r=all
  23. By: Auer, Benjamin R.; Rottmann, Horst
    Abstract: Using Monte Carlo simulation, this paper illustrates the properties of the OLS and IV estimators when the explanatory variable in the simple linear regression model is endogenous, i.e. correlated with the error term of the model. In particular, it demonstrates the bias of the OLS estimator and the consistency of the IV estimator, and illustrates the impact of weak instruments.
    Keywords: Monte Carlo simulation,OLS estimation,IV estimation,endogeneity,weak instruments
    Date: 2020
    URL: http://d.repec.org/n?u=RePEc:zbw:hawdps:79&r=all
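    A compact Monte Carlo exercise of the kind the abstract describes is sketched below: OLS versus IV in a simple linear model with an endogenous regressor; sample sizes, the degree of endogeneity and the instrument strength are illustrative settings.
      import numpy as np

      def simulate(n, reps=1000, beta=1.0, endog=0.8, strength=0.5, seed=0):
          # y = beta*x + u, where x is correlated with u (endogenous) and z is a valid instrument
          rng = np.random.default_rng(seed)
          ols, iv = np.empty(reps), np.empty(reps)
          for r in range(reps):
              z = rng.standard_normal(n)
              u = rng.standard_normal(n)
              x = strength * z + endog * u + rng.standard_normal(n)
              y = beta * x + u
              ols[r] = (x @ y) / (x @ x)        # biased and inconsistent
              iv[r] = (z @ y) / (z @ x)         # consistent (but noisy if z is weak)
          return ols.mean(), iv.mean()

      for n in (50, 500, 5000):
          print(n, simulate(n))                 # OLS bias persists; IV approaches beta = 1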
  24. By: Kuosmanen, Timo; Zhou, Xun; Eskelinen, Juha; Malo, Pekka
    Abstract: Synthetic control method (SCM) identifies causal treatment effects by constructing a counterfactual treatment unit as a convex combination of donors in the control group, such that the weights of donors and predictors are jointly optimized during the pre-treatment period. This paper demonstrates that the true optimal solution to the SCM problem is typically a corner solution where all weight is assigned to a single predictor, contradicting the intended purpose of predictors. To address this inherent design flaw, we propose to determine the predictor weights and donor weights separately. We show how the donor weights can be optimized when the predictor weights are given, and consider alternative data-driven approaches to determine the predictor weights. Re-examination of the two original empirical applications to Basque terrorism and California's tobacco control program demonstrates the complete and utter failure of the existing SCM algorithms and illustrates our proposed remedies.
    Keywords: Causal effects; Comparative case studies; Policy impact assessment; Treatment effect models
    JEL: C54 C61 C71
    Date: 2021–02–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:106328&r=all
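    A minimal sketch of the inner step the abstract refers to is given below: for given predictor weights v, donor weights w on the unit simplex are chosen so that the weighted combination of donors' pre-treatment predictors matches the treated unit. The generic SLSQP call and variable names are illustrative; the paper's point concerns how v itself should be determined.
      import numpy as np
      from scipy.optimize import minimize

      def donor_weights(X0, X1, v):
          # X0: (k predictors x J donors), X1: (k,) treated-unit predictors, v: (k,) predictor weights
          J = X0.shape[1]
          loss = lambda w: (v * (X1 - X0 @ w) ** 2).sum()
          res = minimize(loss, np.full(J, 1.0 / J),
                         bounds=[(0.0, 1.0)] * J,
                         constraints=({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},))
          return res.x

      rng = np.random.default_rng(5)
      X0 = rng.random((8, 5))
      X1 = X0 @ np.array([0.6, 0.4, 0.0, 0.0, 0.0])      # treated unit built from two donors
      print(np.round(donor_weights(X0, X1, np.ones(8)), 2))   # roughly [0.6, 0.4, 0, 0, 0]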
  25. By: Quentin LAJAUNIE
    Keywords: Impulse response functions, Dichotomous model, Recession prediction, Economic cycles
    Date: 2021
    URL: http://d.repec.org/n?u=RePEc:leo:wpaper:2852&r=all
  26. By: Mario Faliva; Maria Grazia Zoia
    Abstract: This paper establishes an extended representation theorem for unit-root VARs. A specific algebraic technique is devised to recover stationarity from the solution of the model in the form of a cointegrating transformation. Closed forms of the results of interest are derived for integrated processes up to the 4-th order. An extension to higher-order processes turns out to be within the reach of an induction argument.
    Date: 2021–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2102.10626&r=all

This nep-ecm issue is ©2021 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.