on Econometrics |
By: | Jaedo Choi (Univ of Michigan); Jin Seo Cho (Yonsei Univ); Hyungsik Roger Moon (Univ of Southern California) |
Abstract: | This study provides an econometric methodology to test for a linear structural relationship among economic variables. For this purpose, we propose the so-called distance-difference (DD) test statistic and show that it has omnibus power against arbitrary nonlinear structural relationships. If the DD test statistic rejects the linear model hypothesis, a sequential testing procedure assisted by the DD test statistic can consistently estimate the degree of the polynomial that approximates the nonlinear structural equation. Using extensive Monte Carlo simulations, we confirm the DD test’s finite sample properties and compare its performance with that of the sequential testing procedure assisted by the J-test statistic and moment selection criteria. Finally, we empirically investigate the structural relationship between log wages and years of work experience using Card’s (1995) National Longitudinal Survey data and affirm the original inferential results using our methodology. |
Keywords: | GMM estimation; Model linearity testing; Model specification testing; Gaussian stochastic process; Sequential testing procedure; Wage equation |
JEL: | C12 C13 C26 C52 J24 J31 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2020rwp-162&r=all |
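The sequential degree-selection idea described in the abstract can be illustrated with a simple stand-in: raise the polynomial degree until the newly added term is insignificant. This sketch uses a plain t-test on the top-order coefficient, not the authors' DD statistic; the data-generating process below is invented for illustration.

```python
import numpy as np

def select_poly_degree(x, y, max_degree=5):
    """Raise the polynomial degree until the newly added term is
    insignificant (plain t-test stand-in for a sequential testing
    procedure; NOT the authors' DD statistic)."""
    n = len(x)
    for d in range(1, max_degree + 1):
        X = np.vander(x, d + 1, increasing=True)      # [1, x, ..., x^d]
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        sigma2 = resid @ resid / (n - d - 1)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_stat = beta[-1] / np.sqrt(cov[-1, -1])
        if abs(t_stat) < 1.96:        # top-order term not needed at the 5% level
            return d - 1
    return max_degree

rng = np.random.default_rng(0)
x = rng.uniform(-2.0, 2.0, 500)
y = 1.0 + 0.5 * x + 0.8 * x ** 2 + rng.normal(0.0, 0.3, 500)  # true degree 2
print(select_poly_degree(x, y))
```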
By: | Greg Lewis; Vasilis Syrgkanis |
Abstract: | We consider the estimation of treatment effects in settings when multiple treatments are assigned over time and treatments can have a causal effect on future outcomes. We formulate the problem as a linear state space Markov process with a high dimensional state and propose an extension of the double/debiased machine learning framework to estimate the dynamic effects of treatments. Our method allows the use of arbitrary machine learning methods to control for the high dimensional state, subject to a mean square error guarantee, while still allowing parametric estimation and construction of confidence intervals for the dynamic treatment effect parameters of interest. Our method is based on a sequential regression peeling process, which we show can be equivalently interpreted as a Neyman orthogonal moment estimator. This allows us to show root-n asymptotic normality of the estimated causal effects. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.07285&r=all |
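A minimal "partialling-out" sketch of the double/debiased idea: residualise the outcome and the treatment on the high-dimensional state, then regress residual on residual. This static, OLS-based version (no sample splitting, no dynamics) is an illustration only, not the authors' sequential peeling estimator; the simulated design is an assumption.

```python
import numpy as np

def residualise(v, X):
    # OLS residuals of v on the controls X (any ML learner could be used here)
    beta, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ beta

def dml_effect(y, t, X):
    y_res = residualise(y, X)
    t_res = residualise(t, X)
    return (t_res @ y_res) / (t_res @ t_res)   # Neyman orthogonal moment

rng = np.random.default_rng(1)
n, p = 2000, 50
X = rng.normal(size=(n, p))
t = X[:, 0] + rng.normal(size=n)               # treatment depends on the state
y = 2.0 * t + X[:, 0] + rng.normal(size=n)     # true treatment effect is 2
print(round(dml_effect(y, t, X), 2))
```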
By: | Bernd Funovits |
Abstract: | This paper deals with parameterisation, identifiability, and maximum likelihood (ML) estimation of possibly non-invertible structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. We introduce a new parameterisation of the MA polynomial matrix based on the Wiener-Hopf factorisation (WHF) and show that the model is identified in this parameterisation for a generic set in the parameter space (when certain just-identifying restrictions are imposed). When the SVARMA model is driven by Gaussian errors, neither the static shock transmission matrix nor the location of the determinantal zeros of the MA polynomial matrix can be identified without imposing further identifying restrictions on the parameters. We characterise the classes of observational equivalence with respect to second moment information at different stages of the modelling process. Subsequently, cross-sectional and temporal independence and non-Gaussianity of the shocks are used to solve these identifiability problems and identify the true root location of the MA polynomial matrix as well as the static shock transmission matrix (up to permutation and scaling). Typically imposed identifying restrictions on the shock transmission matrix as well as on the determinantal root location are made testable. Furthermore, we provide low-level conditions for asymptotic normality of the ML estimator. The estimation procedure is illustrated with various examples from the economic literature and implemented as an R package. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.04346&r=all |
By: | Giuseppe Brandi; T. Di Matteo |
Abstract: | The scaling and multiscaling properties of financial time series have been widely studied in the literature, and research on this topic is still flourishing. One way to analyse the scaling properties of a time series is through the estimation of scaling exponents, which are recognized as valuable measures for discriminating between random, persistent, and anti-persistent behaviour. Among the several methods proposed in the literature to study the multiscaling property, in this paper we use the generalized Hurst exponent (GHE). On the basis of this methodology, we propose a novel statistical procedure, named RNSGHE, to robustly estimate and test the multiscaling property; together with a combination of t-tests and F-tests, it discriminates between real and spurious scaling. Moreover, we introduce a new methodology to estimate the optimal aggregation time used in our procedure. We numerically validate the procedure on time series simulated from the Multifractal Random Walk (MRW) and then apply it to real financial data. We also present results for time series with and without anomalies and compute the bias that such anomalies introduce in the measurement of the scaling exponents. Finally, we show how proper use of scaling and multiscaling can improve the estimation of risk measures such as Value at Risk (VaR). We also propose a Monte Carlo-based methodology, named Multiscaling Value at Risk (MSVaR), which takes into account the statistical properties of multiscaling time series. We show that when this statistical procedure is combined with the robustly estimated multiscaling exponents, the one-year forecasted MSVaR mimics the VaR on annual data for the majority of the stocks analysed. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.04164&r=all |
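The GHE mentioned above has a compact textbook recipe: estimate the scaling of the q-th order moments of increments, E|x(t+τ) − x(t)|^q ~ τ^(qH(q)), by regressing log-moments on log-lags. This is the basic estimator, not the paper's RNSGHE refinement; the Brownian-motion check (H = 0.5) is a standard sanity test.

```python
import numpy as np

def generalized_hurst(x, q=2, taus=range(1, 20)):
    """Estimate H(q) from the scaling of q-th order increment moments:
    E|x(t+tau) - x(t)|^q ~ tau^(q*H(q))."""
    taus = np.asarray(list(taus))
    kq = np.array([np.mean(np.abs(x[t:] - x[:-t]) ** q) for t in taus])
    slope = np.polyfit(np.log(taus), np.log(kq), 1)[0]   # slope = q * H(q)
    return slope / q

rng = np.random.default_rng(2)
bm = np.cumsum(rng.normal(size=20000))   # Brownian motion: H(q) = 0.5 for all q
print(round(generalized_hurst(bm, q=2), 2))
```

For a monofractal series, H(q) is flat in q; curvature of H(q) is what signals multiscaling.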
By: | Yunmi Kim (Univ of Seoul); Lijuan Huo (Beijing Institute of Technology); Tae-Hwan Kim (Yonsei Univ) |
Abstract: | Quantile regression has become a standard modern econometric method because of its capability to investigate the relationship between economic variables at various quantiles. The econometric method of Markov-switching regression is also considered important because it can deal flexibly with structural models or time-varying parameter models. A combination of these two methods, known as “Markov-switching quantile regression (MSQR),” has recently been proposed. Liu (2016) and Liu and Luger (2017) propose MSQR models using the Bayesian approach, whereas Ye et al.’s (2016) proposal for MSQR models is based on the classical approach. In our study, we extend the results of Ye et al. (2016). First, we propose an efficient estimation method based on the expectation-maximization (EM) algorithm. Second, we adopt the quasi-maximum likelihood approach to estimate the proposed MSQR models, unlike the maximum likelihood approach that Ye et al. (2016) use. Our simulation results confirm that the proposed EM estimation method for MSQR models works quite well at all quantiles, even with sample sizes as small as 200. |
Keywords: | Quantile regression; Markov-switching; Structural breaks; Quasi-maximum likelihood estimation; EM algorithm. |
JEL: | C21 C24 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2020rwp-166&r=all |
By: | Niko Hauzenberger; Florian Huber; Luca Onorante |
Abstract: | Conjugate priors allow for fast inference in large dimensional vector autoregressive (VAR) models but, at the same time, introduce the restriction that each equation features the same set of explanatory variables. This paper proposes a straightforward means of post-processing posterior estimates of a conjugate Bayesian VAR to effectively perform equation-specific covariate selection. Compared to existing techniques using shrinkage alone, our approach combines shrinkage and sparsity in both the VAR coefficients and the error variance-covariance matrices, greatly reducing estimation uncertainty in large dimensions while maintaining computational tractability. We illustrate our approach by means of two applications. The first uses synthetic data to investigate the properties of the model across different data-generating processes; the second analyzes the predictive gains from sparsification in a forecasting exercise for US data. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.08760&r=all |
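One common post-processing rule in this spirit is signal-adaptive variable selection (SAVS)-style soft-thresholding of posterior draws: small coefficients are zeroed while large ones barely move. The penalty choice below and the toy draws are assumptions for illustration; the paper's exact rule and its treatment of the error covariances are not reproduced here.

```python
import numpy as np

def savs_sparsify(beta_draws, X):
    """Soft-threshold each posterior draw of the coefficients, with a
    penalty inversely proportional to the squared coefficient (SAVS-style).
    Generic sketch of posterior sparsification, not the paper's exact rule."""
    col_norm2 = (X ** 2).sum(axis=0)                 # per-regressor signal strength
    out = np.zeros_like(beta_draws)
    for i, beta in enumerate(beta_draws):
        mu = 1.0 / np.maximum(beta ** 2, 1e-12)      # adaptive penalty
        shrunk = np.maximum(np.abs(beta) * col_norm2 - mu, 0.0)
        out[i] = np.sign(beta) * shrunk / col_norm2
    return out

rng = np.random.default_rng(8)
X = rng.normal(size=(100, 4))
draws = np.array([[2.0, -1.5, 0.01, 0.005],
                  [1.9, -1.6, 0.02, -0.01]])         # two hypothetical posterior draws
print(savs_sparsify(draws, X))
```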
By: | James A. Duffy; Jerome R. Simons |
Abstract: | It has been known since Elliott (1998) that efficient methods of inference on cointegrating relationships break down when autoregressive roots are near but not exactly equal to unity. This paper addresses this problem within the framework of a VAR with non-unit roots. We develop a characterisation of cointegration, based on the impulse response function implied by the VAR, that remains meaningful even when roots are not exactly unity. Under this characterisation, the long-run equilibrium relationships between the series are identified with a subspace associated to the largest characteristic roots of the VAR. We analyse the asymptotics of maximum likelihood estimators of this subspace, thereby generalising Johansen's (1995) treatment of the cointegrated VAR with exactly unit roots. Inference is complicated by nuisance parameter problems similar to those encountered in the context of predictive regressions, and can be dealt with by approaches familiar from that setting. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.08092&r=all |
By: | Giulia Carallo; Roberto Casarin; Christian P. Robert |
Abstract: | This paper introduces a new stochastic process with values in the set Z of signed integers. The increments of the process are Poisson differences, and the dynamics have an autoregressive structure. We study the properties of the process and exploit the thinning representation to derive stationarity conditions and the stationary distribution of the process. We provide a Bayesian inference method and an efficient posterior approximation procedure based on Monte Carlo sampling. Numerical illustrations on both simulated and real data show the effectiveness of the proposed inference. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.04470&r=all |
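A Poisson-difference (Skellam) increment is easy to simulate. The minimal sketch below uses i.i.d. increments with constant intensities, which is an assumption made purely for illustration; the paper's autoregressive thinning dynamics are not reproduced.

```python
import numpy as np

def simulate_poisson_difference(n, lam1=1.0, lam2=1.0, seed=0):
    """Integer-valued process whose increments are differences of two
    Poisson counts (Skellam innovations): mean lam1 - lam2,
    variance lam1 + lam2."""
    rng = np.random.default_rng(seed)
    eps = rng.poisson(lam1, n) - rng.poisson(lam2, n)
    return np.cumsum(eps)            # values in Z, signed integers

x = simulate_poisson_difference(10000)
print(x[:5], round(np.diff(x).mean(), 3), round(np.diff(x).var(), 3))
```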
By: | Tae-Hwan Kim (Yonsei Univ); Dong Jin Lee (Sangmyung Univ); Paul Mizen (Univ of Nottingham) |
Abstract: | This paper presents a new method to analyze the effect of shocks on time series using the quantile impulse response function (QIRF). While conventional impulse response analysis is restricted to evaluation using the conditional mean function, here we propose an alternative impulse response analysis that traces the effect of economic shocks on the conditional quantile function. By changing the quantile index over the unit interval, it is possible to measure the effect of shocks on the entire conditional distribution of a given variable in our framework. Therefore we can observe the complete distributional consequences of policy interventions, especially at the upper and lower tails of the distribution as well as at the mean. Using the new approach, it becomes possible to evaluate two distinct features, namely, (i) the degree of uncertainty of a shock, by measuring how the dispersion of the conditional distribution changes after a shock, and (ii) the asymmetric effect of a shock, by comparing the responses to an impulse at the lower tails with those at the upper tails of the conditional distribution. Neither of these features can be observed in conventional impulse response analysis based exclusively on the conditional mean function. In addition to proposing the QIRF, our second contribution is to present a new way to jointly estimate a system of multiple quantile functions. Our proposed system quantile estimator is obtained by extending the result of Jun and Pinkse (2009) to the time series context. We illustrate the QIRF on a VAR model in a manner similar to Romer and Romer (2004) in order to assess the impact of a monetary policy shock on the US economy. |
Keywords: | Quantile vector autoregression; monetary policy shock; quantile impulse response function; structural vector autoregression |
JEL: | C32 C51 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2020rwp-164&r=all |
By: | Yunmi Kim (Univ of Seoul); Douglas Stone (Allianz Global Investors); Tae-Hwan Kim (Yonsei Univ) |
Abstract: | It is important for investors to know not only the style of a fund manager in whom they are interested, but also whether this style is constant or changes through time. Style is now easily identified by the so-called style regression. However, there has been no formal and statistically valid method to test for a change in manager style when the two typically imposed restrictions (sum-to-one and non-negativity) are jointly present in style analysis. In this study, we apply and extend the results of Andrews (1997a, 1997b, 1999, 2000) to develop a valid testing procedure in which the location of any possible change need not be specified and the case of multiple shifts is accommodated. When our proposed test is applied to the Fidelity Magellan Fund, it reveals that the fund’s style changed at least twice between 1988 and 2017. |
Keywords: | Structural Shift; Boundary Parameter; Maximum Chow Test; Style Regression |
JEL: | C12 C18 |
URL: | http://d.repec.org/n?u=RePEc:yon:wpaper:2020rwp-165&r=all |
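The constrained style regression itself (fund returns on style-index returns, weights non-negative and summing to one) can be sketched as a small quadratic program. This shows only the estimation step under the two restrictions, not the structural-shift test; the simulated returns and true weights below are invented.

```python
import numpy as np
from scipy.optimize import minimize

def style_weights(R_fund, R_styles):
    """Sharpe-style regression: least squares with weights constrained
    to be non-negative and to sum to one."""
    k = R_styles.shape[1]
    obj = lambda w: np.sum((R_fund - R_styles @ w) ** 2)
    res = minimize(obj, np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x

rng = np.random.default_rng(3)
R_styles = rng.normal(0.0, 0.04, size=(240, 3))    # 20 years of monthly style returns
true_w = np.array([0.6, 0.4, 0.0])
R_fund = R_styles @ true_w + rng.normal(0.0, 0.002, 240)
print(np.round(style_weights(R_fund, R_styles), 2))
```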
By: | Yong Song; Tomasz Wo\'zniak |
Abstract: | Markov switching models are a popular family of models that introduce time-variation in the parameters in the form of state- or regime-specific values. Importantly, this time-variation is governed by a discrete-valued latent stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period (the Markov property) and the transition matrix. The latter characterizes the properties of the Markov process by determining the probability with which each of the states can be visited next period, given the state in the current period. This setup underlies the two main advantages of Markov switching models: the estimation of the probability of state occurrences in each of the sample periods using filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility for improved interpretation of the parameters associated with specific regimes, combined with the corresponding regime probabilities, as well as for improved forecasting performance based on persistent regimes and the parameters characterizing them. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.03598&r=all |
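The filtering step described above is the textbook Hamilton filter. A minimal two-state Gaussian-mean sketch, with invented parameters and simulated data:

```python
import numpy as np

def hamilton_filter(y, mu, sigma, P):
    """Filtered state probabilities for a two-state Markov-switching
    Gaussian mean model.  P[i, j] = Prob(s_t = j | s_{t-1} = i)."""
    n, k = len(y), len(mu)
    prob = np.full(k, 1.0 / k)                 # initial state distribution
    filtered = np.zeros((n, k))
    for t in range(n):
        pred = prob @ P                        # one-step-ahead state probabilities
        dens = np.exp(-0.5 * ((y[t] - mu) / sigma) ** 2) / sigma
        post = pred * dens                     # Bayes update with state densities
        prob = post / post.sum()
        filtered[t] = prob
    return filtered

rng = np.random.default_rng(4)
y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])    # persistent regimes
f = hamilton_filter(y, mu=np.array([0.0, 4.0]), sigma=np.array([1.0, 1.0]), P=P)
print(round(f[:200, 0].mean(), 2), round(f[200:, 1].mean(), 2))
```

Smoothed probabilities would add a backward pass over the same quantities.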
By: | Wenjing Wang; Minjing Tao |
Abstract: | Multivariate volatility modeling and forecasting are crucial in financial economics. This paper develops a copula-based approach to model and forecast realized volatility matrices. The proposed copula-based time series models can capture the hidden dependence structure of realized volatility matrices, and they automatically guarantee the positive definiteness of the forecasts through either the Cholesky decomposition or the matrix logarithm transformation. We consider both multivariate and bivariate copulas of Student's t, Clayton, and Gumbel types. In an empirical application, we find that for one-day-ahead volatility matrix forecasting, these copula-based models achieve significant gains both in statistical precision and in constructing economically mean-variance efficient portfolios. Among the copulas considered, the multivariate-t copula performs better in statistical precision, while the bivariate-t copula delivers better economic performance. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.08849&r=all |
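The matrix-logarithm route to positive definiteness is easy to see in isolation: any symmetric forecast in log-matrix space maps back to a symmetric positive definite matrix. The sketch below shows only that transformation step; the 0.9 damping rule is a placeholder forecast, not the paper's copula model.

```python
import numpy as np
from scipy.linalg import expm, logm

V = np.array([[0.04, 0.01],
              [0.01, 0.09]])          # a realized covariance matrix (SPD)
A = logm(V).real                      # matrix log of an SPD matrix is real symmetric
A_forecast = 0.9 * A                  # any symmetric forecasting rule works here
V_forecast = expm(A_forecast)         # mapped back: guaranteed positive definite
print(np.linalg.eigvalsh(V_forecast))
```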
By: | Chi Chen; Li Zhao; Wei Cao; Jiang Bian; Chunxiao Xing |
Abstract: | Machine learning methods are now widely used in stock prediction. Traditional approaches assume an identical data distribution, under which a model learned on the training data is fixed and applied directly to the test data. Although this assumption has allowed traditional machine learning techniques to succeed in many real-world tasks, the highly dynamic nature of the stock market invalidates it in stock prediction. To address this challenge, we propose the second-order identical distribution assumption, where the data distribution is assumed to fluctuate over time with certain patterns. Based on this assumption, we develop a second-order learning paradigm with multi-scale patterns. Extensive experiments on real-world Chinese stock data demonstrate the effectiveness of our second-order learning paradigm in stock prediction. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.06878&r=all |
By: | Masahiro Kato; Masatoshi Uehara; Shota Yasui |
Abstract: | We consider the evaluation and training of a new policy using historical data obtained from a different policy. The goal of off-policy evaluation (OPE) is to estimate the expected reward of a new policy over the evaluation data, and that of off-policy learning (OPL) is to find a new policy that maximizes the expected reward over the evaluation data. Although standard OPE and OPL assume the same covariate distribution in the historical and evaluation data, a covariate shift often arises, i.e., the covariate distribution of the historical data differs from that of the evaluation data. In this paper, we derive the efficiency bound of OPE under a covariate shift. Then, we propose doubly robust and efficient estimators for OPE and OPL under a covariate shift by using an estimator of the density ratio between the distributions of the historical and evaluation data. We also discuss other possible estimators and compare their theoretical properties. Finally, we confirm the effectiveness of the proposed estimators through experiments. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.11642&r=all |
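The role of the density ratio can be seen in a plain importance-weighted estimator, where the weight combines the covariate density ratio (evaluation vs historical) with the policy ratio (new vs logging). This is simple IPW with a known density ratio, not the authors' doubly robust efficient estimator; the toy setup below is invented.

```python
import numpy as np

def ipw_value(rewards, covariate_density_ratio, policy_ratio):
    """Self-normalized importance-weighted off-policy value estimate."""
    w = covariate_density_ratio * policy_ratio
    return np.sum(w * rewards) / np.sum(w)

rng = np.random.default_rng(5)
n = 200000
x = rng.normal(0.0, 1.0, n)              # historical covariates ~ N(0, 1)
a = rng.integers(0, 2, n)                # uniform logging policy
r = a * x + rng.normal(0.0, 0.1, n)      # reward of action 1 equals x
dr = np.exp(x - 0.5)                     # density ratio N(1,1) / N(0,1)
pr = np.where(a == 1, 2.0, 0.0)          # new policy: always play a = 1
print(round(ipw_value(r, dr, pr), 2))    # true value E[x], x ~ N(1,1), is 1.0
```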
By: | Zanetti Chini, Emilio |
Abstract: | We introduce a new time series model for public consumption expenditure, tax revenues, and real income that is capable of incorporating oscillations characterized by asymmetric phase and duration (or dynamic asymmetry). A specific-to-general econometric strategy is implemented in order to exclude the null hypotheses that these variables are linear or symmetric and, consequently, to ensure that they can be parsimoniously modelled. The U.S. postwar data suggest that dynamic asymmetry, whether in cycle or in trend, is effectively a reasonable hypothesis for government expenditure and tax revenue, but also that a simple vector model unifying the (different) nonlinearities of each single series is unfeasible. Such an “Occam's razor” failure hinders econometricians in building impulse responses for the calculation of fiscal multipliers and is here circumvented via empirical indexes. |
Keywords: | Nonlinearities, Spending, Modelling, Multiplier, Testing, Selection. |
JEL: | C1 C12 C22 E32 E4 E42 |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:98499&r=all |
By: | Lajos Horv\'ath (Department of Mathematics, University of Utah); Zhenya Liu (School of Finance, Renmin University of China; CERGAM, Aix--Marseille University); Shanglin Lu (School of Finance, Renmin University of China) |
Abstract: | We propose a sequential monitoring scheme to find structural breaks in real estate markets. The changes in the real estate prices are modeled by a combination of linear and autoregressive terms. The monitoring scheme is based on a detector and a suitably chosen boundary function. If the detector crosses the boundary function, a structural break is detected. We provide the asymptotics for the procedure under the stability null hypothesis and the stopping time under the change point alternative. Monte Carlo simulation is used to show the size and the power of our method under several conditions. We study the real estate markets in Boston, Los Angeles and at the national U.S. level. We find structural breaks in the markets, and we segment the data into stationary segments. It is observed that the autoregressive parameter is increasing but stays below 1. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.04101&r=all |
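The detector-versus-boundary scheme can be sketched with a CUSUM of standardized residuals monitored against a growing boundary. The boundary function and constant below are placeholders, not the paper's asymptotically calibrated choices, and the plain mean model stands in for the paper's linear-plus-autoregressive specification.

```python
import numpy as np

def monitor(training, stream, c=6.0):
    """Sequential monitoring sketch: stop when the CUSUM detector of
    standardized residuals crosses a linear boundary function."""
    mu, sd = training.mean(), training.std()
    m = len(training)
    cusum = 0.0
    for t, y in enumerate(stream, start=1):
        cusum += (y - mu) / sd
        boundary = c * np.sqrt(m) * (1.0 + t / m)   # placeholder boundary function
        if abs(cusum) > boundary:
            return t               # detection (stopping) time
    return None                    # no structural break detected

rng = np.random.default_rng(6)
train = rng.normal(0.0, 1.0, 200)                  # stable training sample
stable = rng.normal(0.0, 1.0, 300)                 # no break
broken = np.concatenate([rng.normal(0.0, 1.0, 100),
                         rng.normal(2.0, 1.0, 200)])   # mean shift at t = 100
print(monitor(train, stable), monitor(train, broken))
```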
By: | Atkinson, Anthony B. |
Abstract: | The Box-Cox power transformation family for non-negative responses in linear models has a long and interesting history in both statistical practice and theory, which we summarize. The relationship between generalized linear models and log-transformed data is illustrated. Extensions investigated include the transform-both-sides model and the Yeo-Johnson transformation for observations that can be positive or negative. The paper also describes an extended Yeo-Johnson transformation that allows positive and negative responses to have different power transformations. Analyses of data show this to be necessary. Robustness enters through the fan plot, for which the forward search provides an ordering of the data. Plausible transformations are checked with an extended fan plot. These procedures are used to compare parametric power transformations with nonparametric transformations produced by smoothing. |
Keywords: | ACE; AVAS; constructed variable; extended Yeo-Johnson transformation; forward search; linked plots; robust methods |
JEL: | C1 |
Date: | 2020–02–07 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:103537&r=all |
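The standard Yeo-Johnson transformation (the starting point for the paper's extension) is available off the shelf and, unlike Box-Cox, accepts responses of either sign. A quick illustration on right-skewed data containing negative values, where Box-Cox would not be applicable; the simulated data are an assumption.

```python
import numpy as np
from scipy.stats import yeojohnson

rng = np.random.default_rng(7)
y = rng.lognormal(0.0, 1.0, 500) - 0.5        # right-skewed, some observations < 0
yt, lam = yeojohnson(y)                       # transformed data and MLE of lambda

# Sample skewness, to check the transformation symmetrizes the data
skew = lambda v: np.mean((v - v.mean()) ** 3) / v.std() ** 3
print(round(skew(y), 2), round(skew(yt), 2), round(lam, 2))
```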
By: | Daniel Buncic |
Abstract: | Holston, Laubach and Williams' (2017) estimates of the natural rate of interest are driven by the downward trending behaviour of `other factor' $z_{t}$. I show that their implementation of Stock and Watson's (1998) Median Unbiased Estimation (MUE) to determine the size of $\lambda_{z}$ is unsound. It cannot recover the ratio of interest $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ from MUE required for the estimation of the full structural model. This failure is due to their Stage 2 model being incorrectly specified. More importantly, the MUE procedure that they implement spuriously amplifies the estimate of $\lambda _{z}$. Using a simulation experiment, I show that their MUE procedure generates excessively large estimates of $\lambda _{z}$ when applied to data simulated from a model where the true $\lambda _{z}$ is equal to zero. Correcting their Stage 2 MUE procedure leads to a substantially smaller estimate of $\lambda _{z}$, and a more subdued downward trending influence of `other factor' $z_{t}$ on the natural rate. This correction is quantitatively important. With everything else remaining the same in the model, the natural rate of interest is estimated to be 1.5% at the end of 2019:Q2; that is, three times the 0.5% estimate obtained from Holston et al.'s (2017) original Stage 2 MUE implementation. I also discuss various other issues that arise in their model of the natural rate that make it unsuitable for policy analysis. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.11583&r=all |
By: | Oksana Bashchenko; Alexis Marchal |
Abstract: | We develop a methodology for detecting asset bubbles using a neural network. We rely on the theory of local martingales in continuous time and use a deep network to estimate the diffusion coefficient of the price process more accurately than the current estimator, obtaining improved detection of bubbles. We show the outperformance of our algorithm over the existing statistical method in a laboratory created with simulated data. We then apply the network classification to real data and build a zero net exposure trading strategy that exploits the risky arbitrage emanating from the presence of bubbles in the US equity market from 2006 to 2008. The profitability of the strategy provides an estimate of the economic magnitude of bubbles as well as support for the theoretical assumptions relied upon. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.06405&r=all |