Econometrics
http://lists.repec.org/mailman/listinfo/nep-ecm
Econometrics
2017-09-17
Kernel-based inference in time-varying coefficient models with multiple integrated regressors
http://d.repec.org/n?u=RePEc:msh:ebswps:2017-11&r=ecm
This paper studies nonlinear cointegrating models with time-varying coefficients and multiple nonstationary regressors using classic kernel smoothing methods to estimate the coefficient functions. Extending earlier work on nonstationary kernel regression to take account of practical features of the data, we allow the regressors to be cointegrated and to embody a mixture of stochastic and deterministic trends, complications which result in asymptotic degeneracy of the kernel-weighted signal matrix. To address these complications, new local and global rotation techniques are introduced to transform the covariate space to accommodate multiple scenarios of induced degeneracy. Under certain regularity conditions we derive asymptotic results that differ substantially from existing kernel regression asymptotics, leading to new limit theory under multiple convergence rates. For the practically important case of endogenous nonstationary regressors we propose a fully modified kernel estimator whose limit distribution theory corresponds to the prototypical pure (i.e., exogenous covariate) cointegration case, thereby facilitating inference using a generalized Wald-type test statistic. These results substantially generalize econometric estimation and testing techniques in the cointegration literature to accommodate time variation and complications of co-moving regressors. Finally, an empirical illustration to aggregate US data on consumption, income, and interest rates is provided.
Degui Li
Peter C. B. Phillips
Jiti Gao
cointegration, FM-kernel estimation, generalized Wald test, global rotation, kernel degeneracy, local rotation, super-consistency, time-varying coefficients.
2017
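The kernel-smoothing step described in the abstract above can be illustrated with a minimal local-constant (Nadaraya-Watson style) sketch on simulated data. This is only a toy version under simplifying assumptions: it ignores the paper's rotation techniques and fully modified corrections, and the data-generating process, bandwidth, and function names are illustrative.

```python
import numpy as np

def tv_kernel_coef(y, x, h):
    """Local-constant kernel estimate of beta(t/n) in y_t = beta(t/n) * x_t + u_t."""
    n = len(y)
    taus = np.arange(1, n + 1) / n
    beta_hat = np.empty(n)
    for i, tau in enumerate(taus):
        w = np.exp(-0.5 * ((taus - tau) / h) ** 2)  # Gaussian kernel weights in rescaled time
        beta_hat[i] = np.sum(w * x * y) / np.sum(w * x * x)
    return beta_hat

rng = np.random.default_rng(0)
n = 500
x = np.cumsum(rng.standard_normal(n))                          # I(1) regressor
beta = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(1, n + 1) / n)  # smooth time-varying coefficient
y = beta * x + rng.standard_normal(n)
beta_hat = tv_kernel_coef(y, x, h=0.05)
```

The nonstationary regressor makes the kernel-weighted signal very strong, which is the intuition behind the super-consistency rates discussed in the abstract.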
Inference on a Semiparametric Model with Global Power Law and Local Nonparametric Trends
http://d.repec.org/n?u=RePEc:msh:ebswps:2017-10&r=ecm
This paper studies a model with both a parametric global trend and a nonparametric local trend. This model may be of interest in a number of applications in economics, finance, ecology, and geology. The model nests the parametric global trend model considered in Phillips (2007) and Robinson (2012), and the nonparametric local trend model. We first propose two hypothesis tests to detect whether either of the special cases is appropriate. For the case where both null hypotheses are rejected, we propose an estimation method to capture both aspects of the time trend. We establish consistency and some distribution theory for large samples. Moreover, we examine the proposed hypothesis tests and estimation methods through both simulated and real data examples. Finally, we discuss some potential extensions and issues when modelling time effects.
Jiti Gao
Oliver Linton
Bin Peng
global mean sea level, nonparametric kernel estimation, nonstationarity.
2017
Which panel data estimator should I use? A corrigendum and extension
http://d.repec.org/n?u=RePEc:zbw:ifwedp:201758&r=ecm
This study uses Monte Carlo experiments to produce new evidence on the performance of a wide range of panel data estimators. It focuses on estimators that are readily available in statistical software packages such as Stata and EViews, and for which the number of cross-sectional units (N) and time periods (T) are small to moderate in size. The goal is to develop practical guidelines that will enable researchers to select the best estimator for a given type of data. It extends a previous study on the subject (Reed and Ye, Which panel data estimator should I use? Applied Economics, 2011), and modifies their recommendations. The new recommendations provide a (virtually) complete decision tree: When it comes to choosing an estimator for efficiency, it uses the size of the panel dataset (N and T) to guide the researcher to the best estimator. When it comes to choosing an estimator for hypothesis testing, it identifies one estimator as superior across all the data scenarios included in the study. An unusual finding is that researchers should use different estimators for estimating coefficients and testing hypotheses. The authors present evidence that bootstrapping allows one to use the same estimator for both.
Moundigbaye, Mantobaye
Rea, William S.
Reed, W. Robert
Panel data estimators, Monte Carlo simulation, PCSE, Parks model
2017
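A Monte Carlo comparison of panel estimators, as in the study above, follows a standard recipe: simulate many panels from a known design and compare estimators on bias or test size. The toy design below (pooled OLS versus the within estimator when individual effects are correlated with the regressor) is purely illustrative and far simpler than the study's menu of estimators.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, reps, beta = 50, 10, 200, 1.0
ols_est, fe_est = [], []
for _ in range(reps):
    alpha = rng.standard_normal((N, 1))        # individual effects
    x = alpha + rng.standard_normal((N, T))    # regressor correlated with effects
    y = alpha + beta * x + rng.standard_normal((N, T))
    # pooled OLS on demeaned data
    xl, yl = x.ravel(), y.ravel()
    xd, yd = xl - xl.mean(), yl - yl.mean()
    ols_est.append(np.sum(xd * yd) / np.sum(xd * xd))
    # within (fixed effects) estimator: demean within each unit
    xw = x - x.mean(axis=1, keepdims=True)
    yw = y - y.mean(axis=1, keepdims=True)
    fe_est.append(np.sum(xw * yw) / np.sum(xw * xw))
ols_bias = np.mean(ols_est) - beta
fe_bias = np.mean(fe_est) - beta
```

In this design pooled OLS is badly biased (the theoretical bias is 0.5) while the within estimator is nearly unbiased, the kind of contrast such experiments are built to quantify.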
Multi-step non- and semi-parametric predictive regressions for short and long horizon stock return prediction
http://d.repec.org/n?u=RePEc:msh:ebswps:2017-13&r=ecm
In this paper, we propose three new predictive models: the multi-step nonparametric predictive regression model and the multi-step additive predictive regression model, in which the predictive variables are locally stationary time series; and the multi-step time-varying coefficient predictive regression model, in which the predictive variables are stochastically nonstationary. We also establish the estimation theory and asymptotic properties for these models in both the short horizon and the long horizon case. To evaluate the effectiveness of these models, we investigate their capability of stock return prediction. The empirical results show that all of these models can substantially outperform the traditional linear predictive regression model in terms of both in-sample and out-of-sample performance. In addition, we find that these models can always beat the historical mean model in terms of in-sample fitting, and in some cases in terms of out-of-sample forecasting.
Tingting Cheng
Jiti Gao
Oliver Linton
Kernel estimator, locally stationary process, series estimator, stock return prediction.
2017
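The comparison against the historical mean model mentioned above is usually summarized by an out-of-sample R-squared (Campbell-Thompson style): positive values mean the predictive model beats the historical-mean benchmark in MSE. The sketch below is illustrative only; the data-generating process and the use of the true predictive coefficient are simplifying assumptions, not the paper's estimators.

```python
import numpy as np

def oos_r2(y, yhat_model, yhat_bench):
    """Out-of-sample R^2: positive values mean the model beats the benchmark in MSE."""
    mse_m = np.mean((y - yhat_model) ** 2)
    mse_b = np.mean((y - yhat_bench) ** 2)
    return 1.0 - mse_m / mse_b

rng = np.random.default_rng(1)
n = 600
z = rng.standard_normal(n)              # observable predictor
r = 0.5 * z + rng.standard_normal(n)    # "return" partly driven by the predictor
idx = np.arange(200, n)                 # evaluation window
model_fc = 0.5 * z[idx]                 # forecast using the (known) predictive relation
bench_fc = np.array([r[:t].mean() for t in idx])  # expanding historical mean
r2 = oos_r2(r[idx], model_fc, bench_fc)
```

With 20% of the return variance predictable, the out-of-sample R-squared lands near 0.2 here.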
Point Optimal Testing with Roots That Are Functionally Local to Unity
http://d.repec.org/n?u=RePEc:cwl:cwldpp:3007&r=ecm
Limit theory for regressions involving local to unit roots (LURs) is now used extensively in time series econometric work, establishing power properties for unit root and cointegration tests, assisting the construction of uniform confidence intervals for autoregressive coefficients, and enabling the development of methods robust to departures from unit roots. The present paper shows how to generalize LUR asymptotics to cases where the localized departure from unity is a time-varying function rather than a constant. Such a functional local unit root (FLUR) model has much greater generality and encompasses many cases of additional interest, including structural break formulations that admit subperiods of unit root, locally stationary and locally explosive behavior within a given sample. Point optimal FLUR tests are constructed in the paper to accommodate such cases. It is shown that against FLUR alternatives, conventional constant point optimal tests can have extremely low power, particularly when the departure from unity occurs early in the sample period. Simulation results are reported and some implications for empirical practice are examined.
Anna Bykhovskaya
Peter C. B. Phillips
Functional local unit root, Local to unity, Uniform confidence interval, Unit root model
2017-09
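A functional local unit root process is straightforward to simulate, which helps build intuition for the alternatives discussed above. The localizing functions below are illustrative choices, not those studied in the paper.

```python
import numpy as np

def simulate_flur(n, c_func, rng):
    """Simulate y_t = (1 + c(t/n)/n) y_{t-1} + e_t with a functional localizing coefficient c(.)."""
    y = np.zeros(n + 1)
    e = rng.standard_normal(n)
    for t in range(1, n + 1):
        rho_t = 1.0 + c_func(t / n) / n   # autoregressive root local to unity
        y[t] = rho_t * y[t - 1] + e[t - 1]
    return y[1:]

rng = np.random.default_rng(2)
n = 400
y_ur = simulate_flur(n, lambda u: 0.0, rng)                    # pure unit root: c(u) = 0
y_stat = simulate_flur(n, lambda u: -20.0 * (u > 0.5), rng)    # stationary direction in the second half
```

The second path illustrates a structural-break formulation: a unit root subperiod followed by a locally stationary subperiod, exactly the kind of departure a constant-c point optimal test is not designed for.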
A Bootstrap Approach for Generalized Autocontour Testing: Implications for VIX Forecast Densities
http://d.repec.org/n?u=RePEc:ucr:wpaper:201709&r=ecm
We propose an extension of the Generalized Autocontour (G-ACR) tests for dynamic specification of in-sample conditional densities and for evaluation of out-of-sample forecast densities. The new tests are based on probability integral transforms (PITs) computed from bootstrap conditional densities that incorporate parameter uncertainty. Then, the parametric specification of the conditional moments can be tested without relying on any parametric error distribution, yet exploiting distributional properties of the variable of interest. We show that the finite sample distributions of the bootstrapped G-ACR (BG-ACR) tests are well approximated using standard asymptotic distributions. Furthermore, the proposed tests are easy to implement and are accompanied by graphical tools that provide information about the potential sources of misspecification. We apply the BG-ACR tests to the Heterogeneous Autoregressive (HAR) model and the Multiplicative Error Model (MEM) of the U.S. volatility index VIX. We find strong evidence against the parametric assumptions of the conditional densities, i.e., normality in the HAR model and semi-nonparametric Gamma (GSNP) in the MEM. In both cases, the true conditional density seems to be more skewed to the right and more peaked than either normal or GSNP densities, with location, variance and skewness changing over time. The preferred specification is the heteroscedastic HAR model with bootstrap conditional densities of the log-VIX.
Gloria Gonzalez-Rivera
Joao Henrique Mazzeu
Esther Ruiz
Helena Veiga
Distribution Uncertainty; Model Evaluation; Parameter Uncertainty; PIT; VIX; HAR model; Multiplicative Error Model
2017-07
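The PIT logic underlying the tests above is simple to demonstrate: under a correctly specified density the PITs are i.i.d. uniform, and misspecification shows up as non-uniformity. This toy check uses an unconditional normal density and a plain Kolmogorov-Smirnov test rather than the paper's bootstrap autocontour machinery, so it is a sketch of the idea only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 1000
y = rng.standard_normal(n)                  # data truly N(0, 1)
pit_correct = stats.norm.cdf(y)             # PITs under the true density
pit_wrong = stats.norm.cdf(y, scale=2.0)    # PITs under a misspecified (too-dispersed) density

ks_correct = stats.kstest(pit_correct, 'uniform').pvalue
ks_wrong = stats.kstest(pit_wrong, 'uniform').pvalue
```

The misspecified density produces PITs piled up around 0.5, which the uniformity test rejects overwhelmingly.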
Testing if the market microstructure noise is a function of the limit order book
http://d.repec.org/n?u=RePEc:arx:papers:1709.02502&r=ecm
In this paper, we build tests for the presence of error in a model where the market microstructure noise is a known parametric function of the limit order book. The tests compare two novel and distinct quasi-maximum likelihood estimators of volatility, where the related model includes an additive error in the market microstructure noise or not. The limit theory is investigated in a general nonparametric framework. When there is no error in the model, we provide a consistent estimator of the efficient price based on maximum likelihood estimation of the parameter. Furthermore, we show that realized volatility remains efficient when performed on the estimated price rather than on the efficient price.
Simon Clinet
Yoann Potiron
2017-09
A time varying parameter structural model of the UK economy
http://d.repec.org/n?u=RePEc:boe:boeewp:0677&r=ecm
We estimate a time varying parameter structural macroeconomic model of the UK economy, using a Bayesian local likelihood methodology. This enables us to estimate a large open-economy DSGE model over a sample that comprises several different regimes and an incomplete set of data. Our estimation identifies a gradual shift to a monetary policy regime characterised by a marked increase in the responsiveness of monetary policy to inflation alongside a decrease in the level of trend inflation down to the 2% target level. The time varying model also performs remarkably well in forecasting and delivers statistically significant accuracy improvements for most variables and horizons in both point and density forecast performance compared to the standard fixed parameter version.
Petrova, Katerina
Kapetanios, George
Masolo, Riccardo
Waldron, Matthew
DSGE models; Bayesian methods; local likelihood; time varying parameters; forecasting
2017-09-08
Clustering Space-Time Series: A Flexible STAR Approach
http://d.repec.org/n?u=RePEc:cns:cnscwp:201707&r=ecm
The STAR model is widely used to represent the dynamics of a certain variable recorded at several locations at the same time. Its advantages are often discussed in terms of parsimony with respect to space-time VAR structures because it considers a single coefficient for each time and spatial lag. This hypothesis can be very strong; we add a certain degree of flexibility to the STAR model, providing the possibility for coefficients to vary in groups of locations. The new class of models is compared to the classical STAR and the space-time VAR by simulations and an application.
E. Otranto
M. Mucciardi
clustering; forecasting; space-time models; spatial weight matrix
2017
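The single-coefficient STAR structure described in the abstract above can be sketched by simulation: one coefficient on the own lag and one on the spatially lagged value shared by all locations. The weight matrix and parameter values below are illustrative assumptions.

```python
import numpy as np

def simulate_star(T, W, phi, psi, rng):
    """STAR(1,1): y_t = phi * y_{t-1} + psi * W y_{t-1} + e_t,
    with W a row-normalized spatial weight matrix shared across locations."""
    N = W.shape[0]
    y = np.zeros((T + 1, N))
    for t in range(1, T + 1):
        y[t] = phi * y[t - 1] + psi * (W @ y[t - 1]) + rng.standard_normal(N)
    return y[1:]

# four locations on a ring, row-normalized contiguity weights (illustrative)
W = np.array([[0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.5, 0.0, 0.5, 0.0]])
rng = np.random.default_rng(4)
y = simulate_star(200, W, phi=0.4, psi=0.3, rng=rng)
```

The flexible version proposed in the paper would replace the scalars phi and psi with group-specific coefficients, at the cost of some parsimony.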
Forecasting Market Risk of Portfolios: Copula-Markov Switching Multifractal Approach
http://d.repec.org/n?u=RePEc:cqe:wpaper:6617&r=ecm
This paper proposes a new methodology for modeling and forecasting market risks of portfolios. It is based on a combination of copula functions and Markov switching multifractal (MSM) processes. We assess the performance of the copula-MSM model by computing the value at risk of a portfolio composed of the NASDAQ composite index and the S&P 500. Using the likelihood ratio (LR) test by Christoffersen (1998), the GMM duration-based test by Candelon et al. (2011) and the superior predictive ability (SPA) test by Hansen (2005), we evaluate the predictive ability of the copula-MSM model and compare it to other common approaches such as historical simulation, variance-covariance, RiskMetrics, copula-GARCH and constant conditional correlation GARCH (CCC-GARCH) models. We find that the copula-MSM model is more robust, provides the best fit and outperforms the other models in terms of forecasting accuracy and VaR prediction.
Mawuli Segnon
Mark Trede
Copula, Multifractal processes, GARCH, VaR, Backtesting, SPA
2017-09
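VaR backtests of the kind cited above compare the observed violation rate with the nominal level. A minimal unconditional-coverage likelihood-ratio check in the spirit of Kupiec's test — simpler than the Christoffersen and GMM duration-based tests the paper actually uses — looks like this; the violation sequences are constructed by hand for illustration.

```python
import numpy as np
from scipy import stats

def lr_unconditional_coverage(violations, alpha):
    """LR test that the VaR violation probability equals the nominal level alpha."""
    n = len(violations)
    x = int(np.sum(violations))
    pi_hat = x / n
    if x == 0 or x == n:
        return float('nan'), float('nan')
    lr = -2.0 * (x * np.log(alpha) + (n - x) * np.log(1 - alpha)
                 - x * np.log(pi_hat) - (n - x) * np.log(1 - pi_hat))
    return lr, stats.chi2.sf(lr, df=1)

viol_ok = np.zeros(1000); viol_ok[:10] = 1     # 1% violations at a 1% VaR: well calibrated
viol_bad = np.zeros(1000); viol_bad[:50] = 1   # 5% violations at a 1% VaR: miscalibrated
lr_ok, p_ok = lr_unconditional_coverage(viol_ok, alpha=0.01)
lr_bad, p_bad = lr_unconditional_coverage(viol_bad, alpha=0.01)
```

The calibrated series yields an LR statistic of zero; the miscalibrated one is rejected decisively.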
Testing for State-Dependent Predictive Ability
http://d.repec.org/n?u=RePEc:ris:albaec:2017_009&r=ecm
This paper proposes a new test for comparing the out-of-sample forecasting performance of two competing models for situations in which the predictive content may be state-dependent (for example, expansion and recession states or low and high volatility states). To apply this test the econometrician is not required to observe when the underlying states shift. The test is simple to implement and accommodates several different cases of interest. An out-of-sample forecasting exercise for US output growth using real-time data illustrates the improvement of this test over previous approaches to forecast comparison.
Fossati, Sebastian
Forecast Evaluation; Testing; Regime Switching; Structural Change
2017-09-06
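The baseline that the paper above improves on is the unconditional comparison of forecast losses. A minimal Diebold-Mariano style statistic (squared-error loss, naive variance, no HAC correction, no state dependence) can be sketched as follows; the simulated forecast errors are illustrative.

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2):
    """Diebold-Mariano type test of equal MSE; a positive statistic means
    model 1 has larger losses. Uses the naive variance of the loss differential."""
    d = e1 ** 2 - e2 ** 2
    stat = np.sqrt(len(d)) * d.mean() / d.std(ddof=1)
    return stat, 2.0 * stats.norm.sf(abs(stat))

rng = np.random.default_rng(8)
e1 = 2.0 * rng.standard_normal(500)   # forecast errors of a worse model
e2 = rng.standard_normal(500)         # forecast errors of a better model
stat, pval = dm_test(e1, e2)
```

A state-dependent version, as in the paper, would instead ask whether the sign of the loss differential flips across unobserved regimes — something this unconditional statistic averages away.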
Construction and visualization of optimal confidence sets for frequentist distributional forecasts
http://d.repec.org/n?u=RePEc:msh:ebswps:2017-9&r=ecm
The focus of this paper is on the quantification of sampling variation in frequentist probabilistic forecasts. We propose a method of constructing confidence sets that respects the functional nature of the forecast distribution, and use animated graphics to visualize the impact of parameter uncertainty on the location, dispersion and shape of the distribution. The confidence sets are derived via the inversion of a Wald test and are asymptotically uniformly most accurate and, hence, optimal in this sense. A wide range of linear and non-linear time series models - encompassing long memory, state space and mixture specifications - is used to demonstrate the procedure, based on artificially generated data. An empirical example in which distributional forecasts of both financial returns and their stochastic volatility are produced is then used to illustrate the practical importance of accommodating sampling variation in the manner proposed.
David Harris
Gael M. Martin
Indeewara Perera
Don S. Poskitt
probabilistic forecasts, asymptotically uniformly most accurate confidence regions, time series models, animated graphics, realized volatility, heterogeneous autoregressive model.
2017
Multivariate Density Modeling for Retirement Finance
http://d.repec.org/n?u=RePEc:arx:papers:1709.04070&r=ecm
Prior to the financial crisis, mortgage securitization models increased in sophistication, as did products built to insure against losses. Layers of complexity formed upon a foundation that could not support them, and as the foundation crumbled the housing market followed. That foundation was the Gaussian copula, which failed to correctly model failure-time correlations of derivative securities in duress. In retirement, surveys suggest the greatest fear is running out of money, and as retirement decumulation models become increasingly sophisticated, large financial firms and robo-advisors may guarantee their success. Similar to an investment bank failure, the event of retirement ruin is driven by outliers and correlations in times of stress. It would be desirable to have a foundation able to support the increased complexity before it forms; however, the industry currently relies upon similar Gaussian (or lognormal) dependence structures. We propose a multivariate density model having fixed marginals that is tractable and fits data which are skewed, heavy-tailed, multimodal, i.e., of arbitrary complexity, allowing for a rich correlation structure. It is also ideal for stress-testing a retirement plan by fitting historical data seeded with black swan events. A preliminary section reviews all concepts before they are used, and fully documented C/C++ source code is attached, making the research self-contained. Lastly, we take the opportunity to challenge existing retirement finance dogma and also review some recent criticisms of retirement ruin probabilities and their suggested replacement metrics.
Christopher J. Rook
2017-09
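The Gaussian copula the abstract above criticizes — and the general device of fixing marginals while imposing a separate dependence structure — can be sketched in a few lines. The correlation matrix and the heavy-tailed/skewed marginals below are arbitrary illustrative choices, not the paper's proposed model.

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(n, corr, marginal_ppfs, rng):
    """Draw from a Gaussian copula with the given correlation matrix and
    arbitrary marginal quantile functions (one ppf per dimension)."""
    L = np.linalg.cholesky(corr)
    z = rng.standard_normal((n, corr.shape[0])) @ L.T  # correlated normals
    u = stats.norm.cdf(z)                              # uniform marginals, Gaussian dependence
    return np.column_stack([ppf(u[:, j]) for j, ppf in enumerate(marginal_ppfs)])

rng = np.random.default_rng(5)
corr = np.array([[1.0, 0.7], [0.7, 1.0]])
ppfs = [lambda u: stats.t.ppf(u, df=3),          # heavy-tailed marginal
        lambda u: stats.lognorm.ppf(u, s=0.5)]   # skewed, positive marginal
x = gaussian_copula_sample(20000, corr, ppfs, rng)
```

The construction preserves the chosen marginals exactly while the dependence stays Gaussian — precisely the tail-independence weakness the abstract points to under stress.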
Boundary Limit Theory for Functional Local to Unity Regression
http://d.repec.org/n?u=RePEc:cwl:cwldpp:3008&r=ecm
This paper studies functional local unit root models (FLURs) in which the autoregressive coefficient may vary with time in the vicinity of unity. We extend conventional local to unity (LUR) models by allowing the localizing coefficient to be a function which characterizes departures from unity that may occur within the sample in both stationary and explosive directions. Such models enhance the flexibility of the LUR framework by including break point, trending, and multi-directional departures from unit autoregressive coefficients. We study the behavior of this model as the localizing function diverges, thereby determining the impact on the time series and on inference from the time series as the limits of the domain of definition of the autoregressive coefficient are approached. This boundary limit theory enables us to characterize the asymptotic form of power functions for associated unit root tests against functional alternatives. Both sequential and simultaneous limits (as the sample size and localizing coefficient diverge) are developed. We find that asymptotics for the process, the autoregressive estimate, and its $t$ statistic have boundary limit behavior that differs from standard limit theory in both explosive and stationary cases. Some novel features of the boundary limit theory are the presence of a segmented limit process for the time series in the stationary direction and a degenerate process in the explosive direction. These features have material implications for autoregressive estimation and inference which are examined in the paper.
Anna Bykhovskaya
Peter C. B. Phillips
Boundary asymptotics, Functional local unit root, Local to unity, Sequential limits, Simultaneous limits, Unit root model
2017-09
Reliable estimation of random coefficient logit demand models
http://d.repec.org/n?u=RePEc:zbw:dicedp:267&r=ecm
The differentiated demand model of Berry, Levinsohn and Pakes (1995) is widely used in empirical economic research. Previous literature has demonstrated numerical instabilities of the corresponding GMM estimator that give a wide range of parameter estimates and economic implications depending on technical details such as the choice of optimization algorithm, starting values, and convergence criteria. We show that these instabilities are mainly driven by numerical approximation errors of the moment function which is not analytically available. With accurate approximation, the estimator is well-behaved. We also discuss approaches to mitigate the computational burden of accurate approximation and provide code for download.
Brunner, Daniel
Heiss, Florian
Romahn, André
Weiser, Constantin
2017
Generalised Wald type Test of nonlinear restrictions
http://d.repec.org/n?u=RePEc:crb:wpaper:2017-13&r=ecm
This paper proposes generalised Wald-type tests of hypotheses involving nonlinear restrictions. We circumvent the problem of singularity of the covariance matrix associated with the usual Wald test by proposing a generalised inverse procedure, and an alternative simple procedure which can be approximated by a suitable chi-square distribution. A new threshold value is derived to estimate the rank of the covariance matrix.
Zaka RATSIMALAHELO
nonlinear restrictions, deficient rank, singular covariance matrix, generalised Wald test.
2017-09
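The generalised-inverse construction described above can be sketched as follows. An eigenvalue cutoff plays the role of the paper's rank-estimation threshold, though the specific tolerance here is an arbitrary illustrative choice rather than the derived threshold value.

```python
import numpy as np
from scipy import stats

def generalized_wald(r, V, tol=1e-8):
    """Wald statistic r' V^+ r using the Moore-Penrose inverse of a possibly
    singular covariance V; chi-square degrees of freedom = estimated rank."""
    w, U = np.linalg.eigh(V)
    keep = w > tol * w.max()                 # eigenvalue threshold estimates the rank
    rank = int(keep.sum())
    w_inv = np.array([1.0 / wi if k else 0.0 for wi, k in zip(w, keep)])
    stat = float(r @ (U @ np.diag(w_inv) @ U.T) @ r)
    return stat, rank, stats.chi2.sf(stat, df=rank)

# rank-2 covariance in 3 dimensions, restriction vector in its column space
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
V = A @ A.T
r = A @ np.array([1.0, 1.0])
stat, rank, pval = generalized_wald(r, V)
```

Because r lies in the column space of V, the statistic reduces exactly to 2 with 2 degrees of freedom, so the ordinary chi-square reference distribution applies despite the singular covariance.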
The fade away of an initial bias in longitudinal surveys
http://d.repec.org/n?u=RePEc:zbw:fubsbe:201725&r=ecm
We propose a new view of initial nonresponse bias in longitudinal surveys. Under certain conditions, an initial bias may "fade-away" over consecutive waves. This effect is discussed in a Markovian framework. A general contraction theorem for time inhomogeneous Markov chains is presented. The result is that two chains with different starting distributions will eventually converge to equal state distributions. Two conditions are required: transition probabilities must be equal for respondents and nonrespondents, and attrition in later panel waves must not depend on the state of the individuals. The theory is applied to a German survey on social benefit recipience. Minor deviations from assumptions are shown to have only a negligible impact on the strength of the fade-away effect. Results from other European surveys indicate that the fade-away effect is present in them, as well. Extensions are pointed out.
Alho, Juha
Müller, Gerrit
Pflieger, Verena
Rendtel, Ulrich
panel surveys, panel attrition, nonresponse bias, Markov chains, steady state distribution
2017
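The contraction argument above is easy to illustrate numerically: start respondents and nonrespondents at different state distributions and apply the same transition matrix each wave (a constant matrix here, as a simple special case of the time-inhomogeneous setting). The numbers are illustrative.

```python
import numpy as np

p_resp = np.array([0.5, 0.5])     # initial state distribution among respondents
p_nonresp = np.array([0.9, 0.1])  # initial distribution among nonrespondents (biased)
P = np.array([[0.8, 0.2],         # transition matrix, equal for both groups
              [0.3, 0.7]])
for wave in range(20):
    p_resp = p_resp @ P
    p_nonresp = p_nonresp @ P
gap = np.abs(p_resp - p_nonresp).max()  # remaining initial-bias "gap" after 20 waves
```

For this matrix the gap shrinks by a factor of 0.5 each wave (the second eigenvalue of P), so after 20 waves the initial bias has faded to below 1e-5 and both groups sit at the stationary distribution (0.6, 0.4).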
Stationarity and Invertibility of a Dynamic Correlation Matrix
http://d.repec.org/n?u=RePEc:tin:wpaper:20170082&r=ecm
One of the most widely-used multivariate conditional volatility models is the dynamic conditional correlation (or DCC) specification. However, the underlying stochastic process to derive DCC has not yet been established, which has made problematic the derivation of asymptotic properties of the Quasi-Maximum Likelihood Estimators (QMLE). To date, the statistical properties of the QMLE of the DCC parameters have purportedly been derived under highly restrictive and unverifiable regularity conditions. The paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions of the DCC model. The derivation of DCC from a vector random coefficient moving average process raises three important issues, as follows: (i) demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) provides the motivation, which is presently missing, for standardization of the conditional covariance model to obtain the conditional correlation model; and (iii) shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions, especially stationarity and invertibility, should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters. Several new results are also derived for univariate models, including a novel conditional volatility model expressed in terms of standardized shocks rather than returns shocks, as well as the associated stationarity and invertibility conditions.
Michael McAleer
Dynamic conditional correlation; dynamic conditional covariance; vector random coefficient moving average; stationarity; invertibility; asymptotic properties.
2017-09-06
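The DCC recursion at the center of the discussion above — applied, as the paper argues it should be, to standardized shocks — can be sketched as follows. The parameter values and the i.i.d. shocks are illustrative; in practice the shocks come from univariate GARCH standardization and (a, b) are estimated.

```python
import numpy as np

def dcc_correlations(z, a, b):
    """DCC recursion on standardized shocks z (T x k):
    Q_t = (1 - a - b) * S + a * z_{t-1} z_{t-1}' + b * Q_{t-1},
    with R_t obtained by standardizing Q_t to unit diagonal."""
    T, k = z.shape
    S = np.cov(z, rowvar=False)     # unconditional covariance target
    Q = S.copy()
    R = np.empty((T, k, k))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)   # standardize Q_t into a correlation matrix
        Q = (1 - a - b) * S + a * np.outer(z[t], z[t]) + b * Q
    return R

rng = np.random.default_rng(7)
z = rng.standard_normal((500, 2))
R = dcc_correlations(z, a=0.05, b=0.9)
```

The standardization step is exactly the point the abstract flags: Q_t is a conditional covariance recursion, and only after rescaling does it deliver a correlation matrix.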