Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
2019-02-18
Resuscitating the co-fractional model of Granger (1986)
http://d.repec.org/n?u=RePEc:aah:create:2019-02&r=ore
We study the theoretical properties of the model for fractional cointegration proposed by Granger (1986), namely the FVECM_{d,b}. First, we show that the stability of any discrete-time stochastic system of the type Pi(L)Y_t = e_t can be assessed by means of the argument principle under mild regularity conditions on Pi(L), where L is the lag operator. Second, we prove that, under stability, the FVECM_{d,b} allows for a representation of the solution that demonstrates the fractional and co-fractional properties, and we find a closed-form expression for the impulse response functions. Third, we prove that the model is identified for any combination of number of lags and cointegration rank, while still being able to generate polynomial co-fractionality. In light of these properties, we show that the asymptotic properties of the maximum likelihood estimator align with those of the FCVAR_{d,b} model studied in Johansen and Nielsen (2012). Finally, an empirical illustration is provided.
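The argument-principle criterion mentioned in the abstract can be sketched numerically: for a characteristic function Pi(z) with no zeros on the unit circle, the winding number of Pi(z) as z traverses the circle equals the number of zeros inside it, so stability reduces to checking that this count is zero. A minimal illustration (the function name and the AR(1) example are ours, not from the paper):

```python
import numpy as np

def winding_number(f, n=4096):
    # Argument principle: for f analytic with no zeros on |z| = 1,
    # the winding number of f(z) as z traverses the unit circle
    # equals the number of zeros of f inside the unit disc.
    theta = np.linspace(0.0, 2.0 * np.pi, n + 1)
    vals = f(np.exp(1j * theta))
    ang = np.unwrap(np.angle(vals))
    return int(round((ang[-1] - ang[0]) / (2.0 * np.pi)))

# AR(1): Pi(z) = 1 - phi*z. Stability requires the root 1/phi to lie
# outside the unit circle, i.e. zero roots of Pi inside it.
print(winding_number(lambda z: 1.0 - 0.5 * z))  # stable: 0 zeros inside
print(winding_number(lambda z: 1.0 - 2.0 * z))  # unstable: 1 zero inside
```

The same check applies to a matrix system by passing z ↦ det Pi(z).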
Federico Carlini
Paolo Santucci de Magistris
Fractional cointegration, Granger representation theorem, Stability, Identification, Impulse Response Functions, Profile Maximum Likelihood
2019-01-02
Estimating Multiple Breaks in Nonstationary Autoregressive Models
http://d.repec.org/n?u=RePEc:pra:mprapa:92074&r=ore
Chong (1995) and Bai (1997) proposed a sample-splitting method to estimate a multiple-break model. However, their studies focused on stationary time series models, where the identification of the first break depends on the magnitude and the duration of the break, and a testing procedure is needed to assist the estimation of the remaining breaks in subsamples split by the break points found earlier. In this paper, we focus on nonstationary multiple-break autoregressive models. Unlike the stationary case, we show that the duration of a break does not affect whether it is identified first. Rather, identification depends on the stochastic order of magnitude of the signal strength of the break when the break magnitude is constant, and also on the square of the break magnitude when the break magnitude shrinks. Since the subsamples usually have different stochastic orders in nonstationary autoregressive models with breaks, one can therefore determine which break will be identified first. We apply this finding to the models proposed in Phillips and Yu (2011), Phillips et al. (2011) and Phillips et al. (2015a, 2015b). We provide an estimation procedure as well as the asymptotic theory for the model.
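The generic sample-splitting idea can be illustrated with a toy mean-shift model rather than the paper's autoregressive setting: estimate the dominant break by least squares over all candidate split points, then search each resulting subsample once more. The function names and the mean-shift simplification are ours:

```python
import numpy as np

def first_break(y, trim=5):
    # Estimate one break date by least squares: choose the split k
    # minimising the total SSR of a piecewise-constant fit.
    best_k, best_ssr = None, np.inf
    for k in range(trim, len(y) - trim):
        ssr = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if ssr < best_ssr:
            best_k, best_ssr = k, ssr
    return best_k

def split_and_estimate(y, trim=5):
    # Sample splitting: estimate the first (dominant) break, then
    # search each subsample once more (subsamples assumed long enough).
    k = first_break(y, trim)
    return sorted([k, first_break(y[:k], trim), k + first_break(y[k:], trim)])
```

In the paper, which break is found first is governed by the stochastic orders of the subsamples; in this noise-free toy example it is simply the split with the largest SSR reduction.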
Pang, Tianxiao
Du, Lingjie
Chong, Terence Tai Leung
Change point, Financial bubble, Least squares estimator, Mildly explosive, Mildly integrated.
2018-08-23
Integer-valued stochastic volatility
http://d.repec.org/n?u=RePEc:pra:mprapa:91962&r=ore
We propose a novel class of count time series models, the mixed Poisson integer-valued stochastic volatility models. The proposed specification, which can be considered an integer-valued analogue of the discrete-time stochastic volatility model, encompasses a wide range of conditional distributions of counts. We study its probabilistic structure and develop an easily adaptable Markov chain Monte Carlo algorithm, based on the Griddy-Gibbs approach, that can accommodate any conditional distribution in that class. We demonstrate this by considering the cases of the Poisson and negative binomial distributions. The methodology is applied to simulated and real data.
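The core Griddy-Gibbs step can be sketched generically: evaluate an unnormalised full conditional on a grid, normalise it into a CDF, and draw by inverse transform. This is a minimal illustration of the mechanism, not the paper's sampler; the function name and arguments are ours:

```python
import numpy as np

def griddy_gibbs_draw(log_density, grid, rng):
    # Griddy-Gibbs step: evaluate the (unnormalised) full conditional
    # on a grid, turn it into a normalised CDF, and draw by inverse
    # transform sampling.
    logp = log_density(grid)
    p = np.exp(logp - logp.max())  # stabilise before exponentiating
    cdf = np.cumsum(p)
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, rng.uniform())]
```

Within a full MCMC run, a step of this form would be applied in turn to each parameter or latent log-volatility whose conditional has no closed form.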
Aknouche, Abdelhakim
Dimitrakopoulos, Stefanos
Touche, Nassim
Griddy-Gibbs, Markov chain Monte Carlo, mixed Poisson parameter-driven models, stochastic volatility, Integer-valued GARCH.
2019-02-04
A Horse Race in High Dimensional Space
http://d.repec.org/n?u=RePEc:rtv:ceisrp:452&r=ore
In this paper, we study the predictive power of dense and sparse estimators in a high-dimensional space. We propose a new forecasting method, called Elastically Weighted Principal Components Analysis (EWPCA), that selects the variables with respect to the target variable, taking into account the collinearity among the data, using the Elastic Net soft thresholding. Then, we weight the selected predictors using the Elastic Net regression coefficients, and we finally apply principal component analysis to the new “elastically” weighted data matrix. We compare this method with common benchmarks and other methods for forecasting macroeconomic variables in a data-rich environment, divided into dense representations, such as Dynamic Factor Models and Ridge regressions, and sparse representations, such as LASSO regression. All these models are adapted to take into account the linear dependency of the macroeconomic time series. Moreover, to estimate the hyperparameters of these models, including the EWPCA, we propose a new procedure called “brute force”. This method allows us to treat all the hyperparameters of the model uniformly and to take the longitudinal feature of the time-series data into account. Our findings can be summarized as follows. First, the “brute force” method to estimate the hyperparameters is more stable and gives better forecasting performances, in terms of mean squared forecast error (MSFE), than the traditional criteria used in the literature to tune the hyperparameters. This result holds for all sample sizes and forecasting horizons. Second, our two-step forecasting procedure enhances the forecasts’ interpretability. Lastly, the EWPCA leads to better forecasting performances, in terms of MSFE, than the other sparse and dense methods or the naïve benchmark, at different forecast horizons and sample sizes.
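The three EWPCA steps (select, weight, extract components) can be sketched in a simplified form. Note the substitution: this sketch soft-thresholds marginal regression coefficients in an elastic-net style rather than fitting the full Elastic Net the paper uses; the function name and the `lam`/`alpha` parameters are ours:

```python
import numpy as np

def ewpca_factor(X, y, lam=0.1, alpha=0.5, k=1):
    # 1) Selection: marginal coefficients of y on each standardised
    #    predictor, shrunk by an elastic-net-style soft threshold.
    Xs = (X - X.mean(0)) / X.std(0)
    beta = Xs.T @ (y - y.mean()) / len(y)
    w = np.sign(beta) * np.maximum(np.abs(beta) - alpha * lam, 0.0)
    w /= 1.0 + (1.0 - alpha) * lam
    keep = w != 0.0
    # 2) Weighting: rescale the surviving predictors by their weights.
    Z = Xs[:, keep] * np.abs(w[keep])
    # 3) PCA on the weighted matrix via SVD; return k factor scores.
    U, S, _ = np.linalg.svd(Z - Z.mean(0), full_matrices=False)
    return U[:, :k] * S[:k]
```

The extracted factors would then feed a forecasting regression for the target variable, as in a standard factor-augmented setup.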
Paolo Andreini
Donato Ceci
Variable selection, High-dimensional time series, Dynamic factor models, Shrinkage methods, Cross-validation
2019-02-14
Testing for Changes in Forecasting Performance
http://d.repec.org/n?u=RePEc:bos:wpaper:wp2019-003&r=ore
We consider the issue of forecast failure (or breakdown) and propose methods to assess retrospectively whether a given forecasting model provides forecasts which show evidence of changes with respect to some loss function. We adapt the classical structural change tests to the forecast failure context. First, we recommend that all tests should be carried out with a fixed scheme to achieve the best power. This ensures a maximum difference between the fitted in- and out-of-sample means of the losses and avoids contamination issues under the rolling and recursive schemes. With a fixed scheme, Giacomini and Rossi’s (2009) (GR) test is simply a Wald test for a one-time change in the mean of the total (the in-sample plus out-of-sample) losses at a known break date, say m, the value that separates the in- and out-of-sample periods. Since such a test considers only the single known date m, we consider a variety of tests: maximizing the GR test over values of m within a pre-specified range; a Double sup-Wald (DSW) test, which, for each m, performs a sup-Wald test for a change in the mean of the out-of-sample losses and takes the maximum of such tests over some range; we also propose to work directly with the total loss series to define the Total Loss sup-Wald (TLSW) and Total Loss UDmax (TLUD) tests. Using theoretical analyses and simulations, we show that with forecasting models potentially involving lagged dependent variables, the only tests having a monotonic power function for all data-generating processes considered are the DSW and TLUD tests, constructed with a fixed forecasting window scheme. Some explanations are provided and empirical applications illustrate the relevance of our findings in practice.
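The sup-Wald ingredient shared by these tests can be sketched on a generic series: compute a Wald statistic for a one-time mean change at every candidate date and take the maximum over a trimmed interior. In the paper this machinery is applied to (total or out-of-sample) loss series; the function name and trimming value here are ours:

```python
import numpy as np

def sup_wald_mean_break(x, trim=0.15):
    # Wald statistic for a one-time change in the mean of x at each
    # candidate date k, maximised over the trimmed interior of the sample.
    n = len(x)
    stats = []
    for k in range(int(trim * n), int((1 - trim) * n)):
        a, b = x[:k], x[k:]
        s2 = (((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()) / (n - 2)
        stats.append((a.mean() - b.mean()) ** 2 / (s2 * (1.0 / k + 1.0 / (n - k))))
    return max(stats)
```

A DSW-style test would evaluate a statistic of this kind on the out-of-sample losses for each candidate m and maximise again over m.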
Pierre Perron
Yohei Yamamoto
forecast breakdown, non-monotonic power, structural change, out-of-sample forecast
2018-05
Density Forecasting
http://d.repec.org/n?u=RePEc:bzn:wpaper:bemps59&r=ore
This paper reviews different methods to construct density forecasts and to aggregate forecasts from many sources. Density evaluation tools to measure the accuracy of density forecasts are reviewed, and calibration methods for improving the accuracy of forecasts are presented. The manuscript provides some numerical simulation tools to approximate predictive densities, with a focus on parallel computing on graphics processing units (GPUs). Some simple examples are proposed to illustrate the methods.
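One standard way to aggregate density forecasts is the linear opinion pool: a weighted mixture of the individual predictive densities. A minimal sketch, with the average log predictive density as a simple evaluation tool (function names are ours; the paper surveys these and other combination and evaluation schemes):

```python
import numpy as np

def linear_pool(pdfs, weights, x):
    # Combined predictive density: a weighted mixture (linear opinion
    # pool) of the individual predictive densities evaluated at x.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * p(x) for wi, p in zip(w, pdfs))

def log_score(density_values):
    # Average log predictive density: a standard evaluation tool.
    return np.mean(np.log(density_values))
```

Because the pool is a convex combination of densities, the combined forecast is itself a proper density, nonnegative and integrating to one.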
Federico Bassetti
Roberto Casarin
Francesco Ravazzolo
Density forecasting, density combinations, density evaluation, bootstrapping, Bayesian inference, Monte Carlo simulations, GPU computing
2019-02
Semiparametrically efficient estimation of the average linear regression function
http://d.repec.org/n?u=RePEc:ifs:cemmap:62/18&r=ore
Let Y be an outcome of interest, X a vector of treatment measures, and W a vector of pre-treatment control variables. Here X may include (combinations of) continuous, discrete, and/or non-mutually exclusive “treatments”. Consider the linear regression of Y onto X in a subpopulation homogeneous in W = w (formally a conditional linear predictor). Let b0(w) be the coefficient vector on X in this regression. We introduce a semiparametrically efficient estimate of the average β0 = E[b0(W)]. When X is binary-valued (multi-valued), our procedure recovers the (a vector of) average treatment effect(s). When X is continuously-valued, or consists of multiple non-exclusive treatments, our estimand coincides with the average partial effect (APE) of X on Y when the underlying potential response function is linear in X, but otherwise heterogeneous across agents. When the potential response function takes a general nonlinear/heterogeneous form, and X is continuously-valued, our procedure recovers a weighted average of the gradient of this response across individuals and values of X. We provide a simple, and semiparametrically efficient, method of covariate adjustment for settings with complicated treatment regimes. Our method generalizes familiar methods of covariate adjustment used for program evaluation as well as methods of semiparametric regression (e.g., the partially linear regression model).
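The estimand β0 = E[b0(W)] has a transparent sample analogue when W is discrete: run the regression of Y on X within each stratum of W and average the slopes with stratum frequencies. This naive plug-in is only an illustration of the estimand, not the paper's semiparametrically efficient estimator; the function name is ours:

```python
import numpy as np

def average_slope(y, x, w):
    # Within each stratum of the discrete control w, run the OLS
    # regression of y on x; then average the stratum slopes b0(w)
    # using the stratum frequencies as weights.
    avg = 0.0
    for v in np.unique(w):
        m = w == v
        b = np.cov(x[m], y[m])[0, 1] / np.var(x[m], ddof=1)
        avg += m.mean() * b
    return avg
```

With continuous W, the stratum regressions would be replaced by nonparametric conditioning, which is where the efficiency theory of the paper comes in.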
Bryan S. Graham
Cristine Campos de Xavier Pinto
Conditional Linear Predictor, Causal Inference, Average Treatment Effect, Propensity Score, Semiparametric Efficiency, Semiparametric Regression
2018-11-07
Synthetic Difference In Differences
http://d.repec.org/n?u=RePEc:nbr:nberwo:25532&r=ore
We present a new perspective on the Synthetic Control (SC) method as a weighted least squares regression estimator with time fixed effects and unit weights. This perspective suggests a generalization with two-way (both unit and time) fixed effects, and both unit and time weights, which can be interpreted as a unit- and time-weighted version of the standard Difference In Differences (DID) estimator. We find that this new Synthetic Difference In Differences (SDID) estimator has attractive properties compared to the SC and DID estimators. Formally, we show that our approach has double robustness properties: the SDID estimator is consistent under a wide variety of weighting schemes given a well-specified fixed effects model, and SDID is consistent with appropriately penalized SC weights when the basic fixed effects model is misspecified and instead the true data generating process involves a more general low-rank structure (e.g., a latent factor model). We also present results that justify standard inference based on weighted DID regression. Further generalizations include unit and time weighted factor models.
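Given unit weights over controls and time weights over pre-treatment periods, the SDID point estimate is a weighted difference-in-differences. In the paper the weights are estimated (via penalized SC-type programs); here they are taken as given, and the function name and argument layout are ours. Under an exact two-way fixed effects model, any weights summing to one recover the treatment effect exactly:

```python
import numpy as np

def sdid_estimate(Y, n_treated, t_post, omega, lam):
    # Y: units x periods, treated units in the last n_treated rows and
    # post-treatment in the last t_post columns; omega weights the
    # control units, lam weights the pre-treatment periods (both sum to 1).
    co, tr = Y[:-n_treated], Y[-n_treated:]
    pre = slice(0, Y.shape[1] - t_post)
    post = slice(Y.shape[1] - t_post, None)
    # weighted pre/post difference for treated units...
    diff_treated = tr[:, post].mean() - lam @ tr[:, pre].mean(axis=0)
    # ...minus the weighted pre/post difference for control units
    diff_control = omega @ co[:, post].mean(axis=1) - omega @ (co[:, pre] @ lam)
    return diff_treated - diff_control
```

With uniform weights this collapses to the standard DID estimator; non-uniform weights localise the comparison to controls and pre-periods that track the treated units.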
Dmitry Arkhangelsky
Susan Athey
David A. Hirshberg
Guido W. Imbens
Stefan Wager
2019-02
Identification Versus Misspecification in New Keynesian Monetary Policy Models
http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0362&r=ore
In this paper, we study identification and misspecification problems in standard closed and open-economy empirical New-Keynesian DSGE models used in monetary policy analysis. We find that problems with model misspecification still appear to be a first-order issue in monetary DSGE models, and argue that it is problems with model misspecification that may benefit the most from moving from a classical to a Bayesian framework. We also argue that lack of identification should neither be ignored nor be assumed to affect all DSGE models. Fortunately, identification problems can be readily assessed on a case-by-case basis, by applying recently developed pre-tests of identification.
Adolfson, Malin
Laséen, Stefan
Lindé, Jesper
Ratto, Marco
Bayesian estimation; Monte-Carlo methods; Maximum Likelihood Estimation; DSGE Model; Closed economy; Open economy
2018-11-01
Improved density and distribution function estimation
http://d.repec.org/n?u=RePEc:ifs:cemmap:47/18&r=ore
Given additional distributional information in the form of moment restrictions, kernel density and distribution function estimators with implied generalised empirical likelihood probabilities as weights achieve a reduction in variance due to the systematic use of this extra information. The particular interest here is the estimation of the density or distribution functions of (generalised) residuals in semi-parametric models defined by a finite number of moment restrictions. Such estimates are of great practical interest, being potentially of use for diagnostic purposes, including tests of parametric assumptions on an error distribution, goodness-of-fit tests or tests of overidentifying moment restrictions. The paper gives conditions for consistency and describes the asymptotic mean squared error properties of the kernel density and distribution function estimators proposed in the paper. A simulation study evaluates the small sample performance of these estimators.
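The weighting idea is simple to state in code: a kernel density estimator in which the uniform 1/n weights are replaced by implied probabilities (here passed in as a given vector pi; computing GEL probabilities from moment restrictions is the substantive step the paper addresses). The function name is ours:

```python
import numpy as np

def weighted_kde(x, pi, grid, h):
    # Gaussian kernel density estimate in which the uniform 1/n
    # weights are replaced by observation probabilities pi
    # (nonnegative, summing to one), evaluated on a grid.
    u = (grid[:, None] - x[None, :]) / h
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (kern * pi[None, :]).sum(axis=1) / h
```

With pi = 1/n this reduces to the ordinary kernel density estimator; informative implied probabilities tilt mass toward observations consistent with the moment restrictions, which is the source of the variance reduction.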
Vitaliy Oryshchenko
Richard J. Smith
Moment conditions, residuals, mean squared error, bandwidth
2018-07-23
Generalised Anderson-Rubin statistic based inference in the presence of a singular moment variance matrix
http://d.repec.org/n?u=RePEc:ifs:cemmap:05/19&r=ore
The particular concern of this paper is the construction of a confidence region with pointwise asymptotically correct size for the true value of a parameter of interest based on the generalized Anderson-Rubin (GAR) statistic when the moment variance matrix is singular. The large sample behaviour of the GAR statistic is analysed using a Laurent series expansion around the points of moment variance singularity. Under a condition termed first order moment singularity the GAR statistic is shown to possess a limiting chi-square distribution on parameter sequences converging to the true parameter value. Violation, however, of this condition renders the GAR statistic unbounded asymptotically. The paper details an appropriate discretisation of the parameter space to implement a feasible GAR-based confidence region that contains the true parameter value with pointwise asymptotically correct size. Simulation evidence is provided that demonstrates the efficacy of the GAR-based approach to moment-based inference described in this paper.
Nicky L. Grant
Richard J. Smith
Laurent series expansion; moment indicator; parameter sequences; singular moment matrix
2019-01-30
Sup-ADF-style bubble-detection methods under test
http://d.repec.org/n?u=RePEc:cqe:wpaper:7819&r=ore
In this paper we analyze the performance of supremum augmented Dickey-Fuller (SADF), generalized SADF (GSADF), and backward SADF (BSADF) tests, as introduced by Phillips et al. (International Economic Review 56:1043-1078, 2015) for detecting and date-stamping financial bubbles. In Monte Carlo simulations, we show that the SADF and GSADF tests may suffer from substantial size distortions under typical financial-market characteristics (like the empirically well-documented leverage effect). We consider the rational bubble specification suggested by Rotermann and Wilfling (Applied Economics Letters 25:1091-1096, 2018) that is able to generate realistic stock-price dynamics (in terms of level trajectories and volatility paths). Simulating stock-price trajectories that contain these parametric bubbles, we demonstrate that the SADF and GSADF tests can have extremely low power under a wide range of bubble-parameter constellations. In an empirical analysis, we use NASDAQ data covering a time-span of 45 years and find that the outcomes of the bubble date-stamping procedure (based on the BSADF test) are sensitive to the data frequency chosen by the econometrician.
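The SADF statistic can be sketched in a stripped-down form: an ADF regression with intercept and no lag augmentation, maximised over forward-expanding windows. This omits the lag selection and critical-value machinery of the actual tests; function names and the window fraction r0 are ours:

```python
import numpy as np

def adf_tstat(y):
    # ADF regression with intercept and no lag augmentation:
    # diff(y)_t = a + rho * y_{t-1} + e_t; returns the t-ratio on rho.
    dy, ylag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(ylag), ylag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    e = dy - X @ beta
    s2 = e @ e / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se

def sadf(y, r0=0.2):
    # Supremum ADF: maximise the ADF statistic over forward-expanding
    # windows whose fraction of the sample runs from r0 to 1.
    n = len(y)
    return max(adf_tstat(y[:k]) for k in range(int(r0 * n), n + 1))
```

GSADF additionally varies the window start point, and BSADF runs backward-expanding windows ending at each date for date-stamping.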
Verena Monschang
Bernd Wilfling
Stock markets, present-value model, rational bubble, explosiveness, SADF and GSADF tests, bubble detection, date-stamping
2019-02