Operations Research
http://lists.repec.org/mailman/listinfo/nep-ore
Operations Research, 2014-11-22
Edited by Walter Frisch

Optimal Formulations for Nonlinear Autoregressive Processes
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140103&r=ore
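The abstract below updates a time-varying autoregressive coefficient with the score of the predictive likelihood. A minimal sketch of that idea for a Gaussian AR(1) follows; the parameter names and values (omega, a, b, sigma2) are illustrative, not the authors' specification:

```python
import math
import random

def gas_ar1_filter(y, omega=0.0, a=0.05, b=0.95, sigma2=1.0):
    """Drive phi_t with the score of the Gaussian predictive density N(phi*y_{t-1}, sigma2)."""
    phi = omega / (1.0 - b)          # start at the unconditional level
    path = [phi]
    for t in range(1, len(y)):
        # Score of log N(y_t; phi*y_{t-1}, sigma2) with respect to phi:
        score = (y[t] - phi * y[t - 1]) * y[t - 1] / sigma2
        phi = omega + b * phi + a * score
        path.append(phi)
    return path

random.seed(1)
y = [0.0]
for _ in range(200):                  # simulate a plain AR(1) with coefficient 0.5
    y.append(0.5 * y[-1] + random.gauss(0.0, 1.0))
phis = gas_ar1_filter(y)
```

With a = 0 the recursion collapses to a constant coefficient, so the score term is exactly what lets the linear AR representation track nonlinear dependence.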
We develop optimal formulations for nonlinear autoregressive models by representing them as linear autoregressive models with time-varying temporal dependence coefficients. We propose a parameter updating scheme based on the score of the predictive likelihood function at each time point. The resulting time-varying autoregressive model is formulated as a nonlinear autoregressive model and is compared with threshold and smooth-transition autoregressive models. We establish the information-theoretic optimality of the score-driven nonlinear autoregressive process and the asymptotic theory for maximum likelihood parameter estimation. The performance of our model in extracting time-varying or nonlinear dependence in finite samples is studied in a Monte Carlo exercise. In our empirical study we present the in-sample and out-of-sample performance of our model for a weekly time series of unemployment insurance claims.
By: Francisco Blasques, Siem Jan Koopman, André Lucas (2014-08-11)
Keywords: Asymptotic theory; dynamic models; observation-driven time series models; smooth-transition model; time-varying parameters; threshold autoregressive model

Particle learning for Bayesian non-parametric Markov Switching Stochastic Volatility model
http://d.repec.org/n?u=RePEc:cte:wsrepe:ws142819&r=ore
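Full particle learning also propagates sufficient statistics for the static parameters; the sketch below shows only a plain bootstrap particle filter for a basic stochastic volatility model with fixed, made-up parameters, to illustrate the on-line updating that the abstract below contrasts with MCMC:

```python
import math
import random

def bootstrap_filter_sv(y, n_part=500, mu=-1.0, phi=0.95, sig=0.2, seed=0):
    """Filtered mean of log-volatility h_t in: h_t = mu + phi*(h_{t-1}-mu) + sig*eta,
    y_t = exp(h_t/2)*eps. Parameters are fixed here; particle learning would update them."""
    rng = random.Random(seed)
    sd0 = sig / math.sqrt(1.0 - phi * phi)       # stationary std of h_t
    h = [rng.gauss(mu, sd0) for _ in range(n_part)]
    filt = []
    for obs in y:
        # propagate particles through the state equation
        h = [mu + phi * (x - mu) + sig * rng.gauss(0.0, 1.0) for x in h]
        # weight by the measurement density N(obs; 0, exp(h))
        w = [math.exp(-0.5 * obs * obs / math.exp(x) - 0.5 * x) for x in h]
        tot = sum(w)
        w = [x / tot for x in w]
        filt.append(sum(wi * hi for wi, hi in zip(w, h)))
        # multinomial resampling keeps the particle cloud from degenerating
        h = rng.choices(h, weights=w, k=n_part)
    return filt

filt = bootstrap_filter_sv([0.3, -0.8, 1.5, -2.2, 0.4], n_part=200)
```

Each new observation only requires one propagate-weight-resample sweep, which is the on-line property the paper exploits.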
This paper designs a Particle Learning (PL) algorithm for the estimation of Bayesian non-parametric Stochastic Volatility (SV) models for financial data. The performance of this particle method is then compared with the standard Markov Chain Monte Carlo (MCMC) methods for non-parametric SV models. PL performs as well as MCMC, and at the same time allows for on-line inference: the posterior distributions are updated as new data are observed, which is prohibitively costly using MCMC. Further, a new non-parametric SV model is proposed that incorporates Markov switching jumps. The proposed model is estimated using PL and tested on simulated data. Finally, the performance of the two non-parametric SV models, with and without Markov switching, is compared using real financial time series. The results show that including a Markov switching specification provides higher predictive power in the tails of the distribution.
By: Audrone Virbickaite, Hedibert F. Lopes, Concepción Ausín, Pedro Galeano (2014-10)
Keywords: Dirichlet process mixture; Markov switching; MCMC; particle learning; stochastic volatility; sequential Monte Carlo

Score-Driven Exponentially Weighted Moving Average and Value-at-Risk Forecasting
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140092&r=ore
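A sketch of the contrast drawn in the abstract below: the standard RiskMetrics EWMA recursion versus a score-driven variant that, under a Student's t assumption, downweights extreme observations. The weight formula and parameter values here are illustrative, in the spirit of the paper rather than its exact specification:

```python
def ewma_var(y, lam=0.94, s0=1.0):
    """Standard RiskMetrics EWMA: s_t = lam*s_{t-1} + (1-lam)*y_{t-1}^2."""
    s, out = s0, [s0]
    for t in range(1, len(y)):
        s = lam * s + (1.0 - lam) * y[t - 1] ** 2
        out.append(s)
    return out

def robust_ewma_var(y, lam=0.94, nu=5.0, s0=1.0):
    """Score-driven variant: an observation far outside the current scale
    receives a small weight w, limiting its impact on the update."""
    s, out = s0, [s0]
    for t in range(1, len(y)):
        w = (nu + 1.0) / (nu - 2.0 + y[t - 1] ** 2 / s)
        s = lam * s + (1.0 - lam) * w * y[t - 1] ** 2
        out.append(s)
    return out

y = [0.1] * 50 + [10.0] + [0.1] * 10   # calm series with one large outlier
plain = ewma_var(y)
robust = robust_ewma_var(y)
```

As nu grows the weight tends to one and the Gaussian EWMA is recovered; after the outlier at t = 50 the robust recursion reacts far less, which is the robustification the abstract refers to.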
We present a simple new methodology to allow for time variation in volatilities using a recursive updating scheme similar to the familiar RiskMetrics approach. We update parameters using the score of the forecasting distribution rather than squared lagged observations. This allows the parameter dynamics to adapt automatically to any non-normal features of the data and robustifies the subsequent volatility estimates. Our new approach nests several previously proposed extensions of the exponentially weighted moving average (EWMA) scheme. It also easily accommodates extensions to dynamic higher-order moments or other choices of the forecasting distribution. We apply the method to Value-at-Risk forecasting with Student's t distributions and a time-varying degrees-of-freedom parameter, and show that the new method is competitive with or better than earlier methods for volatility forecasting of individual stock returns and exchange rates.
By: André Lucas, Xin Zhang (2014-07-22)
Keywords: dynamic volatilities; time-varying higher-order moments; integrated generalized autoregressive score models; exponentially weighted moving average (EWMA); Value-at-Risk (VaR)

A One Line Derivation of DCC: Application of a Vector Random Coefficient Moving Average Process
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140087&r=ore
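For reference, the standard DCC recursion that the paper below re-derives, including the standardization step its point (ii) motivates; the parameter values a and b are illustrative:

```python
import math

def dcc_step(Q, eps, S, a=0.05, b=0.90):
    """One DCC update: Q_{t+1} = (1-a-b)*S + a*eps*eps' + b*Q_t,
    then standardize the conditional covariance Q into a correlation matrix R."""
    n = len(S)
    Q_new = [[(1.0 - a - b) * S[i][j] + a * eps[i] * eps[j] + b * Q[i][j]
              for j in range(n)] for i in range(n)]
    R = [[Q_new[i][j] / math.sqrt(Q_new[i][i] * Q_new[j][j])
          for j in range(n)] for i in range(n)]
    return Q_new, R

S = [[1.0, 0.3], [0.3, 1.0]]      # unconditional covariance of standardized shocks
Q, R = dcc_step(S, [1.2, -0.7], S)
```

Without the division by the diagonal, Q is a conditional covariance of the shocks rather than a correlation matrix, which is exactly the distinction the abstract draws.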
One of the most widely used multivariate conditional volatility models is the dynamic conditional correlation (DCC) specification. However, the underlying stochastic process from which DCC can be derived has not yet been established, which has made the derivation of the asymptotic properties of the Quasi-Maximum Likelihood Estimator (QMLE) problematic. To date, the statistical properties of the QMLE of the DCC parameters have been derived only under highly restrictive and unverifiable regularity conditions. This paper shows that the DCC model can be obtained from a vector random coefficient moving average process, and derives the stationarity and invertibility conditions. The derivation raises three important issues: (i) it demonstrates that DCC is, in fact, a dynamic conditional covariance model of the returns shocks rather than a dynamic conditional correlation model; (ii) it provides the motivation, which is presently missing, for standardizing the conditional covariance model to obtain the conditional correlation model; and (iii) it shows that the appropriate ARCH or GARCH model for DCC is based on the standardized shocks rather than the returns shocks. The derivation of the regularity conditions should subsequently lead to a solid statistical foundation for the estimates of the DCC parameters.
By: Christian M. Hafner, Michael McAleer (2014-07-11)
Keywords: dynamic conditional correlation; dynamic conditional covariance; vector random coefficient moving average; stationarity; invertibility; asymptotic properties

Nonlinear time series analysis of annual temperatures concerning the global Earth climate
http://d.repec.org/n?u=RePEc:pra:mprapa:59140&r=ore
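One of the geometrical characteristics mentioned in the abstract below, the correlation dimension, is estimated from the correlation sum over a delay-embedded series (the Grassberger-Procaccia approach). A minimal sketch with illustrative embedding choices and a synthetic low-dimensional test signal:

```python
import math

def embed(x, m, tau):
    """Delay embedding: reconstruct an m-dimensional phase space with lag tau."""
    return [tuple(x[i + j * tau] for j in range(m))
            for i in range(len(x) - (m - 1) * tau)]

def correlation_sum(points, r):
    """Fraction of point pairs closer than r (Chebyshev distance)."""
    n, close = len(points), 0
    for i in range(n):
        for j in range(i + 1, n):
            if max(abs(a - b) for a, b in zip(points[i], points[j])) < r:
                close += 1
    return 2.0 * close / (n * (n - 1))

x = [math.sin(0.3 * i) for i in range(400)]   # a low-dimensional test signal
pts = embed(x, m=3, tau=5)
# The correlation dimension is the slope of log C(r) against log r at small r:
slope = (math.log(correlation_sum(pts, 0.2))
         - math.log(correlation_sum(pts, 0.1))) / math.log(2.0)
```

For a periodic signal the estimated slope is near one (a curve in phase space); for stochastic data it grows with the embedding dimension m, which is the kind of discrimination the abstract describes.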
This paper presents results of a nonlinear analysis of the time series of mean annual temperatures corresponding to the Earth's global climate for the period 713-2004. The nonlinear analysis consists of the application of several filtering methods, the estimation of geometrical and dynamical characteristics in the reconstructed phase space, techniques for discriminating between nonlinear low-dimensional and linear high-dimensional (stochastic) dynamics, and tests for serial dependence and nonlinear structure. All the results converge to the conclusion that the global Earth climate is of a nonlinear, stochastic and complex nature.
By: Halkos, George; Tsilika, Kyriaki (2014)
Keywords: nonlinear dynamics; correlation dimension; Lyapunov exponent; mutual information function; chaos

On the Invertibility of EGARCH
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140096&r=ore
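For reference, the EGARCH(1,1) recursion at the centre of the paper below, with illustrative coefficient values; the gamma < 0 term generates the leverage effect the abstract describes:

```python
import math

def egarch_path(z, omega=-0.1, beta=0.95, alpha=0.2, gamma=-0.1):
    """log sigma^2_{t+1} = omega + beta*log sigma^2_t + alpha*(|z_t| - E|z|) + gamma*z_t."""
    e_abs = math.sqrt(2.0 / math.pi)           # E|z| for standard normal shocks
    log_s2 = omega / (1.0 - beta)              # start at the unconditional level
    path = [math.exp(log_s2)]
    for z_t in z:
        log_s2 = omega + beta * log_s2 + alpha * (abs(z_t) - e_abs) + gamma * z_t
        path.append(math.exp(log_s2))
    return path

after_neg = egarch_path([-2.0])   # a large negative shock...
after_pos = egarch_path([2.0])    # ...versus a positive one of equal magnitude
```

With gamma < 0 the negative shock raises next-period volatility by more. Invertibility, the paper's subject, concerns the reverse direction, recovering the shocks z_t from observed returns, which this forward recursion simply takes as given.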
Of the two most widely estimated univariate asymmetric conditional volatility models, the exponential GARCH (EGARCH) specification can capture both asymmetry, which refers to the different effects on conditional volatility of positive and negative shocks of equal magnitude, and leverage, which refers to the negative correlation between the returns shocks and subsequent shocks to volatility. However, the statistical properties of the (quasi-)maximum likelihood estimator (QMLE) of the EGARCH parameters are not available under general conditions, but only for special cases under highly restrictive and unverifiable conditions. A limitation in the development of asymptotic properties of the QMLE for EGARCH is the lack of an invertibility condition for the returns shocks underlying the model. It is shown in this paper that the EGARCH model can be derived from a stochastic process for which the invertibility conditions can be stated simply and explicitly. This will be useful in re-interpreting the existing properties of the QMLE of the EGARCH parameters.
By: Guillaume Gaetan Martinet, Michael McAleer (2014-07-25)
Keywords: leverage; asymmetry; existence; stochastic process; asymptotic properties; invertibility

Low Frequency and Weighted Likelihood Solutions for Mixed Frequency Dynamic Factor Models
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140105&r=ore
The multivariate analysis of a panel of economic and financial time series with mixed frequencies is a challenging problem. The standard solution is to analyze the mix of monthly and quarterly time series jointly by means of a multivariate dynamic model with a monthly time index: artificial missing values are inserted for the intermediate months of the quarterly time series. In this paper we explore an alternative solution for a class of dynamic factor models that is specified with a low frequency, quarterly time index. We show that there is no need to introduce artificial missing values, while the high frequency (monthly) information is preserved and can still be analyzed. We also provide evidence that the analysis based on a low frequency specification can be carried out in a computationally more efficient way. A comparison study with existing mixed frequency procedures is presented and discussed. Furthermore, we modify the method of maximum likelihood in the context of a dynamic factor model: we introduce variable-specific weights in the likelihood function to let some variable equations carry more weight during the estimation process. We derive the asymptotic properties of the weighted maximum likelihood estimator and show that the estimator is consistent and asymptotically normal. We also verify the weighted estimation method in a Monte Carlo study to investigate the effect of different choices for the weights in different scenarios. Finally, we empirically illustrate the new developments for the extraction of a coincident economic indicator from a small panel of mixed frequency economic time series.
By: Francisco Blasques, Siem Jan Koopman, Max Mallee (2014-08-11)
Keywords: asymptotic theory; forecasting; Kalman filter; nowcasting; state space

Optimal hedging with the cointegrated vector autoregressive model
http://d.repec.org/n?u=RePEc:aah:create:2014-40&r=ore
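The horizon dependence described in the abstract below can be illustrated with the classic minimum-variance hedge ratio, Cov(d_k S, d_k F)/Var(d_k F), computed over k-period differences of a simulated cointegrated pair; all simulation settings here are made up for illustration:

```python
import random

def mean(a):
    return sum(a) / len(a)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def hedge_ratio(spot, fut, k):
    """Minimum-variance hedge over a k-period holding horizon."""
    ds = [spot[i + k] - spot[i] for i in range(len(spot) - k)]
    df = [fut[i + k] - fut[i] for i in range(len(fut) - k)]
    return cov(ds, df) / cov(df, df)

# Cointegrated pair: both prices share one random-walk trend, plus stationary noise
rng = random.Random(7)
trend, u, v = 0.0, 0.0, 0.0
spot, fut = [], []
for _ in range(3000):
    trend += rng.gauss(0.0, 1.0)
    u = 0.5 * u + rng.gauss(0.0, 1.0)
    v = 0.5 * v + rng.gauss(0.0, 1.0)
    spot.append(trend + u)
    fut.append(trend + v)

h_short = hedge_ratio(spot, fut, 1)
h_long = hedge_ratio(spot, fut, 20)
```

At short horizons the stationary noise dominates and the ratio sits well below one; at long horizons it approaches the cointegrating relation (here one-for-one), matching the paper's weighted-average interpretation.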
We derive the optimal hedging ratios for a portfolio of assets driven by a Cointegrated Vector Autoregressive (CVAR) model with general cointegration rank. Our hedge is optimal in the minimum-variance-portfolio sense. We consider a model that allows the hedges to be cointegrated with the hedged asset and among themselves. We find that the minimum variance hedge for assets driven by the CVAR depends strongly on the portfolio holding period. The hedge is defined as a function of correlation and cointegration parameters. For short holding periods the correlation impact is predominant; for long horizons, the hedge ratio should overweight the cointegration parameters rather than short-run correlation information. In the infinite-horizon limit, the hedge ratios equal the cointegrating vector. The hedge ratios for any intermediate portfolio holding period should be based on a weighted average of correlation and cointegration parameters. The results are general and can be applied to any portfolio of assets that can be modeled by a CVAR of any rank and order.
By: Søren Johansen, Lukasz Gatarek (2014-09-18)
Keywords: hedging; cointegration; minimum variance portfolio

Structural Stability of the Generalized Taylor Rule
http://d.repec.org/n?u=RePEc:pra:mprapa:58737&r=ore
This paper analyzes the dynamical properties of monetary models with regime switching. We start with the analysis of the evolution of inflation when policy is guided by a simple monetary rule whose coefficients switch with the policy regime. We rule out the possibility of a Hopf bifurcation and demonstrate the existence of a period-doubling bifurcation. As a result, a small change in the parameters (e.g., a more active policy response) can lead to a drastic change in the path of inflation. We demonstrate that while the New Keynesian model with a current-looking Taylor rule is not prone to bifurcations, a hybrid rule exhibits the same pattern of period-doubling bifurcations as the basic setup.
By: Barnett, William A.; Duzhak, Evgeniya A. (2014)
Keywords: New Keynesian; Taylor rule; regime switching; bifurcation analysis; structural stability

The credibility of Hong Kong's currency board system: Looking through the prism of MS-VAR models with time-varying transition probabilities
http://d.repec.org/n?u=RePEc:hhs:bofitp:2014_015&r=ore
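A minimal sketch of the filtering idea behind an MS-VAR with time-varying transition probabilities, reduced to a two-regime univariate model whose staying probabilities depend on a fundamental z_t through a logistic link; all parameter values are invented for illustration:

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def hamilton_filter(y, z, means=(-1.0, 1.0), sd=1.0, a=(2.0, 2.0), b=(0.5, -0.5)):
    """Filtered probability of regime 0, with P(stay in regime i) = logistic(a_i + b_i*z_t),
    so the fundamentals z_t drive the switching probabilities."""
    p0 = 0.5                                   # initial regime-0 probability
    out = []
    for y_t, z_t in zip(y, z):
        stay0 = logistic(a[0] + b[0] * z_t)    # time-varying transition probabilities
        stay1 = logistic(a[1] + b[1] * z_t)
        prior0 = p0 * stay0 + (1.0 - p0) * (1.0 - stay1)
        lik0 = math.exp(-0.5 * ((y_t - means[0]) / sd) ** 2)
        lik1 = math.exp(-0.5 * ((y_t - means[1]) / sd) ** 2)
        post = prior0 * lik0
        p0 = post / (post + (1.0 - prior0) * lik1)
        out.append(p0)
    return out

probs = hamilton_filter([-1.0] * 20, [0.0] * 20)
```

In the paper's setting the regimes are high- and low-credibility states and z_t collects macroeconomic fundamentals; the fitted b coefficients then tell you which variables trigger switches.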
This paper takes seriously the idea that the coefficients of a VAR and the variance of shocks may be time-varying, and so employs a Markov regime-switching VAR model to describe and analyse the time-varying credibility of Hong Kong's currency board system. The endogenously estimated discrete regime shifts are made dependent on macroeconomic fundamentals. This enables us to determine which changes in macroeconomic variables can trigger switches between the low and high credibility regimes. We carry out extensive testing to search for the most appropriate specification of the Markov regime-switching model. We find strong evidence of regime-switching behaviour that portrays the time-varying nature of credibility in the historical data. Our own conditional volatility index provides anticipatory signals and amplifies the regime-switching transition probabilities.
By: Blagov, Boris; Funke, Michael (2014-08-25)
Keywords: Markov regime-switching VAR; exchange rate regime credibility; Hong Kong

On an Estimation Method for an Alternative Fractionally Cointegrated Model
http://d.repec.org/n?u=RePEc:dgr:uvatin:20140052&r=ore
In this paper we consider the Fractional Vector Error Correction model proposed in Avarucci (2007), which is characterized by a richer lag structure than the models proposed in Granger (1986) and Johansen (2008, 2009). We discuss the identification issues of the model of Avarucci (2007), following the ideas in Carlini and Santucci de Magistris (2014) for the model of Johansen (2008, 2009). We propose a four-step estimation procedure that is based on the switching algorithm employed in Carlini and Mosconi (2014) and the GLS procedure in Mosconi and Paruolo (2014). The proposed procedure provides estimates of the long-run parameters of the fractionally cointegrated system that are consistent and unbiased, which we demonstrate in a Monte Carlo experiment.
By: Federico Carlini, Katarzyna Lasak (2014-05-01)
Keywords: error correction model; Gaussian VAR model; fractional cointegration; estimation algorithm; maximum likelihood estimation; switching algorithm; reduced rank regression

Ranking Alternative Non-Combinable Prospects: A Stochastic Dominance Based Route to the Second Best Solution
http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-520&r=ore
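A toy version of the envelope-and-proximity construction sketched in the abstract below, using first-order dominance: the ideal envelope is the pointwise minimum of the policies' empirical CDFs, and the second best policy is the one whose CDF stays closest to it. The Kolmogorov-style sup distance and all data are illustrative choices, not the paper's statistic:

```python
def ecdf(sample, grid):
    s = sorted(sample)
    return [sum(1 for v in s if v <= x) / len(s) for x in grid]

def second_best(samples, grid):
    """Pick the policy whose ECDF is closest to the first-order dominant envelope."""
    cdfs = [ecdf(s, grid) for s in samples]
    envelope = [min(col) for col in zip(*cdfs)]   # ideal 'combined policy' outcome
    dists = [max(c - e for c, e in zip(cdf, envelope)) for cdf in cdfs]
    best = min(range(len(samples)), key=lambda k: dists[k])
    return best, dists

policies = [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [0.0, 1.0, 2.0]]
grid = [0.0, 1.0, 2.0, 3.0, 4.0]
best, dists = second_best(policies, grid)
```

Here policy 1 first-order dominates the others, so its distance to the envelope is zero and it is chosen; when no policy dominates, the distances rank how far each outcome falls short of the unattainable ideal.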
The problem considered here is that of dealing with the "incompleteness" property of stochastic dominance orderings by quantifying the extent to which distributions differ when there is no dominant distribution at a given order. Consider, for example, a policymaker's choice problem when facing a set of distinct, non-combinable policy options. When policies are not combinable, the classic comparative-static or first best solution to the choice problem is not available. The approach proposed here is an elaboration of a technique employed in the optimal statistical testing literature. It is supposed that policies could be combined, so that the ideal first best "stochastically dominant" optimal envelope policy outcome is constructed under the policymaker's given imperative. The second best policy, whose outcome most closely approximates this ideal, is then selected by employing a statistic that measures the proximity of alternative policies to that ideal. The statistic is shown to obey an Independence of Irrelevant Alternatives proposition. The paper concludes with three illustrative examples of its use.
By: Gordon Anderson, Teng Wah Leo (2014-10-20)
Keywords: policy choice; stochastic dominance

Is Real Per Capita State Personal Income Stationary? New Nonlinear, Asymmetric Panel-Data Evidence
http://d.repec.org/n?u=RePEc:pre:wpaper:201462&r=ore
This paper re-examines the stochastic properties of US state real per capita personal income, using new panel unit-root procedures. The new developments incorporate non-linearity, asymmetry, and cross-sectional correlation within panel data estimation. Including both nonlinearity and asymmetry, we find that 43 states exhibit stationary real per capita personal income, whereas including only nonlinearity produces 42 such states. Stated differently, we find that 2 states exhibit nonstationary real per capita personal income when considering nonlinearity, asymmetry, and cross-sectional dependence.
By: Furkan Emirmahmutoglu, Rangan Gupta, Stephen M. Miller, Tolga Omay (2014-10)
Keywords: nonlinear; panel unit root; asymmetry; cross-sectional dependence; sieve bootstrap

Evaluating Option Pricing Model Performance Using Model Uncertainty
http://d.repec.org/n?u=RePEc:crf:wpaper:14-06&r=ore
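The core device of the paper below, replacing a single loss figure with a distribution for it, can be sketched by bootstrapping the RMSE of a cross-section of pricing errors; the error values here are made up:

```python
import math
import random

def rmse(errors):
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def bootstrap_rmse(errors, n_boot=1000, seed=0):
    """Resample the cross-section of pricing errors to get a distribution for the loss."""
    rng = random.Random(seed)
    n = len(errors)
    draws = [rmse([errors[rng.randrange(n)] for _ in range(n)])
             for _ in range(n_boot)]
    return sorted(draws)

errors_a = [0.10, -0.20, 0.15, -0.05, 0.25, -0.12, 0.08, -0.18, 0.22, -0.09]
errors_b = [3.0 * e for e in errors_a]          # a model with errors three times larger
dist_a = bootstrap_rmse(errors_a, seed=1)
dist_b = bootstrap_rmse(errors_b, seed=2)
```

Comparing, say, the 2.5% and 97.5% quantiles of the two distributions shows whether a performance gap survives sampling uncertainty, rather than being an artifact of a single point estimate, which is the testing need the paper addresses cross-section by cross-section.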
The objective of this paper is to evaluate option pricing performance at the cross-sectional level. For this purpose, we propose a statistical framework in which we account, in particular, for the uncertainty associated with the reported pricing performance. Instead of a single figure, we determine an entire probability distribution function for the loss function that is used to measure option pricing performance. This methodology enables us to visualize the effect of parameter uncertainty on the reported pricing performance. Using a data-driven approach, we confirm previous evidence that standard volatility models with clustering and leverage effects are sufficient for the option pricing purpose. In addition, we demonstrate that there is short-term persistence but long-term heterogeneity in cross-sectional option pricing information. This finding has two important implications. First, it justifies the practitioners' routine of refraining from time series approaches and instead estimating option pricing models on a cross-section by cross-section basis. Second, the long-term heterogeneity in option prices pinpoints the importance of measuring, comparing and testing option pricing models for each cross-section separately. To our knowledge, no statistical testing framework has been applied to a single cross-section of option prices before; we propose a methodology that addresses this need. The proposed framework can be applied to a broad set of models and data. In the empirical part of the paper, we show, by means of an example, an application that uses a discrete-time volatility model on S&P 500 European options.
By: Thorsten Lehnert, Gildas Blanchard, Dennis Bams (2014)
Keywords: option pricing; cross-section; estimation risk; parameter uncertainty; specification test; bootstrapping

Modeling Systematic Risk and Point-in-Time Probability of Default under the Vasicek Asymptotic Single Risk Factor Model Framework
http://d.repec.org/n?u=RePEc:pra:mprapa:59025&r=ore
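The basic Vasicek single-factor relation underlying the framework below maps a through-the-cycle PD into a point-in-time PD conditional on the systematic factor z. A direct sketch, with illustrative parameter values:

```python
import math
from statistics import NormalDist

def conditional_pd(pd_ttc, rho, z):
    """Vasicek: PD(z) = Phi((Phi^-1(PD) - sqrt(rho)*z) / sqrt(1 - rho)).
    Here z is the systematic factor (adverse scenarios are z < 0) and
    rho is the asset correlation."""
    nd = NormalDist()
    return nd.cdf((nd.inv_cdf(pd_ttc) - math.sqrt(rho) * z) / math.sqrt(1.0 - rho))

base = conditional_pd(0.02, 0.15, 0.0)
stressed = conditional_pd(0.02, 0.15, -2.0)    # a two-standard-deviation downturn
```

Averaging the conditional PD over a standard normal z recovers the through-the-cycle PD, a useful sanity check on any implementation, and stress testing amounts to evaluating the formula at adverse values of z.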
Systematic risk has been a focus for stress testing and risk capital assessment. Under the Vasicek asymptotic single risk factor model framework, entity default risk for a risk-homogeneous portfolio divides into two parts: systematic and entity specific. While entity specific risk can be modeled by a probit or logistic model using a relatively short period of portfolio historical data, modeling systematic risk is more challenging. In practice, most default risk models do not fully or dynamically capture systematic risk. In this paper, we propose an approach that models systematic and entity specific risks separately and then aggregates them analytically. Systematic risk is quantified and modeled by a multifactor Vasicek model with a latent residual, a factor accounting for default contagion and feedback effects. The asymptotic maximum likelihood approach for parameter estimation in this model is equivalent to least squares linear regression. Conditional entity PDs for scenario tests and the through-the-cycle entity PD all have analytical solutions. For validation, we model the point-in-time entity PD for a commercial portfolio, and stress the portfolio default risk by shocking the systematic risk factors. Rating migration and portfolio loss are assessed.
By: Yang, Bill Huajian (2014-03-18)
Keywords: point-in-time PD; through-the-cycle PD; Vasicek model; systematic risk; entity specific risk; stress testing; rating migration; scenario loss

How Large are Firing Costs? A Cross-Country Study
http://d.repec.org/n?u=RePEc:pra:mprapa:58762&r=ore
This paper provides evidence on the size of firing costs for eight countries. In contrast to the existing literature, we use the optimality conditions obtained in a search and matching model to find a reduced-form equation for firing costs. We find that our estimates are slightly larger than those in other studies. Finally, we offer three explanations for the observed cross-country patterns.
By: Wesselbaum, Dennis (2014-06-23)
Keywords: employment protection; firing costs; optimality conditions

The Term Structures of Co-entropy in International Financial Markets
http://d.repec.org/n?u=RePEc:ecl:ohidic:2013-17&r=ore
We propose a new entropy-based correlation measure (co-entropy) to evaluate the performance of international asset pricing models. Co-entropy summarizes in a single number the extent of co-dependence between two variables beyond normality. We document that the co-entropy of international stochastic discount factors (SDFs) can be decomposed into a series of entropy-based correlations of the permanent and transitory components of the SDFs. We derive bounds and restrictions on the co-entropies of these components, which we then use to analyze the composition of the co-dependence of international SDFs. A large cross-section of countries is employed to provide empirical evidence on the entropy-based correlations at various horizons. We find that the co-entropy of the transitory components is always sizably smaller than the co-entropy of the permanent components, with the latter usually being very close to one. Furthermore, the entropy-based correlation of the transitory components of SDFs increases with the investment horizon, yielding an upward-sloping term structure of co-entropies. We confront several state-of-the-art international finance models with these empirical regularities, and find that existing models cannot account for the composition of co-dependence at all horizons.
By: Chabi-Yo, Fousseni; Colacito, Riccardo (2013-11)
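The entropy notion commonly used in this literature is L(x) = log E[x] - E[log x], and a co-entropy compares the entropy of a product with the sum of the individual entropies. A sample-based sketch of that idea follows; the data are invented and the paper's exact definition and normalization may differ:

```python
import math

def entropy(xs):
    """L(x) = log E[x] - E[log x]; zero for a constant, positive otherwise (Jensen)."""
    n = len(xs)
    return math.log(sum(xs) / n) - sum(math.log(v) for v in xs) / n

def co_entropy(xs, ys):
    """Co-dependence beyond (log)normality: L(x*y) - L(x) - L(y)."""
    return entropy([a * b for a, b in zip(xs, ys)]) - entropy(xs) - entropy(ys)

x = [1.0, 2.0, 3.0, 4.0]
same = co_entropy(x, x)        # perfect dependence gives a strictly positive value
flat = entropy([2.0, 2.0])     # a constant has zero entropy
```

Applied to samples of two countries' SDFs (or of their permanent and transitory components), such sample co-entropies are the building blocks behind the horizon-by-horizon correlations the abstract above documents.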