nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒01‒30
28 papers chosen by
Sune Karlsson
Örebro University

  1. Integrated Modified OLS Estimation and Fixed-b Inference for Cointegrating Regressions By Vogelsang, Timothy J.; Wagner, Martin
  2. Consistent Estimation of the Fixed Effects Ordered Logit Model By Baetschmann, Gregori; Staub, Kevin; Winkelmann, Rainer
  3. Forecasting Covariance Matrices: A Mixed Frequency Approach By Roxana Halbleib; Valeri Voev
  4. Forecasts in a Slightly Misspecified Finite Order VAR By Ulrich K. Müller; James H. Stock
  5. Asymptotics for the conditional-sum-of-squares estimator in fractional time series models By Morten Ørregaard Nielsen
  6. Prediction-based estimating functions: review and new developments By Michael Sørensen
  7. A Note on Estimating Wishart Autoregressive Model By Roxana Halbleib
  8. Asymmetric Baxter-King filter By Buss, Ginters
  9. A Grouped Factor Model By Chen, Pu
  10. The Discrete–Continuous Correspondence for Frequency-Limited ARMA Models and the Hazards of Oversampling By David Stephen Pollock
  11. Sensitivity Analysis of SAR Estimators By Liu, Shuangzhe; Polasek, Wolfgang; Sellner, Richard
  12. An Alternative Solution to the Autoregressivity Paradox in Time Series Analysis By Gianluca Cubadda; Umberto Triacca
  13. Testing the local volatility assumption: a statistical approach By Mark Podolskij; Mathieu Rosenbaum
  14. Asymptotic Distribution of JIVE in a Heteroskedastic IV Regression with Many Instruments By Chao, Swanson, Hausman, Newey, and Woutersen
  15. Band-Limited Stochastic Processes in Discrete and Continuous Time By David Stephen Pollock
  16. A Tale of 3 Cities: Model Selection in Over-, Exact, and Under-specified Equations By Jennifer L. Castle; David F. Hendry
  17. Identifying Trend and Age Effects in Sickness Absence from Individual Data: Some Econometric Problems By Biørn, Erik
  18. The dynamics of real exchange rates - A reconsideration By Heinen, Florian; Kaufmann, Hendrik; Sibbertsen, Philipp
  19. Forecasting Multivariate Volatility Using the VARFIMA Model on Realized Covariance Cholesky Factors By Roxana Halbleib; Valeri Voev
  20. Alternative Methods of Seasonal Adjustment By David Stephen Pollock; Emi Mise
  21. Cyclical Dynamics of Industrial Production and Employment: Markov Chain-based Estimates and Tests By Sumru Altug; Baris Tan; Gozde Gencer
  22. Transfer Functions By David Stephen Pollock
  23. Using Model Selection Algorithms to Obtain Reliable Coefficient Estimates By Jennifer Castle; Xiaochuan Qin; W. Robert Reed
  24. Bayesian Analysis of a Triple-Threshold GARCH Model with Application in Chinese Stock Market By Zhu, Junjun; Xie, Shiyu
  25. A Copula-GARCH Model for Macro Asset Allocation of a Portfolio with Commodities: an Out-of-Sample Analysis By Luca RICCETTI
  26. The sensitivity of the Scaled Model of Error with respect to the choice of the correlation parameters: A Simulation Study By Graziani, Rebecca; Keilman, Nico
  27. Statistical Signal Extraction and Filtering: Notes for the ERCIM Tutorial, December 9th 2010 By David Stephen Pollock
  28. On the Order of Magnitude of Sums of Negative Powers of Integrated Processes By Pötscher, Benedikt M.

  1. By: Vogelsang, Timothy J. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: This paper is concerned with parameter estimation and inference in a cointegrating regression, where as usual endogenous regressors as well as serially correlated errors are considered. We propose a simple, new estimation method based on an augmented partial sum (integration) transformation of the regression model. The new estimator is labeled Integrated Modified Ordinary Least Squares (IM-OLS). IM-OLS is similar in spirit to the fully modified approach of Phillips and Hansen (1990) with the key difference that IM-OLS does not require estimation of long run variance matrices and avoids the need to choose tuning parameters (kernels, bandwidths, lags). Inference does require that a long run variance be scaled out, and we propose traditional and fixed-b methods for obtaining critical values for test statistics. The properties of IM-OLS are analyzed using asymptotic theory and finite sample simulations. IM-OLS performs well relative to other approaches in the literature.
    Keywords: Bandwidth, cointegration, fixed-b asymptotics, Fully Modified OLS, IM-OLS, kernel
    JEL: C31 C32
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:263&r=ecm
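    A minimal Python sketch of the partial-sum idea, as I read the abstract: cumulate both the dependent variable and the regressors, then run OLS of the cumulated y on the cumulated regressors augmented with the original regressors. The toy data and variable names are illustrative only; deterministic terms and the fixed-b inference step discussed in the paper are omitted.
      import numpy as np

      def im_ols(y, X):
          """Partial-sum (integration) regression: regress the cumulated y on the
          cumulated X augmented with the original X; the augmentation is meant to
          absorb regressor endogeneity."""
          Sy = np.cumsum(y)                         # partial sums of the dependent variable
          SX = np.cumsum(X, axis=0)                 # partial sums of the regressors
          Z = np.column_stack([SX, X])              # augmented design matrix
          coef, *_ = np.linalg.lstsq(Z, Sy, rcond=None)
          return coef[:X.shape[1]]                  # coefficients on the cumulated regressors

      # toy cointegrated system: x is a random walk, y = 2*x + stationary error
      rng = np.random.default_rng(0)
      x = np.cumsum(rng.normal(size=500))[:, None]
      y = 2.0 * x[:, 0] + rng.normal(size=500)
      print(im_ols(y, x))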
  2. By: Baetschmann, Gregori (University of Zurich); Staub, Kevin (University of Zurich); Winkelmann, Rainer (University of Zurich)
    Abstract: The paper re-examines existing estimators for the panel data fixed effects ordered logit model, proposes a new one, and studies the sampling properties of these estimators in a series of Monte Carlo simulations. There are two main findings. First, we show that some of the estimators used in the literature are inconsistent, and provide reasons for the inconsistency. Second, the new estimator is never outperformed by the others, seems to be substantially more immune to small sample bias than other consistent estimators, and is easy to implement. The empirical relevance is illustrated in an application to the effect of unemployment on life satisfaction.
    Keywords: ordered response, panel data, correlated heterogeneity, incidental parameters
    JEL: C23 C25 J28 J64
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp5443&r=ecm
  3. By: Roxana Halbleib (European Center for Advanced Research in Economics and Statistics (ECARES), Université libre de Bruxelles, Solvay Brussels School of Economics and Management and CoFE); Valeri Voev (School of Economics and Management, Aarhus University and CREATES)
    Abstract: This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information.
    Keywords: Volatility forecasting, High-frequency data, Realized variance
    JEL: C32 C53 G11
    Date: 2011–01–18
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-03&r=ecm
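    The mixing step itself is simple to express. The sketch below (with made-up forecast numbers) only shows how separately produced volatility and correlation forecasts can be recombined into a covariance forecast; it does not reproduce the dynamic models the paper fits to each component.
      import numpy as np

      def combine_vol_and_corr(vol_forecasts, corr_forecast):
          """Recombine univariate volatility forecasts with a correlation forecast
          into a covariance matrix: Sigma = D R D, with D the diagonal matrix of
          forecast standard deviations."""
          D = np.diag(vol_forecasts)
          return D @ corr_forecast @ D

      # hypothetical one-step-ahead forecasts for three assets
      vols = np.array([0.012, 0.020, 0.015])        # e.g. from models of realized volatilities
      R = np.array([[1.0, 0.3, 0.1],
                    [0.3, 1.0, 0.4],
                    [0.1, 0.4, 1.0]])               # e.g. from a correlation model on daily data
      print(combine_vol_and_corr(vols, R))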
  4. By: Ulrich K. Müller; James H. Stock
    Abstract: We propose a Bayesian procedure for exploiting small, possibly long-lag linear predictability in the innovations of a finite order autoregression. We model the innovations as having a log-spectral density that is a continuous mean-zero Gaussian process of order 1/√T. This local embedding makes the problem asymptotically a normal-normal Bayes problem, resulting in closed-form solutions for the best forecast. When applied to data on 132 U.S. monthly macroeconomic time series, the method is found to improve upon autoregressive forecasts by an amount consistent with the theoretical and Monte Carlo calculations.
    JEL: C11 C22 C32
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:16714&r=ecm
  5. By: Morten Ørregaard Nielsen (Queen's University and CREATES)
    Abstract: This paper proves consistency and asymptotic normality for the conditional-sum-of-squares (CSS) estimator in fractional time series models. The models are parametric and quite general. The novelty of the consistency result is that it applies to an arbitrarily large set of admissible parameter values, for which the objective function does not converge uniformly in probability, thus making the proof much more challenging than usual. The neighborhood around the critical point where uniform convergence fails is handled using a truncation argument. The only other consistency proof for such models that applies to an arbitrarily large set of admissible parameter values appears to be Hualde and Robinson (2010), who require all moments of the innovation process to exist. In contrast, the present proof requires only a few moments of the innovation process to be finite (four in the simplest case). Finally, all arguments, assumptions, and proofs in this paper are stated entirely in the time domain, which is somewhat remarkable for this literature.
    Keywords: Asymptotic normality, conditional-sum-of-squares estimator, consistency, fractional integration, fractional time series, likelihood inference, long memory, nonstationary, uniform convergence
    JEL: C22
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:qed:wpaper:1259&r=ecm
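    For readers unfamiliar with the CSS estimator, the following Python sketch illustrates the objective function in the simplest ARFIMA(0,d,0) case: fractionally difference the series at a candidate d and minimise the sum of squared residuals. It illustrates the estimator being studied, not the paper's asymptotic arguments; the toy simulation is my own.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def frac_diff(x, d):
          """Apply the truncated fractional-differencing filter (1-L)^d to x."""
          n = len(x)
          w = np.empty(n)
          w[0] = 1.0
          for j in range(1, n):
              w[j] = w[j - 1] * (j - 1 - d) / j
          out = np.empty(n)
          for t in range(n):
              out[t] = np.dot(w[:t + 1], x[t::-1])
          return out

      def css_objective(d, x):
          """Conditional sum of squares for a pure fractional (ARFIMA(0,d,0)) model."""
          return np.sum(frac_diff(x, d) ** 2)

      rng = np.random.default_rng(1)
      eps = rng.normal(size=400)
      x = frac_diff(eps, -0.3)   # (1-L)^{-0.3} eps is (truncated) fractionally integrated with d = 0.3
      res = minimize_scalar(css_objective, bounds=(-0.45, 0.95), args=(x,), method="bounded")
      print("CSS estimate of d:", res.x)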
  6. By: Michael Sørensen (University of Copenhagen and CREATES)
    Abstract: The general theory of prediction-based estimating functions for stochastic process models is reviewed and extended. Particular attention is given to optimal estimation, asymptotic theory and Gaussian processes. Several examples of applications are presented. In particular, partial observation of a system of stochastic differential equations is discussed. This includes diffusions observed with measurement errors, integrated diffusions, stochastic volatility models, and hypoelliptic stochastic differential equations. The Pearson diffusions, for which explicit optimal prediction-based estimating functions can be found, are briefly presented.
    Keywords: Asymptotic normality, consistency, diffusion with measurement errors, Gaussian process, integrated diffusion, linear predictors, non-Markovian models, optimal estimating function, partially observed system, Pearson diffusion.
    JEL: C22 C51
    Date: 2011–01–19
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-05&r=ecm
  7. By: Roxana Halbleib
    Abstract: This note solves the puzzle of estimating degenerate Wishart Autoregressive processes, introduced by Gourieroux, Jasiak and Sufana (2009) to model multivariate stochastic volatility. It derives the asymptotic and empirical properties of the Method of Moments estimator of the Wishart degrees of freedom subject to different stationarity assumptions and specific distributional settings of the underlying processes.
    Keywords: Wishart autoregressive process; asymptotic properties; realized covariance; log-normal distribution
    JEL: C32 C46 C51
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/73606&r=ecm
  8. By: Buss, Ginters
    Abstract: The paper proposes an extension of the symmetric Baxter-King band pass filter to an asymmetric Baxter-King filter. The optimal correction scheme of the ideal filter weights is the same as in the symmetric version, i.e., cut the ideal filter at the appropriate length and add a constant to all filter weights to ensure zero weight at frequency zero. Since the symmetric Baxter-King filter is unable to extract the desired signal at the very ends of the series, the extension to an asymmetric filter is useful whenever real-time estimation is needed. The paper uses Monte Carlo simulation to compare the proposed filter's properties in extracting business cycle frequencies with those of the original Baxter-King filter and the Christiano-Fitzgerald filter. Simulation results show that the asymmetric Baxter-King filter is superior to the asymmetric default specification of the Christiano-Fitzgerald filter in real-time signal extraction exercises.
    Keywords: real time estimation; Christiano-Fitzgerald filter; Monte Carlo simulation; band pass filter
    JEL: C13 C22 C15
    Date: 2011–01–17
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28176&r=ecm
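    The correction scheme described in the abstract is easy to state in code. The sketch below follows my reading of that description, with conventional band-pass settings of 6 to 32 periods assumed: truncate the ideal band-pass weights to the available lags and leads, then add a constant so the weights sum to zero.
      import numpy as np

      def asymmetric_bk_weights(low_period, high_period, n_lags, n_leads):
          """Truncate the ideal band-pass weights to n_lags past and n_leads future
          observations and add a constant so the weights sum to zero (zero gain at
          frequency zero), as in the correction scheme described in the abstract."""
          w1, w2 = 2 * np.pi / high_period, 2 * np.pi / low_period
          j = np.arange(-n_lags, n_leads + 1)
          b = np.where(j == 0, (w2 - w1) / np.pi,
                       (np.sin(w2 * j) - np.sin(w1 * j)) / (np.pi * np.where(j == 0, 1, j)))
          return b - b.sum() / len(b)              # constant correction: weights sum to zero

      # symmetric case (the usual Baxter-King truncation) vs. an end-of-sample case
      print(asymmetric_bk_weights(6, 32, 12, 12))  # 12 lags and 12 leads
      print(asymmetric_bk_weights(6, 32, 12, 0))   # real time: no future observations available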
  9. By: Chen, Pu
    Abstract: In this paper we present a grouped factor model that is designed to explore grouped structures in factor models. We develop an econometric theory consisting of a consistent classification rule to assign variables to their respective groups and a class of consistent model selection criteria to determine the number of groups as well as the number of factors in each group. As a result, we propose a procedure to estimate grouped factor models in which the unknown number of groups, the unknown assignment of variables to their groups and the unknown number of factors in each group are statistically determined from observed data. The procedure can help to estimate common factors that are pervasive across all groups and group-specific factors that are pervasive only in the respective groups. Simulations show that our proposed estimation procedure has satisfactory finite sample properties.
    Keywords: Factor Models; Generalized Principal Component Analysis; Model Selection
    JEL: C63 C22
    Date: 2010–10–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28083&r=ecm
  10. By: David Stephen Pollock
    Abstract: Discrete-time ARMA processes can be placed in a one-to-one correspondence with a set of continuous-time processes that are bounded in frequency by the Nyquist value of π radians per sample period. It is well known that, if data are sampled from a continuous process of which the maximum frequency exceeds the Nyquist value, then there will be a problem of aliasing. However, if the sampling is too rapid, then other problems will arise that will cause the ARMA estimates to be severely biased. The paper reveals the nature of these problems and it shows how they may be overcome. It is argued that the estimation of macroeconomic processes may be compromised by a failure to take account of their limits in frequency.
    Keywords: Stochastic Differential Equations; Band-Limited Stochastic Processes; Oversampling
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/14&r=ecm
  11. By: Liu, Shuangzhe (University of Canberra, Canberra, Australia); Polasek, Wolfgang (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Sellner, Richard (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria)
    Abstract: Estimators of spatial autoregressive (SAR) models depend in a highly non-linear way on the spatial correlation parameter and least squares (LS) estimators cannot be computed in closed form. We first compare two simple LS estimators by distance and covariance properties and then we study the local sensitivity behavior of these estimators using matrix derivatives. These results allow us to calculate the Taylor approximation of the least squares estimator in the spatial autoregression (SAR) model up to the second order. Using Kantorovich inequalities, we compare the covariance structure of the two estimators and we derive efficiency comparisons by upper bounds. Finally, we demonstrate our approach with an example using GDP and employment in 239 European NUTS2 regions. We find a good approximation behavior of the SAR estimator, evaluated around the non-spatial LS estimators. These results can be used as a basis for diagnostic tools to explore the sensitivity of spatial estimators.
    Keywords: Spatial autoregressive models, least squares estimators, sensitivity analysis, Taylor Approximations, Kantorovich inequality
    JEL: C11 C15 C52 E17 R12
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:ihs:ihsesp:262&r=ecm
  12. By: Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata"); Umberto Triacca (Università dell'Aquila)
    Abstract: This note is concerned with the marginal models associated with a given vector autoregressive model. In particular, it is shown that a reduction in the orders of the univariate ARMA marginal models can be determined by the presence of variables integrated with different orders. The concepts and methods of the paper are illustrated via an empirical investigation of the low-frequency properties of hours worked in the US.
    Keywords: VAR Models; ARIMA Models; Final Equations
    JEL: C32
    Date: 2011–01–24
    URL: http://d.repec.org/n?u=RePEc:rtv:ceisrp:184&r=ecm
  13. By: Mark Podolskij (University of Heidelberg and CREATES); Mathieu Rosenbaum (École Polytechnique Paris)
    Abstract: In practice, the choice of using a local volatility model or a stochastic volatility model is made according to their respective ability to fit implied volatility surfaces. In this paper, we adopt an opposite point of view. Indeed, based on historical data, we design a statistical procedure aiming at testing the assumption of a local volatility model for the price dynamics, against the alternative of a stochastic volatility model.
    Keywords: Local Volatility Models, Stochastic Volatility Models, Test Statistics, Semi-Martingales, Limit Theorems.
    JEL: C10 C13 C14
    Date: 2011–01–13
    URL: http://d.repec.org/n?u=RePEc:aah:create:2011-04&r=ecm
  14. By: Chao, Swanson, Hausman, Newey, and Woutersen
    Abstract: This paper derives the limiting distributions of alternative jackknife IV (JIV) estimators and gives formulae for accompanying consistent standard errors in the presence of heteroskedasticity and many instruments. The asymptotic framework includes the many instrument sequence of Bekker (1994) and the many weak instrument sequence of Chao and Swanson (2005). We show that JIV estimators are asymptotically normal and that standard errors are consistent provided that \frac{\sqrt{K_n}}{r_n} \to 0 as n \to \infty, where K_n and r_n denote, respectively, the number of instruments and the concentration parameter. This is in contrast to the asymptotic behavior of such classical IV estimators as LIML, B2SLS, and 2SLS, all of which are inconsistent in the presence of heteroskedasticity, unless \frac{K_n}{r_n} \to 0. We also show that the rate of convergence and the form of the asymptotic covariance matrix of the JIV estimators will in general depend on the strength of the instruments as measured by the relative orders of magnitude of r_n and K_n.
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:567&r=ecm
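    As background, here is a minimal sketch of one common jackknife IV variant (often labelled JIVE1), in which the first-stage fitted value for each observation is computed with that observation left out. The leverage-based shortcut and the toy data are illustrative and not taken from the paper.
      import numpy as np

      def jive1(y, X, Z):
          """Jackknife IV: leave-one-out first-stage fitted values remove the
          own-observation term that biases 2SLS when there are many instruments."""
          P = Z @ np.linalg.solve(Z.T @ Z, Z.T)                  # projection matrix onto Z
          h = np.diag(P)                                         # leverage values P_ii
          X_hat = (P @ X - h[:, None] * X) / (1 - h)[:, None]    # leave-one-out fitted values
          return np.linalg.solve(X_hat.T @ X, X_hat.T @ y)

      # toy example: one endogenous regressor, 20 instruments
      rng = np.random.default_rng(2)
      n, K = 400, 20
      Z = rng.normal(size=(n, K))
      u = rng.normal(size=n)
      x = Z @ np.full(K, 0.1) + 0.8 * u + rng.normal(size=n)     # endogenous regressor
      y = 1.5 * x + u
      print(jive1(y, x[:, None], Z))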
  15. By: David Stephen Pollock
    Abstract: A theory of band-limited linear stochastic processes is described and it is related to the familiar theory of ARMA models in discrete time. By ignoring the limitation on the frequencies of the forcing function, in the process of fitting a conventional ARMA model, one is liable to derive estimates that are severely biased. If the maximum frequency in the sampled data is less than the Nyquist value, then the underlying continuous function can be reconstituted by sinc function or Fourier interpolation. The estimation biases can be avoided by re-sampling the continuous process at a rate corresponding to the maximum frequency of the forcing function. Then, there is a direct correspondence between the parameters of the band-limited ARMA model and those of an equivalent continuous-time process.
    Keywords: Stochastic Differential Equations; Band-Limited Stochastic Processes; Aliasing and Interference
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/11&r=ecm
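    The reconstitution step mentioned in the abstract can be illustrated briefly: sinc (Fourier) interpolation rebuilds the continuous band-limited function from its unit-spaced samples, after which the process can be re-sampled at a rate matched to the maximum frequency of the forcing function. The toy signal and sampling rates below are my own assumptions.
      import numpy as np

      def sinc_interpolate(samples, t_new, dt=1.0):
          """Reconstruct a band-limited signal from samples spaced dt apart by sinc
          interpolation and evaluate it at the (possibly non-integer) points t_new."""
          k = np.arange(len(samples))
          # x(t) = sum_k x[k] * sinc((t - k*dt) / dt)
          return np.array([np.dot(samples, np.sinc((t - k * dt) / dt)) for t in t_new])

      # toy band-limited series sampled at unit intervals, then re-sampled every
      # 1.5 periods, a rate that still exceeds twice its maximum frequency of 0.4*pi
      t = np.arange(64)
      x = np.cos(0.4 * np.pi * t) + 0.5 * np.sin(0.2 * np.pi * t)
      print(sinc_interpolate(x, np.arange(0, 60, 1.5))[:5])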
  16. By: Jennifer L. Castle; David F. Hendry
    Abstract: Model selection from a general unrestricted model (GUM) can potentially confront three very different environments: over-, exact, and under-specification of the data generation process (DGP). In the first, and most-studied setting, the DGP is nested in the GUM, and the main role of general-to-specific (Gets) selection is to eliminate the irrelevant variables while retaining the relevant. In an exact specification, the theory formulation is precisely correct and can always be retained by ‘forcing’ during selection, but is nevertheless embedded in a broader model where possible omissions, breaks, non-linearity, or data contamination are checked. The most realistic case is where some aspects of the relevant DGP are correctly included, but some are omitted, leading to under-specification. We review the analysis of model selection procedures which allow for many relevant effects, but inadvertently omit others, yet irrelevant variables are also included in the GUM, and exploit the ability of automatic procedures to handle more variables than observations, and consequently tackle perfect collinearity. Considering all of the possibilities, where it is not known which one obtains in practice, reveals that model selection can excel relative to just fitting a prior specification, yet has very low costs when an exact specification is correctly postulated initially.
    Keywords: Model selection, congruence, mis-specification, impulse-indicator saturation, Autometrics
    JEL: C51 C22
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:523&r=ecm
  17. By: Biørn, Erik (Dept. of Economics, University of Oslo)
    Abstract: When using data from individuals who are in the labour force to disentangle the empirical relevance of cohort, age and time effects for sickness absence, the inference may be biased by sorting-out mechanisms. One reason is unobserved heterogeneity potentially affecting both health status and ability to work, which can bias inference because entry into the data set is conditional on being in the labour force. Can this sample selection be adequately handled by attaching unobserved heterogeneity to non-structured fixed effects? In the paper we examine this issue and discuss the econometric setup for identifying time effects in sickness absence from such data. The inference and interpretation problem is caused, on the one hand, by the occurrence of time, cohort and age effects in labour market participation as well, and, on the other hand, by correlation between unobserved heterogeneity in health status and in ability to work. We show that running panel data regressions, ordinary or logistic, of sickness absence data on certain covariates, while neglecting this sample selection, is likely to obscure the interpretation of the results, except in certain, not particularly realistic, cases. However, the fixed individual effects approach is more robust in this respect than an approach controlling for fixed cohort effects only.
    Keywords: Sickness absence; health-labour interaction; cohort-age-time problem; self-selection; latent heterogeneity; bivariate censoring; truncated binormal distribution; panel data
    JEL: C23 C25 I38 J22
    Date: 2010–12–18
    URL: http://d.repec.org/n?u=RePEc:hhs:osloec:2010_020&r=ecm
  18. By: Heinen, Florian; Kaufmann, Hendrik; Sibbertsen, Philipp
    Abstract: While it is widely agreed that Purchasing Power Parity (PPP) holds as a long-run concept, the specific dynamic driving the process is largely built upon a priori economic belief rather than a thorough statistical modeling procedure. The two prevailing time series models, i.e. the exponential smooth transition autoregressive (ESTAR) model and the Markov switching autoregressive (MSAR) model, are both able to support PPP as a long-run concept. However, the dynamic behavior of real exchange rates implied by these two models is very different and leads to different economic interpretations. In this paper we approach this problem by offering a bootstrap based testing procedure to discriminate between these two rival models. We further study the small sample performance of the test. In an application we analyze several major real exchange rates to shed light on which model best describes these processes. This allows us to draw a conclusion about the driving forces of real exchange rates.
    Keywords: Nonlinearities, Markov switching, Smooth transition, Specification testing, Real exchange rates
    JEL: C12 C15 C22 C52 F31
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:han:dpaper:dp-463&r=ecm
  19. By: Roxana Halbleib; Valeri Voev
    Abstract: This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates positive definite, but biased covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk averse investor, regardless of the type of utility function or return distribution, would be better off using this model than some standard approaches.
    Keywords: Forecasting; Fractional integration; Stochastic dominance; Portfolio optimization; Realized covariance
    JEL: C32 C53 G11
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/73585&r=ecm
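    The positivity-preserving device mentioned in the abstract can be sketched as follows: forecast the elements of the Cholesky factor with any time-series model and rebuild the covariance forecast as L L', which is positive definite by construction (and, as the abstract notes, biased because of the nonlinear transformation). The naive last-value "forecast" below merely stands in for the paper's VARFIMA dynamics.
      import numpy as np

      def vech_chol(S):
          """Stack the lower-triangular Cholesky factor of S into a vector."""
          L = np.linalg.cholesky(S)
          return L[np.tril_indices_from(L)]

      def unvech_to_cov(v, dim):
          """Rebuild a covariance matrix from a vector of Cholesky elements:
          Sigma = L L' is positive definite by construction."""
          L = np.zeros((dim, dim))
          L[np.tril_indices(dim)] = v
          return L @ L.T

      # hypothetical use: any time-series model can forecast the Cholesky elements;
      # here a naive last-value forecast stands in for the paper's VARFIMA model
      realized_cov = np.array([[1.0, 0.3], [0.3, 2.0]])
      v_forecast = vech_chol(realized_cov)          # placeholder one-step-ahead forecast
      print(unvech_to_cov(v_forecast, 2))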
  20. By: David Stephen Pollock; Emi Mise
    Abstract: Alternative methods for the seasonal adjustment of economic data are described that operate in the time domain and in the frequency domain. The time-domain method, which employs a classical comb filter, mimics the effects of the model-based procedures of the SEATS–TRAMO and STAMP programs. The frequency-domain method eliminates the sinusoidal elements of which, in the judgment of the user, the seasonal component is composed. It is proposed that, in some circumstances, seasonal adjustment is best achieved by eliminating all elements in excess of the frequency that marks the upper limit of the trend-cycle component of the data. It is argued that the choice of the method of seasonal adjustment is liable to affect the determination of the turning points of the business cycle.
    Keywords: Wiener–Kolmogorov Filtering; Frequency-Domain Methods; The Trend-Cycle Component
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/12&r=ecm
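    A bare-bones Python sketch of the frequency-domain idea, under my own simplifying assumptions (a toy monthly series, no trend estimation): zero out the Fourier ordinates at and near the seasonal frequencies 2πk/12 and invert the transform. It does not reproduce the comb filter or the programs mentioned in the abstract.
      import numpy as np

      def remove_seasonal_ordinates(x, period=12, width=1):
          """Zero the Fourier ordinates at the seasonal frequencies k/period (and
          `width` neighbouring ordinates on each side), then invert the transform."""
          n = len(x)
          X = np.fft.rfft(x)
          freqs = np.fft.rfftfreq(n, d=1.0)              # cycles per observation
          for k in range(1, period // 2 + 1):
              idx = np.where(np.abs(freqs - k / period) <= width / n)[0]
              X[idx] = 0.0
          return np.fft.irfft(X, n)

      # toy monthly series: trend + seasonal + noise
      rng = np.random.default_rng(3)
      t = np.arange(240)
      x = 0.02 * t + np.sin(2 * np.pi * t / 12) + 0.2 * rng.normal(size=240)
      print(remove_seasonal_ordinates(x)[:5])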
  21. By: Sumru Altug (Koç University and CEPR); Baris Tan (Koç University); Gozde Gencer (Yapikredi Bank)
    Abstract: This paper characterizes the business cycle as a recurring Markov chain for a broad set of developed and developing countries. The objective is to understand differences in cyclical phenomena across a broad range of countries based on the behavior of two key economic time series – industrial production and employment. The Markov chain approach is a parsimonious approach that allows us to examine the cyclical dynamics of different economic time series using limited judgment on the issue. Time homogeneity and time dependence tests are implemented to determine the stationarity and dependence properties of the series. Univariate processes for industrial production and employment growth are estimated individually and a composite indicator that combines information on these series is also constructed. Tests of equality of the estimated Markov chains across countries are also implemented to identify similarities and differences in the cyclical dynamics of the relevant series.
    Keywords: Markov chain models, economic indicators, cross-country analysis
    JEL: C22 E32 E37
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:koc:wpaper:1101&r=ecm
  22. By: David Stephen Pollock
    Abstract: In statistical time-series analysis, signal processing and control engineering, a transfer function is a mathematical relationship between a numerical input to a dynamic system and the resulting output. The theory of transfer functions describes how the input/output relationship is affected by the structure of the transfer function. The theory of the transfer functions of linear time-invariant (LTI) systems has been available for many years. It was developed originally in connection with electrical and mechanical systems described in continuous time. The basic theory can be attributed largely to Oliver Heaviside (1850–1925) [3] [4]. With the advent of digital signal processing, the emphasis has shifted to discrete-time representations. These are also appropriate to problems in statistical time-series analysis, where the data are in the form of sequences of stochastic values sampled at regular intervals.
    Keywords: Impulse response; Frequency response; Spectral density
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/15&r=ecm
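    To make the frequency-response notion concrete, here is a small sketch that evaluates a rational (ARMA-type) transfer function H(z) = B(z)/A(z) on the unit circle; the example filter is my own and is not taken from the paper.
      import numpy as np

      def frequency_response(b, a, n_points=256):
          """Frequency response of H(z) = B(z)/A(z), with B and A polynomials in
          z^{-1}, evaluated on the unit circle for omega in [0, pi]."""
          omega = np.linspace(0, np.pi, n_points)
          z_inv = np.exp(-1j * omega)
          B = np.polyval(b[::-1], z_inv)          # b[0] + b[1] z^{-1} + ...
          A = np.polyval(a[::-1], z_inv)
          return omega, B / A

      # example: an ARMA(1,1)-type filter y_t = 0.8 y_{t-1} + e_t + 0.5 e_{t-1}
      omega, H = frequency_response(b=np.array([1.0, 0.5]), a=np.array([1.0, -0.8]))
      print(np.abs(H[:5]))                        # gain at the lowest frequencies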
  23. By: Jennifer Castle; Xiaochuan Qin; W. Robert Reed (University of Canterbury)
    Abstract: This review surveys a number of common Model Selection Algorithms (MSAs), discusses how they relate to each other, and identifies factors that explain their relative performances. At the heart of MSA performance is the trade-off between Type I and Type II errors. Some relevant variables will be mistakenly excluded, and some irrelevant variables will be retained by chance. A successful MSA will find the optimal trade-off between the two types of errors for a given data environment. Whether a given MSA will be successful in a given environment depends on the relative costs of these two types of errors. We use Monte Carlo experimentation to illustrate these issues. We confirm that no MSA does best in all circumstances. Even the worst MSA in terms of overall performance – the strategy of including all candidate variables – sometimes performs best (viz., when all candidate variables are relevant). We also show how (i) the ratio of relevant to total candidate variables and (ii) DGP noise affect relative MSA performance. Finally, we discuss a number of issues complicating the task of MSAs in producing reliable coefficient estimates.
    Keywords: Model selection algorithms; Information Criteria; General-to-Specific modeling; Bayesian Model Averaging; Portfolio Models; AIC; SIC; AICc; SICc; Monte Carlo Analysis; Autometrics
    JEL: C52 C15
    Date: 2011–01–01
    URL: http://d.repec.org/n?u=RePEc:cbt:econwp:11/03&r=ecm
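    The Type I/Type II trade-off can be made concrete with a tiny Monte Carlo. The design below is mine, not the authors': a simple t-test retention rule at the 5% level keeps most of the relevant regressors while also retaining roughly 5% of the irrelevant ones by chance.
      import numpy as np

      rng = np.random.default_rng(4)
      n, k_rel, k_irr, reps, crit = 100, 3, 7, 500, 1.96
      keep_rel = keep_irr = 0
      for _ in range(reps):
          X = rng.normal(size=(n, k_rel + k_irr))
          y = X[:, :k_rel] @ np.full(k_rel, 0.3) + rng.normal(size=n)   # only first 3 regressors matter
          beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
          sigma2 = res[0] / (n - X.shape[1])
          se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
          t = np.abs(beta / se)
          keep_rel += np.sum(t[:k_rel] > crit)     # relevant variables retained (power)
          keep_irr += np.sum(t[k_rel:] > crit)     # irrelevant variables retained by chance (size)
      print("retention rate, relevant:  ", keep_rel / (reps * k_rel))
      print("retention rate, irrelevant:", keep_irr / (reps * k_irr))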
  24. By: Zhu, Junjun; Xie, Shiyu
    Abstract: We construct a triple-threshold GARCH model to analyze the asymmetric responses of the mean and the conditional volatility. For parameter estimation, we apply the Griddy-Gibbs sampling method, which requires less work in the selection of starting values and pre-runs. Applying this model to the Chinese stock market, we find that the 12-day average return plays an important role in defining the different regimes: the down regime is characterized by a negative 12-day average return, while the up regime has a positive one. The conditional mean responds differently in the down and up regimes. In the down regime, the return at date t is affected negatively by the negative return at lag 2, while in the up regime the return responds significantly to both positive and negative returns at lag 1. Moreover, our model shows that volatility reacts asymmetrically to positive and negative innovations, and this asymmetric reaction varies between the down and up regimes. In the down regime, volatility rises more when a negative innovation hits the market than when a positive one does, while in the up regime a positive innovation leads to a more volatile market than a negative one.
    Keywords: Threshold; Griddy-Gibbs sampling; MCMC method; GARCH
    JEL: G15 C22 C11
    Date: 2010–06–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28195&r=ecm
  25. By: Luca RICCETTI (Universita' Politecnica delle Marche, Dipartimento di Economia)
    Abstract: Many authors have suggested that the mean-variance criterion, conceived by Markowitz (1952), is not optimal for asset allocation, because the investor's expected utility function is better proxied by a function that uses higher moments and because returns are distributed in a non-Normal way, being asymmetric and/or leptokurtic, so the mean-variance criterion cannot correctly proxy the expected utility with non-Normal returns. In Riccetti (2010) I apply a simple GARCH-copula model and I find that copulas are not useful for choosing among stock indices, but they can be useful in a macro asset allocation model, that is, for choosing the stock and bond composition of portfolios. In this paper I apply that GARCH-copula model to the macro asset allocation of portfolios containing a commodity component. I find that the copula model appears useful and better than the mean-variance one for macro asset allocation also in the presence of a commodity index, even if it is not better than GARCH models on independent univariate series, probably because of the low correlation of the commodity index returns with the stock, bond and exchange rate returns.
    Keywords: Portfolio Choice
    JEL: C52 C53 G11
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:anc:wpaper:355&r=ecm
  26. By: Graziani, Rebecca (Department of Decision Sciences, Bocconi University, Milano); Keilman, Nico (Dept. of Economics, University of Oslo)
    Abstract: The Scaled Model of Error has gained considerable popularity during the past ten years as a device for computing probabilistic population forecasts of the cohort-component type. In this report we investigate how sensitive probabilistic population forecasts produced by means of the Scaled Model of Error are to small changes in the correlation parameters. We consider changes in the correlation of the age-specific fertility forecast error increments across time and age, and changes in the correlation of the age-specific mortality forecast error increments across time, age and sex. Next we analyse the impact of such changes on the forecasts of the Total Fertility Rate and of the Male and Female Life Expectancies respectively. For age-specific fertility we find that the correlation across ages has only a limited impact on the uncertainty in the Total Fertility Rate. As a consequence, annual numbers of births will be little affected. The autocorrelation in the error increments is an important parameter, in particular in the long run. The autocorrelation in the error increments for age-specific mortality is also important. It has a large effect on long-run uncertainty in life expectancy values, and hence on the uncertainty around the elderly population in the future. In empirical applications of the Scaled Model of Error, one should give due attention to a correct estimation of these two parameters.
    Keywords: Scaled model of error; Stochastic population forecast; Probabilistic cohort component model; Sensitivity; Correlation
    JEL: C15 C49 C63 J40
    Date: 2010–11–23
    URL: http://d.repec.org/n?u=RePEc:hhs:osloec:2010_022&r=ecm
  27. By: David Stephen Pollock
    Abstract: These notes have been written to accompany a tutorial session held at the London School of Economics as a prelude to the ERCIM conference of December 2010.
    Date: 2010–12
    URL: http://d.repec.org/n?u=RePEc:lec:leecon:11/13&r=ecm
  28. By: Pötscher, Benedikt M.
    Abstract: Bounds on the order of magnitude of sums of negative powers of integrated processes are derived.
    Keywords: integrated processes; sums of negative powers; order of magnitude; martingale transform
    JEL: C22
    Date: 2011–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:28287&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.