
on Econometrics 
By:  Vogelsang, Timothy J. (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Wagner, Martin (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria) 
Abstract:  This paper is concerned with parameter estimation and inference in a cointegrating regression, where as usual endogenous regressors as well as serially correlated errors are considered. We propose a simple, new estimation method based on an augmented partial sum (integration) transformation of the regression model. The new estimator is labeled Integrated Modified Ordinary Least Squares (IM-OLS). IM-OLS is similar in spirit to the fully modified approach of Phillips and Hansen (1990), with the key difference that IM-OLS does not require estimation of long-run variance matrices and avoids the need to choose tuning parameters (kernels, bandwidths, lags). Inference does require that a long-run variance be scaled out, and we propose traditional and fixed-b methods for obtaining critical values for test statistics. The properties of IM-OLS are analyzed using asymptotic theory and finite sample simulations. IM-OLS performs well relative to other approaches in the literature. 
Keywords:  Bandwidth, cointegration, fixed-b asymptotics, Fully Modified OLS, IM-OLS, kernel 
JEL:  C31 C32 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:263&r=ecm 
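As a rough illustration of the partial-sum idea (a sketch based only on the abstract; the simulated DGP, variable names, and the exact form of the augmentation are my assumptions, not the paper's specification):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
# Toy cointegrated pair: x_t is a random walk, y_t = 2*x_t + u_t
x = np.cumsum(rng.normal(size=T))
u = rng.normal(size=T)
y = 2.0 * x + u

# Partial-sum (integration) transform of both sides of the regression
Sy = np.cumsum(y)
Sx = np.cumsum(x)

# Augmented partial-sum regression: S_y on S_x plus the original level x_t;
# the extra level regressor is meant to absorb endogeneity (my reading of
# "augmented" -- the paper's exact augmentation may differ)
Z = np.column_stack([Sx, x])
coef, *_ = np.linalg.lstsq(Z, Sy, rcond=None)
beta_imols = coef[0]   # estimate of the cointegrating coefficient (true value 2)
```

Note that no long-run variance or bandwidth choice enters the point estimate, which is the feature the abstract emphasizes.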
By:  Baetschmann, Gregori (University of Zurich); Staub, Kevin (University of Zurich); Winkelmann, Rainer (University of Zurich) 
Abstract:  The paper reexamines existing estimators for the panel data fixed effects ordered logit model, proposes a new one, and studies the sampling properties of these estimators in a series of Monte Carlo simulations. There are two main findings. First, we show that some of the estimators used in the literature are inconsistent, and provide reasons for the inconsistency. Second, the new estimator is never outperformed by the others, seems to be substantially more immune to small sample bias than other consistent estimators, and is easy to implement. The empirical relevance is illustrated in an application to the effect of unemployment on life satisfaction. 
Keywords:  ordered response, panel data, correlated heterogeneity, incidental parameters 
JEL:  C23 C25 J28 J64 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5443&r=ecm 
By:  Roxana Halbleib (European Center for Advanced Research in Economics and Statistics (ECARES), Université libre de Bruxelles, Solvay Brussels School of Economics and Management and CoFE); Valeri Voev (School of Economics and Management, Aarhus University and CREATES) 
Abstract:  This paper proposes a new method for forecasting covariance matrices of financial returns. The model mixes volatility forecasts from a dynamic model of daily realized volatilities estimated with high-frequency data with correlation forecasts based on daily data. This new approach allows for flexible dependence patterns for volatilities and correlations, and can be applied to covariance matrices of large dimensions. The separate modeling of volatility and correlation forecasts considerably reduces the estimation and measurement error implied by the joint estimation and modeling of covariance matrix dynamics. Our empirical results show that the new mixing approach provides superior forecasts compared to multivariate volatility specifications using single sources of information. 
Keywords:  Volatility forecasting, High-frequency data, Realized variance 
JEL:  C32 C53 G11 
Date:  2011–01–18 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201103&r=ecm 
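The mixing idea can be sketched as combining a diagonal matrix of volatility forecasts with a separately forecast correlation matrix; all numbers below are made up for illustration and do not come from the paper:

```python
import numpy as np

# Hypothetical one-step-ahead inputs: per-asset volatility forecasts from a
# realized-volatility model (high-frequency data) and a correlation matrix
# forecast from a daily-data model
vol_forecast = np.array([0.012, 0.020, 0.015])
corr_forecast = np.array([[1.0, 0.3, 0.1],
                          [0.3, 1.0, 0.4],
                          [0.1, 0.4, 1.0]])

# Combine the two sources: Sigma = D R D with D = diag(volatility forecasts)
D = np.diag(vol_forecast)
sigma_forecast = D @ corr_forecast @ D
eigvals = np.linalg.eigvalsh(sigma_forecast)   # positive if R is positive definite
```

The separation lets each block be modeled at its natural frequency, and the combined forecast inherits positive definiteness from the correlation forecast.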
By:  Ulrich K. Müller; James H. Stock 
Abstract:  We propose a Bayesian procedure for exploiting small, possibly longlag linear predictability in the innovations of a finite order autoregression. We model the innovations as having a logspectral density that is a continuous meanzero Gaussian process of order 1/√T. This local embedding makes the problem asymptotically a normalnormal Bayes problem, resulting in closedform solutions for the best forecast. When applied to data on 132 U.S. monthly macroeconomic time series, the method is found to improve upon autoregressive forecasts by an amount consistent with the theoretical and Monte Carlo calculations. 
JEL:  C11 C22 C32 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:16714&r=ecm 
By:  Morten Ørregaard Nielsen (Queen's University and CREATES) 
Abstract:  This paper proves consistency and asymptotic normality for the conditional-sum-of-squares (CSS) estimator in fractional time series models. The models are parametric and quite general. The novelty of the consistency result is that it applies to an arbitrarily large set of admissible parameter values, for which the objective function does not converge uniformly in probability, thus making the proof much more challenging than usual. The neighborhood around the critical point where uniform convergence fails is handled using a truncation argument. The only other consistency proof for such models that applies to an arbitrarily large set of admissible parameter values appears to be Hualde and Robinson (2010), who require all moments of the innovation process to exist. In contrast, the present proof requires only a few moments of the innovation process to be finite (four in the simplest case). Finally, all arguments, assumptions, and proofs in this paper are stated entirely in the time domain, which is somewhat remarkable for this literature. 
Keywords:  Asymptotic normality, conditional-sum-of-squares estimator, consistency, fractional integration, fractional time series, likelihood inference, long memory, nonstationarity, uniform convergence 
JEL:  C22 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:qed:wpaper:1259&r=ecm 
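A minimal time-domain version of the CSS idea for a pure fractional model can be sketched as a grid search over the memory parameter d; the truncated binomial filter and the toy DGP below are my own simplifications, not the paper's general setup:

```python
import numpy as np

def frac_filter(x, d):
    """Apply (1-L)^d to x via the truncated binomial expansion (time domain)."""
    n = len(x)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k   # recursion for the binomial weights
    # residual e_t = sum_{k<=t} w_k x_{t-k}
    return np.array([w[:t + 1] @ x[t::-1] for t in range(n)])

rng = np.random.default_rng(1)
T, d_true = 1000, 0.3
eps = rng.normal(size=T)
x = frac_filter(eps, -d_true)          # simulate (1-L)^{-d} eps

# Conditional sum of squares: choose d minimizing the sum of squared residuals
grid = np.linspace(-0.2, 0.8, 101)
css = [np.sum(frac_filter(x, d) ** 2) for d in grid]
d_hat = grid[np.argmin(css)]
```

Everything here stays in the time domain, which mirrors the feature the abstract highlights; a practical implementation would replace the grid with a numerical optimizer.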
By:  Michael Sørensen (University of Copenhagen and CREATES) 
Abstract:  The general theory of prediction-based estimating functions for stochastic process models is reviewed and extended. Particular attention is given to optimal estimation, asymptotic theory and Gaussian processes. Several examples of applications are presented. In particular, partial observation of a system of stochastic differential equations is discussed. This includes diffusions observed with measurement errors, integrated diffusions, stochastic volatility models, and hypoelliptic stochastic differential equations. The Pearson diffusions, for which explicit optimal prediction-based estimating functions can be found, are briefly presented. 
Keywords:  Asymptotic normality, consistency, diffusion with measurement errors, Gaussian process, integrated diffusion, linear predictors, non-Markovian models, optimal estimating function, partially observed system, Pearson diffusion. 
JEL:  C22 C51 
Date:  2011–01–19 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201105&r=ecm 
By:  Roxana Halbleib 
Abstract:  This note solves the puzzle of estimating degenerate Wishart Autoregressive processes, introduced by Gourieroux, Jasiak and Sufana (2009) to model multivariate stochastic volatility. It derives the asymptotic and empirical properties of the Method of Moments estimator of the Wishart degrees of freedom subject to different stationarity assumptions and specific distributional settings of the underlying processes. 
Keywords:  Wishart autoregressive process; asymptotic properties; realized covariance; log-normal distribution 
JEL:  C32 C46 C51 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/73606&r=ecm 
By:  Buss, Ginters 
Abstract:  The paper proposes an extension of the symmetric Baxter-King band pass filter to an asymmetric Baxter-King filter. The optimal correction scheme for the ideal filter weights is the same as in the symmetric version, i.e., cut the ideal filter at the appropriate length and add a constant to all filter weights to ensure zero weight on the zero frequency. Since the symmetric Baxter-King filter is unable to extract the desired signal at the very ends of the series, the extension to an asymmetric filter is useful whenever real-time estimation is needed. The paper uses Monte Carlo simulation to compare the proposed filter's properties in extracting business cycle frequencies with those of the original Baxter-King filter and the Christiano-Fitzgerald filter. Simulation results show that the asymmetric Baxter-King filter is superior to the asymmetric default specification of the Christiano-Fitzgerald filter in real-time signal extraction exercises. 
Keywords:  real-time estimation; Christiano-Fitzgerald filter; Monte Carlo simulation; band pass filter 
JEL:  C13 C22 C15 
Date:  2011–01–17 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:28176&r=ecm 
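The correction scheme the abstract describes is easy to state in code. This sketch (band edges and truncation points are arbitrary choices of mine) builds both the symmetric interior filter and a one-sided end-of-sample version:

```python
import numpy as np

def bk_weights(w1, w2, n_past, n_future):
    """Baxter-King band-pass weights, possibly asymmetric near the sample end.

    Truncate the ideal band-pass filter at n_past lags and n_future leads,
    then add a constant to all weights so they sum to zero (zero weight at
    frequency zero), as the abstract describes for both the symmetric and
    asymmetric cases.
    """
    k = np.arange(-n_past, n_future + 1)
    # ideal weights: b_0 = (w2-w1)/pi, b_k = (sin(w2 k) - sin(w1 k))/(pi k)
    b = np.where(k == 0, (w2 - w1) / np.pi,
                 (np.sin(w2 * k) - np.sin(w1 * k))
                 / (np.pi * np.where(k == 0, 1, k)))
    return b - b.mean()   # constant correction: weights now sum to zero

# Business-cycle band of 6-32 quarters (a conventional choice, not the paper's)
w1, w2 = 2 * np.pi / 32, 2 * np.pi / 6
sym = bk_weights(w1, w2, 12, 12)    # symmetric interior filter
asym = bk_weights(w1, w2, 12, 0)    # one-sided filter for the last observation
```

Both filters place zero weight on frequency zero, so they remove a unit root or linear trend; only the asymmetric version can be evaluated at the very end of the sample.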
By:  Chen, Pu 
Abstract:  In this paper we present a grouped factor model that is designed to explore grouped structures in factor models. We develop an econometric theory consisting of a consistent classification rule to assign variables to their respective groups and a class of consistent model selection criteria to determine the number of groups as well as the number of factors in each group. As a result, we propose a procedure to estimate grouped factor models, in which the unknown number of groups, the unknown assignment of variables to their groups, and the unknown number of factors in each group are statistically determined from observed data. The procedure can help to estimate common factors that are pervasive across all groups and group-specific factors that are pervasive only in their respective groups. Simulations show that our proposed estimation procedure has satisfactory finite sample properties. 
Keywords:  Factor Models; Generalized Principal Component Analysis; Model Selection 
JEL:  C63 C22 
Date:  2010–10–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:28083&r=ecm 
By:  David Stephen Pollock 
Abstract:  Discrete-time ARMA processes can be placed in a one-to-one correspondence with a set of continuous-time processes that are bounded in frequency by the Nyquist value of π radians per sample period. It is well known that, if data are sampled from a continuous process of which the maximum frequency exceeds the Nyquist value, then there will be a problem of aliasing. However, if the sampling is too rapid, then other problems will arise that will cause the ARMA estimates to be severely biased. The paper reveals the nature of these problems and it shows how they may be overcome. It is argued that the estimation of macroeconomic processes may be compromised by a failure to take account of their limits in frequency. 
Keywords:  Stochastic Differential Equations; Band-Limited Stochastic Processes; Oversampling 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:11/14&r=ecm 
By:  Liu, Shuangzhe (University of Canberra, Canberra, Australia); Polasek, Wolfgang (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria); Sellner, Richard (Department of Economics and Finance, Institute for Advanced Studies, Vienna, Austria) 
Abstract:  Estimators of spatial autoregressive (SAR) models depend in a highly nonlinear way on the spatial correlation parameter and least squares (LS) estimators cannot be computed in closed form. We first compare two simple LS estimators by distance and covariance properties and then we study the local sensitivity behavior of these estimators using matrix derivatives. These results allow us to calculate the Taylor approximation of the least squares estimator in the spatial autoregression (SAR) model up to the second order. Using Kantorovich inequalities, we compare the covariance structure of the two estimators and we derive efficiency comparisons by upper bounds. Finally, we demonstrate our approach by an example for GDP and employment in 239 European NUTS2 regions. We find a good approximation behavior of the SAR estimator, evaluated around the nonspatial LS estimators. These results can be used as a basis for diagnostic tools to explore the sensitivity of spatial estimators. 
Keywords:  Spatial autoregressive models, least squares estimators, sensitivity analysis, Taylor Approximations, Kantorovich inequality 
JEL:  C11 C15 C52 E17 R12 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:262&r=ecm 
By:  Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata"); Umberto Triacca (Università dell'Aquila) 
Abstract:  This note is concerned with the marginal models associated with a given vector autoregressive model. In particular, it is shown that a reduction in the orders of the univariate ARMA marginal models can be determined by the presence of variables integrated with different orders. The concepts and methods of the paper are illustrated via an empirical investigation of the low-frequency properties of hours worked in the US. 
Keywords:  VAR Models; ARIMA Models; Final Equations 
JEL:  C32 
Date:  2011–01–24 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:184&r=ecm 
By:  Mark Podolskij (University of Heidelberg and CREATES); Mathieu Rosenbaum (École Polytechnique Paris) 
Abstract:  In practice, the choice of using a local volatility model or a stochastic volatility model is made according to their respective ability to fit implied volatility surfaces. In this paper, we adopt an opposite point of view. Indeed, based on historical data, we design a statistical procedure aiming at testing the assumption of a local volatility model for the price dynamics, against the alternative of a stochastic volatility model. 
Keywords:  Local Volatility Models, Stochastic Volatility Models, Test Statistics, Semi-Martingales, Limit Theorems. 
JEL:  C10 C13 C14 
Date:  2011–01–13 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201104&r=ecm 
By:  Chao, Swanson, Hausman, Newey, and Woutersen 
Abstract:  This paper derives the limiting distributions of alternative jackknife IV (JIV) estimators and gives formulae for accompanying consistent standard errors in the presence of heteroskedasticity and many instruments. The asymptotic framework includes the many instrument sequence of Bekker (1994) and the many weak instrument sequence of Chao and Swanson (2005). We show that JIV estimators are asymptotically normal and that standard errors are consistent provided that \sqrt{K_n}/r_n \to 0 as n \to \infty, where K_n and r_n denote, respectively, the number of instruments and the concentration parameter. This is in contrast to the asymptotic behavior of such classical IV estimators as LIML, B2SLS, and 2SLS, all of which are inconsistent in the presence of heteroskedasticity, unless K_n/r_n \to 0. We also show that the rate of convergence and the form of the asymptotic covariance matrix of the JIV estimators will in general depend on the strength of the instruments as measured by the relative orders of magnitude of r_n and K_n. 
Date:  2010–10 
URL:  http://d.repec.org/n?u=RePEc:jhu:papers:567&r=ecm 
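One jackknife IV variant can be sketched with leave-one-out fitted values (a JIVE1-style construction; the simulated design is my own, and the paper studies several such estimators and their standard errors):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 400, 10
Z = rng.normal(size=(n, K))                 # instruments
v = rng.normal(size=n)                      # first-stage error
x = Z @ np.full(K, 0.3) + v                 # endogenous regressor
y = 1.5 * x + 0.8 * v + rng.normal(size=n)  # structural equation, beta = 1.5

P = Z @ np.linalg.solve(Z.T @ Z, Z.T)       # projection onto the instruments
h = np.diag(P)                              # leverage values P_ii
# Leave-one-out fitted value of x_i: removes the own-observation term that
# creates many-instrument bias in 2SLS
x_tilde = (P @ x - h * x) / (1 - h)
beta_jive = (x_tilde @ y) / (x_tilde @ x)
```

Deleting the own observation is what keeps the estimator consistent under heteroskedasticity with many instruments, the setting the abstract analyzes.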
By:  David Stephen Pollock 
Abstract:  A theory of band-limited linear stochastic processes is described and it is related to the familiar theory of ARMA models in discrete time. By ignoring the limitation on the frequencies of the forcing function in the process of fitting a conventional ARMA model, one is liable to derive estimates that are severely biased. If the maximum frequency in the sampled data is less than the Nyquist value, then the underlying continuous function can be reconstituted by sinc-function or Fourier interpolation. The estimation biases can be avoided by resampling the continuous process at a rate corresponding to the maximum frequency of the forcing function. Then, there is a direct correspondence between the parameters of the band-limited ARMA model and those of an equivalent continuous-time process. 
Keywords:  Stochastic Differential Equations; Band-Limited Stochastic Processes; Aliasing and Interference 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:11/11&r=ecm 
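The sinc-function reconstitution step mentioned in the abstract can be sketched as follows; the signal, sample size and evaluation grid are arbitrary choices for illustration, and a finite sample means the interpolation is necessarily truncated:

```python
import numpy as np

# Unit-interval samples of a band-limited signal (frequency below the Nyquist 0.5)
t_sample = np.arange(0, 32)
f0 = 0.2
x = np.sin(2 * np.pi * f0 * t_sample)

def sinc_interp(samples, t_sample, t_new):
    """Reconstruct the continuous signal by (truncated) sinc interpolation."""
    # np.sinc(u) is the normalized sinc sin(pi*u)/(pi*u)
    return np.array([samples @ np.sinc(t - t_sample) for t in t_new])

# Evaluate off-grid, away from the sample ends where truncation error is largest
t_new = np.linspace(8.0, 24.0, 65)
x_hat = sinc_interp(x, t_sample, t_new)
err = np.max(np.abs(x_hat - np.sin(2 * np.pi * f0 * t_new)))
```

Once the continuous function is available, it can be resampled at the slower rate matching the maximum frequency of the forcing function, which is the bias fix the abstract proposes.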
By:  Jennifer L. Castle; David F. Hendry 
Abstract:  Model selection from a general unrestricted model (GUM) can potentially confront three very different environments: over-, exact, and under-specification of the data generation process (DGP). In the first, and most-studied, setting, the DGP is nested in the GUM, and the main role of general-to-specific (Gets) selection is to eliminate the irrelevant variables while retaining the relevant. In an exact specification, the theory formulation is precisely correct and can always be retained by ‘forcing’ during selection, but is nevertheless embedded in a broader model where possible omissions, breaks, nonlinearity, or data contamination are checked. The most realistic case is where some aspects of the relevant DGP are correctly included, but some are omitted, leading to under-specification. We review the analysis of model selection procedures which allow for many relevant effects, but inadvertently omit others, yet also include irrelevant variables in the GUM, and exploit the ability of automatic procedures to handle more variables than observations and, consequently, to tackle perfect collinearity. Considering all of the possibilities, where it is not known which one obtains in practice, reveals that model selection can excel relative to just fitting a prior specification, yet has very low costs when an exact specification is correctly postulated initially. 
Keywords:  Model selection, congruence, misspecification, impulse-indicator saturation, Autometrics 
JEL:  C51 C22 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:523&r=ecm 
By:  Biørn, Erik (Dept. of Economics, University of Oslo) 
Abstract:  When using data from individuals who are in the labour force to disentangle the empirical relevance of cohort, age and time effects for sickness absence, the inference may be biased by sorting-out mechanisms. One reason is unobserved heterogeneity potentially affecting both health status and ability to work, which can bias inference because inclusion in the data set is conditional on being in the labour force. Can this sample selection be adequately handled by attaching unobserved heterogeneity to non-structured fixed effects? In the paper we examine this issue and discuss the econometric setup for identifying time effects in sickness absence from such data. The inference and interpretation problem is caused, on the one hand, by the occurrence of time, cohort and age effects also in labour market participation and, on the other hand, by correlation between unobserved heterogeneity in health status and in ability to work. We show that running panel data regressions, ordinary or logistic, of sickness absence on certain covariates while neglecting this sample selection is likely to obscure the interpretation of the results, except in certain, not particularly realistic, cases. However, the fixed individual effects approach is more robust in this respect than an approach controlling for fixed cohort effects only. 
Keywords:  Sickness absence; health-labour interaction; cohort-age-time problem; self-selection; latent heterogeneity; bivariate censoring; truncated binormal distribution; panel data 
JEL:  C23 C25 I38 J22 
Date:  2010–12–18 
URL:  http://d.repec.org/n?u=RePEc:hhs:osloec:2010_020&r=ecm 
By:  Heinen, Florian; Kaufmann, Hendrik; Sibbertsen, Philipp 
Abstract:  While it is widely agreed that Purchasing Power Parity (PPP) holds as a long-run concept, the specific dynamic driving the process is largely built upon a priori economic belief rather than a thorough statistical modeling procedure. The two prevailing time series models, i.e., the exponential smooth transition autoregressive (ESTAR) model and the Markov switching autoregressive (MSAR) model, are both able to support PPP as a long-run concept. However, the dynamic behavior of real exchange rates implied by these two models is very different and leads to different economic interpretations. In this paper we approach this problem by offering a bootstrap-based testing procedure to discriminate between these two rival models. We further study the small sample performance of the test. In an application we analyze several major real exchange rates to shed light on the question of which model best describes these processes. This allows us to draw conclusions about the driving forces of real exchange rates. 
Keywords:  Nonlinearities, Markov switching, Smooth transition, Specification testing, Real exchange rates 
JEL:  C12 C15 C22 C52 F31 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp463&r=ecm 
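The two rival dynamics are easy to contrast in code. Below is a sketch of the ESTAR ingredient only (function names and parameter values are mine): the exponential transition function is zero at equilibrium, so the process is locally close to a random walk but mean-reverting far from PPP:

```python
import numpy as np

def estar_transition(y_lag, gamma):
    """Exponential transition function of an ESTAR model: 0 at y = 0, -> 1 far away."""
    return 1.0 - np.exp(-gamma * y_lag ** 2)

def estar_step(y_lag, gamma, phi):
    """One step of a simple ESTAR(1) for a real exchange rate deviation y.

    Near y = 0 the transition is ~0, so the process behaves like a random walk;
    far from equilibrium, phi < 0 pulls it back toward zero.
    """
    return y_lag + phi * estar_transition(y_lag, gamma) * y_lag
```

An MSAR alternative would instead switch the autoregressive coefficient according to a latent Markov state, which is why a formal test is needed to discriminate between the two.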
By:  Roxana Halbleib; Valeri Voev 
Abstract:  This paper analyzes the forecast accuracy of the multivariate realized volatility model introduced by Chiriac and Voev (2010), subject to different degrees of model parametrization and economic evaluation criteria. By modelling the Cholesky factors of the covariance matrices, the model generates positive definite, but biased, covariance forecasts. In this paper, we provide empirical evidence that parsimonious versions of the model generate the best covariance forecasts in the absence of bias correction. Moreover, we show by means of stochastic dominance tests that any risk-averse investor, regardless of the type of utility function or return distribution, would be better off using this model than using some standard approaches. 
Keywords:  Forecasting; Fractional integration; Stochastic dominance; Portfolio optimization; Realized covariance 
JEL:  C32 C53 G11 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/73585&r=ecm 
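The Cholesky device that guarantees positive definiteness can be sketched as follows; the toy data and the naive "forecast" (a simple average of past factors) stand in for the paper's long-memory model of the factor elements:

```python
import numpy as np

rng = np.random.default_rng(3)
# A toy sequence of 50 realized 3x3 covariance matrices (positive definite)
A = rng.normal(size=(50, 3, 3))
rcov = np.einsum('tij,tkj->tik', A, A) / 3 + np.eye(3) * 0.1

# Work with the Cholesky factors: model/forecast their elements, then rebuild
chols = np.linalg.cholesky(rcov)
# Illustrative "forecast": the average of past Cholesky factors
chol_forecast = chols.mean(axis=0)
# Positive definite by construction, regardless of how the factors are forecast
sigma_forecast = chol_forecast @ chol_forecast.T
```

Rebuilding the matrix as L L' is what makes any forecast of the factor elements admissible as a covariance forecast, at the cost of the bias the abstract mentions.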
By:  David Stephen Pollock; Emi Mise 
Abstract:  Alternative methods for the seasonal adjustment of economic data are described that operate in the time domain and in the frequency domain. The time-domain method, which employs a classical comb filter, mimics the effects of the model-based procedures of the SEATS–TRAMO and STAMP programs. The frequency-domain method eliminates the sinusoidal elements of which, in the judgment of the user, the seasonal component is composed. It is proposed that, in some circumstances, seasonal adjustment is best achieved by eliminating all elements in excess of the frequency that marks the upper limit of the trend-cycle component of the data. It is argued that the choice of the method of seasonal adjustment is liable to affect the determination of the turning points of the business cycle. 
Keywords:  Wiener–Kolmogorov Filtering; Frequency-Domain Methods; The Trend-Cycle Component 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:11/12&r=ecm 
By:  Sumru Altug (Koç University and CEPR); Baris Tan (Koç University); Gozde Gencer (Yapikredi Bank) 
Abstract:  This paper characterizes the business cycle as a recurring Markov chain for a broad set of developed and developing countries. The objective is to understand differences in cyclical phenomena across a broad range of countries based on the behavior of two key economic time series – industrial production and employment. The Markov chain approach is parsimonious and allows us to examine the cyclical dynamics of different economic time series using limited judgment on the issue. Time homogeneity and time dependence tests are implemented to determine the stationarity and dependence properties of the series. Univariate processes for industrial production and employment growth are estimated individually, and a composite indicator that combines information on these series is also constructed. Tests of the equality of the estimated Markov chains across countries are also implemented to identify similarities and differences in the cyclical dynamics of the relevant series. 
Keywords:  Markov chain models, economic indicators, cross-country analysis 
JEL:  C22 E32 E37 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:koc:wpaper:1101&r=ecm 
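In its simplest form, the recurring-chain idea amounts to discretizing a growth series into states and counting transitions; the two-state discretization and toy data below are my own simplification of the approach:

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy monthly growth rates of, say, industrial production
growth = rng.normal(loc=0.1, scale=1.0, size=300)

# Two-state recurring chain: expansion (1) if growth >= 0, contraction (0)
state = (growth >= 0).astype(int)

# Maximum-likelihood transition matrix: count transitions, normalize each row
counts = np.zeros((2, 2))
for s, s_next in zip(state[:-1], state[1:]):
    counts[s, s_next] += 1
P = counts / counts.sum(axis=1, keepdims=True)
```

Estimated matrices of this kind can then be compared across countries, which is what the paper's equality tests do.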
By:  David Stephen Pollock 
Abstract:  In statistical timeseries analysis, signal processing and control engineering, a transfer function is a mathematical relationship between a numerical input to a dynamic system and the resulting output. The theory of transfer functions describes how the input/output relationship is affected by the structure of the transfer function. The theory of the transfer functions of linear timeinvariant (LTI) systems has been available for many years. It was developed originally in connection with electrical and mechanical systems described in continuous time. The basic theory can be attributed largely to Oliver Heaviside (1850–1925) [3] [4]. With the advent of digital signal processing, the emphasis has shifted to discretetime representations. These are also appropriate to problems in statistical timeseries analysis, where the data are in the form of sequences of stochastic values sampled at regular intervals. 
Keywords:  Impulse response; Frequency response; Spectral density 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:11/15&r=ecm 
By:  Jennifer Castle; Xiaochuan Qin; W. Robert Reed (University of Canterbury) 
Abstract:  This review surveys a number of common Model Selection Algorithms (MSAs), discusses how they relate to each other, and identifies factors that explain their relative performances. At the heart of MSA performance is the tradeoff between Type I and Type II errors. Some relevant variables will be mistakenly excluded, and some irrelevant variables will be retained by chance. A successful MSA will find the optimal tradeoff between the two types of errors for a given data environment. Whether a given MSA will be successful in a given environment depends on the relative costs of these two types of errors. We use Monte Carlo experimentation to illustrate these issues. We confirm that no MSA does best in all circumstances. Even the worst MSA in terms of overall performance – the strategy of including all candidate variables – sometimes performs best (viz., when all candidate variables are relevant). We also show how (i) the ratio of relevant to total candidate variables and (ii) DGP noise affect relative MSA performance. Finally, we discuss a number of issues complicating the task of MSAs in producing reliable coefficient estimates. 
Keywords:  Model selection algorithms; Information Criteria; GeneraltoSpecific modeling; Bayesian Model Averaging; Portfolio Models; AIC; SIC; AICc; SICc; Monte Carlo Analysis; Autometrics 
JEL:  C52 C15 
Date:  2011–01–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:11/03&r=ecm 
By:  Zhu, Junjun; Xie, Shiyu 
Abstract:  We construct a triple-threshold GARCH model to analyze the asymmetric responses of the mean and the conditional volatility. For parameter estimation, we apply the Griddy-Gibbs sampling method, which requires less work in the selection of starting values and pre-runs. Applying this model to the Chinese stock market, we find that the 12-day average return plays an important role in defining the different regimes: the down regime is characterized by a negative 12-day average return, while the up regime has a positive 12-day average return. The conditional mean responds differently between the down and up regimes. In the down regime, the return at date t is affected negatively by the lag-2 negative return, while in the up regime the return responds significantly to both positive and negative lag-1 past returns. Moreover, our model shows that volatility reacts asymmetrically to positive and negative innovations, and this asymmetric reaction varies between the down and up regimes. In the down regime, volatility becomes more volatile when a negative innovation impacts the market than when a positive one does, while in the up regime a positive innovation leads to a more volatile market than a negative one. 
Keywords:  Threshold; Griddy-Gibbs sampling; MCMC method; GARCH 
JEL:  G15 C22 C11 
Date:  2010–06–18 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:28195&r=ecm 
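The regime definition described above can be sketched directly; the toy returns and the zero threshold on the 12-day average return follow my reading of the abstract, not the paper's estimated thresholds:

```python
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(scale=0.01, size=500)    # toy daily returns

# 12-day moving-average return, the variable the abstract uses to define regimes
window = 12
ma12 = np.convolve(returns, np.ones(window) / window, mode='valid')

# Regime indicator: down regime when the 12-day average return is negative,
# up regime when it is positive
regime = np.where(ma12 < 0, 'down', 'up')
```

In the full model, separate mean and GARCH parameters would apply within each regime, with the thresholds themselves estimated via Griddy-Gibbs sampling.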
By:  Luca RICCETTI (Universita' Politecnica delle Marche, Dipartimento di Economia) 
Abstract:  Many authors have suggested that the mean-variance criterion, conceived by Markowitz (1952), is not optimal for asset allocation, because the investor's expected utility function is better proxied by a function that uses higher moments, and because returns are distributed in a non-Normal way, being asymmetric and/or leptokurtic, so the mean-variance criterion cannot correctly proxy the expected utility with non-Normal returns. In Riccetti (2010) I apply a simple GARCH-copula model and find that copulas are not useful for choosing among stock indices, but they can be useful in a macro asset allocation model, that is, for choosing the stock and bond composition of portfolios. In this paper I apply that GARCH-copula model to the macro asset allocation of portfolios containing a commodity component. I find that the copula model appears useful and better than the mean-variance one for macro asset allocation also in the presence of a commodity index, even if it is not better than GARCH models on independent univariate series, probably because of the low correlation of the commodity index returns with the stock, bond and exchange rate returns. 
Keywords:  Portfolio Choice 
JEL:  C52 C53 G11 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:anc:wpaper:355&r=ecm 
By:  Graziani, Rebecca (Department of Decision Sciences, Bocconi University, Milano); Keilman, Nico (Dept. of Economics, University of Oslo) 
Abstract:  The Scaled Model of Error has gained considerable popularity during the past ten years as a device for computing probabilistic population forecasts of the cohort-component type. In this report we investigate how sensitive probabilistic population forecasts produced by means of the Scaled Model of Error are to small changes in the correlation parameters. We consider changes in the correlation of the age-specific fertility forecast error increments across time and age, and changes in the correlation of the age-specific mortality forecast error increments across time, age and sex. Next we analyse the impact of such changes on the forecasts of the Total Fertility Rate and of the Male and Female Life Expectancies, respectively. For age-specific fertility we find that the correlation across ages has only a limited impact on the uncertainty in the Total Fertility Rate. As a consequence, annual numbers of births will be little affected. The autocorrelation in error increments is an important parameter, in particular in the long run. The autocorrelation in error increments for age-specific mortality is also important. It has a large effect on long-run uncertainty in life expectancy values, and hence on the uncertainty around the elderly population in the future. In empirical applications of the Scaled Model of Error, one should give due attention to a correct estimation of these two parameters. 
Keywords:  Scaled model of error; Stochastic population forecast; Probabilistic cohort component model; Sensitivity; Correlation 
JEL:  C15 C49 C63 J40 
Date:  2010–11–23 
URL:  http://d.repec.org/n?u=RePEc:hhs:osloec:2010_022&r=ecm 
By:  David Stephen Pollock 
Abstract:  These notes have been written to accompany a tutorial session held at the London School of Economics as a prelude to the ERCIM conference of December 2010. 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:lec:leecon:11/13&r=ecm 
By:  Pötscher, Benedikt M. 
Abstract:  Bounds on the order of magnitude of sums of negative powers of integrated processes are derived. 
Keywords:  integrated processes; sums of negative powers; order of magnitude; martingale transform 
JEL:  C22 
Date:  2011–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:28287&r=ecm 