New Economics Papers on Econometrics
By: | Siem Jan Koopman (Vrije Universiteit Amsterdam); Marius Ooms (Vrije Universiteit Amsterdam); Irma Hindrayanto (Vrije Universiteit Amsterdam) |
Abstract: | This paper discusses identification, specification, estimation and forecasting for a general class of periodic unobserved components time series models with stochastic trend, seasonal and cycle components. Convenient state space formulations are introduced for exact maximum likelihood estimation, component estimation and forecasting. Identification issues are considered and a novel periodic version of the stochastic cycle component is presented. In the empirical illustration, the model is applied to postwar monthly US unemployment series and we discover a significantly periodic cycle. Furthermore, a comparison is made between the performance of the periodic unobserved components time series model and a periodic seasonal autoregressive integrated moving average model. Moreover, we introduce a new method to estimate the latter model. |
Keywords: | Unobserved component models; state space methods; seasonal adjustment; time-varying parameters; forecasting
JEL: | C22 C51 E32 E37 |
Date: | 2006–11–20 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060101&r=ecm |
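As a rough illustration of the non-periodic baseline behind this class of models, here is a minimal sketch using statsmodels' UnobservedComponents to fit a trend-seasonal-cycle decomposition by exact maximum likelihood in state space form; the simulated monthly series is a placeholder for the US unemployment data, and the periodic extensions proposed in the paper are not implemented in that class.

```python
# Minimal sketch: a standard (non-periodic) unobserved components model with
# trend, seasonal and cycle, fitted by exact ML via the Kalman filter.
# The periodic extensions proposed in the paper are NOT implemented here.
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(0)
n = 240  # 20 years of monthly data (placeholder for the US unemployment series)
t = np.arange(n)
y = pd.Series(
    5 + 0.005 * t                         # slow trend
    + np.sin(2 * np.pi * t / 12)          # seasonal pattern
    + 0.8 * np.sin(2 * np.pi * t / 60)    # business-cycle component (~5 years)
    + rng.normal(scale=0.2, size=n),
    index=pd.date_range("1987-01", periods=n, freq="MS"),
)

model = UnobservedComponents(
    y,
    level="local linear trend",   # stochastic trend
    seasonal=12,                  # monthly stochastic seasonal
    cycle=True, stochastic_cycle=True, damped_cycle=True,
)
res = model.fit(disp=False)
print(res.summary())
print(res.forecast(steps=12))     # 12-month-ahead forecasts
```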
By: | Chuan Goh |
Abstract: | This paper is concerned with tests of restrictions on the sample path of conditional quantile processes. These tests are tantamount to assessments of lack of fit for models of conditional quantile functions or, more generally, to tests of how certain covariates affect the distribution of an outcome variable of interest. This paper extends tests of the generalized likelihood ratio (GLR) type as introduced by Fan, Zhang and Zhang (2001) to nonparametric inference problems regarding conditional quantile processes. As such, the tests proposed here present viable alternatives to existing methods based on the Khmaladze (1981, 1988) martingale transformation. The range of inference problems that may be addressed by the methods proposed here is wide, and includes tests of nonparametric nulls against nonparametric alternatives as well as tests of parametric specifications against nonparametric alternatives. In particular, it is shown that a class of GLR statistics based on nonparametric additive quantile regressions have pivotal asymptotic distributions given by the suprema of squares of Bessel processes, as in Hawkins (1987) and Andrews (1993). The tests proposed here are also shown to be asymptotically rate-optimal for nonparametric hypothesis testing according to the formulations of Ingster (1993) and of Spokoiny (1996).
Keywords: | quantile regression, nonparametric inference, minimax rate, additive models, local polynomials, generalized likelihood ratio |
JEL: | C12 C14 C15 C21 |
Date: | 2007–01–15 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-277&r=ecm |
By: | Cheng Hsiao (Department of Economics, University of Southern California); Siyan Wang (Department of Economics, University of Delaware) |
Abstract: | We consider a lag-augmented two- or three-stage least squares estimator for a structural dynamic model of nonstationary and possibly cointegrated variables without prior knowledge of unit roots or the rank of cointegration. We show that the conventional two- and three-stage least squares estimators are consistent but have nonstandard asymptotic distributions without the strict exogeneity assumption, hence conventional Wald-type test statistics may not be chi-square distributed. We propose a lag-order-augmented two- or three-stage least squares estimator that is consistent and asymptotically normally distributed. Limited Monte Carlo studies are conducted to shed light on the finite sample properties of the various estimators.
Keywords: | Structural vector autoregressions, Nonstationary time series, Cointegration, Hypothesis testing, Two and Three Stage Least Squares |
JEL: | C1 C3 |
Date: | 2006–09 |
URL: | http://d.repec.org/n?u=RePEc:scp:wpaper:06-55&r=ecm |
By: | Wendelin Schnedler (University of Heidelberg, Department of Economics) |
Abstract: | This article shows how to construct a likelihood for a general class of censoring problems. This likelihood is proven to be valid, i.e. its maximiser is consistent and the corresponding root-n estimator is asymptotically efficient and normally distributed under regularity conditions. The method generalises ordinary maximum likelihood estimation as well as several standard estimators for censoring problems (e.g. tobit types I to V).
Keywords: | Censored variables; Limited dependent variables; Multivariate methods; Random censoring; Likelihood |
JEL: | C13 C24 |
Date: | 2005–02 |
URL: | http://d.repec.org/n?u=RePEc:awi:wpaper:0417&r=ecm |
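The general censoring likelihood described above nests the familiar tobit type I as a special case. The sketch below illustrates only that special case on simulated data, maximizing the standard censored-normal log-likelihood with scipy; it is not the author's general construction.

```python
# Tobit type I (left-censoring at 0) maximum likelihood on simulated data.
# This illustrates only the best-known special case nested by the paper's
# general censoring likelihood, not the general construction itself.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
y_star = 0.5 + 1.0 * x + rng.normal(size=n)   # latent outcome
y = np.maximum(y_star, 0.0)                   # observed outcome, censored at zero
censored = y_star <= 0

def neg_loglik(params):
    b0, b1, log_sigma = params
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * x
    ll_unc = norm.logpdf(y, loc=mu, scale=sigma)   # uncensored contributions
    ll_cen = norm.logcdf((0.0 - mu) / sigma)       # censored contributions: P(y* <= 0)
    return -np.sum(np.where(censored, ll_cen, ll_unc))

fit = minimize(neg_loglik, x0=np.zeros(3), method="BFGS")
b0, b1, log_sigma = fit.x
print("beta0, beta1, sigma:", b0, b1, np.exp(log_sigma))
```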
By: | Prof. Dr. Walter Krämer (Fachbereich Statistik, Universität Dortmund); Christoph Hanck (Fachbereich Statistik, Universität Dortmund) |
Abstract: | We investigate the OLS-based estimator s² of the disturbance variance in the standard linear regression model with cross section data when the disturbances are homoskedastic, but spatially correlated. For the most popular model of spatially autoregressive disturbances, we show that s² can be severely biased in finite samples, but is asymptotically unbiased and consistent for most types of spatial weighting matrices as sample size increases.
Keywords: | regression, spatial error correlation, bias, variance |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:dor:wpaper:7&r=ecm |
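A hedged numerical check of the kind of finite-sample bias discussed above: simulate homoskedastic but spatially autoregressive disturbances, run OLS, and compare the average of s² with the true disturbance variance. The circular contiguity weight matrix, ρ = 0.8 and the sample size are illustrative assumptions, not the authors' design.

```python
# Monte Carlo check: bias of the OLS disturbance-variance estimator s^2 when
# errors follow a spatial autoregressive (SAR) process u = rho*W*u + eps.
# The circular contiguity weight matrix and rho are illustrative choices only.
import numpy as np

rng = np.random.default_rng(2)
n, k, rho, reps = 50, 3, 0.8, 2000

# Row-normalized "two nearest neighbours on a circle" weight matrix.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

A_inv = np.linalg.inv(np.eye(n) - rho * W)     # u = (I - rho*W)^{-1} eps
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

s2_draws = np.empty(reps)
for r in range(reps):
    u = A_inv @ rng.normal(size=n)             # homoskedastic but spatially correlated
    y = X @ np.ones(k) + u
    e = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    s2_draws[r] = e @ e / (n - k)

sigma2_true = np.trace(A_inv @ A_inv.T) / n    # true disturbance variance (circulant W)
print("mean s^2:", s2_draws.mean(), " true variance:", sigma2_true)
```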
By: | Jushan Bai (Department of Economics, New York University, New York, New York 10003 USA, and School of Economics and Management, Tsinghua University); Chihwa Kao (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020); Serena Ng (Department of Economics, University of Michigan, Ann Arbor, MI 48109, USA)
Abstract: | This paper studies estimation of panel cointegration models with cross-sectional dependence generated by unobserved global stochastic trends. The standard least squares estimator is, in general, inconsistent owing to the spuriousness induced by the unobservable I(1) trends. We propose two iterative procedures that jointly estimate the slope parameters and the stochastic trends. The resulting estimators are referred to as the CupBC (continuously updated and bias-corrected) and the CupFM (continuously updated and fully modified) estimators, respectively. We establish their consistency and derive their limiting distributions. Both are asymptotically unbiased and asymptotically normal and permit inference to be conducted using standard test statistics. The estimates are also valid when there are mixed stationary and non-stationary factors, as well as when the factors are all stationary.
JEL: | C13 C33 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:90&r=ecm |
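A crude sketch of the "continuously updated" idea behind the CupBC/CupFM estimators: alternate between pooled least squares for the slope given the estimated common trends and principal-components extraction of the trends from the residuals. The bias-correction and fully-modified steps that define CupBC and CupFM are omitted, and the data generating process is an invented toy.

```python
# Crude sketch of the "continuously updated" idea behind CupBC/CupFM:
# alternate between (i) pooled least squares for the slope given the estimated
# common trends and (ii) principal-components extraction of the trends from
# the residuals.  Bias-correction / fully-modified steps are NOT included.
import numpy as np

rng = np.random.default_rng(3)
N, T, beta_true = 30, 200, 2.0
F = np.cumsum(rng.normal(size=(T, 1)), axis=0)        # unobserved global I(1) trend
lam = rng.normal(size=(N, 1))                         # factor loadings
X = np.cumsum(rng.normal(size=(T, N)), axis=0)        # I(1) regressor per unit
Y = beta_true * X + F @ lam.T + rng.normal(size=(T, N))

beta = 1.0                                            # starting value
for _ in range(100):
    U = Y - beta * X                                  # data net of the current slope
    # Principal components: top eigenvector of U U' estimates the trend space.
    vals, vecs = np.linalg.eigh(U @ U.T)
    F_hat = vecs[:, [-1]] * np.sqrt(T)
    load = U.T @ F_hat / T
    Z = Y - F_hat @ load.T                            # remove estimated common component
    beta_new = np.sum(Z * X) / np.sum(X * X)          # pooled LS slope
    if abs(beta_new - beta) < 1e-8:
        break
    beta = beta_new

print("estimated slope:", beta, " true slope:", beta_true)
```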
By: | Badi H. Baltagi (Center for Policy Research, Maxwell School, Syracuse University, Syracuse, NY 13244-1020) |
Abstract: | This note considers a panel data regression model with spatial autoregressive disturbances and random effects where the weight matrix is normalized and has equal elements. This is motivated by Kelejian et al. (2005), who argue that such a weighting matrix, having blocks of equal elements, might be considered when units are equally distant within certain neighborhoods but unrelated between neighborhoods. We derive a simple weighted least squares transformation that obtains GLS on this model as a simple OLS. For the special case of a spatial panel model with no random effects, we obtain two sufficient conditions where GLS on this model is equivalent to OLS. Finally, we show that these results, for the equal weight matrix, hold whether we use the spatial autoregressive specification, the spatial moving average specification, the spatial error components specification or the Kapoor et al. (2005) alternative to modeling panel data with spatially correlated error components. |
Keywords: | Panel data, spatial error correlation, equal weights, error components |
JEL: | C23 C12 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:89&r=ecm |
By: | Hyungsik Roger Moon (Department of Economics, University of Southern California); Frank Schorfheide (Department of Economics, University of Pennsylvania and CEPR) |
Abstract: | This paper derives limit distributions of empirical likelihood estimators for models in which inequality moment conditions provide overidentifying information. We show that the use of this information leads to a reduction of the asymptotic mean-squared estimation error and propose asymptotically valid confidence sets for the parameters of interest. While inequality moment conditions arise in many important economic models, we use a dynamic macroeconomic model as data generating process and illustrate our methods with instrumental variable estimators of monetary policy rules. The assumption that output does not fall in response to an expansionary monetary policy shock leads to an inequality moment condition that can substantially increase the precision with which the policy rule is estimated. The results obtained in this paper extend to conventional GMM estimators.
Keywords: | Empirical Likelihood Estimation, Generalized Method of Moments, Inequality Moment Conditions, Instrumental Variable Estimation, Monetary Policy Rules |
JEL: | C32 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:scp:wpaper:06-56&r=ecm |
By: | Caporin Massimiliano (Department of Economics, University of Padova, Italy); Paruolo Paolo (Department of Economics, University of Insubria, Italy) |
Abstract: | This paper applies a new spatial approach to the specification of multivariate GARCH models, called Spatial Effects in ARCH (SEARCH). We consider spatial dependence associated with industrial sectors and capitalization size. This parametrization extends currently feasible specifications for large scale GARCH models, keeping the number of parameters linear in the number of assets. An application to daily returns on 150 stocks from the NYSE for the period January 1994 to June 2001 shows the benefits of the present specification when compared to alternative specifications.
Keywords: | Spatial models, GARCH, Volatility, Large scale models, Portfolio allocation. |
JEL: | C32 C51 C52 |
Date: | 2005–12 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0509&r=ecm |
By: | Guido W. Imbens (Department of Economics, UC Berkeley and NBER); Whitney Newey (Department of Economics, MIT); Geert Ridder (Department of Economics, University of Southern California) |
Abstract: | This paper develops a new nonparametric series estimator for the average treatment effect for the case with unconfounded treatment assignment, that is, where selection for treatment is on observables. The new estimator is efficient. In addition, we develop an optimal procedure for choosing the smoothing parameter (the number of terms in the series) by minimizing the mean squared error (MSE). The new estimator is linear in the first-stage nonparametric estimator. This simplifies the derivation of the MSE of the estimator as a function of the number of basis functions used in the first-stage nonparametric regression. We propose an estimator for the MSE and show that in large samples minimization of this estimator is equivalent to minimization of the population MSE.
Keywords: | Nonparametric Estimation, Imputation, Mean Squared Error, Order Selection |
JEL: | C14 C20 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:scp:wpaper:06-57&r=ecm |
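The imputation logic of a series estimator under unconfoundedness can be sketched in a few lines: fit a power-series regression of the outcome on the covariate within each treatment arm, impute both potential outcomes for every unit, and average the difference. The paper's MSE-based choice of the number of series terms is replaced here by a fixed K, so this is only the skeleton of the estimator.

```python
# Regression-imputation (series) estimate of the average treatment effect under
# unconfoundedness: fit a polynomial series of the covariate in each treatment
# arm, impute both potential outcomes for everyone, average the difference.
# The paper's MSE-based choice of the number of series terms is replaced by a
# fixed K here, so this is only the skeleton of the estimator.
import numpy as np

rng = np.random.default_rng(4)
n, K = 2000, 4                                   # K = number of series terms (fixed here)
x = rng.uniform(-1, 1, size=n)
p = 1 / (1 + np.exp(-x))                         # selection on observables only
d = rng.uniform(size=n) < p
y = np.sin(np.pi * x) + d * (1 + 0.5 * x) + rng.normal(scale=0.5, size=n)

B = np.vander(x, K + 1, increasing=True)         # power-series basis 1, x, ..., x^K

coef1, *_ = np.linalg.lstsq(B[d], y[d], rcond=None)      # treated-arm regression
coef0, *_ = np.linalg.lstsq(B[~d], y[~d], rcond=None)    # control-arm regression
ate_hat = np.mean(B @ coef1 - B @ coef0)                 # imputed difference, averaged

print("estimated ATE:", ate_hat, " sample-average true effect:", 1 + 0.5 * np.mean(x))
```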
By: | Paruolo Paolo (Department of Economics, University of Insubria, Italy) |
Abstract: | This paper compares the finite sample performance of alternative tests for rank deficiency of a submatrix of the cointegrating matrix. The paper focuses on the (implementation of the) likelihood ratio test proposed in Paruolo (2007, Oxford Bulletin of Economics and Statistics), and compares its finite sample performance with those of alternative tests proposed in Saikkonen (1999, Econometric Reviews) and Kurozumi (2005, Econometric Theory). All the tests have well-documented limit distributions; their finite sample performance is analyzed in this paper through a Monte Carlo simulation study. We use the Monte Carlo design of Luukkonen, Ripatti and Saikkonen (1999, Journal of Business and Economic Statistics). It is found that the LR and the Kurozumi tests perform remarkably better than the alternatives, with a marginal advantage for the LR test. The paper also investigates the properties and the numerical performance of the alternating maximization algorithm that is employed to maximize the likelihood under the null. Alternative ways to choose its starting values are also discussed. In the simulations it is found that the algorithm requires only a few iterations when the null is correctly specified, and a rather limited number of iterations in 90% of the other cases. The choice of starting values is found to have a significant effect on the number of iterations required by the algorithm.
Keywords: | Invariance, Vector autoregressive process, Monte Carlo, Likelihood ratio test, Cointegration.
JEL: | C15 C32 C63 |
Date: | 2006–09 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0605&r=ecm |
By: | Paruolo Paolo (Department of Economics, University of Insubria, Italy) |
Abstract: | This paper discusses the Monte Carlo (MC) design of Gaussian vector autoregressive (VAR) processes for the evaluation of invariant statistics. We focus on the case of cointegrated (CI) I(1) processes, linear and invertible transformations, and CI rank likelihood ratio (LR) tests. It is found that all VARs of order 1 can be reduced to a system of independent or recursive subsystems of computational dimension at most equal to 2. The results are applied to the indexing of the distribution of LR test statistics for CI rank under local alternatives. They are also extended to the case of VAR processes of higher order.
Keywords: | Invariance, Vector autoregressive process, Monte Carlo, Likelihood ratio test, Cointegration.
JEL: | C32 |
Date: | 2005–09 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0504&r=ecm |
By: | Seung C. Ahn (Dept. of Economics, W.P. Carey School of Business, Arizona State University, Tempe, AZ 85287); Young H. Lee (Hansung University); Peter Schmidt (Michigan State University) |
Abstract: | This paper considers a panel data model with time-varying individual effects. The data are assumed to contain a large number of cross-sectional units repeatedly observed over a fixed number of time periods. The model shares a feature of the fixed-effects model in that the effects are assumed to be correlated with the regressors. The unobservable individual effects are assumed to have a factor structure. For consistent estimation of the model, it is important to estimate the true number of factors. We propose a generalized method of moments procedure by which both the number of factors and the regression coefficients can be consistently estimated. Some important identification issues are also discussed. Our simulation results indicate that the proposed methods produce reliable estimates.
Keywords: | panel data, time-varying individual effects, factor models |
JEL: | C51 D24 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:crt:wpaper:0702&r=ecm |
By: | Mario Forni (Università di Modena e Reggio Emilia and CEPR Address: Università degli studi di Modena e Reggio Emilia - Dipartimento di Economia Politica, Viale Berengario 51, 41100 Modena, Italy.); Domenico Giannone (ECARES, Université Libre de Bruxelles, Campus du Solbosch, CP114, avenue F.D. Roosevelt 50, 1050 Bruxelles, Belgium.); Marco Lippi (Dipartimento di Scienze Economiche, Università di Roma “La Sapienza”, Via Cesalpino 12, 00161 Roma, Italy.); Lucrezia Reichlin (European Central Bank, Kaiserstrasse 29, 60311 Frankfurt am Main, Germany.) |
Abstract: | This paper shows how large-dimensional dynamic factor models are suitable for structural analysis. We establish sufficient conditions for identification of the structural shocks and the associated impulse-response functions. In particular, we argue that, if the data follow an approximate factor structure, the “problem of fundamentalness”, which is intractable in structural VARs, can be solved provided that the impulse responses are sufficiently heterogeneous. Finally, we propose a consistent method (and n,T rates of convergence) to estimate the impulse-response functions, as well as a bootstrapping procedure for statistical inference. JEL Classification: E0, C1. |
Keywords: | Dynamic factor models, structural VARs, identification, fundamentalness. |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20070712&r=ecm |
By: | Leisen Fabrizio (Università di Modena e Reggio Emilia, Dipartimento di Matematica, Modena, Italy); Mira Antonietta (Department of Economics, University of Insubria, Italy) |
Abstract: | Peskun ordering is a partial ordering defined on the space of transition matrices of discrete time Markov chains. If the Markov chains are reversible with respect to a common stationary distribution π, Peskun ordering implies an ordering on the asymptotic variances of the resulting Markov chain Monte Carlo estimators of integrals with respect to π. Peskun ordering is also relevant in the framework of time-invariance estimating equations in that it provides a necessary condition for ordering the asymptotic variances of the resulting estimators. In this paper Peskun ordering is extended from discrete time to continuous time Markov chains. Key words and phrases: Peskun ordering, Covariance ordering, Efficiency ordering, MCMC, time-invariance estimating equations, asymptotic variance, continuous time Markov chains.
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0609&r=ecm |
By: | Myungsup Kim (University of North Texas); Yangseon Kim (East-West Center); Peter Schmidt (Michigan State University) |
Abstract: | We study the construction of confidence intervals for efficiency levels of individual firms in stochastic frontier models with panel data. The focus is on bootstrapping and related methods. We start with a survey of various versions of the bootstrap. We also propose a simple parametric alternative in which one acts as if the identity of the best firm is known. Monte Carlo simulations indicate that the parametric method works better than the percentile bootstrap, but not as well as bootstrap methods that make bias corrections. All of these methods are valid only for large time-series sample size (T), and correspondingly none of the methods yields very accurate confidence intervals except when T is large enough that the identity of the best firm is clear. We also present empirical results for two well-known data sets.
Keywords: | Stochastic frontier, bootstrap, efficiency |
JEL: | C15 C23 D24 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:crt:wpaper:0704&r=ecm |
By: | Ulrich Fritsche (Department for Economics and Politics, University of Hamburg, and DIW Berlin); Joerg Doepke (Fachhochschule Merseburg) |
Abstract: | The paper analyses reasons for departures from strong rationality of growth and inflation forecasts based on annual observations from 1963 to 2004. We rely on the joint forecast of the so-called "six leading" forecasting institutions in Germany and argue that violations of the rationality hypothesis are due to relatively few large forecast errors. These large errors are shown - based on evidence from probit models - to correlate with macroeconomic fundamentals, especially monetary factors. We test for a non-linear relation between forecast errors and macroeconomic fundamentals and find evidence for such a non-linearity for inflation forecasts.
Keywords: | forecast error evaluation, non-linearities, business cycles |
JEL: | E32 E37 C52 C53 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:hep:macppr:200602&r=ecm |
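A schematic version of the two ingredients of such an exercise: a Mincer-Zarnowitz-type regression as a rationality check, and a probit of a "large forecast error" indicator on a macroeconomic fundamental. All data below are simulated placeholders, not the German institutions' forecasts, and the error threshold is an arbitrary choice.

```python
# Schematic two-step exercise: (1) a Mincer-Zarnowitz regression of outcomes on
# forecasts as a rationality check, (2) a probit of a "large forecast error"
# indicator on a macro fundamental (here: money growth).  All data are
# simulated placeholders; the 75th-percentile threshold is arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 42                                            # annual observations, 1963-2004
money_growth = rng.normal(size=n)
forecast = rng.normal(2.0, 1.0, size=n)
# Errors are built to be larger in years with big monetary swings.
outcome = forecast + rng.normal(size=n) * (0.3 + 0.8 * np.abs(money_growth))

# (1) Rationality check: in y = a + b*forecast + e, rationality implies a=0, b=1.
mz = sm.OLS(outcome, sm.add_constant(forecast)).fit()
print(mz.params)
print(mz.f_test("const = 0, x1 = 1"))

# (2) Probit: do large errors (|e| above its 75th percentile) line up with
#     monetary fundamentals?
err = outcome - forecast
large = (np.abs(err) > np.quantile(np.abs(err), 0.75)).astype(float)
probit = sm.Probit(large, sm.add_constant(np.abs(money_growth))).fit(disp=False)
print(probit.summary())
```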
By: | Ying Chen; Vladimir Spokoiny |
Abstract: | In the ideal Black-Scholes world, financial time series are assumed to be 1) stationary (time homogeneous) and 2) conditionally normally distributed given the past. These two assumptions underlie many widely used methods such as RiskMetrics, a risk management method regarded as an industry standard. However, these assumptions are unrealistic. The primary aim of the paper is to account for nonstationarity and heavy tails in time series by presenting a local exponential smoothing approach, in which the smoothing parameter is adaptively selected at every time point and the heavy-tailedness of the process is taken into account. A complete theory addresses both issues. In our study, we demonstrate the implementation of the proposed method in volatility estimation and risk management on simulated and real data. Numerical results show that the proposed method delivers accurate and sensitive estimates.
Keywords: | Exponential Smoothing, Spatial Aggregation. |
JEL: | C14 C53 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-002&r=ecm |
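A toy version of adaptively selecting the smoothing parameter: at each date, pick the EWMA decay from a small grid by minimizing recent one-step prediction errors for the squared returns. This is a generic stand-in for the idea of locally selected smoothing, not the authors' local exponential smoothing procedure, and the grid, window length and data are arbitrary assumptions.

```python
# Toy adaptive EWMA volatility: at each date the smoothing parameter is picked
# from a grid by minimizing recent one-step squared-return prediction errors.
# A generic stand-in for locally selected smoothing, not the authors' method.
import numpy as np

rng = np.random.default_rng(6)
T = 1500
sigma = np.where(np.arange(T) < 750, 1.0, 2.5)       # a volatility regime shift
r = rng.standard_t(df=5, size=T) * sigma             # heavy-tailed returns

lambdas = np.array([0.85, 0.90, 0.94, 0.97, 0.99])   # candidate decay parameters
window = 60                                          # local window for the selection

def ewma_var(r2, lam):
    v = np.empty_like(r2)
    v[0] = r2[:20].mean()
    for t in range(1, len(r2)):
        v[t] = lam * v[t - 1] + (1 - lam) * r2[t - 1]
    return v

r2 = r ** 2
paths = {lam: ewma_var(r2, lam) for lam in lambdas}
v_adapt = np.empty(T)
for t in range(T):
    lo = max(0, t - window)
    # pick the decay with the smallest local squared prediction error
    best = min(lambdas, key=lambda lam: np.mean((r2[lo:t + 1] - paths[lam][lo:t + 1]) ** 2))
    v_adapt[t] = paths[best][t]

print("avg estimated vol, first/second regime:",
      np.sqrt(v_adapt[:750].mean()), np.sqrt(v_adapt[750:].mean()))
```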
By: | Timotheos Angelidis; Stavros Degiannakis |
Abstract: | Academics and practitioners have extensively studied Value-at-Risk (VaR) in the search for a single risk management technique that generates accurate VaR estimates for long and short trading positions and for all types of financial assets. However, they have not yet succeeded, as the testing frameworks for the proposed models have not been widely accepted. A two-stage backtesting procedure is proposed to select a model that not only forecasts VaR but also predicts the losses beyond VaR. Numerous conditional volatility models that capture the main characteristics of asset returns (asymmetric and leptokurtic unconditional distribution of returns, power transformation and fractional integration of the conditional variance) are estimated under four distributional assumptions (normal, GED, Student-t, and skewed Student-t) to find the best model for three financial markets, long and short trading positions, and two confidence levels. By following this procedure, the risk manager can significantly reduce the number of competing models that accurately predict both the VaR and the Expected Shortfall (ES) measures.
Keywords: | Value-at-Risk, Expected Shortfall, Volatility Forecasting, ARCH Models
JEL: | C22 C52 G15 |
Date: | 2007–01–12 |
URL: | http://d.repec.org/n?u=RePEc:crt:wpaper:0701&r=ecm |
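A sketch of the kind of first-stage check such a backtesting procedure builds on: Kupiec's unconditional-coverage likelihood ratio test for VaR violations, followed by a crude comparison of realized losses beyond VaR with the predicted ES. The VaR and ES forecasts below are simulated normal placeholders rather than output of any of the conditional volatility models in the paper.

```python
# First-stage style backtest: Kupiec's unconditional-coverage LR test for VaR
# violations, plus a crude check of losses beyond VaR against predicted ES.
# VaR/ES forecasts are simple normal placeholders, not output of an ARCH model.
import numpy as np
from scipy.stats import chi2, norm

rng = np.random.default_rng(7)
T, alpha = 1000, 0.05
returns = rng.normal(scale=0.01, size=T)
var_fc = np.full(T, norm.ppf(alpha) * 0.01)                     # 5% VaR (long position)
es_fc = np.full(T, -0.01 * norm.pdf(norm.ppf(alpha)) / alpha)   # matching normal ES

hits = returns < var_fc                                   # VaR violations
x, pi_hat = hits.sum(), hits.mean()

# Kupiec (1995) unconditional-coverage likelihood-ratio statistic.
lr_uc = -2 * ((T - x) * np.log(1 - alpha) + x * np.log(alpha)
              - (T - x) * np.log(1 - pi_hat) - x * np.log(pi_hat))
print("violations:", x, " LR_uc:", lr_uc, " p-value:", 1 - chi2.cdf(lr_uc, df=1))

# Second stage (very crude): average realized loss beyond VaR vs. predicted ES.
print("mean return beyond VaR:", returns[hits].mean(), " predicted ES:", es_fc[0])
```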
By: | Urbain Jean-Pierre; Westerlund Joakim (METEOR) |
Abstract: | This paper illustrates analytically the effects of cross-unit cointegration using as an example the conventional pooled least squares estimate in the spurious panel regression case. The results suggest that the usual result of asymptotic normality depends critically on the absence of cross-unit cointegration. |
Keywords: | econometrics; |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:dgr:umamet:2006057&r=ecm |
By: | Prof. Dr. Walter Krämer (Fachbereich Statistik, Universität Dortmund) |
Abstract: | The paper considers the Markov-switching GARCH(1,1) model with time-varying transition probabilities. It derives sufficient conditions for the square of the process to display long memory and provides some additional intuition for the empirical observation that estimated GARCH parameters often sum to almost one.
Keywords: | Markov switching, GARCH, long memory |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:dor:wpaper:6&r=ecm |
By: | Justin McCrary |
Abstract: | Standard sufficient conditions for identification in the regression discontinuity design are continuity of the conditional expectation of counterfactual outcomes in the running variable. These continuity assumptions may not be plausible if agents are able to manipulate the running variable. This paper develops a test of manipulation related to continuity of the running variable density function. The methodology is applied to popular elections to the House of Representatives, where sorting is neither expected nor found, and to roll-call voting in the House, where sorting is both expected and found. |
JEL: | C1 C2 C3 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0334&r=ecm |
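The basic idea of the density test can be shown in miniature: bin the running variable, fit a separate linear regression of the bin-level density estimates on each side of the cutoff, and compare the fitted densities at the cutoff. The paper's local-linear smoothing of the histogram and its standard errors are omitted, and the sorting mechanism in the simulated data is an invented example.

```python
# Miniature version of the density-manipulation idea: bin the running variable,
# fit a separate regression of bin-level density estimates on each side of the
# cutoff, and compare the two implied densities at the cutoff.  The paper's
# local-linear smoothing and standard errors are omitted.
import numpy as np

rng = np.random.default_rng(8)
n, cutoff = 20000, 0.0
r = rng.normal(size=n)
r = np.where((r > cutoff) & (r < 0.3) & (rng.uniform(size=n) < 0.5),
             r - 0.3, r)                       # agents just above the cutoff sort below it

bins = np.arange(-3, 3.01, 0.1)
counts, edges = np.histogram(r, bins=bins)
mids = 0.5 * (edges[:-1] + edges[1:])
dens = counts / (n * 0.1)                      # histogram density estimate per bin

def fit_at_cutoff(side):
    keep = (mids < cutoff) if side == "left" else (mids >= cutoff)
    keep &= np.abs(mids - cutoff) < 1.0        # bandwidth of 1.0 around the cutoff
    coef = np.polyfit(mids[keep], dens[keep], deg=1)
    return np.polyval(coef, cutoff)

f_left, f_right = fit_at_cutoff("left"), fit_at_cutoff("right")
print("density just left:", f_left, " just right:", f_right,
      " log difference:", np.log(f_right) - np.log(f_left))
```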
By: | Fonseca Giovanni (Department of Economics, University of Insubria, Italy) |
Abstract: | In the present paper we study the stability of a class of nonlinear ARMA models. We derive a sufficient condition ensuring geometric ergodicity and apply it to a very general threshold ARMA model, imposing a mild assumption on the thresholds.
Keywords: | Nonlinear ARMA models, threshold ARMA processes, stationary processes, geometric ergodicity |
Date: | 2005–06 |
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf0503&r=ecm |
By: | Martin Fukac; Adrian Pagan |
Abstract: | Our discussion is structured by three concerns: model design, matching the data and operational requirements. The paper begins with a general discussion of the structure of dynamic stochastic general equilibrium (DSGE) models where we investigate issues like (i) the type of restrictions being imposed by DSGE models upon system dynamics, (ii) the implication these models would have for "location parameters", viz. growth rates, and (iii) whether these models can track the long-run movements in variables as well as matching dynamic adjustment. The paper further looks at the types of models that have been constructed in central banks for macro policy analysis. We distinguish four generations of these and detail how the emerging current generation, which are often referred to as DSGE models, differs from the previous generations. The last part of the paper is devoted to a variety of topics involving estimation and evaluation of DSGE models.
Keywords: | DSGE model, Bayesian estimation, model evaluation.
JEL: | C11 C13 C51 C52 |
Date: | 2006–11 |
URL: | http://d.repec.org/n?u=RePEc:cnb:wpaper:2006/6&r=ecm |
By: | Hampel, Katharina (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Kunz, Marcus (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Schanne, Norbert (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Wapler, Rüdiger (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]); Weyh, Antje (Institut für Arbeitsmarkt- und Berufsforschung (IAB), Nürnberg [Institute for Employment Research, Nuremberg, Germany]) |
Abstract: | "The labour-market policy-mix in Germany is increasingly being decided on a regional level. This requires additional knowledge about the regional development which (disaggregated) national forecasts cannot provide. Therefore, we separately forecast employment for the 176 German labour- market districts on a monthly basis. We first compare the prediction accuracy of standard time-series methods: autoregressive integrated moving averages (ARIMA), exponentially weighted moving averages (EWMA) and the structural-components approach (SC) in these small spatial units. Second, we augment the SC model by including autoregressive elements (SCAR) in order to incorporate the influence of former periods of the dependent variable on its current value. Due to the importance of spatial interdependencies in small labour-market units, we further augment the basic SC model by lagged values of neighbouring districts in a spatial dynamic panel (SCSAR). The prediction accuracies of the models are compared using the mean absolute percentage forecast error (MAPFE) for the simulated out-of-sample forecast for 2005. Our results show that the SCSAR is superior to the SCAR and basic SC model. ARIMA and EWMA models perform slightly better than SCSAR in many of the German labour-market districts. This reflects that these two moving-average models can better capture the trend reversal beginning in some regions at the end of 2004. All our models have a high forecast quality with an average MAPFE lower than 2.2 percent." (author's abstract, IAB-Doku) ((en)) |
Keywords: | regional labour market, employment development, forecasting methods
JEL: | C53 J21 O18 |
Date: | 2007–01–16 |
URL: | http://d.repec.org/n?u=RePEc:iab:iabdpa:200702&r=ecm |
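How such an accuracy comparison works on a single series, in schematic form: hold out the last 12 months, forecast with a seasonal ARIMA and with exponential smoothing, and score both by the mean absolute percentage forecast error (MAPFE). The structural-components (SC/SCAR/SCSAR) models and the 176-district panel are not reproduced; the series below is simulated.

```python
# Out-of-sample accuracy comparison on one simulated district series: hold out
# the last 12 months, forecast with ARIMA and with exponential smoothing, and
# score both by the mean absolute percentage forecast error (MAPFE).
# The structural-components (SC/SCAR/SCSAR) models are not reproduced here.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(9)
t = np.arange(120)
y = pd.Series(1000 + 2 * t + 30 * np.sin(2 * np.pi * t / 12)
              + rng.normal(scale=10, size=120),
              index=pd.date_range("1997-01", periods=120, freq="MS"))

train, test = y[:-12], y[-12:]

arima_fc = ARIMA(train, order=(1, 1, 1),
                 seasonal_order=(0, 1, 1, 12)).fit().forecast(12)
ewma_fc = ExponentialSmoothing(train, trend="add",
                               seasonal="add", seasonal_periods=12).fit().forecast(12)

def mapfe(actual, forecast):
    return 100 * np.mean(np.abs((actual - forecast) / actual))

print("MAPFE ARIMA:", mapfe(test.values, arima_fc.values))
print("MAPFE exponential smoothing:", mapfe(test.values, ewma_fc.values))
```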
By: | Prof. Dr. Walter Krämer (Fachbereich Statistik, Universität Dortmund); Baudouin Tameze Azamo (Fachbereich Statistik, Universität Dortmund) |
Abstract: | It has long been known that the estimated persistence parameter in the GARCH(1,1) model is biased upwards when the parameters of the model are not constant throughout the sample. The present paper explains the mechanics of this behavior for a particular class of estimates of the model parameters and for a particular type of structural change. It shows, for any given sample size, that the estimated persistence must tend to one in probability if the structural change is ignored and is large enough.
Keywords: | long memory, GARCH, structural change |
Date: | 2006–05 |
URL: | http://d.repec.org/n?u=RePEc:dor:wpaper:5&r=ecm |
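A small simulation of the mechanism using the arch package (an assumption of convenience; the paper does not prescribe this software): returns with an ignored break in the unconditional variance, fitted by a plain GARCH(1,1), typically yield an estimated persistence close to one. Break size and sample length are arbitrary.

```python
# Small simulation of the effect: i.i.d. returns with an ignored break in the
# unconditional variance, fitted by a plain GARCH(1,1).  The estimated
# persistence alpha + beta is typically pushed towards one.  Break size and
# sample length are arbitrary illustrative choices.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(10)
T = 4000
sigma = np.where(np.arange(T) < T // 2, 1.0, 3.0)   # unmodelled variance break
returns = rng.normal(size=T) * sigma

res = arch_model(returns, vol="GARCH", p=1, q=1, mean="Zero").fit(disp="off")
alpha, beta = res.params["alpha[1]"], res.params["beta[1]"]
print("alpha + beta =", alpha + beta)               # typically close to one
```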
By: | Petr Kadeřábek (Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic) |
Abstract: | We assume a chaotic (mixing) reality of which only a substantially aggregated state vector can be observed, and we want to predict one or more of its elements using a stochastic model. However, chaotic dynamics can be predicted only in the short term, while in the long term an ergodic distribution is the best predictor. Our stochastic model is thus considered a local approximation with no predictive ability for the far future. Using an estimate of the ergodic distribution of the predicted scalar (or vector), we obtain, under additional reasonable assumptions, a uniquely specified resulting model that combines information from both the local model and the ergodic distribution. For a small prediction horizon, if the local model converges in probability to a constant and an additional technical assumption is fulfilled, the resulting model converges in L1 norm to the local model. In the long term, the resulting model converges in L1 to the ergodic distribution. We also propose a formula for computing the resulting model from a nonparametric specification of the ergodic distribution (using past observations directly). Two examples follow.
Keywords: | Chaotic system; Prediction; Bayesian Analysis; Local Approximation; Ergodic Distribution |
JEL: | C11 C53 C62 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:fau:wpaper:wp2006_31&r=ecm |
By: | Leisen Fabrizio (Università di Modena e Reggio Emilia, Dipartimento di Matematica, Modena, Italy); Mira Antonietta (Department of Economics, University of Insubria, Italy) |
Abstract: | If T is the coalescence time of the Propp and Wilson [15] perfect simulation algorithm, the aim of this paper is to show that T depends on the second largest eigenvalue modulus of the transition matrix of the underlying Markov chain. This gives a relationship between the ordering based on the speed of convergence to stationarity in total variation distance and the ordering defined in terms of speed of coalescence in perfect simulation. Key words and phrases: Peskun ordering, Covariance ordering, Efficiency ordering, MCMC, time-invariance estimating equations, asymptotic variance, continuous time Markov chains.
URL: | http://d.repec.org/n?u=RePEc:ins:quaeco:qf06010&r=ecm |
By: | Lubos Pastor; Robert F. Stambaugh |
Abstract: | The standard regression approach to modeling return predictability seems too restrictive in one way but too lax in another. A predictive regression models expected returns as an exact linear function of a given set of predictors but does not exploit the likely economic property that innovations in expected returns are negatively correlated with unexpected returns. We develop an alternative framework - a predictive system - that accommodates imperfect predictors and beliefs about that negative correlation. In this framework, the predictive ability of imperfect predictors is supplemented by information in lagged returns as well as lags of the predictors. Compared to predictive regressions, predictive systems deliver different and substantially more precise estimates of expected returns as well as different assessments of a given predictor's usefulness. |
JEL: | G1 |
Date: | 2007–01 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12814&r=ecm |