nep-ecm New Economics Papers
on Econometrics
Issue of 2006‒01‒24
forty-one papers chosen by
Sune Karlsson
Orebro University

  1. Asymptotically Efficient Estimation of the Change Point for Semiparametric GARCH models By Takayuki Shiohama
  2. Indirect Inference for Dynamic Panel Models By Christian Gourieroux; Peter C. B. Phillips; Jun Yu
  3. Optimal Bandwidth Selection in Heteroskedasticity-Autocorrelation Robust Testing By Yixiao Sun; Peter C. B. Phillips; Sainan Jin
  4. Forecasting Accuracy and Estimation Uncertainty using VAR Models with Short- and Long-Term Economic Restrictions: A Monte-Carlo Study By Osmani Teixeira de Carvalho Guillén; João Victor Issler; George Athanasopoulos
  5. Optimal Estimation of Cointegrated Systems with Irrelevant Instruments By Peter C. B. Phillips
  7. VARMA versus VAR for Macroeconomic Forecasting By George Athanasopoulos; Farshid Vahid
  8. Outlier Detection in GARCH Models By Jurgen A. Doornik; Marius Ooms
  9. A Complete VARMA Modelling Methodology Based on Scalar Components By George Athanasopoulos; Farshid Vahid
  10. Nonparametric Tests for Serial Independence Based on Quadratic Forms By Cees Diks; Valentyn Panchenko
  11. Gaussian Inference in AR(1) Time Series with or without a Unit Root By Peter C. B. Phillips; Chirok Han
  12. On Importance Sampling for State Space Models By Borus Jungbacker; Siem Jan Koopman
  13. A nonparametric test of stochastic dominance in multivariate distributions By Ian Crawford
  14. Formalized Data Snooping Based on Generalized Error Rates By Joseph P. Romano; Azeem M. Shaikh; Michael Wolf
  15. On the Choice-Based Sample Bias in Probabilistic Business Failure Prediction By Skogsvik, Kenth
  16. Testing for Additive Outliers in Seasonally Integrated Time Series By Niels Haldrup; Andreu Sansó
  17. Refined Inference on Long Memory in Realized Volatility By Offer Lieberman; Peter C. B. Phillips
  18. A Simple Multiple Variance-Ratio Test Based on Ranks By Gilbert Colletaz
  19. Comparing Distributions: The Harmonic Mass Index By Jeroen Hinloopen; Charles van Marrewijk
  20. Periodic Seasonal Reg-ARFIMA-GARCH Models for Daily Electricity Spot Prices By Siem Jan Koopman; Marius Ooms; M. Angeles Carnero
  22. Some Nonlinear Exponential Smoothing Models are Unstable By Rob J Hyndman; Muhammad Akram
  23. Judging Contending Estimators by Simulation: Tournaments in Dynamic Panel Data Models By Jan F. Kiviet
  24. Structural Change and the Order of Integration in Univariate Time Series By Luis Alberiko Gil-Alana
  25. Inverse probability weighted generalised empirical likelihood estimators : firm size and R&D revisited By Inkmann,Joachim
  26. Measuring Asymmetric Stochastic Cycle Components in U.S. Macroeconomic Time Series By Siem Jan Koopman; Kai Ming Lee
  27. Avoiding Data Snooping in Multilevel and Mixed Effects Models By David Afshartous; Michael Wolf
  28. Signal Extraction: How (In)efficient are Model-Based Approaches? An Empirical Study Based on TRAMO/SEATS and Census X-12-ARIMA By Marc Wildi; Bernd Schips
  29. Bootstrapping a linear estimator of the ARCH parameters By Arup Bose
  30. An Extension of the Blinder-Oaxaca Decomposition Technique to Logit and Probit Models By Robert W. Fairlie
  31. Assessing confidence intervals for the tail index by Edgeworth expansions for the Hill estimator By Haeusler,Erich; Segers,Johan
  32. An Evaluation of MLE in a Model of the Nonlinear Continuous-Time Short-Term Interest Rate By Ingrid Lo
  33. Identification of Covariance Structures By Riccardo LUCCHETTI
  34. New Variance Ratio Tests to Identify Random Walk from the General Mean Reversion Model By Kin Lam; May Chun Mei Wong; Wing-Keung Wong
  35. Sinusoidal Modeling Applied to Spatially Variant Tropospheric Ozone Air Pollution By Nicholas Z. Muller; Peter C. B. Phillips
  36. A Further Extension of Duration Dependent Models By Satoru Kanoh
  37. Series Expansions for Finite-State Markov Chains By Bernd Heidergott; Arie Hordijk; Miranda van Uitert
  38. Rough set methodology in meta-analysis - a comparative and exploratory analysis By Thomas Rupp
  39. Resampling vs. Shrinkage for Benchmarked Managers By Michael Wolf
  40. Discrete Choice Models with Multiple Unobserved Choice Characteristics By Susan Athey; Guido Imbens
  41. Revealed preference analysis of characteristics models By Laura Blow; Martin Browning; Ian Crawford

  1. By: Takayuki Shiohama
    Abstract: Instability of volatility parameters in GARCH models is an important issue for analyzing financial time series. In this paper we investigate the asymptotic theory for change point estimators in semiparametric GARCH models. When the parameters of a GARCH model have changed within an observed realization, two types of estimators, the maximum likelihood estimator (MLE) and the Bayesian estimator (BE), are proposed. We then derive the asymptotic distributions of these estimators. The MLE and BE have different limit laws, and the BE is asymptotically efficient. Monte Carlo studies of the finite sample behavior are conducted.
    Keywords: GARCH process, change point, maximum likelihood estimator, Bayesian estimator, asymptotic efficiency
    Date: 2006–01
  2. By: Christian Gourieroux (CREST-INSEE); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Jun Yu (School of Economics and Social Science, Singapore Management University)
    Abstract: It is well-known that maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under fixed time series sample size (T) and large cross section sample size (N) asymptotics. The estimation bias is particularly relevant in practical applications when T is small and the autoregressive parameter is close to unity. The present paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference (Gouriéroux et al., 1993), shows unbiasedness and analyzes efficiency. The method is implemented in a simple linear dynamic panel model, but has wider applicability and can, for instance, be easily extended to more complicated frameworks such as nonlinear models. Monte Carlo studies show that the proposed procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and bias-corrected ML estimators previously proposed in the literature and is shown to have superior finite sample properties to GMM and the bias-corrected ML of Hahn and Kuersteiner (2002). Finite sample performance is compared with that of a recent estimator proposed by Han and Phillips (2005).
    Keywords: Autoregression, Bias reduction, Dynamic panel, Fixed effects, Indirect inference
    JEL: C33
    Date: 2006–01
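The bias-correction logic of indirect inference is easy to illustrate. Below is a minimal sketch under assumed conditions (hypothetical function names, a crude grid-search calibration, arbitrary initial conditions, and a simple random-effects design — not the authors' implementation): simulate panels at candidate values of the autoregressive parameter and pick the value whose average within-group (LSDV) estimate matches the LSDV estimate from the observed panel, thereby undoing the Nickell bias.

```python
import numpy as np

def lsdv(y):
    # Within-group (LSDV) AR(1) estimate for an N x (T+1) panel:
    # demean each unit's lagged and current values, then pooled OLS.
    x, z = y[:, :-1], y[:, 1:]
    x = x - x.mean(axis=1, keepdims=True)
    z = z - z.mean(axis=1, keepdims=True)
    return (x * z).sum() / (x * x).sum()

def simulate_panel(rho, n, t, rng):
    # AR(1) panel with individual effects (illustrative design only).
    alpha = rng.standard_normal(n)
    y = np.empty((n, t + 1))
    y[:, 0] = alpha
    for s in range(1, t + 1):
        y[:, s] = alpha + rho * y[:, s - 1] + rng.standard_normal(n)
    return y

def indirect_inference(y_obs, n_sim=50):
    # Grid-search calibration: choose rho whose simulated mean LSDV
    # estimate (the "binding function") is closest to the observed one.
    n, t1 = y_obs.shape
    target = lsdv(y_obs)
    rng = np.random.default_rng(0)

    def binding(rho):
        return np.mean([lsdv(simulate_panel(rho, n, t1 - 1, rng))
                        for _ in range(n_sim)])

    grid = np.linspace(0.0, 0.95, 96)
    return min(grid, key=lambda r: abs(binding(r) - target))
```

Because the same estimator is applied to observed and simulated data, its bias cancels in the matching step — the mechanism the abstract exploits.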
  3. By: Yixiao Sun (Department of Economics, University of California, San Diego); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Sainan Jin (Guanghua School of Management, Peking University)
    Abstract: In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples.
    Keywords: Asymptotic expansion, Bandwidth choice, Kernel method, Long-run variance, Loss function, Nonstandard asymptotics, Robust standard error, Type I and Type II errors
    JEL: C13 C14 C22 C51
    Date: 2006–01
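For readers unfamiliar with the setup, the object being tuned is the kernel long-run variance estimator behind HAC-robust standard errors. A minimal sketch with the Bartlett kernel, where b is the bandwidth-to-sample-size ratio the abstract discusses (function name is ours; this is the textbook estimator, not the paper's expansion-based refinements):

```python
import numpy as np

def bartlett_lrv(u, b):
    """Bartlett-kernel long-run variance estimate of a series u,
    with smoothing parameter b = bandwidth / sample size."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    u = u - u.mean()
    M = max(1, int(b * n))              # truncation lag
    lrv = u @ u / n                     # gamma_0
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)           # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j] / n)
    return lrv
```

Larger b trades bias for variance in this estimate; the paper's point is that the bandwidth minimizing its mean squared error is not the right choice when the goal is accurate test rejection rates.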
  4. By: Osmani Teixeira de Carvalho Guillén (IBMEC Business School - Rio de Janeiro and Banco Central do Brasil); João Victor Issler (Graduate School of Economics - EPGE, Getulio Vargas Foundation); George Athanasopoulos (Department of Economics and Business Statistics, Monash University)
    Abstract: Using vector autoregressive (VAR) models and Monte-Carlo simulation methods we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces parameter space by imposing long-term restrictions on the behavior of economic variables as discussed by the literature on cointegration, and the second reduces parameter space by imposing short-term restrictions as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues on model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing lag length for cointegrated VARs with SCCF restrictions. Second, we provide a comparison of forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm where short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy - reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
    Keywords: reduced rank models, model selection criteria, forecasting accuracy
    JEL: C32 C53
    Date: 2006–01–02
  5. By: Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York)
    Abstract: It has been known since Phillips and Hansen (1990) that cointegrated systems can be consistently estimated using stochastic trend instruments that are independent of the system variables. A similar phenomenon occurs with deterministically trending instruments. The present work shows that such “irrelevant” deterministic trend instruments may be systematically used to produce asymptotically efficient estimates of a cointegrated system. The approach is convenient in practice, involves only linear instrumental variables estimation, and is a straightforward one step procedure with no loss of degrees of freedom in estimation. Simulations reveal that the procedure works well in practice, having little finite sample bias and less finite sample dispersion than other popular cointegrating regression procedures such as reduced rank VAR regression, fully modified least squares, and dynamic OLS. The procedure is shown to be a form of maximum likelihood estimation where the likelihood is constructed for data projected onto the trending instruments. This “trend likelihood” is related to the notion of the local Whittle likelihood but avoids frequency domain issues altogether. Correspondingly, the approach developed here has many potential applications beyond conventional cointegrating regression, such as the estimation of long memory and fractional cointegrating relationships.
    Keywords: Asymptotic efficiency, Cointegrated system, Instrumental variables, Irrelevant instrument, Karhunen-Loeve representation, Long memory, Optimal estimation, Orthonormal basis, Trend basis, Trend likelihood
    JEL: C22
    Date: 2006–01
  6. By: Ignacio N. Lobato; Carlos Velasco
    Abstract: In this article we introduce efficient Wald tests for testing the null hypothesis of a unit root against the alternative of a fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson (1991, 1994a) Lagrange Multiplier tests. Our results contrast with the tests for fractional unit roots introduced by Dolado, Gonzalo and Mayoral (2002), which are inefficient. In the presence of short range serial correlation, we propose a simple and efficient two-step test that avoids the estimation of a nonlinear regression model. In addition, the first order asymptotic properties of the proposed tests are not affected by the pre-estimation of short or long memory parameters.
    Date: 2005–11
  7. By: George Athanasopoulos; Farshid Vahid
    Abstract: In this paper, we argue that there is no compelling reason for restricting the class of multivariate models considered for macroeconomic forecasting to VARs given the recent advances in VARMA modelling methodology and improvements in computing power. To support this claim, we use real macroeconomic data and show that VARMA models forecast macroeconomic variables more accurately than VAR models.
    Keywords: Forecasting, Identification, Multivariate time series, Scalar components, VARMA models.
    JEL: C32 C51
    Date: 2006–01
  8. By: Jurgen A. Doornik (Nuffield College, University of Oxford); Marius Ooms (Department of Econometrics, Vrije Universiteit Amsterdam)
    Abstract: We present a new procedure for detecting multiple additive outliers in GARCH(1,1) models at unknown dates. The outlier candidates are the observations with the largest standardized residual. First, a likelihood-ratio based test determines the presence and timing of an outlier. Next, a second test determines the type of additive outlier (volatility or level). The tests are shown to be similar with respect to the GARCH parameters. Their null distribution can be easily approximated from an extreme value distribution, so that computation of p-values does not require simulation. The procedure outperforms alternative methods, especially when it comes to determining the date of the outlier. We apply the method to returns of the Dow Jones index, using monthly, weekly, and daily data. The procedure is extended and applied to GARCH models with Student-t distributed errors.
    Keywords: Dummy variable; Generalized Autoregressive Conditional Heteroskedasticity; GARCH-t; Outlier detection; Extreme value distribution
    JEL: C22 C52 G10
    Date: 2005–10–13
  9. By: George Athanasopoulos; Farshid Vahid
    Abstract: This paper proposes an extension to scalar component methodology for the identification and estimation of VARMA models. The complete methodology determines the exact positions of all free parameters in any VARMA model with a predetermined embedded scalar component structure. This leads to an exactly identified system of equations that is estimated using full information maximum likelihood.
    Keywords: Identification, Multivariate time series, Scalar components, VARMA models.
    JEL: C32 C51
    Date: 2006–01
  10. By: Cees Diks (CeNDEF, Faculty of Economics, University of Amsterdam); Valentyn Panchenko (CeNDEF, Faculty of Economics, University of Amsterdam)
    Abstract: Tests for serial independence and goodness-of-fit based on divergence notions between probability distributions, such as the Kullback-Leibler divergence or Hellinger distance, have recently received much interest in time series analysis. The aim of this paper is to introduce tests for serial independence using kernel-based quadratic forms. This separates the problem of consistently estimating the divergence measure from that of consistently estimating the underlying joint densities, the existence of which is no longer required. Exact level tests are obtained by implementing a Monte Carlo procedure using permutations of the original observations. The bandwidth selection problem is addressed by introducing a multiple bandwidth procedure based on a range of different bandwidth values. After numerically establishing that the tests perform well compared to existing nonparametric tests, applications to estimated time series residuals are considered. The approach is illustrated with an application to financial returns data.
    Keywords: Bandwidth selection; Nonparametric tests; Serial independence; Quadratic forms
    JEL: C14 C15
    Date: 2005–08–02
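The exact-level Monte Carlo idea is generic and easy to demonstrate. A sketch using a simple lag-1 statistic standing in for the paper's kernel quadratic forms (it is the permutation logic, not the divergence estimator, that is illustrated; function names are ours):

```python
import numpy as np

def permutation_test(x, stat, n_perm=199, seed=0):
    """Exact-level permutation test for serial independence:
    permuting the observations destroys any serial dependence,
    so the observed statistic is compared with its permutation
    distribution. `stat` maps a series to a scalar."""
    rng = np.random.default_rng(seed)
    t_obs = stat(x)
    t_perm = [stat(rng.permutation(x)) for _ in range(n_perm)]
    # p-value: rank of the observed statistic among all n_perm + 1 values
    p = (1 + sum(t >= t_obs for t in t_perm)) / (n_perm + 1)
    return t_obs, p

def lag1_abs_acf(x):
    # Toy dependence statistic: absolute lag-1 sample autocorrelation.
    x = x - x.mean()
    return abs(x[1:] @ x[:-1]) / (x @ x)
```

Because the permutation distribution is generated under the null by construction, the test has exact level for i.i.d. data regardless of the statistic chosen — which is why the paper can be agnostic about bandwidths when calibrating critical values.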
  11. By: Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Chirok Han (Victoria University of Wellington)
    Abstract: This note introduces a simple first-difference-based approach to estimation and inference for the AR(1) model. The estimates have virtually no finite sample bias, are not sensitive to initial conditions, and the approach has the unusual advantage that a Gaussian central limit theory applies and is continuous as the autoregressive coefficient passes through unity with a uniform √n rate of convergence. En route, a useful CLT for sample covariances of linear processes is given, following Phillips and Solo (1992). The approach also has useful extensions to dynamic panels.
    Keywords: Autoregression, Differencing, Gaussian limit, Mildly explosive processes, Uniformity, Unit root
    JEL: C22
    Date: 2006–01
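The flavor of difference-based estimation can be sketched as follows. The moment condition used here, E[Δy_{t-1}(2Δy_t + Δy_{t-1})] = ρ·E[Δy_{t-1}²], holds both for |ρ| < 1 and at ρ = 1 (it can be verified from the autocovariances of Δy), which is what delivers continuity through unity; this is a sketch in the spirit of the paper, and the paper's exact construction should be consulted for details:

```python
import numpy as np

def hp_diff_estimator(y):
    """Difference-based AR(1) estimate from the moment condition
    E[dy_{t-1} * (2*dy_t + dy_{t-1})] = rho * E[dy_{t-1}^2],
    valid for |rho| < 1 and at rho = 1 (illustrative sketch)."""
    dy = np.diff(y)
    num = dy[:-1] @ (2.0 * dy[1:] + dy[:-1])
    den = dy[:-1] @ dy[:-1]
    return num / den

def simulate_ar1(rho, n, rng):
    # AR(1) with standard normal innovations and a burn-in period.
    u = rng.standard_normal(n + 100)
    y = np.empty(n + 100)
    y[0] = u[0]
    for t in range(1, n + 100):
        y[t] = rho * y[t - 1] + u[t]
    return y[100:]
```

Unlike least squares on levels, this estimator has essentially no finite-sample bias near the unit root, which is easy to confirm by simulation.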
  12. By: Borus Jungbacker (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam)
    Abstract: We consider likelihood inference and state estimation by means of importance sampling for state space models with a nonlinear non-Gaussian observation y ~ p(y|alpha) and a linear Gaussian state alpha ~ p(alpha). The importance density is chosen to be the Laplace approximation of the smoothing density p(alpha|y). We show that computationally efficient state space methods can be used to perform all necessary computations in all situations. It requires new derivations of the Kalman filter and smoother and the simulation smoother which do not rely on a linear Gaussian observation equation. Furthermore, results are presented that lead to a more effective implementation of importance sampling for state space models. An illustration is given for the stochastic volatility model with leverage.
    Keywords: Kalman filter; Likelihood function; Monte Carlo integration; Newton-Raphson; Posterior mode estimation; Simulation smoothing; Stochastic volatility model
    JEL: C15 C32
    Date: 2005–12–19
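The identity underlying likelihood evaluation here is that the likelihood equals E_g[p(y, α)/g(α)] for draws from any importance density g. A minimal sketch (function names are ours; the paper's contribution is performing these computations efficiently with Kalman-type recursions in the state space setting):

```python
import numpy as np

def importance_sampling_loglik(logp_joint, sample_g, logg, n_draws=10000, seed=0):
    """Monte Carlo estimate of log p(y) = log E_g[p(y, alpha) / g(alpha)]
    using draws from an importance density g (log-sum-exp for stability)."""
    rng = np.random.default_rng(seed)
    a = sample_g(rng, n_draws)
    logw = logp_joint(a) - logg(a)      # log importance weights
    m = logw.max()
    return m + np.log(np.mean(np.exp(logw - m)))
```

In a toy scalar-Gaussian model — prior α ~ N(0, 1), observation y ~ N(α, 1) — the Laplace approximation of p(α|y) is the exact posterior, the weights are constant, and the estimate matches the analytical log marginal likelihood, which makes the method easy to verify.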
  13. By: Ian Crawford (University of Surrey and cemmap, Institute for Fiscal Studies)
    Abstract: The literature on statistical tests of stochastic dominance has thus far been concerned with univariate distributions. This paper presents nonparametric statistical tests for multivariate distributions. This allows a nonparametric treatment of multiple welfare indicators. These tests are applied to a time series of cross-section datasets on household-level total expenditure and non-labour market time in the UK. This contrasts the welfare inferences which might be drawn from looking at univariate (marginal) distributions with those which consider the joint distribution.
    Keywords: Social welfare, stochastic dominance, nonparametric statistical methods
    JEL: C14 D30
    Date: 2005–06
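For intuition, the univariate first-order dominance check that the multivariate tests generalize compares empirical CDFs on the pooled support (a sketch of the one-dimensional case only; the paper's multivariate procedure is substantially more involved):

```python
import numpy as np

def fosd(a, b):
    """Check whether sample a first-order stochastically dominates
    sample b: the empirical CDF of a lies weakly below that of b
    at every point of the pooled support."""
    grid = np.union1d(a, b)
    Fa = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    Fb = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(Fa <= Fb))
```

Replacing the pointwise CDF comparison with a comparison over joint distributions of several welfare indicators is exactly the step the paper formalizes.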
  14. By: Joseph P; Romano; Azeem M. Shaikh; Michael Wolf
    Abstract: It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. The classical approach is to control the familywise error rate (FWE), that is, the probability of one or more false rejections. But when the number of hypotheses under consideration is large, control of the FWE can become too demanding. As a result, the number of false hypotheses rejected may be small or even zero. This suggests replacing control of the FWE by a more liberal measure. To this end, we review a number of proposals from the statistical literature. We briefly discuss how these procedures apply to the general problem of model selection. A simulation study and two empirical applications illustrate the methods.
    Keywords: Data snooping, false discovery proportion, false discovery rate, generalized familywise error rate, model selection, multiple testing, stepwise methods
    JEL: C12 C14 C52
    Date: 2005–12
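Among the more liberal error measures the paper reviews is the false discovery rate. The classical Benjamini-Hochberg step-up procedure, shown here as a baseline for the stepwise methods discussed (this sketch assumes independent p-values; the paper covers more general procedures):

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.10):
    """Benjamini-Hochberg step-up procedure controlling the false
    discovery rate at level q under independence: find the largest k
    with p_(k) <= k*q/m and reject the k smallest hypotheses."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

Compared with familywise-error control, the step-up threshold grows with the rank of the p-value, which is what restores power when many hypotheses are tested at once.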
  15. By: Skogsvik, Kenth (Dept. of Business Administration, Stockholm School of Economics)
    Abstract: Probabilistic business failure prediction models are commonly estimated from non-random samples of companies. The proportion of failure companies in such samples is often much larger than the proportion of failure companies in most real-world decision contexts. This so-called “choice-based sample bias” implies that calculated failure probabilities will be (more or less) biased. The purpose of the paper is to analyse this bias and its consequences for standard applications of probabilistic failure prediction models (for example probit/logit analysis) and in particular to investigate whether the bias can be eliminated without having to re-estimate the underlying statistical model. It is shown that there is a straightforward linkage between sample-based probabilities of failure and the corresponding population-based probabilities. Knowing this linkage, sample-based probabilities can be adjusted for the “choice-based sample bias”, provided that sufficiently large samples of randomly selected failure companies and randomly selected survival companies have been used in the estimation of the underlying statistical model. Empirical observations in previous research are in line with the theoretical results of the paper.
    Keywords: Business Failure Prediction; Choice-Based Sample Bias; Financial Analysis; Probabilistic Prediction Model; Probit/Logit Analysis
    Date: 2005–12–01
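The linkage between sample-based and population-based failure probabilities is, in essence, a prior-odds reweighting. A sketch of the standard Bayes correction for choice-based sampling (the paper derives the precise conditions under which such an adjustment is valid; the function name is ours):

```python
def adjust_prob(p_sample, pi_sample, pi_pop):
    """Convert a failure probability estimated on a choice-based
    sample (failure share pi_sample) into the probability implied
    for a population with failure rate pi_pop, by rescaling the
    odds with the ratio of population to sample prior odds."""
    odds = p_sample / (1.0 - p_sample)
    ratio = (pi_pop / (1.0 - pi_pop)) / (pi_sample / (1.0 - pi_sample))
    odds_pop = odds * ratio
    return odds_pop / (1.0 + odds_pop)
```

With a 50/50 estimation sample but a 5% population failure rate, a sample-based probability of 0.5 maps to a population probability of 0.05 — which is why unadjusted model output can grossly overstate failure risk.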
  16. By: Niels Haldrup; Andreu Sansó (Department of Economics, University of Aarhus, Denmark)
    Abstract: The role of additive outliers in integrated time series has attracted some attention recently and research shows that outlier detection should be an integral part of unit root testing procedures. Recently, Vogelsang (1999) suggested an iterative procedure for the detection of multiple additive outliers in integrated time series. However, the procedure appears to suffer from serious size distortions towards the finding of too many outliers as has been shown by Perron and Rodriguez (2003). In this note we prove the inconsistency of the test in each step of the iterative procedure and hence alternative routes need to be taken to detect outliers in nonstationary time series.
    Keywords: Additive outliers, outlier detection, integrated processes
    JEL: C12 C2 C22
    Date: 2006–01–16
  17. By: Offer Lieberman (Technion-Israel Institute of Technology); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York)
    Abstract: There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al. (2001), Martens et al. (2004)). The present paper provides some analytical explanations for this evidence and shows how recent results in Lieberman and Phillips (2004a, 2004b) can be used to refine statistical inference about d with little computational effort. In contrast to standard asymptotic normal theory now used in the literature, which has an O(n^{-1/2}) error rate on error rejection probabilities, the asymptotic approximation used here has an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate and highly user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory.
    Keywords: ARFIMA; Edgeworth expansion; Fourier integral expansion; Fractional differencing; Improved inference; Long memory; Pivotal statistic; Realized volatility; Singularity
    JEL: C13 C22
    Date: 2006–01
  18. By: Gilbert Colletaz (LEO - Laboratoire d'économie d'Orleans - - CNRS : FRE2783 - Université d'Orléans)
    Abstract: Using Chow and Denning's arguments applied to the individual hypothesis test methodology of Wright (2000), I propose a multiple variance-ratio test based on ranks to investigate the hypothesis of no serial correlation. This rank joint test can be exact if data are i.i.d. Some Monte Carlo simulations show that its size distortions are small for observations obeying the martingale hypothesis while not being an i.i.d. process. Also, regarding size and power, it compares favorably with other popular tests.
    Keywords: Random walk hypothesis; nonparametric test; variance-ratio test
    Date: 2006–01–13
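The rank-based building block from Wright (2000) that the joint test combines across horizons can be sketched as follows (a simplified illustration with our own function name; under i.i.d. data the ranks are an exchangeable permutation of 1..T, so the statistic's exact null distribution can be tabulated by simulation — the property that makes an exact joint test possible):

```python
import numpy as np

def rank_vr(y, k):
    """Variance ratio at horizon k computed on standardized ranks.
    The ranks are centered and scaled so their sample variance is
    exactly one; under i.i.d. data the ratio is close to 1."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    ranks = np.argsort(np.argsort(y)) + 1            # ranks 1..T
    r = (ranks - (T + 1) / 2.0) / np.sqrt((T - 1) * (T + 1) / 12.0)
    num = np.convolve(r, np.ones(k), mode="valid")   # overlapping k-sums
    return (num @ num / (T * k)) / (r @ r / T)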
  19. By: Jeroen Hinloopen (Universiteit van Amsterdam); Charles van Marrewijk (Erasmus Universiteit Rotterdam)
    Abstract: The information contained in PP-plots is transformed into a single number. The resulting Harmonic Mass (HM) index is distribution free and its sample counterpart is shown to be consistent. For a wide class of CDFs the exact analytical expression of the distribution of the sample HM index is derived, assuming the two underlying samples to be drawn from the same distribution. The robustness of the concomitant test statistic is assessed, and four different methods are discussed for applying the HM test in case of asymmetric samples.
    Keywords: Distribution; PP-plot; test statistic; critical percentile values; robustness.
    JEL: C12 C14
    Date: 2005–12–22
  20. By: Siem Jan Koopman (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); Marius Ooms (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); M. Angeles Carnero (Dpt. Fundamentos del Analisis Economico, University of Alicante)
    Abstract: Novel periodic extensions of dynamic long memory regression models with autoregressive conditional heteroskedastic errors are considered for the analysis of daily electricity spot prices. The parameters of the model with mean and variance specifications are estimated simultaneously by the method of approximate maximum likelihood. The methods are implemented for time series of 1,200 to 4,400 daily price observations. Apart from persistence, heteroskedasticity and extreme observations in prices, a novel empirical finding is the importance of day-of-the-week periodicity in the autocovariance function of electricity spot prices. In particular, daily log prices from the Nord Pool power exchange of Norway are modeled effectively by our framework, which is also extended with explanatory variables. For the daily log prices of three European emerging electricity markets (EEX in Germany, Powernext in France, APX in The Netherlands), which are less persistent, periodicity is also highly significant.
    Keywords: Autoregressive fractionally integrated moving average model; Generalised autoregressive conditional heteroskedasticity model; Long memory process; Periodic autoregressive model; Volatility
    JEL: C22 C51 G10
    Date: 2005–10–12
  21. By: Carmen Broto; Esther Ruiz
    Abstract: In this paper we consider a model with stochastic trend, seasonal and transitory components, with the disturbances of the trend and transitory components specified as QGARCH models. We propose to use the differences between the autocorrelations of squares and the squared autocorrelations of the auxiliary residuals to identify which component is heteroscedastic. The finite sample performance of these differences is analysed by means of Monte Carlo experiments. We show that conditional heteroscedasticity truly present in the data can be rejected when looking at the correlations of observations or of standardized residuals, while the autocorrelations of auxiliary residuals allow us to detect adequately whether there is heteroscedasticity and which component is heteroscedastic. We also analyse the finite sample behaviour of a QML estimator of the parameters of the model. Finally, we use auxiliary residuals to detect conditional heteroscedasticity in monthly series of inflation of eight OECD countries. We conclude that, for most of these series, the conditional heteroscedasticity affects the transitory component while the long-run and seasonal components are homoscedastic. Furthermore, in the countries where there is a significant relationship between the volatility and the level of inflation, this relation is positive, supporting the Friedman hypothesis.
    Date: 2006–01
  22. By: Rob J Hyndman; Muhammad Akram
    Abstract: This paper discusses the instability of eleven nonlinear state space models that underlie exponential smoothing. Hyndman et al. (2002) proposed a framework of 24 state space models for exponential smoothing, including the well-known simple exponential smoothing, Holt's linear and Holt-Winters' additive and multiplicative methods. This was extended to 30 models with Taylor's (2003) damped multiplicative methods. We show that eleven of these 30 models are unstable, having infinite forecast variances. The eleven models are those with additive errors and either multiplicative trend or multiplicative seasonality, as well as the models with multiplicative errors, multiplicative trend and additive seasonality. The multiplicative Holt-Winters' model with additive errors is among the eleven unstable models. We conclude that: (1) a model with a multiplicative trend or a multiplicative seasonal component should also have a multiplicative error; and (2) a multiplicative trend should not be mixed with additive seasonality.
    Keywords: exponential smoothing, forecast variance, nonlinear models, prediction intervals, stability, state space models.
    JEL: C53 C22
    Date: 2006–01
  23. By: Jan F. Kiviet (Faculty of Economics and Econometrics, Universiteit van Amsterdam)
    Abstract: An attempt is made to set rules for a fair and fruitful competition between alternative inference methods based on their performance in simulation experiments. This leads to a list of eight methodological aspirations. Against their background we criticize aspects of many simulation studies that have been used in the past to compare competing estimators for dynamic panel data models. To illustrate particular pitfalls, some further Monte Carlo results are produced, obtained from a simulation design inspired by an analysis of the (non-)invariance properties of estimators and occasionally by available higher-order asymptotic results. We focus on the very specific case of alternative implementations of one and two step generalized method of moments (GMM) estimators in homoskedastic stable zero-mean panel AR(1) models with random individual specific effects. We compare a few implementations, including GMM system estimators with alternative weight matrices, and illustrate that an impartial evaluation of the outcome of a Monte Carlo based contest requires evidence - both analytical and empirical - on the completeness, orthogonality and relevance of the simulation design.
    Keywords: finite sample behavior; generalized method of moments; initial conditions; Monte Carlo methodology; orthogonal parametrizations
    JEL: C13 C15 C23
    Date: 2005–12–08
  24. By: Luis Alberiko Gil-Alana (Facultad de Ciencias Económicas y Empresariales)
    Abstract: In this article I investigate whether the presence of structural breaks affects inference on the order of integration in univariate time series. For this purpose, I make use of a version of the tests of Robinson (1994) which allows testing for unit and fractional roots in the presence of deterministic changes. Several Monte Carlo experiments conducted across the paper show that the tests perform relatively well in the presence of both mean and slope breaks. The tests are applied to annual data on German real GDP, the results showing that the series may be well described in terms of a fractional model with a structural slope break due to World War II.
    JEL: C15 C22
  25. By: Inkmann,Joachim (Tilburg University, Center for Economic Research)
    Abstract: The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly non-linear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities. IPW-GEL and IPW-GMM estimators are compared in this application as well as identification assumptions based on independent and conditionally independent sample selection. The results are similar in all specifications.
    Keywords: generalised empirical likelihood; inverse probability weighting; propensity score; conditional independence; missing at random; selection; attrition; research and development
    JEL: C13 C33 O31
    Date: 2005
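    As a minimal illustration of the inverse probability weighting idea behind the IPW-GEL estimator, the sketch below estimates a population mean from a sample subject to missing-at-random selection by reweighting observed outcomes with the (here, known) selection probability; in the paper's setting an estimated propensity score enters general moment conditions instead. All data and parameter values are synthetic.

```python
import numpy as np

# Simulate an outcome y whose observation depends on a covariate x (MAR).
rng = np.random.default_rng(5)
n = 100_000
x = rng.normal(size=n)
y = 2.0 + x + rng.normal(size=n)          # outcome; population mean is 2.0
p = 1.0 / (1.0 + np.exp(-(0.5 + x)))      # selection probability given x
observed = rng.random(n) < p              # missing-at-random selection

# Naive mean over observed units is biased because selection depends on x;
# weighting each observed unit by 1/p corrects the selection (Hajek form).
naive_mean = y[observed].mean()
ipw_mean = np.sum(observed * y / p) / np.sum(observed / p)
```

The same weights, applied to each element of a vector of moment functions, give the IPW moment conditions that GEL or GMM would then exploit.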
  26. By: Siem Jan Koopman (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); Kai Ming Lee (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam)
    Abstract: To gain insight into the current status of the economy, macroeconomic time series are often decomposed into trend, cycle and irregular components. This can be done by nonparametric band-pass filtering methods in the frequency domain or by model-based decompositions based on autoregressive moving average models or unobserved components time series models. In this paper we consider the latter and extend the model to allow for asymmetric cycles. In theoretical and empirical studies, the asymmetry of cyclical behavior is often discussed and considered for series such as unemployment and gross domestic product (GDP). The number of attempts to model asymmetric cycles is limited, as the task is regarded as intricate and nonstandard. In this paper we show that a limited modification of the standard cycle component leads to a flexible device for asymmetric cycles. The presence of asymmetry can be tested using classical likelihood-based test statistics. The trend-cycle decomposition model is applied to three key U.S. macroeconomic time series. It is found that cyclical asymmetry is a salient feature of the U.S. economy.
    Keywords: Asymmetric business cycles; Unobserved Components; Nonlinear state space models; Monte Carlo likelihood; Importance sampling
    JEL: C13 C22 E32
    Date: 2005–08–15
  27. By: David Afshartous; Michael Wolf
    Abstract: Multilevel or mixed effects models are commonly applied to hierarchical data; for example, see Goldstein (2003), Raudenbush and Bryk (2002), and Laird and Ware (1982). Although there exist many outputs from such an analysis, the level-2 residuals, otherwise known as random effects, are often of both substantive and diagnostic interest. Substantively, they are frequently used for institutional comparisons or rankings. Diagnostically, they are used to assess the model assumptions at the group level. Current inference on the level-2 residuals, however, typically does not account for data snooping, that is, for the harmful effects of carrying out a multitude of hypothesis tests at the same time. We provide a very general framework that encompasses both of the following inference problems: (1) Inference on the `absolute' level-2 residuals to determine which are significantly different from zero, and (2) Inference on any prespecified number of pairwise comparisons. Thus, the user has the choice of testing the comparisons of interest. As our methods are flexible with respect to the estimation method invoked, the user may choose the desired estimation method accordingly. We demonstrate the methods with the London Education Authority data used by Rasbash et al. (2004), the Wafer data used by Pinheiro and Bates (2000), and the NELS data used by Afshartous and de Leeuw (2004).
    Keywords: Data snooping, hierarchical linear models, hypothesis testing, pairwise comparisons, random effects, rankings
    JEL: C12 C14 C52
    Date: 2005–12
  28. By: Marc Wildi (University of Technical Sciences, Zurich); Bernd Schips (Swiss Institute for Business Cycle Research (KOF), Swiss Federal Institute of Technology Zurich (ETH))
    Abstract: Estimation of signals at the current boundary of time series is an important task in many practical applications. In order to apply the symmetric filter at current time, model-based approaches typically rely on forecasts generated from a time series model in order to extend (stretch) the time series into the future. In this paper we analyze the performance of concurrent filters based on TRAMO and X-12-ARIMA for business survey data and compare the results to a new efficient estimation method which does not rely on forecasts. It is shown that both model-based procedures are subject to heavy model misspecification related to false unit-root identification at frequency zero and at seasonal frequencies. Our results strongly suggest that the traditional model-based approach should not be used for problems involving multi-step-ahead forecasts, such as the determination of concurrent filters.
    Keywords: Signal extraction, concurrent filter, unit root, amplitude and time delay.
    Date: 2004–12
  29. By: Arup Bose
    Date: 2006
  30. By: Robert W. Fairlie (University of California, Santa Cruz, National Poverty Center and IZA Bonn)
    Abstract: The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of group differences in measurable characteristics, such as education, experience, marital status, and geography, to racial and gender gaps in outcomes. The technique cannot be used directly, however, if the outcome is binary and the coefficients are from a logit or probit model. I describe a relatively simple method of performing a decomposition that uses estimates from a logit or probit model. Expanding on the original application of the technique in Fairlie (1999), I provide a more thorough discussion of how to apply the technique, an analysis of the sensitivity of the decomposition estimates to different parameters, and the calculation of standard errors. I also compare the estimates to Blinder-Oaxaca decomposition estimates and discuss an example of when the Blinder-Oaxaca technique may be problematic.
    Keywords: decomposition, logit, probit, Blinder-Oaxaca decomposition, race, gender
    JEL: C6 J15 J16
    Date: 2006–01
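    A stripped-down sketch of the decomposition idea for binary outcomes: given a pre-estimated logit coefficient vector, the gap in mean predicted probabilities between two equal-sized, pre-matched groups is attributed to each characteristic by sequentially swapping in the other group's values, column by column. Fairlie's full procedure additionally matches observations at random and averages over replacement orderings; the coefficients and data below are purely illustrative.

```python
import numpy as np

def fairlie_contributions(Xa, Xb, beta):
    """Sequential-replacement contributions to the gap in mean predicted
    probabilities from a logit with pre-estimated coefficients beta.
    Replacing column k of group B's data with group A's values attributes
    part of the gap to characteristic k; the contributions telescope so
    that they sum exactly to the total gap."""
    logistic = lambda z: 1.0 / (1.0 + np.exp(-z))
    X = Xb.copy()
    prev = logistic(X @ beta).mean()
    contrib = []
    for k in range(X.shape[1]):
        X[:, k] = Xa[:, k]
        cur = logistic(X @ beta).mean()
        contrib.append(cur - prev)
        prev = cur
    return np.array(contrib)

# Synthetic equal-sized groups with different characteristic distributions.
rng = np.random.default_rng(3)
beta = np.array([0.5, -0.3, 1.0])
Xa = rng.normal(0.5, 1.0, size=(200, 3))
Xb = rng.normal(0.0, 1.0, size=(200, 3))
contrib = fairlie_contributions(Xa, Xb, beta)
gap = (1 / (1 + np.exp(-(Xa @ beta)))).mean() - (1 / (1 + np.exp(-(Xb @ beta)))).mean()
```

By construction the per-characteristic contributions sum to the total gap in mean predicted probabilities, which is the property the decomposition exploits.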
  31. By: Haeusler,Erich; Segers,Johan (Tilburg University, Center for Economic Research)
    Abstract: We establish Edgeworth expansions for the distribution function of the centered and normalized Hill estimator for the reciprocal of the index of regular variation of the tail of a distribution function. The expansions are used to derive expansions for coverage probabilities of confidence intervals for the tail index based on the Hill estimator.
    Keywords: asymptotic normality; confidence intervals; Edgeworth expansions; extreme value index; Hill estimator; regular variation; tail index; 62G20; 62G32
    JEL: C13 C14
    Date: 2005
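    For reference, the Hill estimator at the center of these expansions averages the log-spacings of the k largest order statistics. A minimal implementation; the sample size, choice of k, and Pareto test case are illustrative:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the reciprocal tail index (gamma = 1/alpha):
    the mean of log(X_(i) / X_(k+1)) over the k largest order statistics
    of a positive sample."""
    x = np.sort(np.asarray(x, dtype=float))[::-1]  # descending order
    if not 0 < k < len(x):
        raise ValueError("k must satisfy 0 < k < n")
    return np.mean(np.log(x[:k])) - np.log(x[k])

# Pareto sample with tail index alpha = 2, so the true gamma is 0.5.
rng = np.random.default_rng(0)
sample = rng.pareto(2.0, size=100_000) + 1.0  # classical Pareto, x_m = 1
gamma_hat = hill_estimator(sample, k=2_000)
```

The confidence intervals studied in the paper are built around gamma_hat; the Edgeworth expansions refine their nominal normal-approximation coverage.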
  32. By: Ingrid Lo
    Abstract: The author compares the performance of three Gaussian approximation methods--by Nowman (1997), Shoji and Ozaki (1998), and Yu and Phillips (2001)--in estimating a model of the nonlinear continuous-time short-term interest rate. She finds that the performance of Nowman's method is similar to that of Shoji and Ozaki's method, whereas the window width used in the Yu and Phillips method has a critical influence on parameter estimates. When a small window width is used, the Yu and Phillips method does not outperform the other two methods. Choosing a suitable window width can reduce estimation bias quite significantly, whereas too large a window width can worsen estimation bias and the fit of the model. An empirical study is implemented using Canadian and U.K. one-month interest rate data.
    Keywords: Interest rates; Econometric and statistical methods
    JEL: C1 E4
    Date: 2005
  33. By: Riccardo LUCCHETTI (Universita' Politecnica delle Marche, Dipartimento di Economia)
    Abstract: The issue of identification of covariance structures, which arises in a number of different contexts, has so far been linked to conditions on the true parameters to be estimated. In this paper, this limitation is removed. As done by Johansen (1995) in the context of linear models, the present paper provides necessary and sufficient conditions for the identification of a covariance structure that depend only on the constraints, and can therefore be checked independently of estimated parameters. A sufficient condition is developed, which only depends on the structure of the constraints. It is shown that this structure condition, if coupled with the familiar order condition, provides a sufficient condition for identification. In practice, since the structure condition holds if and only if a certain matrix, constructed from the constraint matrices, is invertible, automatic software checking for identification is feasible even for large-scale systems.
    JEL: C13 C30
    Date: 2004–07
  34. By: Kin Lam (Department of Finance & Decision Sciences, Hong Kong Baptist University); May Chun Mei Wong (Dental Public Health, The University of Hong Kong); Wing-Keung Wong (Department of Economics, The National University of Singapore)
    Abstract: We develop some properties of the autocorrelation of the k-period returns for the general mean reversion (GMR) process, in which the stationary component is not restricted to the AR(1) process but takes the form of a general ARMA process. We then derive some properties of the GMR process and three new non-parametric tests comparing the relative variability of returns over different horizons to validate the GMR process as an alternative to the random walk. We further examine the asymptotic properties of these tests, which can then be applied to distinguish GMR processes from random walk models.
    Keywords: mean reversion, variance ratio test, random walk, stock price, stock return
    JEL: G12 G14
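    The simplest version of the variance-comparison idea underlying such tests is the classical variance ratio: under a random walk, the variance of q-period log returns is q times that of one-period returns, so the ratio is close to one. A minimal (parametric, not rank-based) sketch with synthetic random-walk prices:

```python
import numpy as np

def variance_ratio(prices, q):
    """Variance ratio VR(q): the variance of overlapping q-period log
    returns divided by q times the variance of one-period log returns.
    VR(q) is close to 1 under a random walk; persistent deviations below 1
    indicate mean reversion."""
    logp = np.log(np.asarray(prices, dtype=float))
    r1 = np.diff(logp)              # one-period log returns
    rq = logp[q:] - logp[:-q]       # overlapping q-period log returns
    return np.var(rq, ddof=1) / (q * np.var(r1, ddof=1))

# Geometric random walk: VR(q) should be near 1 for any q.
rng = np.random.default_rng(1)
rw_prices = np.exp(np.cumsum(rng.normal(0.0, 0.01, 50_000)))
vr = variance_ratio(rw_prices, q=5)
```

The paper's non-parametric tests replace the raw returns in this comparison with quantities that are robust to the return distribution.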
  35. By: Nicholas Z. Muller (School of Forestry and Environmental Studies, Yale University); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York)
    Abstract: This paper demonstrates how parsimonious models of sinusoidal functions can be used to fit spatially variant time series in which there is considerable variation of a periodic type. A typical shortcoming of such tools relates to the difficulty in capturing idiosyncratic variation in periodic models. The strategy developed here addresses this deficiency. While previous work has sought to overcome the shortcoming by augmenting sinusoids with other techniques, the present approach employs station-specific sinusoids to supplement a common regional component, which succeeds in capturing local idiosyncratic behavior in a parsimonious manner. The experiments conducted herein reveal that a semi-parametric approach enables such models to fit spatially varying time series with periodic behavior in a remarkably tight fashion. The methods are applied to a panel data set consisting of hourly air pollution measurements. The augmented sinusoidal models produce an excellent fit to these data at three different levels of spatial detail.
    Keywords: Air Pollution, Idiosyncratic component, Regional variation, Semiparametric model, Sinusoidal function, Spatial-temporal data, Tropospheric Ozone
    JEL: C22 C23
    Date: 2006–01
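    A toy version of sinusoidal regression for periodic data: because a sinusoid with unknown phase is linear in its sine and cosine terms, it can be fit by ordinary least squares. The paper's station-specific and common regional components are not reproduced here; the frequency, sample, and noise level below are synthetic.

```python
import numpy as np

def fit_sinusoid(t, y, period):
    """Least-squares fit of y ~ a + b*sin(2*pi*t/period) + c*cos(2*pi*t/period).
    Returns the coefficients (a, b, c) and the fitted values."""
    w = 2.0 * np.pi * t / period
    A = np.column_stack([np.ones_like(t), np.sin(w), np.cos(w)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

# Ten days of hourly observations with an ozone-like daily cycle plus noise.
t = np.arange(0.0, 240.0)
true = 40.0 + 15.0 * np.sin(2.0 * np.pi * t / 24.0)
rng = np.random.default_rng(4)
y = true + rng.normal(0.0, 2.0, size=t.size)
coef, fitted = fit_sinusoid(t, y, period=24.0)
```

Summing several such terms at different periods, with station-specific coefficients added to a shared regional fit, is the kind of parsimonious augmentation the abstract describes.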
  36. By: Satoru Kanoh
    Abstract: The duration dependence of stock market cycles has been investigated using the Markov-switching model, in which the market conditions are unobservable. Conventional modeling imposes the restrictions that the transition probability is a monotonic function of duration and that duration is truncated at a certain value. This paper proposes a model that is free from these arbitrary restrictions and nests the conventional models. In the model, the parameters that characterize the transition probability are formulated in state-space form. Empirical results for several stock markets show that duration structures differ greatly across countries. They are not necessarily monotonic functions of duration and, therefore, cannot be described by the conventional models.
    Keywords: Duration, World stock markets, Markov-switching model, Nonparametric Model, Gibbs sampling, Marginal Likelihood
    Date: 2005–11
  37. By: Bernd Heidergott (Faculty of Economics, Vrije Universiteit Amsterdam); Arie Hordijk (Leiden University, Mathematical Institute); Miranda van Uitert (Faculty of Economics, Vrije Universiteit Amsterdam)
    Abstract: This paper provides series expansions of the stationary distribution of a finite Markov chain. This leads to an efficient numerical algorithm for computing the stationary distribution of a finite Markov chain. Numerical examples are given to illustrate the performance of the algorithm.
    Keywords: finite-state Markov chain; (Taylor) series expansion; measure-valued derivatives; coupled processors
    JEL: C63 C44
    Date: 2005–09–20
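    The paper's series-expansion algorithm is not reproduced here, but the baseline computation it refines can be sketched as a truncated power iteration, which expresses the stationary distribution through repeated application of the transition matrix:

```python
import numpy as np

def stationary_power(P, n_iter=500):
    """Approximate the stationary distribution of an ergodic finite Markov
    chain by iterating pi <- pi P from the uniform distribution (a truncated
    series in powers of P; the paper develops a more refined expansion)."""
    P = np.asarray(P, dtype=float)
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        pi = pi @ P
    return pi

# Two-state chain whose stationary distribution solves pi = pi P,
# giving pi = (0.8, 0.2) by the balance condition 0.1*pi1 = 0.4*pi2.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi = stationary_power(P)
```

The geometric convergence rate of this iteration is governed by the chain's second-largest eigenvalue modulus, which is why series-based refinements can pay off for slowly mixing chains.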
  38. By: Thomas Rupp (Institut für Volkswirtschaftslehre (Department of Economics), Technische Universität Darmstadt (Darmstadt University of Technology))
    Abstract: We study the applicability of the pattern recognition methodology "rough set data analysis" (RSDA) in the field of meta analysis. We give a summary of the mathematical and statistical background and then proceed to an application of the theory to a meta analysis of empirical studies dealing with the deterrent effect introduced by Becker and Ehrlich. Results are compared with a previously devised meta regression analysis. We find that RSDA can be used to discover information overlooked by other methods, to preprocess the data for further study, and to strengthen results previously found by other methods.
    Keywords: Rough Data Set, RSDA, Meta Analysis, Data Mining, Pattern Recognition, Deterrence, Criminometrics
    JEL: K14 K42 C49
    Date: 2005–11
  39. By: Michael Wolf
    Abstract: A well-known pitfall of Markowitz (1952) portfolio optimization is that the sample covariance matrix, which is a critical input, contains a large amount of estimation error when there are many assets to choose from. If unchecked, this phenomenon skews the optimizer towards extreme weights that tend to perform poorly in the real world. One solution that has been proposed is to shrink the sample covariance matrix by pulling its most extreme elements towards more moderate values. An alternative solution is the resampled efficiency suggested by Michaud (1998). This paper compares shrinkage estimation to resampled efficiency. In addition, it studies whether the two techniques can be combined to achieve a further improvement. All this is done in the context of an active portfolio manager who aims to outperform a benchmark index and who is evaluated by his realized information ratio.
    Keywords: Benchmarked Managers, shrinkage
    JEL: C91
    Date: 2006–01
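    The shrinkage idea mentioned above, in its simplest linear form, pulls the sample covariance matrix towards a scaled identity target. The fixed shrinkage intensity below is an illustrative choice; Ledoit-Wolf-type estimators choose the intensity from the data:

```python
import numpy as np

def shrink_to_identity(returns, delta):
    """Linear shrinkage of the sample covariance matrix towards a scaled
    identity target: delta in [0, 1] is the shrinkage intensity, and the
    target preserves the average sample variance."""
    S = np.cov(returns, rowvar=False)
    mu = np.trace(S) / S.shape[0]          # average variance across assets
    target = mu * np.eye(S.shape[0])
    return delta * target + (1.0 - delta) * S

# 60 observations on 40 assets: the sample covariance is ill-conditioned,
# and shrinkage pulls its extreme eigenvalues towards the centre.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 40))
Sigma = shrink_to_identity(X, delta=0.5)
```

The shrunk matrix remains symmetric and positive definite, which is what keeps the downstream mean-variance optimizer away from the extreme weights the abstract warns about.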
  40. By: Susan Athey; Guido Imbens
    Date: 2006–01–13
  41. By: Laura Blow (Institute for Fiscal Studies); Martin Browning (CAM, University of Copenhagen); Ian Crawford (University of Surrey and cemmap, Institute for Fiscal Studies)
    Abstract: Characteristics models have been found to be useful in many areas of economics. However, their empirical implementation tends to rely heavily on functional form assumptions. In this paper we develop a revealed preference approach to characteristics models. We derive the necessary and sufficient empirical conditions under which data on the market behaviour of heterogeneous, price-taking consumers are nonparametrically consistent with the consumer characteristics model. Where these conditions hold, we show how information may be recovered on individual consumers' marginal valuations of product attributes. In some cases marginal valuations are point identified and in other cases we can only recover bounds. Where the conditions fail we highlight the role which the introduction of unobserved product attributes can play in rationalising the data. We implement these ideas using consumer panel data on the Danish milk market.
    Keywords: Product characteristics, revealed preference
    JEL: C43 D11
    Date: 2005–04

This nep-ecm issue is ©2006 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.