
on Econometrics 
By:  Takayuki Shiohama 
Abstract:  Instability of volatility parameters in GARCH models is an important issue for analyzing financial time series. In this paper we investigate the asymptotic theory for change point estimators in semiparametric GARCH models. When the parameters of a GARCH model have changed within an observed realization, two types of estimators, the maximum likelihood estimator (MLE) and the Bayesian estimator (BE), are proposed. We then derive the asymptotic distributions of these estimators. The MLE and BE have different limit laws, and the BE is asymptotically efficient. Monte Carlo studies of the finite sample behavior are conducted. 
Keywords:  GARCH process, change point, maximum likelihood estimator, Bayesian estimator, asymptotic efficiency 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:hit:hituec:a471&r=ecm 
By:  Christian Gourieroux (CREST-INSEE); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Jun Yu (School of Economics and Social Science, Singapore Management University) 
Abstract:  It is well known that maximum likelihood (ML) estimation of the autoregressive parameter of a dynamic panel data model with fixed effects is inconsistent under fixed time series sample size (T) and large cross section sample size (N) asymptotics. The estimation bias is particularly relevant in practical applications when T is small and the autoregressive parameter is close to unity. The present paper proposes a general, computationally inexpensive method of bias reduction that is based on indirect inference (Gouriéroux et al., 1993), shows unbiasedness and analyzes efficiency. The method is implemented in a simple linear dynamic panel model, but has wider applicability and can, for instance, be easily extended to more complicated frameworks such as nonlinear models. Monte Carlo studies show that the proposed procedure achieves substantial bias reductions with only mild increases in variance, thereby substantially reducing root mean square errors. The method is compared with certain consistent estimators and bias-corrected ML estimators previously proposed in the literature and is shown to have superior finite sample properties to GMM and the bias-corrected ML of Hahn and Kuersteiner (2002). Finite sample performance is compared with that of a recent estimator proposed by Han and Phillips (2005). 
Keywords:  Autoregression, Bias reduction, Dynamic panel, Fixed effects, Indirect inference 
JEL:  C33 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1550&r=ecm 
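The bias-correction logic of the abstract above can be sketched in a few lines: estimate the within (LSDV) autoregressive coefficient, which suffers from Nickell bias for small T, then invert a simulated binding function to undo the bias. This is only a schematic Python illustration under simplifying assumptions (zero individual effects, Gaussian errors, a coarse grid); all function names are mine, not the authors'.

```python
import numpy as np

def lsdv(Y):
    """Within (fixed-effects) AR(1) estimate; biased downward for small T (Nickell bias)."""
    X, Z = Y[:, :-1], Y[:, 1:]
    X = X - X.mean(axis=1, keepdims=True)
    Z = Z - Z.mean(axis=1, keepdims=True)
    return float(X.ravel() @ Z.ravel() / (X.ravel() @ X.ravel()))

def simulate_panel(rho, N, T, rng):
    """Simulate a zero-mean panel AR(1) with N units and T periods."""
    Y = np.zeros((N, T + 1))
    for t in range(1, T + 1):
        Y[:, t] = rho * Y[:, t - 1] + rng.standard_normal(N)
    return Y[:, 1:]

def indirect_inference(rho_hat, N, T, grid, n_sim=200, seed=0):
    """Pick the grid value whose mean simulated LSDV estimate is closest
    to the observed (biased) estimate rho_hat."""
    rng = np.random.default_rng(seed)
    binding = [np.mean([lsdv(simulate_panel(r, N, T, rng)) for _ in range(n_sim)])
               for r in grid]
    return float(grid[np.argmin(np.abs(np.asarray(binding) - rho_hat))])

rng = np.random.default_rng(7)
Y = simulate_panel(0.8, N=100, T=6, rng=rng)        # true rho = 0.8
grid = np.round(np.arange(0.50, 0.96, 0.05), 2)
rho_biased = lsdv(Y)                                 # well below 0.8 for T = 6
rho_corrected = indirect_inference(rho_biased, 100, 6, grid)
print(rho_biased, rho_corrected)
```

Because the same biased estimator is applied to the simulated data, its bias cancels in the matching step, which is the key feature the Monte Carlo studies in the paper exploit.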
By:  Yixiao Sun (Department of Economics, University of California, San Diego); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Sainan Jin (Guanghua School of Management, Peking University) 
Abstract:  In time series regressions with nonparametrically autocorrelated errors, it is now standard empirical practice to use kernel-based robust standard errors that involve some smoothing function over the sample autocorrelations. The underlying smoothing parameter b, which can be defined as the ratio of the bandwidth (or truncation lag) to the sample size, is a tuning parameter that plays a key role in determining the asymptotic properties of the standard errors and associated semiparametric tests. Small-b asymptotics involve standard limit theory such as standard normal or chi-squared limits, whereas fixed-b asymptotics typically lead to nonstandard limit distributions involving Brownian bridge functionals. The present paper shows that the nonstandard fixed-b limit distributions of such nonparametrically studentized tests provide more accurate approximations to the finite sample distributions than the standard small-b limit distribution. In particular, using asymptotic expansions of both the finite sample distribution and the nonstandard limit distribution, we confirm that the second-order corrected critical value based on the expansion of the nonstandard limiting distribution is also second-order correct under the standard small-b asymptotics. We further show that, for typical economic time series, the optimal bandwidth that minimizes a weighted average of type I and type II errors is larger by an order of magnitude than the bandwidth that minimizes the asymptotic mean squared error of the corresponding long-run variance estimator. A plug-in procedure for implementing this optimal bandwidth is suggested and simulations confirm that the new plug-in procedure works well in finite samples. 
Keywords:  Asymptotic expansion, Bandwidth choice, Kernel method, Long-run variance, Loss function, Nonstandard asymptotics, Robust standard error, Type I and Type II errors 
JEL:  C13 C14 C22 C51 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1545&r=ecm 
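As a minimal illustration of the smoothing parameter b discussed above, the sketch below computes a Bartlett-kernel long-run variance estimate with bandwidth M = b*n. It is my own generic Python helper, not code from the paper; the Bartlett weights are the standard Newey-West choice.

```python
import numpy as np

def bartlett_lrv(u, b):
    """Bartlett-kernel long-run variance estimate with truncation lag M = b*n,
    where b is the bandwidth-to-sample-size ratio."""
    u = np.asarray(u, dtype=float)
    n = len(u)
    u = u - u.mean()
    M = max(1, int(b * n))
    lrv = u @ u / n                      # lag-0 autocovariance
    for j in range(1, M + 1):
        w = 1.0 - j / (M + 1)            # Bartlett weight
        lrv += 2.0 * w * (u[j:] @ u[:-j]) / n
    return lrv

rng = np.random.default_rng(0)
e = rng.standard_normal(500)
u = np.zeros(500)
u[0] = e[0]
for t in range(1, 500):
    u[t] = 0.5 * u[t - 1] + e[t]         # AR(1) errors: true long-run variance = 4
print(bartlett_lrv(u, b=0.1))            # larger b keeps more sample autocorrelations
```

In the paper's terminology, letting M grow slowly with n corresponds to small-b asymptotics, while treating b as a fixed fraction of the sample size leads to the fixed-b limit theory.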
By:  Osmani Teixeira de Carvalho Guillén (IBMEC Business School - Rio de Janeiro and Banco Central do Brasil); João Victor Issler (Graduate School of Economics - EPGE, Getulio Vargas Foundation); George Athanasopoulos (Department of Economics and Business Statistics, Monash University) 
Abstract:  Using vector autoregressive (VAR) models and Monte Carlo simulation methods we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces parameter space by imposing long-term restrictions on the behavior of economic variables as discussed by the literature on cointegration, and the second reduces parameter space by imposing short-term restrictions as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues on model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing lag length for cointegrated VARs with SCCF restrictions. Second, we provide a comparison of forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm where short- and long-term restrictions interact to estimate the cointegrating and the cofeature spaces respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models, with too small a lag length. Criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even when we consider the estimation of long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models. 
Keywords:  reduced rank models, model selection criteria, forecasting accuracy 
JEL:  C32 C53 
Date:  2006–01–02 
URL:  http://d.repec.org/n?u=RePEc:ibr:dpaper:200601&r=ecm 
By:  Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York) 
Abstract:  It has been known since Phillips and Hansen (1990) that cointegrated systems can be consistently estimated using stochastic trend instruments that are independent of the system variables. A similar phenomenon occurs with deterministically trending instruments. The present work shows that such “irrelevant” deterministic trend instruments may be systematically used to produce asymptotically efficient estimates of a cointegrated system. The approach is convenient in practice, involves only linear instrumental variables estimation, and is a straightforward one-step procedure with no loss of degrees of freedom in estimation. Simulations reveal that the procedure works well in practice, having little finite sample bias and less finite sample dispersion than other popular cointegrating regression procedures such as reduced rank VAR regression, fully modified least squares, and dynamic OLS. The procedure is shown to be a form of maximum likelihood estimation where the likelihood is constructed for data projected onto the trending instruments. This “trend likelihood” is related to the notion of the local Whittle likelihood but avoids frequency domain issues altogether. Correspondingly, the approach developed here has many potential applications beyond conventional cointegrating regression, such as the estimation of long memory and fractional cointegrating relationships. 
Keywords:  Asymptotic efficiency, Cointegrated system, Instrumental variables, Irrelevant instrument, Karhunen-Loeve representation, Long memory, Optimal estimation, Orthonormal basis, Trend basis, Trend likelihood 
JEL:  C22 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1547&r=ecm 
By:  Ignacio N. Lobato; Carlos Velasco 
Abstract:  In this article we introduce efficient Wald tests for testing the null hypothesis of a unit root against the alternative of a fractional unit root. In a local alternative framework, the proposed tests are locally asymptotically equivalent to the optimal Robinson (1991, 1994a) Lagrange Multiplier tests. Our results contrast with the tests for fractional unit roots introduced by Dolado, Gonzalo and Mayoral (2002), which are inefficient. In the presence of short range serial correlation, we propose a simple and efficient two-step test that avoids the estimation of a nonlinear regression model. In addition, the first order asymptotic properties of the proposed tests are not affected by the pre-estimation of short or long memory parameters. 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:cte:werepe:we056935&r=ecm 
By:  George Athanasopoulos; Farshid Vahid 
Abstract:  In this paper, we argue that there is no compelling reason for restricting the class of multivariate models considered for macroeconomic forecasting to VARs given the recent advances in VARMA modelling methodology and improvements in computing power. To support this claim, we use real macroeconomic data and show that VARMA models forecast macroeconomic variables more accurately than VAR models. 
Keywords:  Forecasting, Identification, Multivariate time series, Scalar components, VARMA models. 
JEL:  C32 C51 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20064&r=ecm 
By:  Jurgen A. Doornik (Nuffield College, University of Oxford); Marius Ooms (Department of Econometrics, Vrije Universiteit Amsterdam) 
Abstract:  We present a new procedure for detecting multiple additive outliers in GARCH(1,1) models at unknown dates. The outlier candidates are the observations with the largest standardized residual. First, a likelihood-ratio based test determines the presence and timing of an outlier. Next, a second test determines the type of additive outlier (volatility or level). The tests are shown to be similar with respect to the GARCH parameters. Their null distribution can be easily approximated from an extreme value distribution, so that computation of p-values does not require simulation. The procedure outperforms alternative methods, especially when it comes to determining the date of the outlier. We apply the method to returns of the Dow Jones index, using monthly, weekly, and daily data. The procedure is extended and applied to GARCH models with Student-t distributed errors. 
Keywords:  Dummy variable; Generalized Autoregressive Conditional Heteroskedasticity; GARCH-t; Outlier detection; Extreme value distribution 
JEL:  C22 C52 G10 
Date:  2005–10–13 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050092&r=ecm 
By:  George Athanasopoulos; Farshid Vahid 
Abstract:  This paper proposes an extension to scalar component methodology for the identification and estimation of VARMA models. The complete methodology determines the exact positions of all free parameters in any VARMA model with a predetermined embedded scalar component structure. This leads to an exactly identified system of equations that is estimated using full information maximum likelihood. 
Keywords:  Identification, Multivariate time series, Scalar components, VARMA models. 
JEL:  C32 C51 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20062&r=ecm 
By:  Cees Diks (CeNDEF, Faculty of Economics, University of Amsterdam); Valentyn Panchenko (CeNDEF, Faculty of Economics, University of Amsterdam) 
Abstract:  Tests for serial independence and goodness-of-fit based on divergence notions between probability distributions, such as the Kullback-Leibler divergence or Hellinger distance, have recently received much interest in time series analysis. The aim of this paper is to introduce tests for serial independence using kernel-based quadratic forms. This separates the problem of consistently estimating the divergence measure from that of consistently estimating the underlying joint densities, the existence of which is no longer required. Exact level tests are obtained by implementing a Monte Carlo procedure using permutations of the original observations. The bandwidth selection problem is addressed by introducing a multiple bandwidth procedure based on a range of different bandwidth values. After numerically establishing that the tests perform well compared to existing nonparametric tests, applications to estimated time series residuals are considered. The approach is illustrated with an application to financial returns data. 
Keywords:  Bandwidth selection; Nonparametric tests; Serial independence; Quadratic forms 
JEL:  C14 C15 
Date:  2005–08–02 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050076&r=ecm 
By:  Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York); Chirok Han (Victoria University of Wellington) 
Abstract:  This note introduces a simple first-difference-based approach to estimation and inference for the AR(1) model. The estimates have virtually no finite sample bias, are not sensitive to initial conditions, and the approach has the unusual advantage that a Gaussian central limit theory applies and is continuous as the autoregressive coefficient passes through unity, with a uniform √n rate of convergence. En route, a useful CLT for sample covariances of linear processes is given, following Phillips and Solo (1992). The approach also has useful extensions to dynamic panels. 
Keywords:  Autoregression, Differencing, Gaussian limit, Mildly explosive processes, Uniformity, Unit root 
JEL:  C22 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1546&r=ecm 
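A first-difference moment condition of the kind used in this line of work can be illustrated concretely: for a stationary AR(1) one can verify that E[Δy_{t-1}(2Δy_t + Δy_{t-1})] = ρ E[Δy_{t-1}²], which suggests a simple ratio estimator. The Python sketch below is my own illustration of that moment condition, not the authors' implementation.

```python
import numpy as np

def fd_ar1(y):
    """Estimate the AR(1) coefficient rho from first differences, using the
    moment condition E[dy_{t-1} * (2*dy_t + dy_{t-1})] = rho * E[dy_{t-1}^2]."""
    dy = np.diff(y)
    return float(np.sum(dy[:-1] * (2.0 * dy[1:] + dy[:-1])) / np.sum(dy[:-1] ** 2))

rng = np.random.default_rng(42)
T, rho = 2000, 0.9
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t - 1] + rng.standard_normal()
print(fd_ar1(y))   # estimate of rho; true value is 0.9
```

Because the estimator is built entirely from differences, the individual level (and hence the initial condition) drops out, which is the intuition behind the note's insensitivity claim.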
By:  Borus Jungbacker (Vrije Universiteit Amsterdam); Siem Jan Koopman (Vrije Universiteit Amsterdam) 
Abstract:  We consider likelihood inference and state estimation by means of importance sampling for state space models with a nonlinear non-Gaussian observation y ~ p(y|alpha) and a linear Gaussian state alpha ~ p(alpha). The importance density is chosen to be the Laplace approximation of the smoothing density p(alpha|y). We show that computationally efficient state space methods can be used to perform all necessary computations in all situations. It requires new derivations of the Kalman filter and smoother and the simulation smoother which do not rely on a linear Gaussian observation equation. Furthermore, results are presented that lead to a more effective implementation of importance sampling for state space models. An illustration is given for the stochastic volatility model with leverage. 
Keywords:  Kalman filter; Likelihood function; Monte Carlo integration; Newton-Raphson; Posterior mode estimation; Simulation smoothing; Stochastic volatility model 
JEL:  C15 C32 
Date:  2005–12–19 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050117&r=ecm 
By:  Ian Crawford (University of Surrey and cemmap, Institute for Fiscal Studies) 
Abstract:  The literature on statistical tests of stochastic dominance has thus far been concerned with univariate distributions. This paper presents nonparametric statistical tests for multivariate distributions, allowing a nonparametric treatment of multiple welfare indicators. These tests are applied to a time series of cross-section datasets on household-level total expenditure and non-labour-market time in the UK. This contrasts the welfare inferences which might be drawn from looking at univariate (marginal) distributions with those which consider the joint distribution. 
Keywords:  Social welfare, stochastic dominance, nonparametric statistical methods 
JEL:  C14 D30 
Date:  2005–06 
URL:  http://d.repec.org/n?u=RePEc:sur:surrec:1205&r=ecm 
By:  Joseph P. Romano; Azeem M. Shaikh; Michael Wolf 
Abstract:  It is common in econometric applications that several hypothesis tests are carried out at the same time. The problem then becomes how to decide which hypotheses to reject, accounting for the multitude of tests. The classical approach is to control the familywise error rate (FWE), that is, the probability of one or more false rejections. But when the number of hypotheses under consideration is large, control of the FWE can become too demanding. As a result, the number of false hypotheses rejected may be small or even zero. This suggests replacing control of the FWE by a more liberal measure. To this end, we review a number of proposals from the statistical literature. We briefly discuss how these procedures apply to the general problem of model selection. A simulation study and two empirical applications illustrate the methods. 
Keywords:  Data snooping, false discovery proportion, false discovery rate, generalized familywise error rate, model selection, multiple testing, stepwise methods 
JEL:  C12 C14 C52 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:259&r=ecm 
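The classical FWE-controlling baseline that the more liberal procedures in this paper relax can be illustrated with the Holm step-down method. The sketch below is a generic textbook implementation in Python, not code from the paper.

```python
def holm(pvalues, alpha=0.05):
    """Holm step-down procedure: controls the probability of one or more
    false rejections (the familywise error rate) at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])  # indices by ascending p-value
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvalues[i] <= alpha / (m - rank):            # step-down critical value
            reject[i] = True
        else:
            break                                       # stop at first non-rejection
    return reject

print(holm([0.001, 0.015, 0.04, 0.3]))
```

With many hypotheses the denominator m - rank makes early thresholds very small, which is exactly the stringency that motivates the generalized error rates (k-FWE, FDP, FDR) reviewed in the paper.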
By:  Skogsvik, Kenth (Dept. of Business Administration, Stockholm School of Economics) 
Abstract:  Probabilistic business failure prediction models are commonly estimated from non-random samples of companies. The proportion of failure companies in such samples is often much larger than the proportion of failure companies in most real-world decision contexts. This so-called “choice-based sample bias” implies that calculated failure probabilities will be (more or less) biased. The purpose of the paper is to analyse this bias and its consequences for standard applications of probabilistic failure prediction models (for example probit/logit analysis) and in particular to investigate whether the bias can be eliminated without having to re-estimate the underlying statistical model. It is shown that there is a straightforward linkage between sample-based probabilities of failure and the corresponding population-based probabilities. Knowing this linkage, sample-based probabilities can be adjusted for the “choice-based sample bias”, provided that sufficiently large samples of randomly selected failure companies and randomly selected survival companies have been used in the estimation of the underlying statistical model. Empirical observations in previous research are in line with the theoretical results of the paper. 
Keywords:  Business Failure Prediction; Choice-Based Sample Bias; Financial Analysis; Probabilistic Prediction Model; Probit/Logit Analysis 
Date:  2005–12–01 
URL:  http://d.repec.org/n?u=RePEc:hhb:hastba:2005_013&r=ecm 
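One well-known linkage of the kind the abstract describes is the logit intercept ("prior") correction for choice-based samples: shifting the log-odds by the log ratio of sample to population failure shares. Whether this matches the paper's exact adjustment is an assumption on my part, and the function name below is hypothetical.

```python
import math

def adjust_probability(p_sample, share_sample, share_pop):
    """Map a sample-based failure probability to a population-based one by
    shifting the log-odds by log[(share_sample/(1-share_sample)) *
    ((1-share_pop)/share_pop)] (standard prior correction for logit models)."""
    offset = math.log((share_sample / (1 - share_sample)) *
                      ((1 - share_pop) / share_pop))
    logit = math.log(p_sample / (1 - p_sample)) - offset
    return 1 / (1 + math.exp(-logit))

# A 50/50 estimation sample, but only 2% failures in the population:
print(adjust_probability(0.5, 0.5, 0.02))
```

When the sample and population shares coincide, the offset is zero and the probability is returned unchanged, consistent with the abstract's point that no re-estimation of the model is needed, only a rescaling of its output.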
By:  Niels Haldrup; Andreu Sansó (Department of Economics, University of Aarhus, Denmark) 
Abstract:  The role of additive outliers in integrated time series has attracted some attention recently, and research shows that outlier detection should be an integral part of unit root testing procedures. Recently, Vogelsang (1999) suggested an iterative procedure for the detection of multiple additive outliers in integrated time series. However, the procedure appears to suffer from serious size distortions towards the finding of too many outliers, as has been shown by Perron and Rodriguez (2003). In this note we prove the inconsistency of the test in each step of the iterative procedure, and hence alternative routes need to be taken to detect outliers in nonstationary time series. 
Keywords:  Additive outliers, outlier detection, integrated processes 
JEL:  C12 C2 C22 
Date:  2006–01–16 
URL:  http://d.repec.org/n?u=RePEc:aah:aarhec:200601&r=ecm 
By:  Offer Lieberman (Technion-Israel Institute of Technology); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York) 
Abstract:  There is an emerging consensus in empirical finance that realized volatility series typically display long range dependence with a memory parameter (d) around 0.4 (Andersen et al. (2001), Martens et al. (2004)). The present paper provides some analytical explanations for this evidence and shows how recent results in Lieberman and Phillips (2004a, 2004b) can be used to refine statistical inference about d with little computational effort. In contrast to the standard asymptotic normal theory now used in the literature, which has an O(n^{-1/2}) error rate on error rejection probabilities, the asymptotic approximation used here has an error rate of o(n^{-1/2}). The new formula is independent of unknown parameters, is simple to calculate and highly user-friendly. The method is applied to test whether the reported long memory parameter estimates of Andersen et al. (2001) and Martens et al. (2004) differ significantly from the lower boundary (d = 0.5) of nonstationary long memory. 
Keywords:  ARFIMA; Edgeworth expansion; Fourier integral expansion; Fractional differencing; Improved inference; Long memory; Pivotal statistic; Realized volatility; Singularity 
JEL:  C13 C22 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1549&r=ecm 
By:  Gilbert Colletaz (LEO - Laboratoire d'Économie d'Orléans - http://www.univorleans.fr/DEG/LEO - CNRS : FRE2783 - Université d'Orléans) 
Abstract:  Using Chow and Denning's arguments applied to the individual hypothesis test methodology of Wright (2000), I propose a multiple variance-ratio test based on ranks to investigate the hypothesis of no serial correlation. This joint rank test can be exact if the data are i.i.d. Some Monte Carlo simulations show that its size distortions are small for observations obeying the martingale hypothesis while not being an i.i.d. process. Also, regarding size and power, it compares favorably with other popular tests. 
Keywords:  Random walk hypothesis; nonparametric test; variance-ratio test 
Date:  2006–01–13 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00007801_v1&r=ecm 
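A building block of rank-based variance-ratio tests in the spirit of Wright (2000) can be sketched as follows. The standardization of the ranks follows Wright's construction, but the joint Chow-Denning step over several horizons is omitted; this is my own Python illustration, not the author's code.

```python
import numpy as np

def rank_variance_ratio(y, k):
    """Variance ratio at horizon k computed on standardized ranks: near 1 for
    serially uncorrelated data, above 1 for positively dependent data."""
    T = len(y)
    ranks = np.argsort(np.argsort(y)) + 1                      # ranks 1..T (no ties)
    r = (ranks - (T + 1) / 2) / np.sqrt((T - 1) * (T + 1) / 12)  # mean 0, variance 1
    windows = np.array([r[t - k + 1:t + 1].sum() for t in range(k - 1, T)])
    return float((windows ** 2).mean() / k / (r ** 2).mean())

rng = np.random.default_rng(1)
y = rng.standard_normal(400)            # i.i.d. data: ratio should be near 1
print(rank_variance_ratio(y, k=4))
```

Because the test statistic depends on the data only through ranks, its null distribution under i.i.d. sampling is the same for any continuous marginal distribution, which is what makes an exact permutation-based version possible.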
By:  Jeroen Hinloopen (Universiteit van Amsterdam); Charles van Marrewijk (Erasmus Universiteit Rotterdam) 
Abstract:  The information contained in P-P plots is transformed into a single number. The resulting Harmonic Mass (HM) index is distribution free and its sample counterpart is shown to be consistent. For a wide class of CDFs the exact analytical expression of the distribution of the sample HM index is derived, assuming the two underlying samples to be drawn from the same distribution. The robustness of the concomitant test statistic is assessed, and four different methods are discussed for applying the HM test in case of asymmetric samples. 
Keywords:  Distribution; P-P plot; test statistic; critical percentile values; robustness. 
JEL:  C12 C14 
Date:  2005–12–22 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050122&r=ecm 
By:  Siem Jan Koopman (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); Marius Ooms (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); M. Angeles Carnero (Dpt. Fundamentos del Analisis Economico, University of Alicante) 
Abstract:  Novel periodic extensions of dynamic long memory regression models with autoregressive conditional heteroskedastic errors are considered for the analysis of daily electricity spot prices. The parameters of the model with mean and variance specifications are estimated simultaneously by the method of approximate maximum likelihood. The methods are implemented for time series of 1,200 to 4,400 daily price observations. Apart from persistence, heteroskedasticity and extreme observations in prices, a novel empirical finding is the importance of day-of-the-week periodicity in the autocovariance function of electricity spot prices. In particular, daily log prices from the Nord Pool power exchange of Norway are modeled effectively by our framework, which is also extended with explanatory variables. For the daily log prices of three European emerging electricity markets (EEX in Germany, Powernext in France, APX in The Netherlands), which are less persistent, periodicity is also highly significant. 
Keywords:  Autoregressive fractionally integrated moving average model; Generalised autoregressive conditional heteroskedasticity model; Long memory process; Periodic autoregressive model; Volatility 
JEL:  C22 C51 G10 
Date:  2005–10–12 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050091&r=ecm 
By:  Carmen Broto; Esther Ruiz 
Abstract:  In this paper we consider a model with stochastic trend, seasonal and transitory components, with the disturbances of the trend and transitory components specified as QGARCH models. We propose to use the differences between the autocorrelations of squares and the squared autocorrelations of the auxiliary residuals to identify which component is heteroscedastic. The finite sample performance of these differences is analysed by means of Monte Carlo experiments. We show that conditional heteroscedasticity truly present in the data can be rejected when looking at the correlations of observations or of standardized residuals, while the autocorrelations of auxiliary residuals allow us to detect adequately whether there is heteroscedasticity and which component is heteroscedastic. We also analyse the finite sample behaviour of a QML estimator of the parameters of the model. Finally, we use auxiliary residuals to detect conditional heteroscedasticity in monthly series of inflation of eight OECD countries. We conclude that, for most of these series, the conditional heteroscedasticity affects the transitory component while the long-run and seasonal components are homoscedastic. Furthermore, in the countries where there is a significant relationship between the volatility and the level of inflation, this relation is positive, supporting the Friedman hypothesis. 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws060402&r=ecm 
By:  Rob J Hyndman; Muhammad Akram 
Abstract:  This paper discusses the instability of eleven nonlinear state space models that underlie exponential smoothing. Hyndman et al. (2002) proposed a framework of 24 state space models for exponential smoothing, including the well-known simple exponential smoothing, Holt's linear and Holt-Winters' additive and multiplicative methods. This was extended to 30 models with Taylor's (2003) damped multiplicative methods. We show that eleven of these 30 models are unstable, having infinite forecast variances. The eleven models are those with additive errors and either multiplicative trend or multiplicative seasonality, as well as the models with multiplicative errors, multiplicative trend and additive seasonality. The multiplicative Holt-Winters' model with additive errors is among the eleven unstable models. We conclude that: (1) a model with a multiplicative trend or a multiplicative seasonal component should also have a multiplicative error; and (2) a multiplicative trend should not be mixed with additive seasonality. 
Keywords:  exponential smoothing, forecast variance, nonlinear models, prediction intervals, stability, state space models. 
JEL:  C53 C22 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20063&r=ecm 
By:  Jan F. Kiviet (Faculty of Economics and Econometrics, Universiteit van Amsterdam) 
Abstract:  An attempt is made to set rules for a fair and fruitful competition between alternative inference methods based on their performance in simulation experiments. This leads to a list of eight methodological aspirations. Against their background we criticize aspects of many simulation studies that have been used in the past to compare competing estimators for dynamic panel data models. To illustrate particular pitfalls some further Monte Carlo results are produced, obtained from a simulation design inspired by an analysis of the (non)invariance properties of estimators and occasionally by available higher-order asymptotic results. We focus on the very specific case of alternative implementations of one- and two-step generalized method of moments (GMM) estimators in homoskedastic stable zero-mean panel AR(1) models with random individual specific effects. We compare a few implementations, including GMM system estimators with alternative weight matrices, and illustrate that an impartial evaluation of the outcome of a Monte Carlo based contest requires evidence, both analytical and empirical, on the completeness, orthogonality and relevance of the simulation design. 
Keywords:  finite sample behavior; generalized method of moments; initial conditions; Monte Carlo methodology; orthogonal parametrizations 
JEL:  C13 C15 C23 
Date:  2005–12–08 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050112&r=ecm 
By:  Luis Alberiko Gil-Alana (Facultad de Ciencias Económicas y Empresariales) 
Abstract:  In this article we investigate whether the presence of structural breaks affects inference on the order of integration in univariate time series. For this purpose, we make use of a version of the tests of Robinson (1994) which allows us to test unit and fractional roots in the presence of deterministic changes. Several Monte Carlo experiments conducted across the paper show that the tests perform relatively well in the presence of both mean and slope breaks. The tests are applied to annual data on German real GDP, the results showing that the series may be well described in terms of a fractional model with a structural slope break due to World War II. 
JEL:  C15 C22 
URL:  http://d.repec.org/n?u=RePEc:una:unccee:wp2005&r=ecm 
By:  Inkmann, Joachim (Tilburg University, Center for Economic Research) 
Abstract:  The inverse probability weighted Generalised Empirical Likelihood (IPW-GEL) estimator is proposed for the estimation of the parameters of a vector of possibly nonlinear unconditional moment functions in the presence of conditionally independent sample selection or attrition. The estimator is applied to the estimation of the firm size elasticity of product and process R&D expenditures using a panel of German manufacturing firms, which is affected by attrition and selection into R&D activities. IPW-GEL and IPW-GMM estimators are compared in this application, as well as identification assumptions based on independent and conditionally independent sample selection. The results are similar in all specifications. 
Keywords:  generalised empirical likelihood; inverse probability weighting; propensity score; conditional independence; missing at random; selection; attrition; research and development 
JEL:  C13 C33 O31 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2005131&r=ecm 
By:  Siem Jan Koopman (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam); Kai Ming Lee (Faculty of Economics and Business Administration, Vrije Universiteit Amsterdam) 
Abstract:  To gain insights in the current status of the economy, macroeconomic time series are often decomposed into trend, cycle and irregular components. This can be done by nonparametric band-pass filtering methods in the frequency domain or by model-based decompositions based on autoregressive moving average models or unobserved components time series models. In this paper we consider the latter and extend the model to allow for asymmetric cycles. In theoretical and empirical studies, the asymmetry of cyclical behavior is often discussed and considered for series such as unemployment and gross domestic product (GDP). The number of attempts to model asymmetric cycles is limited and it is regarded as intricate and nonstandard. In this paper we show that a limited modification of the standard cycle component leads to a flexible device for asymmetric cycles. The presence of asymmetry can be tested using classical likelihood based test statistics. The trend-cycle decomposition model is applied to three key U.S. macroeconomic time series. It is found that cyclical asymmetry is a salient feature of the U.S. economy. 
Keywords:  Asymmetric business cycles; Unobserved Components; Nonlinear state space models; Monte Carlo likelihood; Importance sampling 
JEL:  C13 C22 E32 
Date:  2005–08–15 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050081&r=ecm 
By:  David Afshartous; Michael Wolf 
Abstract:  Multilevel or mixed effects models are commonly applied to hierarchical data; for example, see Goldstein (2003), Raudenbush and Bryk (2002), and Laird and Ware (1982). Although there exist many outputs from such an analysis, the level-2 residuals, otherwise known as random effects, are often of both substantive and diagnostic interest. Substantively, they are frequently used for institutional comparisons or rankings. Diagnostically, they are used to assess the model assumptions at the group level. Current inference on the level-2 residuals, however, typically does not account for data snooping, that is, for the harmful effects of carrying out a multitude of hypothesis tests at the same time. We provide a very general framework that encompasses both of the following inference problems: (1) inference on the 'absolute' level-2 residuals to determine which are significantly different from zero, and (2) inference on any pre-specified number of pairwise comparisons. Thus, the user has the choice of testing the comparisons of interest. As our methods are flexible with respect to the estimation method invoked, the user may choose the desired estimation method accordingly. We demonstrate the methods with the London Education Authority data used by Rasbash et al. (2004), the Wafer data used by Pinheiro and Bates (2000), and the NELS data used by Afshartous and de Leeuw (2004). 
Keywords:  Data snooping, hierarchical linear models, hypothesis testing, pairwise comparisons, random effects, rankings 
JEL:  C12 C14 C52 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:260&r=ecm 
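The paper's framework for snooping-robust inference is more general than classical familywise corrections, but the underlying concern, that many simultaneous tests inflate the chance of false discoveries, can be illustrated with the Holm step-down adjustment (a textbook method, not the authors'). The p-values below are invented.

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values, controlling the familywise error rate.

    Sort the p-values, multiply the i-th smallest by (m - i), enforce
    monotonicity of the adjusted values, and cap at 1.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adj[i] = min(1.0, running)
    return adj

# three hypothetical pairwise comparisons of level-2 residuals
adj = holm_adjust([0.01, 0.04, 0.03])
```

With raw p-values (0.01, 0.04, 0.03), the Holm-adjusted values are (0.03, 0.06, 0.06): only the first comparison survives at the 5% familywise level, even though all three look significant individually.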
By:  Marc Wildi (University of Technical Sciences, Zurich); Bernd Schips (Swiss Institute for Business Cycle Research (KOF), Swiss Federal Institute of Technology Zurich (ETH)) 
Abstract:  Estimation of signals at the current boundary of time series is an important task in many practical applications. In order to apply the symmetric filter at current time, model-based approaches typically rely on forecasts generated from a time series model in order to extend (stretch) the time series into the future. In this paper we analyze the performance of concurrent filters based on TRAMO and X-12-ARIMA for business survey data and compare the results to a new efficient estimation method which does not rely on forecasts. It is shown that both model-based procedures are subject to heavy model misspecification related to false unit root identification at frequency zero and at seasonal frequencies. Our results strongly suggest that the traditional model-based approach should not be used for problems involving multi-step-ahead forecasts, such as the determination of concurrent filters. 
Keywords:  Signal extraction, concurrent filter, unit root, amplitude and time delay. 
Date:  2004–12 
URL:  http://d.repec.org/n?u=RePEc:kof:wpskof:0496&r=ecm 
By:  Arup Bose 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:cin:ucecwp:200603&r=ecm 
By:  Robert W. Fairlie (University of California, Santa Cruz, National Poverty Center and IZA Bonn) 
Abstract:  The Blinder-Oaxaca decomposition technique is widely used to identify and quantify the separate contributions of group differences in measurable characteristics, such as education, experience, marital status, and geographical differences, to racial and gender gaps in outcomes. The technique cannot be used directly, however, if the outcome is binary and the coefficients are from a logit or probit model. I describe a relatively simple method of performing a decomposition that uses estimates from a logit or probit model. Expanding on the original application of the technique in Fairlie (1999), I provide a more thorough discussion of how to apply the technique, an analysis of the sensitivity of the decomposition estimates to different parameters, and the calculation of standard errors. I also compare the estimates to Blinder-Oaxaca decomposition estimates and discuss an example of when the Blinder-Oaxaca technique may be problematic. 
Keywords:  decomposition, logit, probit, Blinder-Oaxaca decomposition, race, gender 
JEL:  C6 J15 J16 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp1917&r=ecm 
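The core idea for binary outcomes is to replace the linear predictions of Blinder-Oaxaca with mean predicted probabilities from the logit. A minimal numeric sketch follows; the simulated data and hand-rolled Newton-Raphson logit are invented for illustration, and the paper's full procedure additionally handles coefficient differences, ordering sensitivity, and standard errors.

```python
import numpy as np

def fit_logit(X, y, iters=50):
    """Logistic regression by Newton-Raphson (no regularisation)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        H = X.T @ (X * (p * (1 - p))[:, None])
        beta += np.linalg.solve(H, grad)
    return beta

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)               # 1 = group A, 0 = group B
educ = rng.normal(12 + 2 * group, 2, n)     # groups differ only in education
X = np.column_stack([np.ones(n), educ])
logit_p = -6 + 0.5 * educ                   # same coefficients in both groups
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(float)

beta = fit_logit(X, y)                      # pooled logit coefficients

# mean predicted probability evaluated at each group's characteristics
pa = np.mean(1 / (1 + np.exp(-(X[group == 1] @ beta))))
pb = np.mean(1 / (1 + np.exp(-(X[group == 0] @ beta))))

gap = y[group == 1].mean() - y[group == 0].mean()
explained = pa - pb     # share of the gap attributable to characteristics
```

Because the simulated groups share the same coefficients, essentially all of the outcome gap is "explained" by the education difference, which is exactly what the decomposition should report.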
By:  Haeusler, Erich; Segers, Johan (Tilburg University, Center for Economic Research) 
Abstract:  We establish Edgeworth expansions for the distribution function of the centered and normalized Hill estimator for the reciprocal of the index of regular variation of the tail of a distribution function. The expansions are used to derive expansions for coverage probabilities of confidence intervals for the tail index based on the Hill estimator. 
Keywords:  asymptotic normality; confidence intervals; Edgeworth expansions; extreme value index; Hill estimator; regular variation; tail index; 62G20; 62G32 
JEL:  C13 C14 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:2005129&r=ecm 
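The Hill estimator the expansions are built around averages log-spacings of the k largest order statistics above the (k+1)-th largest. A minimal sketch on simulated Pareto data (the sample size and choice of k are arbitrary illustrations):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index gamma = 1/alpha,
    using the k largest order statistics relative to X_(n-k)."""
    xs = np.sort(x)
    top = xs[-k:]                         # k largest observations
    return np.mean(np.log(top)) - np.log(xs[-k - 1])

rng = np.random.default_rng(2)
u = rng.random(200_000)
x = (1 - u) ** (-0.5)     # exact Pareto tail with alpha = 2, so gamma = 0.5
gamma_hat = hill_estimator(x, k=2_000)
```

For an exact Pareto tail the estimator is unbiased and gamma_hat is close to 0.5; the paper's Edgeworth expansions quantify how far the finite-sample distribution of this statistic, and the coverage of confidence intervals built on it, deviate from the normal limit.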
By:  Ingrid Lo 
Abstract:  The author compares the performance of three Gaussian approximation methods, due to Nowman (1997), Shoji and Ozaki (1998), and Yu and Phillips (2001), in estimating a model of the nonlinear continuous-time short-term interest rate. She finds that the performance of Nowman's method is similar to that of Shoji and Ozaki's method, whereas the window width used in the Yu and Phillips method has a critical influence on parameter estimates. When a small window width is used, the Yu and Phillips method does not outperform the other two methods. Choosing a suitable window width can reduce estimation bias quite significantly, whereas too large a window width can worsen estimation bias and the fit of the model. An empirical study is implemented using Canadian and U.K. one-month interest rate data. 
Keywords:  Interest rates; Econometric and statistical methods 
JEL:  C1 E4 
Date:  2005 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0545&r=ecm 
By:  Riccardo LUCCHETTI (Universita' Politecnica delle Marche, Dipartimento di Economia) 
Abstract:  The issue of identification of covariance structures, which arises in a number of different contexts, has so far been linked to conditions on the true parameters to be estimated. In this paper, this limitation is removed. As done by Johansen (1995) in the context of linear models, the present paper provides necessary and sufficient conditions for the identification of a covariance structure that depend only on the constraints, and can therefore be checked independently of estimated parameters. A sufficient condition is developed, which depends only on the structure of the constraints. It is shown that this structure condition, if coupled with the familiar order condition, provides a sufficient condition for identification. In practice, since the structure condition holds if and only if a certain matrix, constructed from the constraint matrices, is invertible, automatic software checking for identification is feasible even for large-scale systems. 
JEL:  C13 C30 
Date:  2004–07 
URL:  http://d.repec.org/n?u=RePEc:anc:wpaper:214&r=ecm 
By:  Kin Lam (Department of Finance & Decision Sciences, Hong Kong Baptist University); May Chun Mei Wong (Dental Public Health, The University of Hong Kong); WingKeung Wong (Department of Economics, The National University of Singapore) 
Abstract:  We develop some properties of the autocorrelation of the k-period returns for the general mean reversion (GMR) process, in which the stationary component is not restricted to the AR(1) process but takes the form of a general ARMA process. We then derive some properties of the GMR process and three new nonparametric tests comparing the relative variability of returns over different horizons to validate the GMR process as an alternative to the random walk. We further examine the asymptotic properties of these tests, which can then be applied to distinguish random walk models from GMR processes. 
Keywords:  mean reversion, variance ratio test, random walk, stock price, stock return 
JEL:  G12 G14 
URL:  http://d.repec.org/n?u=RePEc:nus:nusewp:wp0514&r=ecm 
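Variance-ratio tests of this kind compare the variance of k-period returns to k times the variance of one-period returns; under a random walk the ratio is 1, while mean reversion pushes it below 1 at long horizons. A toy sketch of the basic statistic (not the paper's three new tests; the simulated returns and horizon are illustrative):

```python
import numpy as np

def variance_ratio(r, k):
    """VR(k) = Var(k-period return) / (k * Var(1-period return)).
    Approximately 1 under a random walk with iid increments."""
    rk = np.convolve(r, np.ones(k), mode="valid")   # overlapping k-period returns
    return rk.var() / (k * r.var())

rng = np.random.default_rng(3)
r = rng.normal(0, 1, 100_000)    # iid one-period returns (random walk in levels)
vr = variance_ratio(r, 5)
```

For these iid returns vr is close to 1; applied to returns generated by a mean-reverting (GMR-type) process, the same statistic would fall noticeably below 1 as k grows.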
By:  Nicholas Z. Muller (School of Forestry and Environmental Studies, Yale University); Peter C. B. Phillips (Cowles Foundation, Yale University; University of Auckland & University of York) 
Abstract:  This paper demonstrates how parsimonious models of sinusoidal functions can be used to fit spatially variant time series in which there is considerable variation of a periodic type. A typical shortcoming of such tools relates to the difficulty in capturing idiosyncratic variation in periodic models. The strategy developed here addresses this deficiency. While previous work has sought to overcome the shortcoming by augmenting sinusoids with other techniques, the present approach employs stationspecific sinusoids to supplement a common regional component, which succeeds in capturing local idiosyncratic behavior in a parsimonious manner. The experiments conducted herein reveal that a semiparametric approach enables such models to fit spatially varying time series with periodic behavior in a remarkably tight fashion. The methods are applied to a panel data set consisting of hourly air pollution measurements. The augmented sinusoidal models produce an excellent fit to these data at three different levels of spatial detail. 
Keywords:  Air Pollution, Idiosyncratic component, Regional variation, Semiparametric model, Sinusoidal function, Spatial-temporal data, Tropospheric Ozone 
JEL:  C22 C23 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1548&r=ecm 
By:  Satoru Kanoh 
Abstract:  The duration dependence of stock market cycles has been investigated using the Markov-switching model, where the market conditions are unobservable. In conventional modeling, restrictions are imposed that the transition probability is a monotonic function of duration and that the duration is truncated at a certain value. This paper proposes a model that is free from these arbitrary restrictions and nests the conventional models. In the model, the parameters that characterize the transition probability are formulated in the state space. Empirical results for several stock markets show that the duration structures differ greatly across countries. They are not necessarily monotonic functions of duration and, therefore, cannot be described by the conventional models. 
Keywords:  Duration, World stock markets, Markov-switching model, Nonparametric Model, Gibbs sampling, Marginal Likelihood 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:hst:hstdps:d05127&r=ecm 
By:  Bernd Heidergott (Faculty of Economics, Vrije Universiteit Amsterdam); Arie Hordijk (Leiden University, Mathematical Institute); Miranda van Uitert (Faculty of Economics, Vrije Universiteit Amsterdam) 
Abstract:  This paper provides series expansions of the stationary distribution of a finite Markov chain, which lead to an efficient numerical algorithm for computing that distribution. Numerical examples are given to illustrate the performance of the algorithm. 
Keywords:  finite-state Markov chain; (Taylor) series expansion; measure-valued derivatives; coupled processors 
JEL:  C63 C44 
Date:  2005–09–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20050086&r=ecm 
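As a point of reference for what the series-expansion algorithm computes (this sketch is the textbook direct linear-algebra route, not the paper's method), the stationary distribution of a finite chain solves pi P = pi with pi summing to one; the two-state transition matrix below is invented:

```python
import numpy as np

def stationary(P):
    """Stationary distribution of a finite Markov chain with transition
    matrix P: solve pi P = pi together with sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])   # stack normalisation row
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
pi = stationary(P)
```

Here pi = (5/6, 1/6). A direct solve like this costs O(n^3) in the number of states, which is the kind of baseline a series expansion can improve on for structured or perturbed chains.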
By:  Thomas Rupp (Institut für Volkswirtschaftslehre (Department of Economics), Technische Universität Darmstadt (Darmstadt University of Technology)) 
Abstract:  We study the applicability of the pattern recognition methodology "rough set data analysis" (RSDA) in the field of meta-analysis. We give a summary of the mathematical and statistical background and then proceed to an application of the theory to a meta-analysis of empirical studies dealing with the deterrent effect introduced by Becker and Ehrlich. Results are compared with a previously devised meta-regression analysis. We find that RSDA can be used to discover information overlooked by other methods, to preprocess the data for further study, and to strengthen results previously found by other methods. 
Keywords:  Rough Data Set, RSDA, Meta Analysis, Data Mining, Pattern Recognition, Deterrence, Criminometrics 
JEL:  K14 K42 C49 
Date:  2005–11 
URL:  http://d.repec.org/n?u=RePEc:tud:ddpiec:157&r=ecm 
By:  Michael Wolf 
Abstract:  A well-known pitfall of Markowitz (1952) portfolio optimization is that the sample covariance matrix, which is a critical input, is estimated with large error when there are many assets to choose from. If unchecked, this phenomenon skews the optimizer towards extreme weights that tend to perform poorly in the real world. One solution that has been proposed is to shrink the sample covariance matrix by pulling its most extreme elements towards more moderate values. An alternative solution is the resampled efficiency suggested by Michaud (1998). This paper compares shrinkage estimation to resampled efficiency. In addition, it studies whether the two techniques can be combined to achieve a further improvement. All this is done in the context of an active portfolio manager who aims to outperform a benchmark index and who is evaluated by his realized information ratio. 
Keywords:  Benchmarked Managers, shrinkage 
JEL:  C91 
Date:  2006–01 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:263&r=ecm 
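A minimal sketch of the shrinkage idea: pull the sample covariance matrix toward a simple structured target. The diagonal target and the fixed intensity delta below are arbitrary illustrations; in practice (e.g. Ledoit-Wolf-style estimators) the target and intensity are chosen and estimated more carefully.

```python
import numpy as np

def shrink_cov(X, delta):
    """Linear shrinkage of the sample covariance toward a diagonal target:
    Sigma = (1 - delta) * S + delta * F, with F = diag(S)."""
    S = np.cov(X, rowvar=False)
    F = np.diag(np.diag(S))        # target: same variances, zero correlations
    return (1 - delta) * S + delta * F

rng = np.random.default_rng(4)
X = rng.normal(size=(60, 100))     # more assets (100) than observations (60)
S = np.cov(X, rowvar=False)        # singular: rank at most 59
Sigma = shrink_cov(X, delta=0.5)   # shrunk matrix is positive definite
```

With fewer observations than assets the raw sample covariance is singular and unusable in an optimizer, while any positive shrinkage toward the positive-definite diagonal target restores invertibility and tempers the extreme elements the abstract describes.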
By:  Susan Athey; Guido Imbens 
Date:  2006–01–13 
URL:  http://d.repec.org/n?u=RePEc:cla:levrem:122247000000001040&r=ecm 
By:  Laura Blow (Institute for Fiscal Studies); Martin Browning (CAM, University of Copenhagen); Ian Crawford (University of Surrey and cemmap, Institute for Fiscal Studies) 
Abstract:  Characteristics models have been found to be useful in many areas of economics. However, their empirical implementation tends to rely heavily on functional form assumptions. In this paper we develop a revealed preference approach to characteristics models. We derive the necessary and sufficient empirical conditions under which data on the market behaviour of heterogeneous, price-taking consumers are nonparametrically consistent with the consumer characteristics model. Where these conditions hold, we show how information may be recovered on individual consumers' marginal valuations of product attributes. In some cases marginal valuations are point identified, and in other cases we can only recover bounds. Where the conditions fail, we highlight the role which the introduction of unobserved product attributes can play in rationalising the data. We implement these ideas using consumer panel data on the Danish milk market. 
Keywords:  Product characteristics, revealed preference 
JEL:  C43 D11 
Date:  2005–04 
URL:  http://d.repec.org/n?u=RePEc:sur:surrec:0305&r=ecm 