
New Economics Papers on Econometrics
By:  Drost, Feike C.; Akker, Ramon van den; Werker, Bas J.M. (Tilburg University, Center for Economic Research)
Abstract:  Integer-valued autoregressive (INAR) processes have been introduced to model nonnegative integer-valued phenomena that evolve in time. The distribution of an INAR(p) process is determined by two parameters: a vector of survival probabilities and a probability distribution on the nonnegative integers, called an immigration or innovation distribution. This paper provides an efficient estimator of the parameters and, in particular, shows that the INAR(p) model has the Local Asymptotic Normality property.
Keywords:  count data; integer-valued time series; information loss structure
JEL:  C12 C13 C19 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200645&r=ecm 
By:  Lorenzo Cappellari; Stephen P. Jenkins 
Abstract:  We discuss methods for calculating multivariate normal probabilities by simulation and two new Stata programs for this purpose: mvdraws for deriving draws from the standard uniform density using either Halton or pseudorandom sequences, and an egen function mvnp() for calculating the probabilities themselves. Several illustrations show how the programs may be used for maximum simulated likelihood estimation. 
Keywords:  Simulation estimation, maximum simulated likelihood, multivariate probit, Halton sequences, pseudorandom sequences, multivariate normal, GHK simulator 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp584&r=ecm 
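The Halton draws mentioned in the abstract are easy to illustrate. Below is a minimal pure-Python sketch of the radical-inverse construction behind a Halton sequence — an illustration of the idea, not the mvdraws implementation itself:

```python
def halton(index, base):
    """Radical-inverse (Halton) value for a 1-based index and prime base."""
    f, r = 1.0, 0.0
    i = index
    while i > 0:
        f /= base          # move one digit deeper in the base expansion
        r += f * (i % base)  # add the reflected digit
        i //= base
    return r

# First elements in base 2: 1/2, 1/4, 3/4, 1/8, ...
draws = [halton(i, 2) for i in range(1, 5)]
```

In practice one uses a different prime base per dimension and often discards an initial segment, since low-index Halton points in different bases are highly correlated.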
By:  Giordani, Paolo (Research Department, Central Bank of Sweden); Kohn, Robert (School of Economics, School of Banking and Finance) 
Abstract:  Time series subject to parameter shifts of random magnitude and timing are commonly modeled with a change-point approach using Chib's (1998) algorithm to draw the break dates. We outline some advantages of an alternative approach in which breaks come through mixture distributions in state innovations, and for which the sampler of Gerlach, Carter and Kohn (2000) allows reliable and efficient inference. We show how this approach can be used to (i) model shifts in variance that occur independently of shifts in other parameters, and (ii) draw the break dates efficiently in change-point and regime-switching models with either Markov or non-Markov transition probabilities. We extend the proofs given in Carter and Kohn (1994) and in Gerlach, Carter and Kohn (2000) to state-space models with system matrices which are functions of lags of the dependent variables, and we further improve the algorithms in Gerlach, Carter and Kohn by introducing to the time series literature the concept of adaptive Metropolis-Hastings sampling for discrete latent variable models. We develop an easily implemented adaptive algorithm that promises to sizably reduce computing time in a variety of problems including mixture innovation, change-point, regime-switching, and outlier detection.
Keywords:  Structural breaks; Parameter instability; Change-point; State-space; Mixtures; Discrete latent variables; Adaptive Metropolis-Hastings
JEL:  C11 C15 C22 
Date:  2006–05–01 
URL:  http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0196&r=ecm 
By:  Azhong Ye; Rob J Hyndman; Zinai Li 
Abstract:  We present a local linear estimator with variable bandwidth for multivariate nonparametric regression. We prove its consistency and asymptotic normality in the interior of the observed data and obtain its rates of convergence. This result is used to obtain practical direct plug-in bandwidth selectors for heteroscedastic regression in one and two dimensions. We show that the local linear estimator with variable bandwidth has better goodness-of-fit properties than the local linear estimator with constant bandwidth in the presence of heteroscedasticity.
Keywords:  Heteroscedasticity; kernel smoothing; local linear regression; plug-in bandwidth; variable bandwidth.
JEL:  C12 C15 C52 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20068&r=ecm 
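As a concrete illustration of the estimator class discussed above, here is a minimal one-dimensional sketch of a local linear fit in which each observation carries its own bandwidth. The function name and the Gaussian kernel are choices made for this example, not the authors' implementation:

```python
import math

def local_linear(x0, xs, ys, bandwidths):
    """Local linear estimate at x0, with a (possibly different) Gaussian
    kernel bandwidth h_i attached to each observation."""
    # kernel weight of each observation at the evaluation point x0
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) / h
         for xi, h in zip(xs, bandwidths)]
    d = [xi - x0 for xi in xs]
    s1 = sum(wi * di for wi, di in zip(w, d))
    s2 = sum(wi * di * di for wi, di in zip(w, d))
    # equivalent-kernel weights of the weighted least squares line at x0
    wstar = [wi * (s2 - di * s1) for wi, di in zip(w, d)]
    return sum(ws * yi for ws, yi in zip(wstar, ys)) / sum(wstar)
```

A useful sanity check is that a local linear fit reproduces an exactly linear relationship regardless of the bandwidths; under heteroscedasticity, the idea is then to shrink h_i where the noise variance is small and widen it where it is large.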
By:  Matteo Pelagatti 
Abstract:  Duration-dependent Markov-switching VAR (DDMS-VAR) models are time series models whose data generating process consists of a mixture of two VAR processes. The switching between the two VAR processes is governed by a two-state Markov chain with transition probabilities that depend on how long the chain has been in a state. In the present paper we analyze the second-order properties of such models and propose a Markov chain Monte Carlo algorithm to carry out Bayesian inference on the model's unknowns. Furthermore, freeware software written by the author for the analysis of time series by means of DDMS-VAR models is illustrated. The methodology and the software are applied to the analysis of the U.S. business cycle.
Keywords:  Markov-switching, business cycle, Gibbs sampler, duration dependence, vector autoregression
JEL:  C11 C15 C32 C41 E32 
Date:  2003–08 
URL:  http://d.repec.org/n?u=RePEc:mis:wpaper:20061101&r=ecm 
By:  Drost, Feike C.; Akker, Ramon van den; Werker, Bas J.M. (Tilburg University, Center for Economic Research)
Abstract:  This paper considers integer-valued autoregressive processes where the autoregression parameter is close to unity. We consider the asymptotics of this 'near unit root' situation. The local asymptotic structure of the likelihood ratios of the model is obtained, showing that the limit experiment is Poissonian. This Poisson limit experiment is used to construct efficient estimators and tests.
Keywords:  integer-valued time series; Poisson limit experiment; local-to-unity asymptotics
JEL:  C12 C13 C19 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200644&r=ecm 
By:  Alfonso Miranda (Department of Economics, Keele); Sophia Rabe-Hesketh (Graduate School of Education)
Abstract:  Studying behaviour in economics, sociology, and statistics often involves fitting models in which the response variable depends on a dummy variable (also known as a regime switch variable) or in which the response variable is observed only if a particular selection condition is met. In either case, standard regression techniques deliver inconsistent estimators if unobserved factors that affect the response are correlated with unobserved factors that affect the switching or selection variable. Consistent estimators can be obtained by maximum likelihood estimation of a joint model of the outcome and switching or selection variable. This paper describes a ‘wrapper’ program, ssm, that calls gllamm (Rabe-Hesketh et al. 2004a) to fit such models. The wrapper accepts data in a simple structure, has a straightforward syntax, and reports output in a manner that is easily interpretable. One important feature of ssm is that the log-likelihood can be evaluated using adaptive quadrature (Rabe-Hesketh and Skrondal 2002; Rabe-Hesketh et al. 2005).
Keywords:  Endogenous switching, sample selection, binary variable, count data, ordinal variable, probit, poisson regression, adaptive quadrature, gllamm, wrapper, ssm. 
JEL:  C13 C31 C35 C87 I12 
Date:  2005–12 
URL:  http://d.repec.org/n?u=RePEc:kee:kerpuk:2005/14&r=ecm 
By:  Matteo Pelagatti; Stefania Rondena 
Abstract:  The Dynamic Conditional Correlation (DCC) model of Engle has made the estimation of multivariate GARCH models feasible for reasonably big vectors of securities’ returns. In the present paper we show how Engle’s multistep estimation of the model can be easily extended to elliptical conditional distributions and apply different leptokurtic DCC models to twenty shares listed at the Milan Stock Exchange. 
Keywords:  Multivariate GARCH, Correlation, Elliptical distributions, Fat Tails 
JEL:  C32 C51 C87 
Date:  2004–06 
URL:  http://d.repec.org/n?u=RePEc:mis:wpaper:20060508&r=ecm 
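The core of the DCC model is Engle's scalar recursion for the quasi-correlation matrix Q_t, which does not itself depend on whether the conditional distribution is Gaussian or elliptical. A bivariate pure-Python sketch (initializing Q at the sample moments is an assumption of this example, not a prescription from the paper):

```python
def dcc_correlations(eps, a, b):
    """eps: list of (e1, e2) standardized-residual pairs.  Returns the
    conditional correlations rho_t implied by the DCC recursion
    Q_t = (1 - a - b) S + a e_{t-1} e_{t-1}' + b Q_{t-1}."""
    n = len(eps)
    # unconditional second-moment matrix S (the target of the recursion)
    s11 = sum(e[0] * e[0] for e in eps) / n
    s22 = sum(e[1] * e[1] for e in eps) / n
    s12 = sum(e[0] * e[1] for e in eps) / n
    q11, q22, q12 = s11, s22, s12  # initialize Q_0 at S
    rhos = []
    for e1, e2 in eps:
        rhos.append(q12 / (q11 * q22) ** 0.5)
        # update Q with the current shock for the next period
        q11 = (1 - a - b) * s11 + a * e1 * e1 + b * q11
        q22 = (1 - a - b) * s22 + a * e2 * e2 + b * q22
        q12 = (1 - a - b) * s12 + a * e1 * e2 + b * q12
    return rhos
```

Setting a = b = 0 recovers the constant-correlation special case, a convenient check on the recursion.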
By:  Riccardo LUCCHETTI (Universita' Politecnica delle Marche, Dipartimento di Economia); Giulio PALOMBA ([n.d.]) 
Abstract:  Forecasting models for bond yields often use macro data to improve their properties. Unfortunately, macro data are not available at frequencies higher than monthly. In order to mitigate this problem, we propose a nonlinear VEC model with conditional heteroskedasticity (NECH) and find that such a model has better in-sample performance than models which fail to encompass nonlinearities and/or GARCH-type effects. Out-of-sample forecasts by our model are marginally superior to those of competing models; however, the data points we used for evaluating forecasts refer to a period of relative tranquillity on the financial markets, whereas we argue that our model should display superior performance under "unusual" circumstances.
Keywords:  conditional heteroskedasticity, forecasting, interest rates, nonlinear cointegration 
JEL:  C32 C53 E43 
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:anc:wpaper:261&r=ecm 
By:  Minfeng Deng 
Abstract:  One of the key assumptions in spatial econometric modelling is that the spatial process is isotropic, which means that direction is irrelevant in the specification of the spatial structure. On the one hand, this assumption largely reduces the complexity of the spatial models and facilitates estimation and interpretation; on the other hand, it appears rather restrictive and hard to justify in many empirical applications. In this paper a very general anisotropic spatial model, which allows for a high level of flexibility in the spatial structure, is proposed. This new model can be estimated using maximum likelihood and its asymptotic properties are well understood. When the model is applied to the well-known 1970 Boston housing prices data, it significantly outperforms the isotropic spatial lag model. It also provides interesting additional insights into the price determination process in the property market.
Keywords:  Anisotropy, spatial econometrics, maximum likelihood estimation, housing prices.
JEL:  C21 R15 R31 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20067&r=ecm 
By:  Eva Boj; Aurea Grane; Josep Fortiana; M. Merce Claramunt 
Abstract:  Distance-based regression allows for a neat implementation of the Partial Least Squares recurrence. In this paper we address practical issues arising when dealing with moderately large datasets (n ~ 10^4) such as those typical of automobile insurance premium calculations.
Date:  2006–05 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws063514&r=ecm 
By:  Fabio Busetti (Bank of Italy); Silvia Fabiani (Bank of Italy); Andrew Harvey (Cambridge University) 
Abstract:  We consider how unit root and stationarity tests can be used to study the convergence properties of prices and rates of inflation. Special attention is paid to the issue of whether a mean should be extracted in carrying out unit root and stationarity tests and whether there is an advantage to adopting a new (Dickey-Fuller) unit root test based on deviations from the last observation. The asymptotic distribution of the new test statistic is given, and Monte Carlo simulation experiments show that the test yields considerable power gains for highly persistent autoregressive processes with relatively large initial conditions, the case of primary interest for analysing convergence. We argue that the joint use of unit root and stationarity tests in levels and first differences allows the researcher to distinguish between series that are converging and series that have already converged, and we set out a strategy to establish whether convergence occurs in relative prices or just in rates of inflation. The tests are applied to the monthly series of the Consumer Price Index in the Italian regional capitals over the period 1970–2003. It is found that all pairwise contrasts of inflation rates have converged or are in the process of converging. Only 24% of price level contrasts appear to be converging, but a multivariate test provides strong evidence of overall convergence.
Keywords:  Dickey-Fuller test, initial condition, law of one price, stationarity test
JEL:  C22 C32 
Date:  2006–02 
URL:  http://d.repec.org/n?u=RePEc:bdi:wptemi:td_575_06&r=ecm 
By:  Andrew Ang; Geert Bekaert; Min Wei 
Abstract:  Surveys do! We examine the forecasting power of four alternative methods of forecasting U.S. inflation out-of-sample: time series ARIMA models; regressions using real activity measures motivated from the Phillips curve; term structure models that include linear, nonlinear, and arbitrage-free specifications; and survey-based measures. We also investigate several methods of combining forecasts. Our results show that surveys outperform the other forecasting methods and that the term structure specifications perform relatively poorly. We find little evidence that combining forecasts produces superior forecasts to survey information alone. When combining forecasts, the data consistently place the highest weights on survey information.
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgfe:200615&r=ecm 
By:  Anatoliy Belaygorod; Michael J. Dueker 
Abstract:  We extend Lubik and Schorfheide's (2004) likelihood-based estimation of dynamic stochastic general equilibrium (DSGE) models under indeterminacy to encompass a sample period including both determinacy and indeterminacy by implementing the change-point methodology (Chib, 1998). This feature is useful because DSGE models generally are estimated with data sets that include the Great Inflation of the 1970s and the surrounding low-inflation periods. Timing the transitions between determinate and indeterminate equilibria is one of the key contributions of this paper. Moreover, by letting the data provide estimates of the state transition dates and allowing the estimated structural parameters to be the same across determinacy states, we obtain more precise estimates of the differences in characteristics, such as the impulse responses, across the states. In particular, we find that positive interest rate shocks were inflationary under indeterminacy. While the change-point treatment of indeterminacy is applicable to all estimated linear DSGE models, we demonstrate our methodology by estimating the canonical Woodford model with a time-varying inflation target. Implementation of the change-point methodology coupled with tailored Metropolis-Hastings sampling provides a highly efficient Bayesian MCMC algorithm. Our prior-posterior updates indicate substantially lower sensitivity to hyperparameters of the prior relative to other estimated DSGE models.
Keywords:  Equilibrium (Economics) - Mathematical models; Econometric models - Evaluation
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:fip:fedlwp:2006025&r=ecm 
By:  Tung Liu (Department of Economics, Ball State University); Courtenay Cliff Stone (Department of Economics, Ball State University) 
Abstract:  Virtually all business and economics statistics texts start their discussion of hypothesis tests with some more or less detailed reference to criminal trials. Apparently, these authors believe that students are better able to understand the relevance and usefulness of hypothesis test procedures by introducing them first via the dramatic analogy of the criminal justice system. In this paper, we argue that using the criminal trial analogy to motivate and introduce hypothesis test procedures represents bad statistics and bad pedagogy. First, we show that statistical hypothesis test procedures cannot be applied to criminal trials; thus, the criminal trial analogy is invalid. Second, we propose that students can better understand the simplicity and validity of statistical hypothesis test procedures if these procedures are carefully contrasted with the difficulties of decision-making in the context of criminal trials. The criminal trial discussion provides a bad analogy but an excellent counterexample for teaching statistical hypothesis procedures and the nature of statistical decision-making.
Keywords:  hypothesis tests, criminal trials, Neyman-Pearson hypothesis test procedures
JEL:  A22 C12 K14 
Date:  2006–03 
URL:  http://d.repec.org/n?u=RePEc:bsu:wpaper:200601&r=ecm 
By:  Carsten Kuchler; Martin Spieß 
Abstract:  Like other data quality dimensions, the concept of accuracy is often adopted to characterise a particular data set. However, its common specification basically refers to statistical properties of estimators, which can hardly be verified by means of a single survey at hand. This ambiguity can be resolved by assigning 'accuracy' to survey processes that are known to affect these properties. In this contribution, we consider the subprocess of imputation as one important step in setting up a data set and argue that the so-called 'hit-rate' criterion, which is intended to measure the accuracy of a data set by some distance function of 'true' but unobserved values and imputed values, is neither required nor desirable. In contrast, the so-called 'inference' criterion allows for valid inferences based on a suitably completed data set under rather general conditions. The underlying theoretical concepts are illustrated by means of a simulation study. It is emphasised that the same principal arguments apply to other survey processes that introduce uncertainty into an edited data set.
Keywords:  Survey Quality, Survey Processes, Accuracy, Assessment of Imputation Methods, Multiple Imputation 
JEL:  C42 C81 C11 C13 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:diw:diwwpp:dp586&r=ecm 
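The 'inference' criterion the authors favour is usually operationalized through multiple imputation and Rubin's combining rules, which can be sketched in a few lines (a generic illustration, not the authors' simulation design):

```python
def rubin_combine(estimates, variances):
    """Rubin's rules for pooling m >= 2 multiple-imputation analyses:
    pooled point estimate, and total variance T = W + (1 + 1/m) B."""
    m = len(estimates)
    qbar = sum(estimates) / m                 # pooled point estimate
    w = sum(variances) / m                    # within-imputation variance
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)  # between
    t = w + (1 + 1 / m) * b
    return qbar, t
```

The between-imputation component B is exactly what a hit-rate comparison of imputed against 'true' values ignores: valid inference requires T to reflect imputation uncertainty, not that individual imputed values be reproduced.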
By:  E Bataa; D R Osborn; D H Kim 
Abstract:  We extend the vector autoregression (VAR) based expectations hypothesis tests of the term structure using recent developments in the bootstrap literature. Firstly, we use the wild bootstrap to allow for conditional heteroskedasticity in the VAR residuals without imposing any parameterization on this heteroskedasticity. Secondly, we endogenize the model selection procedure in the bootstrap replications to reflect true uncertainty. Finally, a stationarity correction is introduced which is designed to prevent finite-sample bias-adjusted VAR parameters from becoming explosive. When the new methodology is applied to extensive US zero coupon term structure data ranging from 1 month to 10 years, we find fewer rejections of the theory in a subsample of Jan 1982–Dec 2003 than in Jan 1952–Dec 1978, and when it is rejected this occurs only at the very short and long ends of the maturity spectrum, in contrast to the U-shaped pattern observed in some of the previous literature.
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:man:cgbcrp:72&r=ecm 
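The wild bootstrap step in the first contribution is simple to sketch: each residual keeps its magnitude but receives an independent random sign (Rademacher weights are used here; other two-point distributions are also common), so the resampled residuals inherit the original conditional heteroskedasticity without it ever being parameterized:

```python
import random

def wild_bootstrap_residuals(resid, seed=0):
    """One wild-bootstrap draw: flip each residual's sign with
    probability 1/2, preserving its magnitude (and hence any
    conditional heteroskedasticity pattern)."""
    rng = random.Random(seed)
    return [e if rng.random() < 0.5 else -e for e in resid]
```

In a VAR application one would rebuild the bootstrap sample recursively from these residuals and re-estimate the model on each replication; that outer loop is omitted here.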
By:  Mikael Bask (Monetary Policy and Research Department, Bank of Finland); Tung Liu (Department of Economics, Ball State University); Anna Widerberg (Department of Economics)
Abstract:  The aim of this paper is to illustrate how the stability of a stochastic dynamic system is measured using the Lyapunov exponents. Specifically, we use a feedforward neural network to estimate these exponents as well as asymptotic results for this estimator to test for unstable (chaotic) dynamics. The data set used is spot electricity prices from the Nordic power exchange market, Nord Pool, and the dynamic system that generates these prices appears to be chaotic in one case. 
Keywords:  Feedforward Neural Network; Nord Pool; Lyapunov Exponents; Spot Electricity Prices; Stochastic Dynamic System 
JEL:  C12 C14 C22 
Date:  2006–04 
URL:  http://d.repec.org/n?u=RePEc:bsu:wpaper:200603&r=ecm 
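For a known map, the quantity the neural-network estimator targets can be computed directly: the largest Lyapunov exponent is the trajectory average of log|f'(x_t)|. A sketch for the logistic map (purely illustrative — the paper estimates the exponent from observed prices without knowing f):

```python
import math

def lyapunov_logistic(r, x0=0.1, n_burn=100, n=50000):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    as the trajectory average of log|f'(x_t)| with f'(x) = r (1 - 2x)."""
    x = x0
    for _ in range(n_burn):          # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

At r = 4 the exponent is ln 2 > 0 (chaos), while in the period-2 regime at r = 3.2 it is negative (stable dynamics) — the sign is the stability diagnostic the abstract refers to.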
By:  Christophe Hurlin (LEO  Laboratoire d'économie d'Orleans  [CNRS : UMR6221]  [Université d'Orléans]); Valérie Mignon (CEPII  Centre d'études prospectives et d'informations internationales  [Université de Paris X  Nanterre]) 
Abstract:  The purpose of this paper is to provide a complete survey of the literature on panel data cointegration tests. After presenting the concepts specific to cointegration in panels, we review the tests of the null hypothesis of no cointegration (the tests of Pedroni (1995, 1997, 1999, 2003), Kao (1999), Bai and Ng (2001), and Groen and Kleibergen (2003)) as well as the test of McCoskey and Kao (1998), which is based on the null hypothesis of cointegration. Some material on inference and on the estimation of cointegrated systems is also provided.
Keywords:  Non-stationary panel data; unit root; cointegration
Date:  2006–05–22 
URL:  http://d.repec.org/n?u=RePEc:hal:papers:halshs00070887_v1&r=ecm 
By:  Toepoel, Vera; Das, Marcel; Soest, Arthur van (Tilburg University, Center for Economic Research)
Abstract:  This article shows that respondents gain meaning from visual cues in a web survey as well as from verbal cues (words). We manipulated the layout of a five-point rating scale using verbal, graphical, numerical, and symbolic language. This paper extends the existing literature in four directions: (1) all languages (verbal, graphical, numeric, and symbolic) are individually manipulated on the same rating scale, (2) a heterogeneous sample is used, (3) the way in which personal characteristics and a respondent's need to think and evaluate account for variance in survey responding is analyzed, and (4) a web survey is used. Our experiments show differences due to verbal and graphical language, but no effects of numeric or symbolic language are found. Respondents with a high need for cognition and a high need to evaluate are affected more by layout than respondents with a low need to think or evaluate. Furthermore, men, the elderly, and the highly educated are the most sensitive to layout effects.
Keywords:  web survey; questionnaire layout; context effects; need for cognition; need to evaluate
JEL:  C42 C81 C93 
Date:  2006 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200630&r=ecm 
By:  Angela Birk 
Abstract:  The paper presents a simple method for obtaining the impulse responses of VARs for a stochastic recursive dynamic macro model by deriving the transition matrix and the stationary distribution function from the model itself, i.e. from economic theory.
URL:  http://d.repec.org/n?u=RePEc:lsu:lsuwpp:200611&r=ecm 