NEP: New Economics Papers on Econometrics |
By: | Miguel A. Delgado; Javier Hidalgo; Carlos Velasco |
Abstract: | This article proposes a class of goodness-of-fit tests for the autocorrelation function of a time series process, including those exhibiting long-range dependence. Test statistics for composite hypotheses are functionals of an (approximated) martingale transformation of Bartlett's Tp-process with estimated parameters, which converges in distribution to the standard Brownian motion under the null hypothesis. We discuss omnibus, directional and Portmanteau-type tests. A Monte Carlo study illustrates the performance of the different tests in practice. |
Keywords: | Nonparametric model checking, spectral distribution, linear processes, martingale decomposition, local alternatives, omnibus, smooth and directional tests, long-range alternatives |
JEL: | C14 C22 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/482&r=ecm |
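The Tp-process above is built from Bartlett's classical cumulative periodogram. As a point of reference only (the authors' martingale transformation is considerably more involved), here is a minimal Python sketch of that raw ingredient, a Kolmogorov-Smirnov-type statistic against a white-noise null; numpy is assumed:

    import numpy as np

    def bartlett_cusum_periodogram(x):
        # Normalized cumulative periodogram compared with the uniform c.d.f.;
        # large values of the statistic reject the white-noise null.
        n = len(x)
        dft = np.fft.fft(x - x.mean())
        m = (n - 1) // 2
        I = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram at Fourier freqs
        U = np.cumsum(I) / I.sum()
        grid = np.arange(1, m + 1) / m
        return np.sqrt(m) * np.max(np.abs(U - grid))

    rng = np.random.default_rng(0)
    print(bartlett_cusum_periodogram(rng.standard_normal(512)))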
By: | Bas Donkers; Marcia M Schafgans |
Abstract: | We propose an easy-to-use, derivative-based two-step estimation procedure for semiparametric index models. In the first step, various functionals involving the derivatives of the unknown function are estimated using nonparametric kernel estimators. These functionals provide moment conditions for the parameters of interest, which are exploited in the second step within a method-of-moments framework. The estimator is shown to be root-N consistent and asymptotically normal. We extend the procedure to multiple equation models. Our identification conditions and estimation framework provide natural tests for the number of indices in the model. In addition we discuss tests of separability, additivity, and linearity of the influence of the indices. |
Keywords: | Semiparametric estimation, multiple index models, average derivative functionals, generalized methods of moments estimator, rank testing |
JEL: | C14 C31 C52 |
Date: | 2005–07 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/493&r=ecm |
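The first step of the procedure above estimates derivative-based functionals by kernel methods. A minimal sketch of the simplest member of that family, the density-weighted average derivative of Powell, Stock and Stoker (a relative of, not identical to, the authors' moment functionals), with a Gaussian kernel and an arbitrarily chosen bandwidth:

    import numpy as np

    def density_weighted_avg_derivative(y, X, h):
        # delta = -(2/n) * sum_i y_i * grad f_{-i}(X_i); under a single-index
        # model, delta is proportional to the index coefficients.
        n, k = X.shape
        delta = np.zeros(k)
        for i in range(n):
            u = (X[i] - np.delete(X, i, axis=0)) / h      # leave-one-out differences
            kern = np.exp(-0.5 * (u ** 2).sum(axis=1)) / (2 * np.pi) ** (k / 2)
            grad = (-u * kern[:, None]).sum(axis=0) / ((n - 1) * h ** (k + 1))
            delta += y[i] * grad
        return -2.0 * delta / n

    rng = np.random.default_rng(0)
    X = rng.standard_normal((400, 2))
    beta = np.array([1.0, -0.5])
    y = (X @ beta + 0.3 * rng.standard_normal(400) > 0).astype(float)
    d = density_weighted_avg_derivative(y, X, h=0.5)
    print(d / d[0])       # roughly proportional to beta = (1, -0.5)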
By: | Violetta Dalla; Javier Hidalgo |
Abstract: | The paper proposes a simple test for the hypothesis of strong cycles and, as a by-product, a test for weak dependence for linear processes. We show that the limit distribution of the test is the maximum of a (semi)Gaussian process G(t), t ∈ [0, 1]. Because the covariance structure of G(t) is a complicated, model-dependent function of t, obtaining the critical values of max_{t ∈ [0,1]} G(t), if possible at all, may be difficult. For this reason we propose a bootstrap scheme in the frequency domain to circumvent the problem of obtaining (asymptotically) valid critical values. The proposed bootstrap can be regarded as an alternative procedure to existing bootstrap methods in the time domain, such as the residual-based bootstrap. Finally, we illustrate the performance of the bootstrap test by a small Monte Carlo experiment and an empirical example. |
Keywords: | Cyclical data, strong and weak dependence, spectral density functions, Whittle estimator, bootstrap algorithms |
JEL: | C15 C22 |
Date: | 2005–02 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/486&r=ecm |
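The core of a frequency-domain bootstrap is to resample periodogram ordinates as f̂(λ_j) times independent standard exponential draws, exploiting their approximate independence across Fourier frequencies. A schematic Python version under those assumptions (the paper's scheme and test statistic differ in detail):

    import numpy as np

    def periodogram(x):
        n = len(x)
        dft = np.fft.fft(x - x.mean())
        m = (n - 1) // 2
        lam = 2 * np.pi * np.arange(1, m + 1) / n
        return lam, np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)

    def freq_domain_bootstrap(x, stat, B=499, bandwidth=5, seed=0):
        rng = np.random.default_rng(seed)
        lam, I = periodogram(x)
        w = np.ones(2 * bandwidth + 1) / (2 * bandwidth + 1)
        f_hat = np.convolve(I, w, mode="same")    # crude smoothed spectral estimate
        return np.array([stat(lam, f_hat * rng.exponential(size=len(I)))
                         for _ in range(B)])

    x = np.random.default_rng(1).standard_normal(512)
    draws = freq_domain_bootstrap(x, stat=lambda lam, I: I.max() / I.mean())
    print(np.quantile(draws, 0.95))               # bootstrap critical value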
By: | Violetta Dalla; Liudas Giraitis; Javier Hidalgo |
Abstract: | For linear processes, semiparametric estimation of the memory parameter, based on the log-periodogram and local Whittle estimators, has been exhaustively examined, and the properties of these estimators are well established. However, except for some specific cases, little is known about the estimation of the memory parameter for nonlinear processes. The purpose of this paper is to provide general conditions under which the local Whittle estimator of the memory parameter of a stationary process is consistent and to examine its rate of convergence. We show that these conditions are satisfied for linear processes and a wide class of nonlinear models, among others, signal plus noise processes, nonlinear transforms of a Gaussian process and EGARCH models. Special cases where the estimator satisfies the central limit theorem are discussed. The finite sample performance of the estimator is investigated in a small Monte Carlo study. |
Keywords: | Long memory, semiparametric estimation, local Whittle estimator. |
JEL: | C14 C22 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:\2006\497&r=ecm |
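For readers new to the local Whittle estimator studied above, its computation is short: minimize Robinson's concentrated objective R(d) = log((1/m) Σ_j λ_j^(2d) I(λ_j)) − (2d/m) Σ_j log λ_j over the first m Fourier frequencies. A sketch (the bandwidth m and the admissible range for d are arbitrary choices here):

    import numpy as np
    from scipy.optimize import minimize_scalar

    def local_whittle(x, m):
        n = len(x)
        dft = np.fft.fft(x - x.mean())
        lam = 2 * np.pi * np.arange(1, m + 1) / n
        I = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)   # periodogram
        def R(d):
            return np.log(np.mean(I * lam ** (2 * d))) - 2 * d * np.mean(np.log(lam))
        return minimize_scalar(R, bounds=(-0.49, 0.99), method="bounded").x

    rng = np.random.default_rng(0)
    print(local_whittle(rng.standard_normal(1024), m=64))  # near 0 for white noise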
By: | Myunghwan Seo |
Abstract: | There is a growing literature on unit root testing in threshold autoregressive models. This paper makes two contributions to the literature. First, an asymptotic theory is developed for unit root testing in a threshold autoregression, in which the errors are allowed to be dependent and heterogeneous, and the lagged level of the dependent variable is employed as the threshold variable. The asymptotic distribution of the proposed Wald test is non-standard and depends on nuisance parameters. Second, the consistency of the proposed residual-based block bootstrap is established based on a newly developed asymptotic theory for this bootstrap. It is demonstrated by a set of Monte Carlo simulations that the Wald test exhibits considerable power gains over the ADF test that neglects threshold effects. The law of one price hypothesis is investigated among used car markets in the US. |
Keywords: | Threshold autoregression, unit root test, threshold cointegration, residual-based block bootstrap |
JEL: | C12 C15 C22 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/484&r=ecm |
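The resampling step of a residual-based block bootstrap can be sketched in a few lines: draw overlapping blocks of centred residuals, paste them together, and rebuild the series with the null imposed. The paper's procedure involves further details (choice of block length, handling of the threshold), so this is only the skeleton:

    import numpy as np

    def block_resample(resid, block_len, n_out, rng):
        # Draw random overlapping blocks until n_out residuals are collected.
        starts = rng.integers(0, len(resid) - block_len + 1,
                              size=int(np.ceil(n_out / block_len)))
        return np.concatenate([resid[s:s + block_len] for s in starts])[:n_out]

    rng = np.random.default_rng(0)
    e = rng.standard_normal(200)                      # stand-in for regression residuals
    e_star = block_resample(e - e.mean(), block_len=10, n_out=200, rng=rng)
    y_star = np.cumsum(e_star)                        # unit root imposed under the null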
By: | James MacKinnon (Queen's University) |
Abstract: | There are many bootstrap methods that can be used for econometric analysis. In certain circumstances, such as regression models with independent and identically distributed error terms, appropriately chosen bootstrap methods generally work very well. However, there are many other cases, such as regression models with dependent errors, in which bootstrap methods do not always work well. This paper discusses a large number of bootstrap methods that can be useful in econometrics. Applications to hypothesis testing are emphasized, and simulation results are presented for a few illustrative cases. |
Keywords: | bootstrap, Monte Carlo test, wild bootstrap, sieve bootstrap, moving block bootstrap |
JEL: | C12 C15 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1028&r=ecm |
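Among the methods surveyed, the wild bootstrap is easy to state compactly: impose the null, then reconstruct the dependent variable with restricted residuals flipped by random signs, which preserves each observation's variance under heteroskedasticity. A minimal sketch for testing one coefficient with an HC0-studentized statistic (illustrative choices throughout):

    import numpy as np

    def wild_bootstrap_pvalue(y, X, j, B=999, seed=0):
        rng = np.random.default_rng(seed)
        Xr = np.delete(X, j, axis=1)                  # impose H0: beta_j = 0
        u0 = y - Xr @ np.linalg.lstsq(Xr, y, rcond=None)[0]
        y0 = y - u0                                   # restricted fitted values
        def tstat(yy):
            b = np.linalg.lstsq(X, yy, rcond=None)[0]
            e = yy - X @ b
            XtXi = np.linalg.inv(X.T @ X)
            V = XtXi @ (X.T * e ** 2) @ X @ XtXi      # HC0 covariance
            return b[j] / np.sqrt(V[j, j])
        t0 = tstat(y)
        ts = [tstat(y0 + u0 * rng.choice([-1.0, 1.0], size=len(y))) for _ in range(B)]
        return (1 + np.sum(np.abs(ts) >= abs(t0))) / (B + 1)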
By: | Javier Hidalgo |
Abstract: | We consider the estimation of the location of the pole and the memory parameter, λ0 and α respectively, of covariance stationary linear processes whose spectral density function f(λ) satisfies f(λ) ~ C|λ − λ0|^(−α) in a neighbourhood of λ0. We define a consistent estimator of λ0 and derive its limit distribution Z_λ0. As in related optimization problems where the true parameter value can lie on the boundary of the parameter space, we show that Z_λ0 is distributed as a normal random variable when λ0 ∈ (0, π), whereas for λ0 = 0 or π, Z_λ0 is a mixture of discrete and continuous random variables with weights equal to 1/2. More specifically, when λ0 = 0, Z_λ0 is distributed as a normal random variable truncated at zero. Moreover, we describe and examine a two-step estimator of the memory parameter α, showing that neither its limit distribution nor its rate of convergence is affected by the estimation of λ0. Thus, we reinforce and extend previous results on the estimation of α when λ0 is assumed to be known a priori. A small Monte Carlo study is included to illustrate the finite sample performance of our estimators. |
Keywords: | spectral density estimation, long memory processes, Gaussian processes |
JEL: | C14 C22 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/481&r=ecm |
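The estimator of the pole location analysed above has a simple finite-sample intuition: the pole should show up as the frequency where a locally averaged periodogram peaks. The following sketch implements only that crude idea, not the author's exact estimator or its boundary analysis:

    import numpy as np

    def locate_pole(x, bandwidth=3):
        n = len(x)
        dft = np.fft.fft(x - x.mean())
        m = n // 2
        I = np.abs(dft[1:m + 1]) ** 2 / (2 * np.pi * n)
        w = np.ones(2 * bandwidth + 1) / (2 * bandwidth + 1)
        I_bar = np.convolve(I, w, mode="same")        # local averaging stabilizes the peak
        j_hat = np.argmax(I_bar) + 1
        return 2 * np.pi * j_hat / n                  # estimated pole in (0, pi]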
By: | Gunnar Bårdsen; Niels Haldrup (Department of Economics, University of Aarhus, Denmark) |
Abstract: | In static single equation cointegration regression models the OLS estimator will have a non-standard distribution unless regressors are strictly exogenous. In the literature a number of estimators have been suggested to deal with this problem, especially by the use of semi-nonparametric estimators. Theoretically ideal instruments can be defined to ensure a limiting Gaussian distribution of IV estimators, but unfortunately such instruments are unlikely to be found in real data. In the present paper we suggest an IV estimator where the Hodrick-Prescott filtered trends are used as instruments for the regressors in cointegrating regressions. These instruments are almost ideal and simulations show that the IV estimator using such instruments alleviates the endogeneity problem extremely well in both finite and large samples. |
Keywords: | Cointegration, Instrumental variables, Mixed Gaussianity. |
JEL: | C2 C22 C32 |
Date: | 2006–02–16 |
URL: | http://d.repec.org/n?u=RePEc:aah:aarhec:2006-03&r=ecm |
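The proposal above is mechanical to implement: compute the Hodrick-Prescott trend of each regressor and use it as the instrument. A minimal sketch for a bivariate cointegrating regression (lam = 1600 is the conventional quarterly smoothing value, an arbitrary choice here):

    import numpy as np

    def hp_trend(x, lam=1600.0):
        # HP trend solves (I + lam * D'D) tau = x, D = second-difference operator.
        n = len(x)
        D = np.diff(np.eye(n), n=2, axis=0)
        return np.linalg.solve(np.eye(n) + lam * D.T @ D, x)

    def iv_hp(y, x):
        # IV for y_t = a + b x_t + u_t, instrumenting x_t with its HP trend.
        Z = np.column_stack([np.ones_like(x), hp_trend(x)])
        X = np.column_stack([np.ones_like(x), x])
        return np.linalg.solve(Z.T @ X, Z.T @ y)      # (a_hat, b_hat)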
By: | Yoshihiko Nishiyama; Peter M Robinson |
Abstract: | In a number of semiparametric models, smoothing seems necessary in order to obtain estimates of the parametric component which are asymptotically normal and converge at parametric rate. However, smoothing can inflate the error in the normal approximation, so that refined approximations are of interest, especially in sample sizes that are not enormous. We show that a bootstrap distribution achieves a valid Edgeworth correction in case of density-weighted averaged derivative estimates of semiparametric index models. Approaches to bias-reduction are discussed. We also develop a higher order expansion, to show that the bootstrap achieves a further reduction in size distortion in case of two-sided testing. The finite sample performance of the methods is investigated by means of Monte Carlo simulations from a Tobit model. |
Keywords: | Bootstrap, Edgeworth correction, semiparametric averaged derivatives |
JEL: | C14 C24 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/483&r=ecm |
By: | Joseph P. Romano; Michael Wolf |
Abstract: | Confidence intervals in econometric time series regressions suffer from notorious coverage problems. This is especially true when the dependence in the data is noticeable and sample sizes are small to moderate, as is often the case in empirical studies. This paper suggests using the studentized block bootstrap and discusses practical issues, such as the choice of the block size. A particular data-dependent method is proposed to automate this choice. As a side note, it is pointed out that symmetric confidence intervals are preferred over equal-tailed ones, since they exhibit improved coverage accuracy. The improvements in small sample performance are supported by a simulation study. |
Keywords: | Bootstrap, Confidence Intervals, Studentization, Time Series Regressions, Prewhitening |
JEL: | C14 C15 C22 C32 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:zur:iewwpx:273&r=ecm |
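The symmetric-interval point in the abstract is easy to demonstrate: bootstrap the distribution of the absolute deviation of the statistic and invert it. The sketch below does this for a sample mean with a moving-block bootstrap; for brevity it omits the studentization step that the paper emphasizes:

    import numpy as np

    def symmetric_block_boot_ci(x, block_len, B=999, level=0.95, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        devs = np.empty(B)
        for b in range(B):
            starts = rng.integers(0, n - block_len + 1, size=n_blocks)
            xs = np.concatenate([x[s:s + block_len] for s in starts])[:n]
            devs[b] = abs(xs.mean() - x.mean())       # symmetric deviation
        q = np.quantile(devs, level)
        return x.mean() - q, x.mean() + q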
By: | Jeff Racine (McMaster University); James MacKinnon (Queen's University) |
Abstract: | Conventional procedures for Monte Carlo and bootstrap tests require that B, the number of simulations, satisfy a specific relationship with the level of the test. If it does not, a test that would otherwise be exact will either overreject or underreject for finite B. We present expressions for the rejection frequencies associated with existing procedures and propose a new procedure that yields exact Monte Carlo tests for any positive value of B. This procedure, which can also be used for bootstrap tests, is likely to be most useful when simulation is expensive. |
Keywords: | resampling, Monte Carlo test, bootstrap test, percentiles |
JEL: | C12 C15 |
Date: | 2004–10 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1027&r=ecm |
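The B-versus-level relationship in the abstract is the classical one: the Monte Carlo p-value p = (1 + #{simulated >= observed}) / (B + 1) yields an exact test at level alpha only when alpha(B + 1) is an integer. The paper's new procedure, which removes this restriction, is not reproduced here; the sketch shows only the standard construction:

    import numpy as np

    def mc_pvalue(stat_obs, stat_sim):
        # Exact at level alpha only if alpha * (B + 1) is an integer.
        B = len(stat_sim)
        return (1 + np.sum(stat_sim >= stat_obs)) / (B + 1)

    # B = 99 supports an exact 5% test (0.05 * 100 = 5); B = 100 does not.
    rng = np.random.default_rng(0)
    print(mc_pvalue(2.1, rng.standard_normal(99)))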
By: | Patrick Bajari; Han Hong |
Abstract: | Recently, empirical industrial organization economists have proposed estimators for dynamic games of incomplete information. In these models, agents choose from a finite number of actions and maximize expected discounted utility in a Markov perfect equilibrium. Previous econometric methods estimate the probability distribution of agents' actions in a first stage. In a second step, a finite vector of parameters of the period return function is estimated. In this paper, we develop semiparametric estimators for dynamic games allowing for continuous state variables and a nonparametric first stage. The estimates of the structural parameters are T^(1/2)-consistent (where T is the sample size) and asymptotically normal even though the first stage is estimated nonparametrically. We also propose sufficient conditions for identification of the model. |
JEL: | L0 L5 C1 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0320&r=ecm |
By: | Andrew J. Patton; Allan Timmermann |
Abstract: | Evaluation of forecast optimality in economics and finance has almost exclusively been conducted under the assumption of mean squared error loss, under which forecasts should be unbiased and forecast errors serially uncorrelated at the single-period horizon, with variance increasing as the forecast horizon grows. This paper considers properties of optimal forecasts under general loss functions and establishes new testable implications of forecast optimality. These hold when the forecaster's loss function is unknown but testable restrictions can be imposed on the data generating process, trading off conditions on the data generating process against conditions on the loss function. Finally, we propose flexible parametric estimation of the forecaster's loss function, and obtain a test of forecast optimality via a test of over-identifying restrictions. |
Keywords: | forecast evaluation, loss function, rationality tests |
JEL: | C53 C22 C52 |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/485&r=ecm |
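One concrete instance of testing optimality under an unknown, asymmetric loss is the lin-lin case: optimality implies E[(1{e_t < 0} − α) z_t] = 0 for instruments z_t, so α can be estimated by GMM and over-identification tested with a J statistic. This sketch follows that logic in the spirit of the flexible-loss approach; the paper's loss family and test are more general:

    import numpy as np
    from scipy.stats import chi2

    def linlin_rationality_test(e, Z):
        # e: forecast errors (n,); Z: instruments (n, q), q >= 2.
        n, q = Z.shape
        ind = (e < 0).astype(float)
        a, b = Z.T @ ind / n, Z.mean(axis=0)
        alpha = (b @ a) / (b @ b)                 # first-step GMM estimate of asymmetry
        u = ind - alpha
        g = Z.T @ u / n                           # sample moments
        S = (Z * u[:, None] ** 2).T @ Z / n       # moment covariance
        J = n * g @ np.linalg.solve(S, g)
        return alpha, J, chi2.sf(J, q - 1)        # J ~ chi2(q-1) under optimality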
By: | Peter M Robinson; Paolo Zaffaroni |
Abstract: | Strong consistency and asymptotic normality of the Gaussian pseudo-maximum likelihood estimate of the parameters in a wide class of ARCH(∞) processes are established. We require the ARCH weights to decay at least hyperbolically, with a faster rate needed for the central limit theorem than for the law of large numbers. Various rates are illustrated in examples of particular parameterizations in which our conditions are shown to be satisfied. |
Keywords: | ARCH(∞) models, pseudo-maximum likelihood estimation, asymptotic inference |
Date: | 2005–10 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/495&r=ecm |
By: | Peter M Robinson |
Abstract: | Much time series data are recorded on economic and financial variables. Statistical modelling of such data is now very well developed, and has applications in forecasting. We review a variety of statistical models from the viewpoint of 'memory', or strength of dependence across time, which is a helpful discriminator between different phenomena of interest. Both linear and nonlinear models are discussed. |
Keywords: | Long memory, short memory, stochastic volatility |
JEL: | C22 |
Date: | 2005–03 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/487&r=ecm |
By: | Margarita Genius; Elisabetta Strazzera |
Abstract: | A Monte Carlo analysis is conducted to assess the validity of the bivariate modeling approach for detection and correction of different forms of elicitation effects in Double Bound Contingent Valuation data. Alternative univariate and bivariate models are applied to several simulated data sets, each one characterized by a specific elicitation effect, and their performance is assessed using standard selection criteria. The bivariate models include the standard Bivariate Probit model, and an alternative specification, based on the Copula approach to multivariate modeling, which is shown to be useful in cases where the hypothesis of normality of the joint distribution is not supported by the data. It is found that the bivariate approach can effectively correct elicitation effects while maintaining an adequate level of efficiency in the estimation of the parameters of interest. |
Keywords: | Double Bound, Elicitation effects, Bivariate models, Probit, Joe Copula |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:cns:cnscwp:200502&r=ecm |
By: | Jay Shanken; Guofu Zhou |
Abstract: | In this paper, we conduct a simulation analysis of the Fama and MacBeth (1973) two-pass procedure, as well as maximum likelihood (ML) and generalized method of moments estimators of cross-sectional expected return models. We also provide some new analytical results on computational issues, the relations between estimators, and asymptotic distributions under model misspecification. The GLS estimator is often much more precise than the usual OLS estimator, but it displays more bias as well. A "truncated" form of ML performs quite well overall in terms of bias and precision, but produces less reliable inferences than the OLS estimator. |
JEL: | G12 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12055&r=ecm |
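For reference, the two-pass procedure being simulated is compact: time-series regressions deliver betas, then period-by-period cross-sectional regressions of returns on those betas deliver risk-premium estimates, averaged over time. A minimal OLS sketch (the paper's GLS, ML and GMM variants and the errors-in-variables corrections are not shown):

    import numpy as np

    def fama_macbeth(R, F):
        # R: T x N excess returns; F: T x K factor realizations.
        T, N = R.shape
        X = np.column_stack([np.ones(T), F])
        betas = np.linalg.lstsq(X, R, rcond=None)[0][1:].T    # first pass: N x K betas
        Xb = np.column_stack([np.ones(N), betas])
        gam = np.linalg.lstsq(Xb, R.T, rcond=None)[0]         # second pass, each period
        prem = gam.mean(axis=1)                               # average premia (incl. intercept)
        se = gam.std(axis=1, ddof=1) / np.sqrt(T)             # Fama-MacBeth standard errors
        return prem, se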
By: | Katsumi Shimotsu (Department of Economics, Queen's University); Morten Ørregaard Nielsen (Department of Economics, Cornell University) |
Abstract: | We propose to extend the cointegration rank determination procedure of Robinson and Yajima (2002) to accommodate both (asymptotically) stationary and nonstationary fractionally integrated processes as the common stochastic trends and cointegrating errors by applying the exact local Whittle analysis of Shimotsu and Phillips (2005). The proposed method estimates the cointegrating rank by examining the rank of the spectral density matrix of the dth differenced process around the origin, where the fractional integration order, d, is estimated by the exact local Whittle estimator. Similar to other semiparametric methods, the approach advocated here only requires information about the behavior of the spectral density matrix around the origin, but it relies on a choice of (multiple) bandwidth(s) and threshold parameters. It does not require estimating the cointegrating vector(s) and is easier to implement than regression-based approaches, but it only provides a consistent estimate of the cointegration rank, and formal tests of the cointegration rank or levels of confidence are not available except for the special case of no cointegration. We apply the proposed methodology to the analysis of exchange rate dynamics among a system of seven exchange rates. Contrary to both fractional and integer-based parametric approaches, which indicate at most one cointegrating relation, our results suggest three or possibly four cointegrating relations in the data. |
JEL: | C14 C32 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1029&r=ecm |
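The rank idea above can be caricatured in a few lines: fractionally difference the system by an estimate of d, average the periodogram matrix over frequencies near the origin, and count eigenvalues close to zero. The sketch takes d as given and skips the paper's exact local Whittle step, normalizations and threshold rules:

    import numpy as np

    def frac_diff(x, d):
        # (1 - L)^d x via truncated binomial weights.
        n = len(x)
        w = np.ones(n)
        for j in range(1, n):
            w[j] = w[j - 1] * (j - 1 - d) / j
        return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(n)])

    def coint_rank_eigs(X, d, m):
        # Eigenvalues of the averaged periodogram matrix of the differenced data
        # over the first m Fourier frequencies; near-zero eigenvalues suggest
        # cointegration.
        n, p = X.shape
        U = np.column_stack([frac_diff(X[:, k], d) for k in range(p)])
        W = np.fft.fft(U - U.mean(axis=0), axis=0) / np.sqrt(2 * np.pi * n)
        G = sum(np.outer(W[j], W[j].conj()) for j in range(1, m + 1)) / m
        return np.linalg.eigvalsh((G + G.conj().T) / 2)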
By: | Segers,Johan (Tilburg University, Center for Economic Research) |
Abstract: | Classical extreme-value theory for stationary sequences of random variables can to a large extent be paraphrased as the study of exceedances over a high threshold. A special role in the description of the temporal dependence between such exceedances is played by the extremal index. Parts of this theory can be generalized not only to random variables on an arbitrary state space hitting certain failure sets, but even to a triangular array of rare events on an abstract probability space. In the case of M4 processes, or maxima of multivariate moving maxima, the arguments take a simple and direct form. |
Keywords: | block maximum, exceedance, extremal index, failure set, mixing condition, rare event, stationary sequence; MSC codes: 60G70, 62G32 |
JEL: | C13 C14 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:dgr:kubcen:20067&r=ecm |
By: | Patrick Bajari; Han Hong; John Krainer; Denis Nekipelov |
Abstract: | We propose a method for estimating static games of incomplete information. A static game is a generalization of a discrete choice model, such as a multinomial logit or probit, which allows the actions of a group of agents to be interdependent. Unlike most earlier work, the method we propose is semiparametric and does not require the covariates to lie in a discrete set. While the estimator we propose is quite flexible, we demonstrate that in most cases it can be easily implemented using standard statistical packages such as STATA. We also propose an algorithm for simulating the model which finds all equilibria to the game. As an application of our estimator, we study recommendations for high technology stocks between 1998 and 2003. We find that strategic motives, typically ignored in the empirical literature, appear to be an important consideration in the recommendations submitted by equity analysts. |
JEL: | L0 L5 C1 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12013&r=ecm |
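A two-step pseudo-likelihood version of such estimators is indeed short to code: a flexible first-stage logit for the opponent's choice probability, then a second-stage logit of one's own action on covariates and that estimated probability. This is a schematic variant, not the paper's exact semiparametric estimator; a two-player binary game with a scalar covariate is assumed:

    import numpy as np
    from scipy.optimize import minimize

    def logit_fit(X, y):
        # Maximum-likelihood logit via BFGS on the negative log-likelihood.
        nll = lambda b: np.sum(np.logaddexp(0, X @ b)) - y @ (X @ b)
        return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

    def static_game_two_step(a1, a2, x):
        # a1, a2: binary actions of players 1 and 2; x: scalar covariate array.
        basis = np.column_stack([np.ones_like(x), x, x ** 2])   # flexible first stage
        p2 = 1 / (1 + np.exp(-basis @ logit_fit(basis, a2)))    # opponent's choice prob.
        X2 = np.column_stack([np.ones_like(x), x, p2])
        return logit_fit(X2, a1)    # last coefficient: strategic interaction effect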
By: | Russell Davidson (McGill University); James MacKinnon (Queen's University) |
Abstract: | We perform an extensive series of Monte Carlo experiments to compare the performance of two variants of the "Jackknife Instrumental Variables Estimator," or JIVE, with that of the more familiar 2SLS and LIML estimators. We find no evidence to suggest that JIVE should ever be used. It is always more dispersed than 2SLS, often very much so, and it is almost always inferior to LIML in all respects. Interestingly, JIVE seems to perform particularly badly when the instruments are weak. |
Keywords: | two-stage least squares, LIML, JIVE, instrumental variables, weak instruments |
JEL: | C12 C15 C30 |
Date: | 2004–09 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1031&r=ecm |
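For readers unfamiliar with it, JIVE1 replaces each observation's first-stage fitted value with its leave-one-out counterpart, computable directly from the projection matrix. A minimal sketch of the estimator the experiments compare against 2SLS and LIML:

    import numpy as np

    def jive1(y, X, Z):
        # Leave-one-out fitted instruments: X~_i = ((PX)_i - h_i X_i) / (1 - h_i).
        P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
        h = np.diag(P)
        X_j = (P @ X - h[:, None] * X) / (1 - h)[:, None]
        return np.linalg.solve(X_j.T @ X, X_j.T @ y)   # IV with jackknifed instruments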
By: | Peter M Robinson; J Vidal Sanz |
Abstract: | In the estimation of parametric models for stationary spatial or spatio-temporal data on a d-dimensional lattice, for d >= 2, the achievement of asymptotic efficiency under Gaussianity, and asymptotic normality more generally, with standard convergence rate, faces two obstacles. One is the "edge effect", which worsens with increasing d. The other is the possible difficulty of computing a continuous-frequency form of Whittle estimate or a time domain Gaussian maximum likelihood estimate, due mainly to the Jacobian term. This is especially a problem in "multilateral" models, which are naturally expressed in terms of lagged values in both directions for one or more of the d dimensions. An extension of the discrete-frequency Whittle estimate from the time series literature deals conveniently with the computational problem, but when subjected to a standard device for avoiding the edge effect has disastrous asymptotic performance, along with finite sample numerical drawbacks, the objective function lacking a minimum-distance interpretation and losing any global convexity properties. We overcome these problems by first optimizing a standard, guaranteed non-negative, discrete-frequency, Whittle function, without edge-effect correction, providing an estimate with a slow convergence rate, then improving this by a sequence of computationally convenient approximate Newton iterations using a modified, almost-unbiased periodogram, the desired asymptotic properties being achieved after finitely many steps. The asymptotic regime allows increase in both directions of all d dimensions, with the central limit theorem established after re-ordering as a triangular array. However our work offers something new for "unilateral" models also. When the data are non-Gaussian, asymptotic variances of all parameter estimates may be affected, and we propose consistent, non-negative definite estimates of the asymptotic variance matrix. |
Keywords: | spatial data, multilateral modelling, Whittle estimation, edge effect, consistent variance estimation |
JEL: | C13 |
Date: | 2005–06 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:/2005/492&r=ecm |
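The first-step estimator described above, the discrete-frequency Whittle objective without edge-effect correction, is straightforward to write down for a lattice. The sketch below uses an illustrative separable AR(1) x AR(1) spectral density, not one of the paper's multilateral models, and omits the Newton iterations that deliver the desired asymptotics:

    import numpy as np
    from scipy.optimize import minimize

    def whittle_2d(X, spec):
        # Returns the objective: sum over Fourier frequencies of log f + I / f.
        n1, n2 = X.shape
        I = np.abs(np.fft.fft2(X - X.mean())) ** 2 / (n1 * n2 * (2 * np.pi) ** 2)
        L1, L2 = np.meshgrid(2 * np.pi * np.arange(n1) / n1,
                             2 * np.pi * np.arange(n2) / n2, indexing="ij")
        def obj(theta):
            f = spec(theta, L1, L2)
            return np.sum(np.log(f) + I / f)
        return obj

    def ar_spec(theta, L1, L2):
        # Separable AR(1) x AR(1): theta = (p1, p2, log variance).
        p1, p2, ls2 = theta
        return (np.exp(ls2) / (2 * np.pi) ** 2
                / np.abs(1 - p1 * np.exp(1j * L1)) ** 2
                / np.abs(1 - p2 * np.exp(1j * L2)) ** 2)

    X = np.random.default_rng(0).standard_normal((48, 48))   # expect p1, p2 near 0
    print(minimize(whittle_2d(X, ar_spec), [0.2, 0.2, 0.0], method="Nelder-Mead").x)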
By: | Peter M Robinson |
Abstract: | Smoothed nonparametric kernel spectral density estimates are considered for stationary data observed on a d-dimensional lattice. The implications for edge effect bias of the choice of kernel and bandwidth are considered. Under some circumstances the bias can be dominated by the edge effect. We show that this problem can be mitigated by tapering. Some extensions and related issues are discussed. MSC: 62M30, 62M15 |
Keywords: | nonparametric spectrum estimation, edge effect, tapering. |
JEL: | C22 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:\2006\498&r=ecm |
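Tapering is simple to add to a lattice spectral estimate: multiply the data by a data taper before transforming, then smooth the resulting periodogram. A sketch with a cosine-bell taper and a uniform smoothing kernel, both illustrative choices:

    import numpy as np
    from scipy.ndimage import uniform_filter

    def tapered_spectrum_2d(X, bandwidth=2):
        n1, n2 = X.shape
        h1 = np.sin(np.pi * (np.arange(n1) + 0.5) / n1)    # cosine-bell taper
        h2 = np.sin(np.pi * (np.arange(n2) + 0.5) / n2)
        H = np.outer(h1, h2)
        Y = (X - X.mean()) * H
        I = np.abs(np.fft.fft2(Y)) ** 2 / ((2 * np.pi) ** 2 * np.sum(H ** 2))
        # kernel smoothing across neighbouring frequencies (wrap-around in frequency)
        return uniform_filter(I, size=2 * bandwidth + 1, mode="wrap")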
By: | Helena Veiga |
Abstract: | In this paper we fit the main features of financial returns by means of a two-factor long memory stochastic volatility model (2FLMSV). Volatility, which is not observable, is explained by both a short-run and a long-run factor. The first factor follows a stationary AR(1) process, whereas the second one, whose purpose is to fit the persistence of volatility observed in the data, is a fractionally integrated process, as proposed by Breidt et al. (1998) and Harvey (1998). We show formally that this model (1) creates more kurtosis than the long memory stochastic volatility (LMSV) model of Breidt et al. (1998) and Harvey (1998), (2) deals with volatility persistence and (3) produces small first-order autocorrelations of squared observations. In the empirical analysis, we use the estimation procedure of Gallant and Tauchen (1996), the Efficient Method of Moments (EMM), and we provide evidence that our specification performs better than the LMSV model in capturing the empirical features of the data. |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws061303&r=ecm |
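A stylized simulator makes the two-factor structure concrete: log-volatility is the sum of an AR(1) factor and an ARFIMA(0,d,0) factor built from the binomial expansion of (1-L)^(-d). Parameter values are arbitrary, and the paper's exact specification and EMM estimation are not reproduced:

    import numpy as np

    def simulate_2flmsv(n, phi=0.9, s1=0.3, d=0.4, s2=0.2, seed=0):
        rng = np.random.default_rng(seed)
        h1 = np.zeros(n)                       # short-memory AR(1) factor
        e1 = s1 * rng.standard_normal(n)
        for t in range(1, n):
            h1[t] = phi * h1[t - 1] + e1[t]
        psi = np.ones(n)                       # MA(inf) weights of (1-L)^(-d)
        for j in range(1, n):
            psi[j] = psi[j - 1] * (j - 1 + d) / j
        h2 = np.convolve(s2 * rng.standard_normal(n), psi)[:n]   # long-memory factor
        return np.exp((h1 + h2) / 2) * rng.standard_normal(n)    # returns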
By: | Jesus Fernandez-Villaverde; Juan F. Rubio-Ramirez |
Abstract: | This paper shows how particle filtering allows us to undertake likelihood-based inference in dynamic macroeconomic models. The models can be nonlinear and/or non-normal. We describe how to use the output from the particle filter to estimate the structural parameters of the model, those characterizing preferences and technology, and to compare different economies. Both tasks can be implemented from either a classical or a Bayesian perspective. We illustrate the technique by estimating a business cycle model with investment-specific technological change, preference shocks, and stochastic volatility. |
JEL: | C11 C15 E10 E32 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0321&r=ecm |
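The likelihood evaluation described above reduces, in its simplest "bootstrap filter" form, to propagate-weight-resample. A sketch for a basic stochastic volatility model x_t = phi x_{t-1} + sigma eta_t, y_t = beta exp(x_t/2) eps_t (far simpler than the business cycle model estimated in the paper):

    import numpy as np

    def pf_loglik(y, phi, sigma, beta, N=1000, seed=0):
        rng = np.random.default_rng(seed)
        x = sigma / np.sqrt(1 - phi ** 2) * rng.standard_normal(N)  # stationary draw
        ll = 0.0
        for yt in y:
            x = phi * x + sigma * rng.standard_normal(N)            # propagate particles
            s = beta * np.exp(x / 2)
            w = np.exp(-0.5 * (yt / s) ** 2) / (np.sqrt(2 * np.pi) * s)
            ll += np.log(w.mean())                                  # likelihood increment
            x = x[rng.choice(N, size=N, p=w / w.sum())]             # resample
        return ll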
By: | Pedro N. Rodríguez,; Simón Sosvilla-Rivero |
Abstract: | Previous empirical studies have shown that predictive regressions in which model uncertainty is assessed and propagated generate desirable properties when predicting out-of-sample. However, it is still not clear (a) what the important conditioning variables for predicting stock returns out-of-sample are, and (b) how composite weighted ensembles outperform model selection criteria. By comparing the unconditional accuracy of predictive regressions to their accuracy when specific explanatory variables are masked, we find that the cross-sectional premium and the term spread are robust predictors of future stock returns. Additionally, using the bias-variance decomposition for the 0/1 loss function, the analysis shows that lower bias, and not lower variance, is the fundamental difference between composite weighted ensembles and model selection criteria. This difference, nevertheless, does not necessarily imply that model averaging techniques improve our ability to describe monthly up-and-down movements in stock markets. |
URL: | http://d.repec.org/n?u=RePEc:fda:fdaddt:2006-03&r=ecm |
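The 0/1-loss bias-variance decomposition used above (in the style of Domingos) is mechanical once one has predictions from many training replicates: the "main" prediction is the majority vote, bias marks points where the majority vote is wrong, and variance measures disagreement with the majority vote. A sketch for binary labels:

    import numpy as np

    def zero_one_bias_variance(preds, y_true):
        # preds: R x n array of 0/1 predictions from R training replicates.
        main = (preds.mean(axis=0) >= 0.5).astype(int)   # majority vote per point
        bias = (main != y_true).mean()                   # systematically wrong points
        var = (preds != main).mean()                     # disagreement with majority vote
        loss = (preds != y_true).mean()                  # average 0/1 loss
        return loss, bias, var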
By: | Michael Greenacre |
Abstract: | Although correspondence analysis (CA) is now widely available in statistical software packages and applied in a variety of contexts, notably the social and environmental sciences, there are still some misconceptions about this method as well as unresolved issues which remain controversial to this day. In this paper we hope to settle these matters, namely (i) the way CA measures variance in a two-way table and how to compare variances between tables of different sizes, (ii) the influence, or rather lack of influence, of outliers in the usual CA maps, (iii) the scaling issue and the biplot interpretation of maps, (iv) whether or not to rotate a solution, and (v) statistical significance of results. |
Keywords: | Biplot, bootstrapping, canonical correlation, chi-square distance, confidence ellipse, contingency table, convex hull, correspondence analysis, inertia, randomization test, rotation, singular value |
JEL: | C19 C88 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:940&r=ecm |
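Point (i) above, how CA measures variance, comes down to the total inertia: the sum of squared singular values of the standardized residual matrix, equal to the table's chi-square statistic divided by its grand total. A basic sketch of CA coordinates and inertias:

    import numpy as np

    def correspondence_analysis(N):
        # N: two-way contingency table of counts.
        P = N / N.sum()
        r, c = P.sum(axis=1), P.sum(axis=0)
        S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
        U, sv, Vt = np.linalg.svd(S, full_matrices=False)
        inertia = sv ** 2                                   # principal inertias; sum = total
        rows = U * sv / np.sqrt(r)[:, None]                 # principal row coordinates
        cols = Vt.T * sv / np.sqrt(c)[:, None]              # principal column coordinates
        return inertia, rows, cols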
By: | P. Jeganathan (Indian Statistical Institute) |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1558&r=ecm |
By: | Svend Hylleberg (Department of Economics, University of Aarhus, Denmark) |
Abstract: | The main objective behind the production of seasonally adjusted time series is to give easy access to a common time series data set purged of what is considered seasonal noise. Although the application of officially seasonally adjusted data may have the advantage of being cost saving, it may also imply a less efficient use of the information available, and one may end up applying a distorted set of data. Hence, in many cases, there may be a need for treating seasonality as an integrated part of an econometric analysis. In this article we present several different ways to integrate the seasonal adjustment into the econometric analysis, in addition to applying data adjusted by the two most popular adjustment methods. |
Keywords: | Seasonality |
JEL: | C10 |
Date: | 2006–02–22 |
URL: | http://d.repec.org/n?u=RePEc:aah:aarhec:2006-04&r=ecm |
By: | Konstantin A. Kholodilin |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp554&r=ecm |
By: | Andrew Leigh; Justin Wolfers |
Abstract: | We review the efficacy of three approaches to forecasting elections: econometric models that project outcomes on the basis of the state of the economy; public opinion polls; and election betting (prediction markets). We assess the efficacy of each in light of the 2004 Australian election. This election is particularly interesting both because of innovations in each forecasting technology, and also because the increased majority achieved by the Coalition surprised most pundits. While the evidence for economic voting has historically been weak for Australia, the 2004 election suggests an increasingly important role for these models. The performance of polls was quite uneven, and predictions both across pollsters, and through time, vary too much to be particularly useful. Betting markets provide an interesting contrast, and a slew of data from various betting agencies suggests a more reasonable degree of volatility, and useful forecasting performance both throughout the election cycle and across individual electorates. |
JEL: | D72 D84 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12053&r=ecm |
By: | Nancy E. Reichman; Hope Corman; Kelly Noonan; Dhaval Dave |
Abstract: | We use survey data, augmented with data collected from respondents' medical records, to explore selection into prenatal inputs among a group of urban, mostly unmarried mothers. We explore the extent to which several theoretically important but typically unobserved variables (representing wantedness, taste for risky behavior, and maternal health endowment) are likely to bias the estimated effects of prenatal inputs (illicit drug use, cigarette smoking, and prenatal care) on infant health outcomes (birth weight, low birth weight, and abnormal conditions). We also explore the consequences of including other non-standard covariates and of using self-reported inputs versus measures of inputs that incorporate information from medical records. We find that although the typically unobserved variables are strongly associated with both inputs and outcomes and have high explanatory power, excluding them from infant health production functions does not substantially bias the estimated effects of prenatal inputs. The bias from using self-reported measures of the inputs is much more substantial. The results suggest promising new directions for research on the production of infant health. |
JEL: | I1 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12004&r=ecm |