nep-ecm New Economics Papers
on Econometrics
Issue of 2007‒08‒08
23 papers chosen by
Sune Karlsson
Örebro University

  1. Robust M-estimation of multivariate conditionally heteroscedastic time series models with elliptical innovations By Boudt, Kris; Croux, Christophe
  2. The Limit of Finite-Sample Size and a Problem with Subsampling By Donald W.K. Andrews; Patrik Guggenberger
  3. "Block Sampler and Posterior Mode Estimation for A Nonlinear and Non-Gaussian State-space Model with Correlated Errors" By Yasuhiro Omori; Toshiaki Watanabe
  4. Forecasting VARMA processes using VAR models and subspace-based state space models By Izquierdo, Segismundo S.; Hernández, Cesáreo; del Hoyo, Juan
  5. Validity of Subsampling and "Plug-in Asymptotic" Inference for Parameters Defined by Moment Inequalities By Donald W.K. Andrews; Patrik Guggenberger
  6. Instrumental Variable Estimation of Treatment Effects for Duration Outcomes By Govert E. Bijwaard
  7. Irregularly Spaced Intraday Value at Risk (ISIVaR) Models: Forecasting and Predictive Abilities By Christophe Hurlin; Gilbert Colletaz; Sessi Tokpavi
  8. Optimal combination forecasts for hierarchical time series By Rob J. Hyndman; Roman A. Ahmed; George Athanasopoulos
  9. The Impact of Effect Size Heterogeneity on Meta-Analysis: A Monte Carlo Experiment By Mark J. Koetse; Raymond J.G.M. Florax; Henri L.F. de Groot
  10. "The Conditional Limited Information Maximum Likelihood Approach to Dynamic Panel Structural Equations" By Naoto Kunitomo; Kentaro Akashi
  11. An Alternative System GMM Estimation in Dynamic Panel Models By Housung Jung; Hyeog Ug Kwon
  12. Two canonical VARMA forms: Scalar component models vis-à-vis the Echelon form By George Athanasopoulos; D.S. Poskitt; Farshid Vahid
  13. Dynamic Discrete Choice Structural Models: A Survey By Victor Aguirregabiria; Pedro Mira
  14. "On Likelihood Ratio Tests of Structural Coefficients: Anderson-Rubin (1949) revisited" By Naoto Kunitomo; T. W. Anderson
  15. Estimating High-Frequency Based (Co-) Variances: A Unified Approach By Ingmar Nolte; Valeri Voev
  16. The Identification and Economic Content of Ordered Choice Models with Stochastic Thresholds By Flavio Cunha; James J. Heckman; Salvador Navarro
  17. Generalized canonical regression By Arturo Estrella
  18. Modelling Volatilities and Conditional Correlations in Futures Markets with a Multivariate t Distribution By Bahram Pesaran; M. Hashem Pesaran
  19. Mixed Hitting-Time Models By Jaap H. Abbring
  20. Spatial Stochastic Frontier Models: accounting for unobserved local determinants of inefficiency By Alexandra M. Schmidt; Ajax R. B. Moreira; Thais C. O. Fonseca; Steven M. Helfand
  21. Determining Growth Determinants: Default Priors and Predictive Performance in Bayesian Model Averaging By Theo Eicher; Chris Papageorgiou; Adrian E Raftery
  22. Extracting business cycle fluctuations: what do time series filters really do? By Arturo Estrella
  23. Estimation with the Nested Logit Model: Specifications and Software Particularities By Nadja Silberhorn; Yasemin Boztug; Lutz Hildebrandt

  1. By: Boudt, Kris; Croux, Christophe
    Abstract: This paper proposes new methods for the econometric analysis of outlier contaminated multivariate conditionally heteroscedastic time series. Robust alternatives to the Gaussian quasi-maximum likelihood estimator are presented. Under elliptical symmetry of the innovation vector, consistency results for M-estimation of the general conditional heteroscedasticity model are obtained. We also propose a robust estimator for the cross-correlation matrix and a diagnostic check for correct specification of the innovation density function. In a Monte Carlo experiment, the effect of outliers on different types of M-estimators is studied. We conclude with a financial application in which these new tools are used to analyse and estimate the symmetric BEKK model for the 1980-2006 series of weekly returns on the Nasdaq and NYSE composite indices. For this dataset, robust estimators are needed to cope with the outlying returns corresponding to the stock market crash in 1987 and the burst of the dotcom bubble in 2000.
    Keywords: conditional heteroscedasticity; M-estimators; multivariate time series; outliers; quasi-maximum likelihood; robust methods
    JEL: C51 C13 C53 C32
    Date: 2007–07–27
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4271&r=ecm
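A schematic version of the estimation problem may help fix ideas (our notation, not the paper's). For a model y_t = H_t(theta)^{1/2} z_t, the Gaussian quasi-maximum likelihood estimator minimizes the average of log det H_t(theta) + y_t' H_t(theta)^{-1} y_t; an M-estimator caps the influence of outliers by replacing the quadratic term with a less rapidly growing loss rho:
\[
\hat\theta_{M} = \arg\min_{\theta} \frac{1}{T}\sum_{t=1}^{T}\Big[\log\det H_t(\theta) + \rho\big(y_t' H_t(\theta)^{-1} y_t\big)\Big],
\]
where rho(z) = z recovers the Gaussian QML case, and a slowly growing rho (for example, the loss implied by a multivariate Student-t density) downweights observations with large Mahalanobis distances, such as the 1987 crash and 2000 dotcom returns in the application.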
  2. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers inference based on a test statistic that has a limit distribution that is discontinuous in a nuisance parameter or the parameter of interest. The paper shows that subsample, b_n < n bootstrap, and standard fixed critical value tests based on such a test statistic often have asymptotic size -- defined as the limit of the finite-sample size -- that is greater than the nominal level of the tests. We determine precisely the asymptotic size of such tests under a general set of high-level conditions that are relatively easy to verify. The high-level conditions are verified in several examples. Analogous results are established for confidence intervals. The results apply to tests and confidence intervals (i) when a parameter may be near a boundary, (ii) for parameters defined by moment inequalities, (iii) based on super-efficient or shrinkage estimators, (iv) based on post-model selection estimators, (v) in scalar and vector autoregressive models with roots that may be close to unity, (vi) in models with lack of identification at some point(s) in the parameter space, such as models with weak instruments and threshold autoregressive models, (vii) in predictive regression models with nearly-integrated regressors, (viii) for non-differentiable functions of parameters, and (ix) for differentiable functions of parameters that have zero first-order derivative. Examples (i)-(iii) are treated in this paper. Examples (i) and (iv)-(vi) are treated in sequels to this paper, Andrews and Guggenberger (2005a, b). In models with unidentified parameters that are bounded by moment inequalities, i.e., example (ii), certain subsample confidence regions are shown to have asymptotic size equal to their nominal level. In all other examples listed above, some types of subsample procedures do not have asymptotic size equal to their nominal level.
    Keywords: Asymptotic size, b < n bootstrap, Finite-sample size, Over-rejection, Size correction, Subsample confidence interval, Subsample test
    JEL: C12 C15
    Date: 2007–03
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1605r&r=ecm
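The paper's central object can be stated compactly. With nuisance parameter gamma in Gamma and nominal level alpha, the asymptotic size of a test with statistic T_n and critical value c_n(1-alpha) is (schematically)
\[
\mathrm{AsySz} \;=\; \limsup_{n\to\infty}\; \sup_{\gamma\in\Gamma} P_{\gamma}\big(T_n > c_n(1-\alpha)\big).
\]
Because the supremum over gamma is taken before the limit, AsySz can exceed alpha even though the pointwise rejection probability converges to alpha for every fixed gamma; this reversal of limits is exactly where subsampling and the b_n < n bootstrap can fail.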
  3. By: Yasuhiro Omori (Faculty of Economics, University of Tokyo); Toshiaki Watanabe (Institute of Economic Research, Hitotsubashi University)
    Abstract: This article introduces a new efficient simulation smoother and disturbance smoother for general state-space models in which the error terms of the measurement and state equations are correlated. The state vector is divided into several blocks, each consisting of many state variables, and for each block the corresponding disturbances are sampled simultaneously from their conditional posterior distribution. The algorithm is based on a multivariate normal approximation of the conditional posterior density and exploits a conventional simulation smoother for a linear Gaussian state-space model. The performance of our method is illustrated using two examples: (1) stochastic volatility models with leverage effects and (2) stochastic volatility models with leverage effects and state-dependent variances. The popular single-move sampler, which samples one state variable at a time, is also run for comparison in the first example. It is shown that our proposed sampler considerably improves the mixing of the Markov chain.
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2007cf508&r=ecm
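The first example mentioned, the stochastic volatility model with leverage, is a standard concrete instance of a state-space model with correlated errors:
\[
y_t = \varepsilon_t \exp(\alpha_t/2), \qquad \alpha_{t+1} = \mu + \phi(\alpha_t-\mu) + \eta_t, \qquad \mathrm{Corr}(\varepsilon_t,\eta_t) = \rho,
\]
where a negative rho captures the leverage effect: a negative return shock raises next period's log-volatility. Single-move samplers mix slowly here because the alpha_t are highly autocorrelated, which is the problem block sampling is designed to overcome.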
  4. By: Izquierdo, Segismundo S.; Hernández, Cesáreo; del Hoyo, Juan
    Abstract: VAR modelling is a widely used technique in econometrics for linear processes. It offers some desirable features, such as relatively simple procedures for model specification (order selection) and the possibility of obtaining quick, non-iterative maximum likelihood estimates of the system parameters. However, if the process under study follows a finite-order VARMA structure, it cannot be equivalently represented by any finite-order VAR model. On the other hand, a finite-order state space model can represent a finite-order VARMA process exactly, and, for state-space modelling, subspace algorithms allow for quick and non-iterative estimates of the system parameters, as well as simple specification procedures. Given these facts, we check in this paper whether subspace-based state space models provide better forecasts than VAR models when working with VARMA data generating processes. In a simulation study we generate samples from different VARMA data generating processes, obtain VAR-based and state-space-based models for each generating process, and compare the predictive power of the obtained models. Different specification and estimation algorithms are considered; in particular, within the subspace family, the CCA (Canonical Correlation Analysis) algorithm is the selected option for obtaining state-space models. Our results indicate that when the MA parameter of an ARMA process is close to 1, the CCA state space models are likely to provide better forecasts than the AR models. We also conduct a practical comparison (for two cointegrated economic time series) of the predictive power of Johansen restricted-VAR (VEC) models with that of state space models obtained by the CCA subspace algorithm, including a density forecasting analysis.
    Keywords: subspace algorithms; VAR; forecasting; cointegration; Johansen; CCA
    JEL: C53 C5 C51
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:4235&r=ecm
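The univariate case shows why no finite-order VAR can reproduce a VARMA process exactly. An ARMA(1,1) process has the AR(infinity) representation
\[
y_t = \phi y_{t-1} + \varepsilon_t + \theta\varepsilon_{t-1}
\quad\Longleftrightarrow\quad
y_t = \sum_{j=1}^{\infty} (\phi+\theta)(-\theta)^{j-1}\, y_{t-j} + \varepsilon_t,
\]
so the autoregressive coefficients decay at rate |theta|^j. As theta approaches 1 the decay becomes arbitrarily slow and any truncated VAR discards non-negligible terms, which is precisely the region where the simulations favour the CCA state-space models.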
  5. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCLA)
    Abstract: This paper considers inference for parameters defined by moment inequalities and equalities. The parameters need not be identified. For a specified class of test statistics, this paper establishes the uniform asymptotic validity of subsampling, m out of n bootstrap, and "plug-in asymptotic" tests and confidence intervals for such parameters. Establishing uniform asymptotic validity is crucial in moment inequality problems because the test statistics of interest have discontinuities in their pointwise asymptotic distributions. The size results are quite general because they hold without specifying the particular form of the moment conditions -- only finite 2+delta moments are required. The results allow for i.i.d. and dependent observations and for preliminary consistent estimation of identified parameters.
    Keywords: Asymptotic size, Confidence set, Exact size, m out of n bootstrap, Subsampling, Moment inequalities
    JEL: C12 C15
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1620&r=ecm
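As one concrete member of the class of statistics covered (schematic notation, not the paper's exact definition): for moment inequalities E m_j(W_i, theta) >= 0, j = 1, ..., p, a modified method-of-moments statistic is
\[
T_n(\theta) = \sum_{j=1}^{p}\left[\min\!\left(\frac{\sqrt{n}\,\bar m_{n,j}(\theta)}{\hat\sigma_{n,j}(\theta)},\,0\right)\right]^{2},
\]
with analogous squared terms added for any moment equalities. Only violated (negative) sample moments contribute, so the limit distribution depends discontinuously on which population inequalities bind; this is why uniform, rather than pointwise, asymptotic validity is the relevant criterion.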
  6. By: Govert E. Bijwaard (Erasmus University Rotterdam and IZA)
    Abstract: In this article we propose and implement an instrumental variable estimation procedure to obtain treatment effects on duration outcomes. The method can handle the typical complications that arise with duration data: time-varying treatment and censoring. The treatment effect we define is in terms of shifting the quantiles of the outcome distribution, based on the Generalized Accelerated Failure Time (GAFT) model. The GAFT model encompasses two competing approaches to duration data: the (Mixed) Proportional Hazard (MPH) model and the Accelerated Failure Time (AFT) model. We discuss the large sample properties of the proposed Instrumental Variable Linear Rank (IVLR) estimator, and show how we can, with one additional step, improve upon its efficiency. We discuss the empirical implementation of the estimator and apply it to the Illinois re-employment bonus experiment.
    Keywords: treatment effect, duration model, censoring, instrumental variable
    JEL: C21 C41 J64
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2896&r=ecm
  7. By: Christophe Hurlin (LEO - Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans); Gilbert Colletaz (LEO - Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans); Sessi Tokpavi (LEO - Laboratoire d'Économie d'Orléans, CNRS UMR 6221, Université d'Orléans)
    Abstract: The objective of this paper is to propose a market risk measure defined in price event time and a suitable backtesting procedure for irregularly spaced data. Firstly, we combine Autoregressive Conditional Duration models for price movements with a non-parametric quantile estimation to derive a semi-parametric Irregularly Spaced Intraday Value at Risk (ISIVaR) model. This ISIVaR measure provides two pieces of information: the expected duration for the next price event and the related VaR. Secondly, we use a GMM approach to develop a backtest and investigate its finite sample properties through Monte Carlo simulations. Finally, we propose an application to two NYSE stocks.
    Keywords: Value at Risk; High-frequency data; ACD models; Irregularly spaced market risk models; Backtesting
    Date: 2007–07–13
    URL: http://d.repec.org/n?u=RePEc:hal:papers:halshs-00162440_v1&r=ecm
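For reference, the ACD building block (Engle and Russell, 1998) models the durations x_i = t_i - t_{i-1} between price events as
\[
x_i = \psi_i\,\varepsilon_i, \qquad \psi_i \equiv E[x_i \mid \mathcal{F}_{i-1}] = \omega + \alpha x_{i-1} + \beta\psi_{i-1},
\]
with epsilon_i i.i.d., positive, and of unit mean. The ISIVaR measure couples the forecast duration psi_i with a quantile of the return per price event, so the risk horizon itself is stochastic rather than fixed in calendar time.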
  8. By: Rob J. Hyndman; Roman A. Ahmed; George Athanasopoulos
    Abstract: In many applications, there are multiple time series that are hierarchically organized and can be aggregated at several different levels in groups based on products, geography or some other features. We call these "hierarchical time series". They are commonly forecast using either a "bottom-up" or a "top-down" method. In this paper we propose a new approach to hierarchical forecasting which provides optimal forecasts that are better than forecasts produced by either a top-down or a bottom-up approach. Our method is based on independently forecasting all series at all levels of the hierarchy and then using a regression model to optimally combine and reconcile these forecasts. The resulting revised forecasts add up appropriately across the hierarchy, are unbiased and have minimum variance amongst all combination forecasts under some simple assumptions. We show in a simulation study that our method performs well compared to the top-down approach and the bottom-up method. It also allows us to construct prediction intervals for the resultant forecasts. Finally, we apply the method to forecasting Australian tourism demand where the data are disaggregated by purpose of visit and geographical region.
    Keywords: Bottom-up forecasting, combining forecasts, GLS regression, hierarchical forecasting, Moore-Penrose inverse, reconciling forecasts, top-down forecasting.
    JEL: C53 C32 C23
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-9&r=ecm
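A minimal sketch of the combination step, in the identity-covariance special case of the GLS regression (the two-series hierarchy and the forecast numbers are invented for illustration):

```python
import numpy as np

# Toy hierarchy: Total = A + B. The summing matrix S maps the
# bottom-level series (A, B) to all series in the hierarchy.
S = np.array([[1, 1],    # Total
              [1, 0],    # A
              [0, 1]])   # B

# Independent base forecasts for every node; note they do not add up
# (100 != 55 + 40), which is what reconciliation repairs.
y_hat = np.array([100.0, 55.0, 40.0])

# OLS combination: project the base forecasts onto the column space
# of S (identity-covariance special case of the GLS estimator).
beta = np.linalg.lstsq(S, y_hat, rcond=None)[0]
y_tilde = S @ beta
print(y_tilde)   # [98.33, 56.67, 41.67]: Total = A + B holds exactly
```

By construction the revised forecasts aggregate consistently, and base forecasts that already added up would pass through unchanged.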
  9. By: Mark J. Koetse (Vrije Universiteit Amsterdam); Raymond J.G.M. Florax (Purdue University, and Vrije Universiteit Amsterdam); Henri L.F. de Groot (Vrije Universiteit Amsterdam)
    Abstract: In this paper we use Monte Carlo simulation to investigate the impact of effect size heterogeneity on the results of a meta-analysis. Specifically, we address the small sample behaviour of the OLS, the fixed effects regression and the mixed effects meta-estimators under three alternative scenarios of effect size heterogeneity. We distinguish heterogeneity in effect size variance, heterogeneity due to a varying true underlying effect across primary studies, and heterogeneity due to a non-systematic impact of omitted variable bias in primary studies. Our results show that the mixed effects estimator is to be preferred to the other two estimators in the first two situations. However, in the presence of random effect size variation due to a non-systematic impact of omitted variable bias, using the mixed effects estimator may be suboptimal. We also address the impact of sample size and show that meta-analysis sample size is far more effective in reducing meta-estimator variance and increasing the power of hypothesis testing than primary study sample size.
    Keywords: Effect size heterogeneity; meta-analysis; Monte Carlo simulation; fixed effects regression estimator; mixed effects estimator
    JEL: C12 C15 C40
    Date: 2007–07–16
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070052&r=ecm
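A stripped-down version of this kind of experiment, for the scenario in which the true effect varies across primary studies (the study count, variance ranges, and the DerSimonian-Laird estimator of the between-study variance are illustrative choices, not the paper's exact design):

```python
import numpy as np

rng = np.random.default_rng(0)

def one_replication(k=20, theta=0.5, tau2=0.04):
    """Simulate k primary-study effects with heterogeneity tau2."""
    se2 = rng.uniform(0.01, 0.1, size=k)        # within-study variances
    effects = rng.normal(theta, np.sqrt(tau2 + se2))
    # Fixed-effects (inverse-variance) estimator: ignores tau2.
    w = 1.0 / se2
    fe = np.sum(w * effects) / np.sum(w)
    # Random/mixed-effects estimator with DerSimonian-Laird tau2.
    q = np.sum(w * (effects - fe) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2_hat = max(0.0, (q - (k - 1)) / c)
    w_re = 1.0 / (se2 + tau2_hat)
    re = np.sum(w_re * effects) / np.sum(w_re)
    return fe, re

reps = np.array([one_replication() for _ in range(2000)])
print("bias (FE, RE):", reps.mean(axis=0) - 0.5)
print("sd   (FE, RE):", reps.std(axis=0))
```

Varying the number of studies k relative to the within-study sample sizes (which drive se2) reproduces the paper's broader point that adding studies does more for meta-estimator precision than adding observations within studies.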
  10. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); Kentaro Akashi (Graduate School of Economics, University of Tokyo)
    Abstract: We propose the conditional limited information maximum likelihood (CLIML) approach for estimating dynamic panel structural equation models. When there are dynamic effects and endogenous variables with individual effects at the same time, the CLIML estimation method applied to the doubly-filtered data not only gives consistent estimates but also attains asymptotic efficiency when the number of orthogonality conditions is large. Our formulation includes Alvarez and Arellano (2003), Blundell and Bond (2000) and other linear dynamic panel models as special cases.
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2007cf503&r=ecm
  11. By: Housung Jung; Hyeog Ug Kwon
    Abstract: The system GMM estimator in dynamic panel data models, which combines moment conditions for the differenced equation with those for the model in levels, is known to be more efficient than the first-difference GMM estimator. However, an initial optimal weight matrix is not known for the system estimation procedure. We therefore suggest the use of a 'suboptimal weight matrix', which may reduce the finite sample bias whilst increasing efficiency. Using the Kantorovich inequality, we find that the potential efficiency gain becomes large when the variance of the individual effects increases relative to the variance of the idiosyncratic errors. Our Monte Carlo experiments show that the small sample properties of the suboptimal system estimator are much more reliable than those of any other conventional system GMM estimator in terms of bias and efficiency.
    Keywords: Dynamic panel data, sub-optimal weighting matrix, KI upper bound
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:hst:hstdps:d07-217&r=ecm
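The Kantorovich inequality referred to states that, for a symmetric positive definite matrix A with largest and smallest eigenvalues lambda_max and lambda_min, and any nonzero vector x,
\[
\frac{(x'Ax)\,(x'A^{-1}x)}{(x'x)^2} \;\le\; \frac{(\lambda_{\max}+\lambda_{\min})^2}{4\,\lambda_{\max}\lambda_{\min}},
\]
so the worst-case loss from misweighting a quadratic form is governed by the eigenvalue spread of the weight matrix. In this setting the spread, and hence the potential gain from better weighting, grows with the ratio of the individual-effect variance to the idiosyncratic variance, consistent with the finding reported above.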
  12. By: George Athanasopoulos; D.S. Poskitt; Farshid Vahid
    Abstract: In this paper we study two methodologies that identify and specify canonical form VARMA models: (i) an extension of the scalar component methodology, which specifies canonical VARMA models by identifying scalar components through canonical correlations analysis, and (ii) the Echelon form methodology, which specifies canonical VARMA models through the estimation of Kronecker indices. We compare the two forms and the methodologies on three levels. Firstly, we present a theoretical comparison. Secondly, we present a Monte Carlo simulation study that compares the performance of the two methodologies in identifying some pre-specified data generating processes. Lastly, we compare the out-of-sample forecast performance of the two forms when models are fitted to real macroeconomic data.
    Keywords: Echelon form, Identification, Multivariate time series, Scalar component, VARMA model.
    JEL: C32 C51
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2007-10&r=ecm
  13. By: Victor Aguirregabiria; Pedro Mira
    Abstract: This paper reviews methods for the estimation of dynamic discrete choice structural models and discusses related econometric issues. We consider single agent models, competitive equilibrium models and dynamic games. The methods are illustrated with descriptions of empirical studies which have applied these techniques to problems in different areas of economics. Programming code for the estimation methods is available on a companion web page.
    Keywords: Dynamic structural models; Discrete choice; Estimation methods.
    JEL: C14 C25 C61
    Date: 2007–07–27
    URL: http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-297&r=ecm
  14. By: Naoto Kunitomo (Faculty of Economics, University of Tokyo); T. W. Anderson (Department of Statistics and Department of Economics, Stanford University)
    Abstract: We develop the likelihood ratio criterion (LRC) for testing the coefficients of a structural equation in a system of simultaneous equations in econometrics. We relate the likelihood ratio criterion to the AR statistic proposed by Anderson and Rubin (1949, 1950), which has been widely known and used in econometrics over the past several decades. The method originally developed by Anderson and Rubin (1949, 1950) can be modified to handle the situation in which there are many (or, in some sense, weak) instruments, which is of particular relevance in recent econometrics. The method of LRC can be extended to the linear functional relationships (or errors-in-variables) model, reduced rank regression, and cointegration models.
    Date: 2007–06
    URL: http://d.repec.org/n?u=RePEc:tky:fseres:2007cf499&r=ecm
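For reference, in the simplest case (structural equation y = Y beta + u with an n x K instrument matrix Z and no included exogenous regressors), the Anderson-Rubin statistic for H_0: beta = beta_0 is
\[
AR(\beta_0) = \frac{(y - Y\beta_0)' P_Z (y - Y\beta_0)/K}{(y - Y\beta_0)' M_Z (y - Y\beta_0)/(n-K)},
\qquad P_Z = Z(Z'Z)^{-1}Z', \quad M_Z = I_n - P_Z,
\]
which is exactly F(K, n-K) under normality regardless of instrument strength. This insensitivity to weak instruments is what makes the AR approach, and likelihood ratio criteria related to it, attractive in the many- or weak-instrument settings the paper considers.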
  15. By: Ingmar Nolte (University of Konstanz); Valeri Voev (University of Konstanz)
    Abstract: We propose a unified framework for estimating integrated variances and covariances based on simple OLS regressions, allowing for a general market microstructure noise specification. We show that our estimators can outperform, in terms of the root mean squared error criterion, the most recent and commonly applied estimators, such as the realized kernels of Barndorff-Nielsen, Hansen, Lunde & Shephard (2006), the two-scales realized variance of Zhang, Mykland & Aït-Sahalia (2005), the Hayashi & Yoshida (2005) covariance estimator, and the realized variance and covariance with the optimal sampling frequency chosen following Bandi & Russell (2005a) and Bandi & Russell (2005b). The power of our methodology stems from the fact that instead of trying to correct the realized quantities for the noise, we identify both the true underlying integrated moments and the moments of the noise, which are also estimated within our framework. Apart from being simple to implement, an important property of our estimators is that they are quite robust to misspecifications of the noise process.
    Keywords: High frequency data, Realized volatility and covariance, Market microstructure
    JEL: G10 F31 C32
    Date: 2007–07–26
    URL: http://d.repec.org/n?u=RePEc:knz:cofedp:0707&r=ecm
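To convey the flavour of regression-based separation of signal and noise (a textbook identity, not the authors' estimator): with i.i.d. noise of variance omega^2, the realized variance built from n intraday returns has expectation IV + 2 n omega^2, so regressing RV on the number of returns across sampling frequencies recovers both moments. The simulated prices and parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# One day of efficient log-prices plus i.i.d. microstructure noise.
n_obs = 23400                      # one observation per second
iv = 1e-4                          # integrated variance (true value)
omega2 = 1e-8                      # noise variance (true value)
p = np.cumsum(rng.normal(0.0, np.sqrt(iv / n_obs), n_obs))
y = p + rng.normal(0.0, np.sqrt(omega2), n_obs)   # observed prices

# Realized variance at progressively sparser sampling frequencies.
ks = np.arange(1, 61)              # keep every k-th observation
n_ret, rv = [], []
for k in ks:
    r = np.diff(y[::k])
    n_ret.append(len(r))
    rv.append(np.sum(r ** 2))

# E[RV] = IV + 2 * n * omega^2: the intercept estimates the integrated
# variance, the slope twice the noise variance.
X = np.column_stack([np.ones(len(ks)), n_ret])
coef, *_ = np.linalg.lstsq(X, np.array(rv), rcond=None)
print("IV estimate:", coef[0], " noise variance estimate:", coef[1] / 2)
```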
  16. By: Flavio Cunha (University of Chicago); James J. Heckman (University of Chicago, American Bar Foundation, University College Dublin and IZA); Salvador Navarro (University of Wisconsin-Madison)
    Abstract: This paper extends the widely used ordered choice model by introducing stochastic thresholds and interval-specific outcomes. The model can be interpreted as a generalization of the GAFT (MPH) framework for discrete duration data that jointly models durations and outcomes associated with different stopping times. We establish conditions for nonparametric identification. We interpret the ordered choice model as a special case of a general discrete choice model and as a special case of a dynamic discrete choice model.
    Keywords: discrete choice, ordered choice, dynamics
    JEL: C31
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2940&r=ecm
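Schematically (our notation), the model assigns the ordered outcome Y according to a latent index I(X) and stochastic, covariate-dependent thresholds:
\[
Y = j \quad\Longleftrightarrow\quad c_{j-1}(Z) + \eta_{j-1} \;<\; I(X) \;\le\; c_j(Z) + \eta_j, \qquad j = 1,\dots,J,
\]
with c_0 = -infinity and c_J = +infinity. The classical ordered choice model is the special case of constant cutoffs and degenerate threshold shocks eta_j; letting the thresholds shift with Z and eta is what allows the model to absorb interval-specific outcomes and a duration interpretation.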
  17. By: Arturo Estrella
    Abstract: This paper introduces a generalized approach to canonical regression, in which a set of jointly dependent variables enters the left-hand side of the equation as a linear combination, formally like the linear combination of regressors on the right-hand side of the equation. Natural applications occur when the dependent variable is the sum of components that may optimally receive unequal weights, or in time series models in which the appropriate timing of the dependent variable is not known a priori. The paper derives a quasi-maximum likelihood estimator as well as its asymptotic distribution and provides illustrative applications.
    Keywords: Time-series analysis ; Econometric models
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:288&r=ecm
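In schematic form, with y_t the vector of jointly dependent variables and x_t the regressors, the model estimated is
\[
\gamma' y_t = \beta' x_t + \varepsilon_t,
\]
subject to a normalization on gamma (for example a unit first element or unit length), since the scale of the left-hand-side combination is not separately identified. Ordinary regression is the special case in which gamma selects a single dependent variable.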
  18. By: Bahram Pesaran (Wadhwani Asset Management, LLP); M. Hashem Pesaran (CIMF, Cambridge University, GSA Capital and IZA)
    Abstract: This paper considers a multivariate t version of the Gaussian dynamic conditional correlation (DCC) model proposed by Engle (2002), and suggests the use of devolatized returns, computed as returns standardized by realized volatilities rather than by GARCH-type volatility estimates. The t-DCC estimation procedure is applied to a portfolio of daily returns on currency futures, government bonds and equity index futures. The results strongly reject the normal-DCC model in favour of a t-DCC specification. The t-DCC model also passes a number of VaR diagnostic tests over an evaluation sample. The estimation results suggest a general trend towards a lower level of return volatility, accompanied by a rising trend in conditional cross correlations in most markets; possibly reflecting the advent of the euro in 1999 and the increased interdependence of financial markets.
    Keywords: volatilities and correlations, futures market, multivariate t, financial interdependence, VaR diagnostics
    JEL: C51 C52 G11
    Date: 2007–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp2906&r=ecm
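The two ingredients, schematically: devolatized returns standardize by a realized-volatility estimate rather than a GARCH fit, and then drive Engle's DCC recursion,
\[
\tilde r_{it} = \frac{r_{it}}{\tilde\sigma_{it}}, \qquad
Q_t = (1-a-b)\,\bar Q + a\,\tilde r_{t-1}\tilde r_{t-1}' + b\,Q_{t-1}, \qquad
R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t\, \operatorname{diag}(Q_t)^{-1/2},
\]
where the realized volatility is computed from a rolling window of recent squared returns, R_t is the conditional correlation matrix, and the t-DCC version takes the innovations to be multivariate t. Devolatized returns are closer to Gaussianity than raw returns, which is part of the motivation for the construction.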
  19. By: Jaap H. Abbring (VU University Amsterdam)
    Abstract: We study a mixed hitting-time (MHT) model that specifies durations as the first time a Lévy process (a continuous-time process with stationary and independent increments) crosses a heterogeneous threshold. Such models are of substantial interest because they arise as the reduced form of optimal-stopping models with heterogeneous agents, which do not naturally produce a mixed proportional hazards (MPH) structure. We show how strategies for analyzing the MPH model's identifiability can be adapted to prove identifiability of an MHT model with observed regressors and unobserved heterogeneity. We discuss inference from censored data and extensions to time-varying covariates and latent processes with more general time and dependency structures. We conclude by discussing the relative merits of the MHT and MPH models as complementary frameworks for econometric duration analysis.
    Keywords: duration analysis; hitting time; identifiability; Lévy process; mixture
    JEL: C14 C41
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20070057&r=ecm
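In schematic notation, the MHT model specifies the duration as
\[
T = \inf\{\, t \ge 0 \;:\; X(t) \ge V \,\},
\]
where X is a Lévy process (for Brownian motion with drift, the classic special case, T is inverse Gaussian) and the threshold V varies with observed regressors and unobserved heterogeneity. Mixing over V plays the role here that multiplicative unobserved heterogeneity plays in the MPH model, which is why MPH identification strategies can be adapted.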
  20. By: Alexandra M. Schmidt; Ajax R. B. Moreira; Thais C. O. Fonseca; Steven M. Helfand
    Abstract: In this paper, we analyze the productivity of farms across n = 370 municipalities located in the Center-West region of Brazil. We propose a stochastic frontier model with a latent spatial structure to account for possible unknown geographical variation of the outputs. This spatial component is included in the one-sided disturbance term. We explore two different distributions for this term: the exponential and the truncated normal. We use the Bayesian paradigm to fit the proposed models. We also compare an independent normal prior with a conditional autoregressive prior for these spatial effects. The inference procedure takes explicit account of the uncertainty associated with these spatial effects. As the resultant posterior distribution does not have a closed form, we make use of stochastic simulation techniques to obtain samples from it. Two different model comparison criteria provide support for the importance of including these latent spatial effects, even after considering covariates at the municipal level.
    Date: 2006–10
    URL: http://d.repec.org/n?u=RePEc:ipe:ipetds:1220&r=ecm
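A schematic version of the model (our notation): for municipality i,
\[
\log y_i = x_i'\beta + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2), \qquad u_i \ge 0,
\]
where the one-sided inefficiency term u_i follows an exponential or truncated normal distribution whose parameters depend on a latent spatial effect s_i. Under the CAR alternative, s_i given its neighbours is normal with mean equal to the neighbours' average and variance inversely proportional to the number of neighbours, so inefficiency is allowed to cluster geographically.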
  21. By: Theo Eicher; Chris Papageorgiou; Adrian E Raftery
    Abstract: Economic growth has been a showcase of model uncertainty, given the many competing theories and candidate regressors that have been proposed to explain growth. Bayesian Model Averaging (BMA) addresses model uncertainty as part of the empirical strategy, but its implementation is subject to the choice of priors: the priors for the parameters in each model, and the prior over the model space. For a well-known growth dataset, we show that model choice can be sensitive to the prior specification, but that economic significance (model-averaged inference about regression coefficients) is quite robust to the choice of prior. We provide a procedure to assess priors in terms of their predictive performance. The Unit Information Prior, combined with a uniform model prior, outperformed other popular priors in the growth dataset and in simulated data. It also identified the richest set of growth determinants, supporting several new growth theories. We also show that there is a tradeoff between model and parameter priors, so that the results of reducing prior expected model size and increasing prior parameter variance are similar. Our branch-and-bound algorithm for implementing BMA was faster than the alternative coin flip importance sampling and MC3 algorithms, and was also more successful in identifying the best model.
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:udb:wpaper:uwec-2007-25&r=ecm
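A minimal sketch of the BMA machinery (not the authors' implementation): posterior model probabilities are approximated by exp(-BIC/2) weights under a uniform model prior, which corresponds roughly to the Unit Information Prior highlighted above. The data, the regressor count, and full enumeration in place of branch-and-bound are illustrative simplifications:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Toy "growth regression": 3 candidate regressors, only the first matters.
n, p = 100, 3
X = rng.normal(size=(n, p))
y = 1.0 + 0.5 * X[:, 0] + rng.normal(size=n)

# Enumerate all 2^p models (real implementations use branch-and-bound or
# MC3 to avoid this); weight each model by exp(-BIC/2).
bics, betas = [], []
for k in range(p + 1):
    for subset in combinations(range(p), k):
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        rss = float(np.sum((y - Z @ coef) ** 2))
        bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
        b = np.zeros(p)
        b[list(subset)] = coef[1:]
        betas.append(b)

bics = np.array(bics)
w = np.exp(-0.5 * (bics - bics.min()))   # shift for numerical stability
w /= w.sum()
betas = np.array(betas)
print("posterior inclusion probabilities:", w @ (betas != 0))
print("model-averaged coefficients:     ", w @ betas)
```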
  22. By: Arturo Estrella
    Abstract: Various methods are available to extract the "business cycle component" of a given time series variable. These methods may be derived as solutions to frequency extraction or signal extraction problems and differ in both their handling of trends and noise and their assumptions about the ideal time-series properties of a business cycle component. The filters are frequently illustrated by application to white noise, but applications to other processes may have very different and possibly unintended effects. This paper examines several frequently used filters as they apply to a range of dynamic process specifications and derives some guidelines for the use of such techniques.
    Keywords: Business cycles ; Time-series analysis
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:fip:fednsr:289&r=ecm
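As a concrete instance of what such a filter "really does": the Hodrick-Prescott cycle filter has the standard frequency-domain gain below, so its effect depends entirely on where a series' spectral mass lies (lambda = 1600 is the conventional quarterly setting):

```python
import numpy as np

def hp_cycle_gain(period, lam=1600.0):
    """Gain of the HP cycle filter at a cycle of `period` observations:
    C(w) = 4*lam*(1 - cos w)^2 / (1 + 4*lam*(1 - cos w)^2)."""
    w = 2.0 * np.pi / period
    a = 4.0 * lam * (1.0 - np.cos(w)) ** 2
    return a / (1.0 + a)

for period in [4, 8, 16, 32, 40, 64, 128]:   # periods in quarters
    print(f"period {period:3d}q  gain {hp_cycle_gain(period):.3f}")
```

The gain is near one for short cycles, about one half at 40 quarters, and near zero for slow movements. Applied to white noise this looks like a clean band-pass, but for a highly persistent process, whose spectrum is concentrated at low frequencies, the same filter removes most of the variance; this is the sort of unintended effect the paper documents.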
  23. By: Nadja Silberhorn; Yasemin Boztug; Lutz Hildebrandt
    Abstract: The paper discusses the nested logit model for choices between a set of mutually exclusive alternatives (e.g. brand choice, strategy decisions, modes of transportation, etc.). Due to the ability of the nested logit model to account for similarities between pairs of alternatives, the model has become very popular for the empirical analysis of choice decisions. However, the fact that there are two different specifications of the nested logit model (with different outcomes) has not received adequate attention. The utility maximization nested logit (UMNL) model and the non-normalized nested logit (NNNL) model have different properties and influence the estimation results in different ways. This paper introduces the distinct specifications of the nested logit model and points out particularities arising in model estimation. The effects of using various software packages on the estimation results of a nested logit model are shown using simulated data sets for an artificial decision situation.
    Keywords: nested logit model, utility maximization nested logit, nonnormalized nested logit, simulation study
    JEL: C13 C31 C87 M31
    Date: 2007–08
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2007-046&r=ecm
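For reference, the RUM-consistent (UMNL) specification for alternative i in nest m, with systematic utility V_i, nest-level utility W_m, and dissimilarity parameter lambda_m, is
\[
P(i \mid m) = \frac{e^{V_i/\lambda_m}}{\sum_{j\in m} e^{V_j/\lambda_m}}, \qquad
IV_m = \lambda_m \log \sum_{j\in m} e^{V_j/\lambda_m}, \qquad
P(m) = \frac{e^{W_m + IV_m}}{\sum_{l} e^{W_l + IV_l}}, \qquad
P(i) = P(i\mid m)\,P(m).
\]
The NNNL specification omits the 1/lambda_m scaling of utilities within the nest, so unless utilities are rescaled the two parameterizations generally yield different estimates; since software packages differ in which specification they implement by default, this is exactly the particularity the paper's simulations expose.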

This nep-ecm issue is ©2007 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.