nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒06‒16
27 papers chosen by
Sune Karlsson
Orebro University

  1. Asymptotic Refinements of a Misspecification-Robust Bootstrap for Generalized Method of Moments Estimators By Seojeong Lee
  2. Model Selection Tests for Conditional Moment Inequality Models By Yu-Chin Hsu; Xiaoxia Shi
  3. Comparison of estimators of the Weibull Distribution By Muhammad Akram; Aziz Hayat
  4. Multivariate Tests of Mean-Variance Efficiency and Spanning with a Large Number of Assets and Time-Varying Covariances By Sermin Gungor; Richard Luger
  5. Thresholds and Smooth Transitions in Vector Autoregressive Models By Kirstin Hubrich; Timo Teräsvirta
  6. On bootstrap validity for specification tests with weak instruments By Doko Tchatoka, Firmin
  7. Inference on treatment effects after selection amongst high-dimensional controls By Alexandre Belloni; Victor Chernozhukov; Christian Hansen
  8. Uniform post selection inference for LAD regression models By Alexandre Belloni; Victor Chernozhukov; Kengo Kato
  9. Fourier estimation of stochastic leverage using high frequency data By Imma Valentina Curato
  10. Tractable latent state filtering for non-linear DSGE models using a second-order approximation By Robert Kollmann
  11. The change-point problem and segmentation of processes with conditional heteroskedasticity By Ana Badagián; Regina Kaiser; Daniel Peña
  12. Quantile models with endogeneity By Victor Chernozhukov; Christian Hansen
  13. Forecasting using a large number of predictors: Bayesian model averaging versus principal components regression By Rachida Ouysse
  14. Does the choice of estimator matter when forecasting returns? By Joakim Westerlund; Paresh K Narayan
  15. Ten Things You Should Know About the Dynamic Conditional Correlation Representation By Massimiliano Caporin; Michael McAleer
  16. Dealing with the Endogeneity Problem in Data Envelopment Analysis By Cordero, José Manuel; Santín, Daniel; Sicilia, Gabriela
  17. An examination of tourist arrivals dynamics using short-term time series data: a space-time cluster approach By Dogan Gursoy; Anna Maria Parroco; Raffaele Scuderi
  18. A new condition for pooling states in multinomial logit. By Hong il Yoo
  19. Bayesian Markov Switching Stochastic Correlation Models By Roberto Casarin; Marco Tronzano; Domenico Sartore
  20. Testing and Estimating Models Using Indirect Inference By Le, Vo Phuong Mai; Meenagh, David
  21. Bayesian inference for CoVaR By Mauro Bernardi; Ghislaine Gayraud; Lea Petrella
  22. On the Spatial Correlation of International Conflict Initiation and Other Binary and Dyadic Dependent Variables By Shali Luo; J. Isaac Miller
  23. Dynamic Panel Data Models By Maurice J.G. Bun; Sarafidis, V.
  24. Changing with the Tide: Semi-Parametric Estimation of Preference Dynamics By Thijs Dekker; Paul Koster; Roy Brouwer
  25. A partially linear approach to modelling the dynamics of spot and futures prices By Gaul, Jürgen; Theissen, Erik
  26. Bayesian Analysis of Nonlinear Exchange Rate Dynamics and the Purchasing Power Parity Persistence Puzzle By Ming Chien Lo; James Morley
  27. The perceived unreliability of rank-ordered data: an econometric origin and implications. By Hong il Yoo

  1. By: Seojeong Lee (School of Economics, the University of New South Wales)
    Abstract: I propose a nonparametric iid bootstrap that achieves asymptotic refinements for t tests and confidence intervals based on generalized method of moments (GMM) estimators even when the model is misspecified. In addition, my bootstrap does not require recentering the bootstrap moment function, which has been considered critical for GMM. Regardless of model misspecification, the proposed bootstrap achieves the same sharp magnitude of refinements as the conventional bootstrap methods, which establish asymptotic refinements by recentering in the absence of misspecification. The key idea is to link the misspecified bootstrap moment condition to the large sample theory of GMM under misspecification of Hall and Inoue (2003, Journal of Econometrics 114, 361-394). Examples of possibly misspecified moment condition models with Monte Carlo simulation results are provided: (i) combining data sets, and (ii) invalid instrumental variables.
    Keywords: nonparametric iid bootstrap, asymptotic refinement, Edgeworth expansion, generalized method of moments, model misspecification.
    JEL: C14 C15 C31 C33
    Date: 2013–09
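The bootstrap-t construction underlying refinement results of this kind can be illustrated in the simplest GMM setting, where the moment function is g(X, mu) = X - mu and the GMM estimator is the sample mean. The following is a generic nonparametric iid bootstrap sketch of that idea, not the paper's misspecification-robust procedure (which relies on the Hall-Inoue variance); all parameter values are illustrative.

```python
import math
import random
import statistics

def t_stat(sample, mu0):
    """Studentized statistic for the sample mean around mu0."""
    n = len(sample)
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    return (m - mu0) * math.sqrt(n) / s

def bootstrap_t_ci(sample, B=999, alpha=0.10, rng=None):
    """Percentile-t (bootstrap-t) confidence interval for the mean.

    Resamples the data with replacement (nonparametric iid bootstrap),
    recomputes the studentized statistic centered at the sample mean,
    and inverts its bootstrap quantiles."""
    rng = rng or random.Random(0)
    n = len(sample)
    m = statistics.fmean(sample)
    s = statistics.stdev(sample)
    ts = sorted(t_stat([rng.choice(sample) for _ in range(n)], m)
                for _ in range(B))
    q_hi = ts[int((1 - alpha / 2) * (B + 1)) - 1]
    q_lo = ts[int((alpha / 2) * (B + 1)) - 1]
    return (m - q_hi * s / math.sqrt(n), m - q_lo * s / math.sqrt(n))
```

Studentizing inside each bootstrap replication is what delivers the asymptotic refinement over first-order normal approximations.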
  2. By: Yu-Chin Hsu (Institute of Economics, Academia Sinica, Taipei, Taiwan); Xiaoxia Shi (Department of Economics University of Wisconsin at Madison)
    Abstract: In this paper, we propose a Vuong (1989)-type model selection test for conditional moment inequality models. The test uses a new average generalized empirical likelihood (AGEL) criterion function designed to incorporate the full restrictions of the conditional model. We also introduce a new adjustment to the test statistic that makes it asymptotically pivotal whether the candidate models are nested or non-nested. The test uses a simple standard normal critical value and is shown to be asymptotically similar, to be consistent against all fixed alternatives, and to have nontrivial power against n^(-1/2)-local alternatives. Monte Carlo simulations demonstrate that the finite-sample performance of the test is in accordance with the theoretical predictions.
    Keywords: Asymptotic size, Model selection test, Conditional moment inequalities, Partial identification, Generalized empirical likelihood
    JEL: C12 C52
    Date: 2013–05
  3. By: Muhammad Akram (Monash University); Aziz Hayat (Deakin University)
    Abstract: We compare the small-sample performance (in terms of bias and root mean squared error) of the L-moment (LM) estimator of the 3-parameter Weibull distribution with maximum likelihood estimation (MLE), moment estimation (MoE), least squares estimation (LSE), modified MLE (MMLE), modified MoE (MMoE), and the maximum product of spacings (MPS) method. Overall, the LM method tends to perform well, as it is almost always close to the best method of estimation. The MLE performance is remarkable even in samples as small as n = 10 when the shape parameter β lies in the range [1.5, 4].
    Keywords: Weibull distribution; Order statistics; L-moment estimation; Maximum likelihood estimation; Methods of moments; Maximum Product of Spacings
    Date: 2012–03–26
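The L-moment estimator under comparison has a convenient closed form in the two-parameter case (the paper treats the three-parameter distribution): the L-CV satisfies tau = 1 - 2^(-1/k), so the shape k follows directly from the first two sample L-moments. A minimal sketch under that two-parameter simplification:

```python
import math
import random

def weibull_lmom(sample):
    """Closed-form L-moment estimates (shape k, scale a) for a
    two-parameter Weibull distribution."""
    x = sorted(sample)
    n = len(x)
    # probability-weighted moments b0, b1 and sample L-moments l1, l2
    b0 = sum(x) / n
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))
    l1, l2 = b0, 2 * b1 - b0
    tau = l2 / l1                       # L-CV; equals 1 - 2**(-1/k)
    k = -math.log(2) / math.log(1 - tau)
    a = l1 / math.gamma(1 + 1 / k)      # since l1 = a * Gamma(1 + 1/k)
    return k, a
```

Simulating Weibull draws by inversion, x = a * (-ln(1 - U))^(1/k), recovers the true shape and scale closely in moderate samples.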
  4. By: Sermin Gungor; Richard Luger
    Abstract: We develop a finite-sample procedure to test for mean-variance efficiency and spanning without imposing any parametric assumptions on the distribution of model disturbances. In so doing, we provide an exact distribution-free method to test uniform linear restrictions in multivariate linear regression models. The framework allows for unknown forms of non-normalities, and time-varying conditional variances and covariances among the model disturbances. We derive exact bounds on the null distribution of joint F statistics in order to deal with the presence of nuisance parameters, and we show how to implement the resulting generalized non-parametric bounds tests with Monte Carlo resampling techniques. In sharp contrast to the usual tests that are not computable when the number of test assets is too large, the power of the new test procedure potentially increases along both the time and cross-sectional dimensions.
    Keywords: Asset Pricing; Econometric and statistical methods; Financial markets
    JEL: C12 C15 C33 G11 G12
    Date: 2013
  5. By: Kirstin Hubrich (European Central Bank, Frankfurt am Main); Timo Teräsvirta (Aarhus University, School of Economics and Management and CREATES)
    Keywords: common nonlinearity, impulse response analysis, linearity testing, multivariate nonlinear model, nonlinear cointegration, threshold estimation
    JEL: C32 C51 C52 C53
    Date: 2013–06–06
  6. By: Doko Tchatoka, Firmin
    Abstract: This paper investigates the asymptotic validity of the bootstrap for Durbin-Wu-Hausman (DWH) specification tests when instrumental variables (IVs) may be arbitrarily weak. It is shown that under strong identification, the bootstrap offers a better approximation than the usual asymptotic chi-square distributions. However, the bootstrap provides only a first-order approximation when instruments are weak. This clearly indicates that, unlike the Wald statistic based on a k-class type estimator (Moreira et al., 2009), Wald-type DWH statistics remain bootstrap-valid even in the presence of weak instruments.
    Keywords: Specification tests, weak instruments, bootstrap
    JEL: C12 C15 C19 C3 C36 C52
    Date: 2013–03–31
  7. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Christian Hansen (Institute for Fiscal Studies and Chicago GSB)
    Abstract: We propose robust methods for inference on the effect of a treatment variable on a scalar outcome in the presence of very many controls. Our setting is a partially linear model with possibly non-Gaussian and heteroscedastic disturbances where the number of controls may be much larger than the sample size. To make informative inference feasible, we require the model to be approximately sparse; that is, we require that the effect of confounding factors can be controlled for up to a small approximation error by conditioning on a relatively small number of controls whose identities are unknown. The latter condition makes it possible to estimate the treatment effect by selecting approximately the right set of controls. We develop a novel estimation and uniformly valid inference method for the treatment effect in this setting, called the 'post-double-selection' method. Our results apply to Lasso-type methods used for covariate selection as well as to any other model selection method that is able to find a sparse model with good approximation properties. The main attractive feature of our method is that it allows for imperfect selection of the controls and provides confidence intervals that are valid uniformly across a large class of models. In contrast, standard post-model selection estimators fail to provide uniform inference even in simple cases with a small, fixed number of controls. Thus our method resolves the problem of uniform inference after model selection for a large, interesting class of models. We also present a simple generalisation of our method to a fully heterogeneous model with a binary treatment variable. We illustrate the use of the developed methods with numerical simulations and an application that considers the effect of abortion on crime rates.
    Keywords: treatment effects, partially linear model, high-dimensional-sparse regression, inference under imperfect model selection, uniformly valid inference after model selection, average treatment effects, average treatment effects for the treated
    Date: 2013–06
  8. By: Alexandre Belloni; Victor Chernozhukov (Institute for Fiscal Studies and MIT); Kengo Kato
    Abstract: We develop uniformly valid confidence regions for a regression coefficient in a high-dimensional sparse LAD (least absolute deviation, or median) regression model. The setting is one where the number of regressors p could be large in comparison to the sample size n, but only s « n of them are needed to accurately describe the regression function. Our new methods are based on the instrumental LAD regression estimator that assembles the optimal estimating equation from either post-l1-penalised LAD regression or l1-penalised LAD regression. The estimating equation is immunised against non-regular estimation of the nuisance part of the regression function, in the sense of Neyman. We establish that in a homoscedastic regression model, under certain conditions, the instrumental LAD regression estimator of the regression coefficient is asymptotically root-n normal uniformly with respect to the underlying sparse model. The resulting confidence regions are valid uniformly with respect to the underlying model. The new inference methods outperform the naive, 'oracle-based' inference methods, which are known to be not uniformly valid (with the coverage property failing to hold uniformly with respect to the underlying model) even in the setting with p = 2. We also provide Monte Carlo experiments which demonstrate that standard post-selection inference breaks down over large parts of the parameter space, whereas the proposed method does not.
    Keywords: median regression, uniformly valid inference, instruments, Neymanisation, optimality, sparsity, post-selection inference
    Date: 2013–06
  9. By: Imma Valentina Curato (Dipartimento di Economia e Management, Universita' degli Studi di Pisa)
    Abstract: In this paper, we define a new estimator of the stochastic leverage process based only on a pre-estimation of the Fourier coefficients of the volatility process. This feature constitutes a novelty in comparison with the leverage estimators proposed in the literature, which are generally based on a pre-estimation of the spot volatility. Our estimator is proved to be consistent and, by virtue of its definition, it can be directly applied to estimate the leverage effect in the case of irregular trading observations of the price path and microstructure noise contamination.
    Keywords: leverage, non-parametric estimation, semi-martingale, Fourier transform, high frequency data.
    Date: 2013–06
  10. By: Robert Kollmann
    Abstract: This paper develops a novel approach for estimating latent state variables of Dynamic Stochastic General Equilibrium (DSGE) models that are solved using a second-order accurate approximation. I apply the Kalman filter to a state-space representation of the second-order solution based on the ‘pruning’ scheme of Kim, Kim, Schaumburg and Sims (2008). In contrast to particle filters, no stochastic simulations are needed here; the present method is thus much faster. In Monte Carlo experiments, the filter generates more accurate estimates of latent state variables than the standard particle filter. The present filter is also more accurate than a conventional Kalman filter that treats the linearized model as the true data generating process. Due to its high speed, the filter presented here is suited for the estimation of model parameters; a quasi-maximum likelihood procedure can be used for that purpose.
    Keywords: Simulation modeling ; Forecasting
    Date: 2013
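The conventional Kalman filter that serves as the paper's linearized benchmark can be sketched in one dimension. This is the baseline comparator, not the pruned second-order filter itself; the model and parameter values below are illustrative.

```python
import random

def kalman_filter(y, rho, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the state space
    x_t = rho * x_{t-1} + e_t,  Var(e_t) = q,
    y_t = x_t + v_t,            Var(v_t) = r.
    Returns the filtered state estimates E[x_t | y_1..y_t]."""
    x, p, out = x0, p0, []
    for obs in y:
        # predict step
        x, p = rho * x, rho * rho * p + q
        # update step
        k = p / (p + r)              # Kalman gain
        x = x + k * (obs - x)
        p = (1 - k) * p
        out.append(x)
    return out
```

On simulated data with the correct parameters, the filtered estimates should track the latent state more closely than the raw observations do.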
  11. By: Ana Badagián; Regina Kaiser; Daniel Peña
    Abstract: In this paper we explore, analyse and apply change-point detection and location procedures to conditionally heteroskedastic processes. We focus on processes that have a constant conditional mean but a dynamic conditional variance, which can also be affected by structural changes. The goal is thus to detect and estimate change-points when the conditional variance of a univariate process is heteroskedastic and exhibits change-points. Based on the fact that a GARCH process can be expressed as an ARMA model in the squares of the variable, we propose to detect and locate change-points by using the Bayesian Information Criterion, extending its application from linear models. The proposed procedure is characterized by its computational simplicity, reducing the difficulties of change-point detection in complex non-linear processes. We compare this procedure with others available in the literature, based on cusum methods (Inclán and Tiao (1994), Kokoszka and Leipus (1999), Lee et al. (2004)), the informational approach (Fukuda, 2010), the minimum description length principle (Davis and Rodriguez-Yam (2008)), and the time-varying spectrum (Ombao et al. (2002)). We compute empirical size and power properties by Monte Carlo simulation experiments over several scenarios, obtaining good size and power even for small magnitudes of change and low levels of persistence. The procedures were applied to the S&P 500 log-return time series, in order to compare with the results in Andreou and Ghysels (2002) and Davis and Rodriguez-Yam (2008). Change-points detected by the proposed procedure were similar to the breaks found by the other procedures, and their locations can be related to the Southeast Asian financial crisis and other known financial events.
    Keywords: Heteroskedastic time series, Segmentation, Change-points
    Date: 2013–06
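The BIC-based detection idea can be sketched for the simplest case of a single variance break in a zero-mean Gaussian series (the paper works through the ARMA representation of the squared GARCH variable; this sketch omits that layer). The parameter count in the penalty, two segment variances plus one break date, is an assumption of this sketch.

```python
import math
import random

def bic_variance_break(x, min_seg=20):
    """Detect a single variance change-point in a zero-mean series by
    minimising the Gaussian BIC over all admissible split points.
    Returns (break_index, two_segment_bic, one_segment_bic)."""
    n = len(x)
    var = lambda s: sum(v * v for v in s) / len(s)
    # one-segment model: one variance parameter
    bic1 = n * math.log(var(x)) + math.log(n)
    best_t, best_bic = None, float("inf")
    for t in range(min_seg, n - min_seg):
        # two-segment model: two variances plus one break date
        bic2 = (t * math.log(var(x[:t]))
                + (n - t) * math.log(var(x[t:]))
                + 3 * math.log(n))
        if bic2 < best_bic:
            best_t, best_bic = t, bic2
    return best_t, best_bic, bic1
```

A two-segment fit is preferred whenever its minimised BIC falls below the one-segment value, and the minimising split point estimates the break location.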
  12. By: Victor Chernozhukov (Institute for Fiscal Studies and MIT); Christian Hansen (Institute for Fiscal Studies and Chicago GSB)
    Abstract: In this article, we review quantile models with endogeneity. We focus on models that achieve identification through the use of instrumental variables and discuss conditions under which partial and point identification are obtained. We discuss key conditions, which include monotonicity and full-rank-type conditions, in detail. In providing this review, we update the identification results of Chernozhukov and Hansen (2005). We illustrate the modelling assumptions through economically motivated examples. We also briefly review the literature on estimation and inference.
    Keywords: identification, treatment effects, structural models, instrumental variables
    Date: 2013–06
  13. By: Rachida Ouysse (School of Economics, the University of New South Wales)
    Abstract: We study the performance of Bayesian model averaging (BMA) as a forecasting method for a large panel of time series and compare its performance to principal components regression (PCR). We show empirically that these forecasts are highly correlated, implying similar mean-square forecast errors. Applied to forecasting industrial production and inflation in the United States, we find that the set of variables deemed informative changes over time, which suggests temporal instability due to collinearity and to the sensitivity of the Bayesian variable selection method to minor perturbations of the data. In terms of mean-squared forecast error, principal components based forecasts have a slight marginal advantage over BMA. However, this marginal edge of PCR in the average global out-of-sample performance hides important changes in the local forecasting power of the two approaches. An analysis of the Theil index indicates that the loss of performance of PCR is due mainly to its exuberant biases in matching the mean of the two series, especially the inflation series. BMA forecasts match the first and second moments of the GDP and inflation series very well, with practically zero biases and very low volatility. The fluctuation statistic that measures relative local performance shows that BMA performed consistently better than PCR and the naive benchmark (random walk) over the period prior to 1985. Thereafter, the performance of both BMA and PCR was relatively modest compared to the naive benchmark.
    Date: 2013–04
  14. By: Joakim Westerlund (Deakin University); Paresh K Narayan (Deakin University)
    Abstract: While the literature concerned with the predictability of stock returns is huge, surprisingly little is known when it comes to the role of the choice of estimator of the predictive regression. Ideally, the choice of estimator should be rooted in the salient features of the data. In the case of predictive regressions of returns there are at least three such features: (i) returns are heteroskedastic, (ii) predictors are persistent, and (iii) regression errors are correlated with predictor innovations. In this paper we examine whether accounting for these features in the estimation process has any bearing on our ability to forecast future returns. The results suggest that it does.
    Keywords: Predictive regression; Stock return predictability; Heteroskedasticity; Predictor endogeneity
    JEL: C22 C23 G1 G12
    Date: 2012–05–11
  15. By: Massimiliano Caporin; Michael McAleer (University of Canterbury)
    Abstract: The purpose of the paper is to discuss ten things potential users should know about the limits of the Dynamic Conditional Correlation (DCC) representation for estimating and forecasting time-varying conditional correlations. The reasons given for caution about the use of DCC include the following: DCC represents the dynamic conditional covariances of the standardized residuals, and hence does not yield dynamic conditional correlations; DCC is stated rather than derived; DCC has no moments; DCC does not have testable regularity conditions; DCC yields inconsistent two-step estimators; DCC has no asymptotic properties; DCC is not a special case of GARCC, which has testable regularity conditions and standard asymptotic properties; DCC is not dynamic empirically, as the effect of news is typically extremely small; DCC cannot be distinguished empirically from diagonal BEKK in small systems; and DCC may be a useful filter or a diagnostic check, but it is not a model.
    Keywords: DCC representation, BEKK, GARCC, stated representation, derived model, conditional covariances, conditional correlations, regularity conditions, moments, two-step estimators, assumed properties, asymptotic properties, filter, diagnostic check
    JEL: C18 C32 C58 G17
    Date: 2013–06–08
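The representation at issue is the standard scalar DCC recursion, sketched below for two series of standardized residuals: Q_t is updated on outer products of the residuals and R_t is Q_t rescaled to unit diagonal, which is the sense in which the recursion governs conditional covariances of standardized residuals rather than correlations directly. The parameters a and b are illustrative, and this is a generic sketch, not the authors' code.

```python
import random

def dcc_correlations(eps, a=0.05, b=0.9):
    """Scalar DCC recursion for two standardized-residual series:
    Q_t = (1 - a - b) * S + a * e_{t-1} e_{t-1}' + b * Q_{t-1},
    R_t = Q_t rescaled to unit diagonal.
    Returns the off-diagonal conditional correlation path."""
    n = len(eps)
    # unconditional second-moment matrix S of the standardized residuals
    s11 = sum(e1 * e1 for e1, _ in eps) / n
    s22 = sum(e2 * e2 for _, e2 in eps) / n
    s12 = sum(e1 * e2 for e1, e2 in eps) / n
    q11, q22, q12 = s11, s22, s12      # initialize Q at S
    rho = []
    for e1, e2 in eps:
        rho.append(q12 / (q11 * q22) ** 0.5)
        q11 = (1 - a - b) * s11 + a * e1 * e1 + b * q11
        q22 = (1 - a - b) * s22 + a * e2 * e2 + b * q22
        q12 = (1 - a - b) * s12 + a * e1 * e2 + b * q12
    return rho
```

Because each Q_t is a positive-weighted sum of positive semi-definite matrices, the implied correlation path is automatically bounded in [-1, 1].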
  16. By: Cordero, José Manuel; Santín, Daniel; Sicilia, Gabriela
    Abstract: Endogeneity, and the distortions it causes in the estimation of economic models, is a familiar problem in the econometrics literature. Although non-parametric methods like data envelopment analysis (DEA) are among the most widely used techniques for measuring technical efficiency, the effects of endogeneity on such efficiency estimates have received little attention. The aim of this paper is twofold. First, we further illustrate the endogeneity problem and its causes in production processes, such as correlation between an input and the efficiency level. Second, we use synthetic data generated in a Monte Carlo experiment to analyze how different levels of positive and negative endogeneity can impair DEA estimates. We conclude that although DEA is robust to negative endogeneity, a high positive endogeneity level, i.e., a high positive correlation between one input and the true efficiency level, significantly and severely biases DEA performance.
    Keywords: Technical efficiency, DEA, Endogeneity, Monte Carlo.
    JEL: C6 C9
    Date: 2013–04
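In the simplest setting such an experiment could perturb, one input, one output, and constant returns to scale, the output-oriented DEA score reduces to each unit's output-input ratio relative to the best ratio in the sample. A minimal sketch of that estimator with a hypothetical data-generating process (not the authors'):

```python
import random

def dea_crs_scores(x, y):
    """Single-input single-output CRS DEA: the efficiency of unit i is
    its output/input ratio divided by the best ratio in the sample, so
    scores lie in (0, 1] and the best unit scores exactly 1."""
    ratios = [yi / xi for xi, yi in zip(x, y)]
    best = max(ratios)
    return [r / best for r in ratios]

# Hypothetical DGP: true efficiencies u in (0.5, 1], CRS frontier y = x,
# observed output y_i = u_i * x_i.
rng = random.Random(9)
u = [0.5 + 0.5 * rng.random() for _ in range(50)]
x = [1 + 9 * rng.random() for _ in range(50)]
y = [ui * xi for ui, xi in zip(u, x)]
scores = dea_crs_scores(x, y)
```

In richer settings (multiple inputs, variable returns to scale) the scores come from linear programs, and it is there that correlation between an input and the true efficiency can distort the estimated frontier.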
  17. By: Dogan Gursoy (School of Hospitality Business Management, Washington State University); Anna Maria Parroco (Department of Economics, Business and Finance, University of Palermo); Raffaele Scuderi (Free University of Bolzano-Bozen, School of Economics and Management)
    Abstract: The purpose of this study is to examine the development of Italian tourist areas (circoscrizioni turistiche) through a cluster analysis of short time series. The technique is an adaptation of the functional data analysis approach developed by Abraham et al. (2003), which combines spline interpolation with k-means clustering. The findings indicate the presence of two patterns (increasing and stable) that, on average, characterize groups of territories. Moreover, tests of spatial contiguity suggest the presence of ‘space–time clusters’; that is, areas in the same ‘time cluster’ are also spatially contiguous. These findings appear most robust for series characterized by an increasing trend.
    Keywords: cluster analysis; short time series; spline interpolation; K-means; join count test; Italian tourist areas
    JEL: L83 C14 C21 C22 C38
    Date: 2013–06
  18. By: Hong il Yoo (University of New South Wales)
    Abstract: The Cramer-Ridder test is a popular procedure for testing if some outcome states can be pooled into one state in the multinomial logit model. We show that, in the presence of binary regressors, the test is overly stringent and poolability may not be tested unambiguously.
    Keywords: multinomial logit, pooling, statistical test
    JEL: C35
    Date: 2012–11
  19. By: Roberto Casarin (Department of Economics, University of Venice Cà Foscari); Marco Tronzano (Department of Economics, University of Genova); Domenico Sartore (Department of Economics, University of Venice Cà Foscari)
    Abstract: This paper builds on Asai and McAleer (2009) and develops a new multivariate Dynamic Conditional Correlation (DCC) model where the parameters of the correlation dynamics and those of the log-volatility process are driven by two latent Markov chains. We outline a suitable Bayesian inference procedure, based on sequential MCMC estimation algorithms, and discuss some preliminary results on simulated data. We then apply the model to three major cross rates against the US Dollar (Euro, Yen, Pound), using high-frequency data since the beginning of the European Monetary Union. Estimated volatility paths reveal significant increases since mid-2007, documenting the destabilizing effects of the US sub-prime crisis and of the European sovereign debt crisis. Moreover, we find strong evidence supporting the existence of a time-varying correlation structure. Correlation paths display frequent shifts along the whole sample, both in low and in high volatility phases, pointing out the existence of contagion effects closely in line with the mechanisms outlined in the recent contagion literature (Forbes and Rigobon (2002) and Corsetti et al. (2005)).
    Keywords: Stochastic Correlation; Multivariate Stochastic Volatility; Markov-switching; Bayesian Inference; Monte Carlo Markov Chain.
    JEL: C1 C11 C15 C32 F31 G15
    Date: 2013
  20. By: Le, Vo Phuong Mai (Cardiff Business School); Meenagh, David (Cardiff Business School)
    Abstract: In this short article we explain how to test an economic model using Indirect Inference. We then go on to show how you can use this test to estimate the model.
    JEL: C01 C13 C52 E27
    Date: 2013–06
  21. By: Mauro Bernardi; Ghislaine Gayraud; Lea Petrella
    Abstract: Recent financial disasters emphasised the need to investigate the consequences associated with tail co-movements among institutions; episodes of contagion are frequently observed and increase the probability of large losses affecting market participants' risk capital. Commonly used risk management tools fail to account for potential spillover effects among institutions because they provide individual risk assessments. We contribute to the analysis of the interdependence effects of extreme events by providing an estimation tool for the Conditional Value-at-Risk (CoVaR), defined as the Value-at-Risk of an institution conditional on another institution being under distress. In particular, our approach relies on a Bayesian quantile regression framework. We propose a Markov chain Monte Carlo algorithm exploiting the Asymmetric Laplace distribution and its representation as a location-scale mixture of Normals. Moreover, since risk measures are usually evaluated on time series data and returns typically change over time, we extend the CoVaR model to account for the dynamics of the tail behaviour. An application to U.S. companies belonging to different sectors of the Standard and Poor's Composite Index (S&P 500) evaluates the marginal contribution of each individual institution to overall systemic risk.
    Date: 2013–06
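The location-scale mixture representation the abstract exploits (the basis of standard Gibbs samplers for Bayesian quantile regression) writes an asymmetric Laplace variate at quantile level p as a normal with exponential mixing; by construction its p-quantile sits at the location mu, which a quick simulation can verify. A sketch with unit scale, not the authors' MCMC:

```python
import math
import random

def ald_draw(mu, p, rng):
    """One draw from the asymmetric Laplace distribution ALD(mu, 1, p)
    via its location-scale mixture-of-normals representation:
    y = mu + theta*z + tau*sqrt(z)*u, with z ~ Exp(1), u ~ N(0, 1),
    theta = (1 - 2p) / (p(1 - p)), tau^2 = 2 / (p(1 - p))."""
    theta = (1 - 2 * p) / (p * (1 - p))
    tau = math.sqrt(2 / (p * (1 - p)))
    z = rng.expovariate(1.0)
    return mu + theta * z + tau * math.sqrt(z) * rng.gauss(0, 1)
```

Since P(Y <= mu) = p for an ALD at level p, the fraction of simulated draws below the location should approach p, which is what makes the ALD a working likelihood for the p-th conditional quantile.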
  22. By: Shali Luo; J. Isaac Miller (Department of Economics, University of Missouri-Columbia)
    Abstract: We examine spatially correlated interregional flows measured as binary choice outcomes. Since the dependent variable is not only binary and dyadic, but also spatially correlated, we propose a spatial origin-destination probit model and a Bayesian estimation methodology that avoids inconsistent maximum likelihood estimates. We apply the model to militarized interstate dispute initiations, observations of which are clearly binary and dyadic and which may be spatially correlated due to their geographic distribution. Using a cross-section of 26 European countries drawn from the period leading up to WWII, we find empirical evidence for target-based spatial correlation and sizable network effects resulting from the correlation. In particular, we find that the effect of national military capability of the potential aggressor, which is a significant determinant of conflict in either case, is overstated in a benchmark model that ignores spatial correlation. This effect is further differentiated by the geographic location of a country.
    Keywords: spatial correlation, origin-destination flows, probit, militarized interstate disputes, correlates of war
    JEL: C21 C25 F51 N4
    Date: 2013–05–24
  23. By: Maurice J.G. Bun; Sarafidis, V. (Monash University)
    Abstract: This paper reviews the recent literature on dynamic panel data models. Throughout the discussion we consider the linear dynamic panel data model with additional endogenous regressors. First we give a broad overview of available methods. We next discuss in more detail the assumption of mean stationarity underlying the system GMM estimator. We discuss causes of deviations from mean stationarity, their consequences and tests for mean stationarity.
    Date: 2013–03–07
  24. By: Thijs Dekker (Delft University of Technology); Paul Koster (VU University Amsterdam); Roy Brouwer (VU University Amsterdam)
    Abstract: This paper contrasts the discovered preference hypothesis against the theory of coherent arbitrariness in a split-sample stated choice experiment on flood risk exposure in the Netherlands. A semi-parametric local multinomial logit model (L-MNL) is developed as an alternative to the Swait and Louviere (1993) procedure to control for preference dynamics within and between samples. The L-MNL model finds empirical support for the discovered preference hypothesis in the form of a declining starting point bias induced by the first choice task. These results differ from the Swait and Louviere procedure which, due to its limited flexibility, accepts the standard assumption underlying microeconomic theory of stable preference parameters throughout the choice sequence. The observed preference dynamics put the use of choice experiments at risk of generating biased welfare estimates if not controlled for.
    Keywords: Preference dynamics; Discovered preference hypothesis; Coherent arbitrariness; Preference uncertainty; Local multinomial logit model
    JEL: C14 D12 Q51 Q54
    Date: 2013–05–27
  25. By: Gaul, Jürgen; Theissen, Erik
    Abstract: This paper considers the dynamics of spot and futures prices in the presence of arbitrage. A partially linear error correction model is proposed in which the adjustment coefficient is allowed to depend non-linearly on the lagged price difference. The model is estimated using data on the DAX index and the DAX futures contract. We find that the adjustment is indeed nonlinear. The linear alternative is rejected. The speed of price adjustment increases almost monotonically with the magnitude of the price difference.
    Keywords: Futures Markets, Cointegration, Partially linear models, Nonparametric methods
    JEL: C32 C14 G13 G14
    Date: 2012
  26. By: Ming Chien Lo (Department of Economics, St. Cloud State University); James Morley (School of Economics, the University of New South Wales)
    Abstract: We investigate the persistence of real exchange rates using Bayesian methods. First, an algorithm for Bayesian estimation of nonlinear threshold models is developed. Unlike standard grid-based estimation, the Bayesian approach fully captures joint parameter uncertainty and uncertainty about complicated functions of the parameters, such as the half-life measure of persistence based on generalized impulse response functions. Second, model comparison is conducted via marginal likelihoods, which reflect the relative abilities of models to predict the data given prior beliefs about model parameters. This comparison is conducted for a range of linear and nonlinear models and provides a direct evaluation of the importance of nonlinear dynamics in modeling exchange rates. The marginal likelihoods also imply weights for a model-averaged measure of persistence. The empirical results for real exchange rate data from the G7 countries suggest general support for nonlinearity, but the strength of the evidence depends on which country pair is considered. However, the model-averaged estimates of half-lives are uniformly smaller than for the linear models alone, suggesting that the purchasing power parity persistence puzzle is less of a puzzle than previously thought.
    Keywords: Bayesian Analysis, Real Exchange Rate Dynamics, Purchasing Power Parity, Nonlinear Threshold Models, Bayesian Model Averaging, Half-lives
    JEL: C11 C22 F31
    Date: 2013–05
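The half-life measure of persistence has a closed form only in the linear AR(1) benchmark, where a shock decays as rho^h; the generalized-impulse-response half-lives for threshold models discussed in the abstract must instead be simulated. The linear benchmark:

```python
import math

def ar1_half_life(rho):
    """Half-life (in periods) of a shock to a stationary AR(1) with
    coefficient rho: the h solving rho**h = 0.5."""
    if not 0 < rho < 1:
        raise ValueError("rho must be in (0, 1)")
    return math.log(0.5) / math.log(rho)
```

For quarterly real exchange rate data, rho = 0.95 implies a half-life of about 13.5 quarters, the order of magnitude behind the purchasing power parity persistence puzzle.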
  27. By: Hong il Yoo (University of New South Wales)
    Abstract: The problem of unstable coefficients in the rank-ordered logit model has been traditionally interpreted as a sign that survey respondents fail to provide reliable ranking responses. This paper shows that the problem may instead embody the inherent sensitivity of the model to stochastic misspecification. Even a minor departure from the postulated random utility function can induce the problem, for instance when rank-ordered logit is estimated whereas the true additive disturbance is iid normal over alternatives. Related implications for substantive analyses and further modelling are explored. In general, a well-specified random coefficient rank-ordered logit model can mitigate, though not eliminate, the problem and produce analytically useful results. The model can also be generalised to be more suitable for forecasting purposes, by accommodating the fact that stochastic misspecification matters less for individuals with more deterministic preferences. An empirical analysis using an Australian nursing job preferences survey shows that the estimates behave in accordance with these implications.
    Keywords: rank-ordered logit, mixed logit, latent class, stated ranking
    JEL: C25 C52 C81
    Date: 2012–11

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.