nep-ecm New Economics Papers
on Econometrics
Issue of 2016‒06‒09
thirteen papers chosen by
Sune Karlsson
Örebro universitet

  1. A unified approach to mortality modelling using state-space framework: characterisation, identification, estimation and forecasting By Man Chung Fung; Gareth W. Peters; Pavel V. Shevchenko
  2. Spatial Autoregressive Conditional Heteroscedasticity Model and Its Application By Takaki Sato; Yasumasa Matsuda
  3. Solution and Estimation of Dynamic Discrete Choice Structural Models Using Euler Equations By Victor Aguirregabiria; Arvind Magesan
  4. Asymptotic Theory for Beta-t-GARCH By Ryoko Ito
  5. Stock prices prediction via tensor decomposition and links forecast By Alessandro Spelta
  6. Estimation of a semiparametric transformation model in the presence of endogeneity By Van Keilegom, Ingrid; Vanhems, Anne
  7. Recovering Marginal Willingness to Pays from Hedonic Prices under Imperfect Competition By Huang, Ju-Chin
  8. Change Detection and the Causal Impact of the Yield Curve By Stan Hurn; Peter C B Phillips; Shuping Shi
  9. What Proportion of Time is a particular Market inefficient?...Analysing market efficiency when equity prices follow Threshold Autoregressions. By Muhammad Farid Ahmed; Stephen Satchell
  10. Two-Stage Estimation to Control for Unobservables in a Recreation Demand Model with Unvisited Sites By Melstrom, Richard T.; Jayasekera, Deshamithra H.W.
  11. The Effects of Honesty Oath and Consequentiality in Choice Experiments By Kemper, Nathan; Nayga, Rodolfo M. Jr.; Popp, Jennie; Bazzani, Claudia
  12. Asymptotic Theory for Aggregate Efficiency By Léopold Simar; Valentin Zelenyuk

  1. By: Man Chung Fung; Gareth W. Peters; Pavel V. Shevchenko
    Abstract: This paper explores and develops alternative statistical representations and estimation approaches for dynamic mortality models. The framework we adopt is to reinterpret popular mortality models, such as the Lee-Carter class of models, in a general state-space modelling methodology, which allows modelling, estimation and forecasting of mortality under a unified framework. Furthermore, we propose an alternative class of model identification constraints which is more suited to statistical inference in filtering and parameter estimation settings based on maximization of the marginalized likelihood or in Bayesian inference. We then develop a novel class of Bayesian state-space models which incorporate a priori beliefs about the mortality model characteristics, as well as more flexible and appropriate assumptions about the heteroscedasticity present in observed mortality data. We show that multiple period and cohort effects can be cast in a state-space structure. To study long term mortality dynamics, we introduce stochastic volatility to the period effect. The estimation of the resulting stochastic volatility model of mortality is performed using a recent class of Monte Carlo procedures specifically designed for state and parameter estimation in Bayesian state-space models, known as particle Markov chain Monte Carlo methods. We illustrate the framework we have developed using Danish male mortality data, and show that incorporating heteroscedasticity and stochastic volatility markedly improves model fit despite the increase in model complexity. Forecasting properties of the enhanced models are examined with long term and short term calibration periods on the reconstruction of life tables.
    Date: 2016–05
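A minimal sketch of the state-space reading of Lee-Carter that the abstract describes: log death rates depend on age effects and a latent period effect that follows a random walk with drift. All parameter values and the identification constraints shown are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Lee-Carter cast in state-space form (hypothetical parameters):
#   observation: log m(x, t) = a_x + b_x * k_t + eps_{x,t}
#   state:       k_t = k_{t-1} + drift + w_t   (random walk with drift)
n_ages, n_years = 10, 50
a = np.linspace(-8.0, -2.0, n_ages)   # age-specific levels a_x
b = np.full(n_ages, 1.0 / n_ages)     # age sensitivities, summing to 1 (identification)
drift = -0.5

# Simulate the latent period effect k_t and the observed log death rates.
k = np.cumsum(drift + rng.normal(0.0, 0.3, n_years))
k -= k.mean()                         # common identification constraint: k_t sums to 0
log_m = a[:, None] + np.outer(b, k) + rng.normal(0.0, 0.05, (n_ages, n_years))

print(log_m.shape)  # (10, 50): ages by years
```

In this form, filtering methods (Kalman for the Gaussian case, particle MCMC once stochastic volatility enters the period effect) apply directly.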
  2. By: Takaki Sato; Yasumasa Matsuda
    Abstract: This paper proposes spatial autoregressive conditional heteroscedasticity (S-ARCH) models to estimate spatial volatility in spatial data. The S-ARCH model is a spatial extension of the time series ARCH model: where an ARCH model specifies conditional variances given the values of past observations, an S-ARCH model specifies conditional variances given the values of surrounding observations in spatial data. We consider parameter estimation for S-ARCH models by the maximum likelihood method and propose test statistics for ARCH effects in spatial data. We demonstrate the empirical properties by simulation studies and by a real data analysis of land price data in Tokyo.
    Date: 2016–04–26
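A toy illustration of the idea behind the specification, as we read the abstract: the conditional variance at a site depends on squared values at neighbouring sites through a spatial weight matrix. The weight matrix and parameters here are hypothetical, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Conditional variance at site i given surrounding observations, via a
# row-normalised spatial weight matrix W (hypothetical parameters).
n = 5
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)     # row-normalise so weights sum to 1

omega, alpha = 0.1, 0.4               # illustrative S-ARCH-style parameters
y = rng.normal(size=n)                # observed spatial data

sigma2 = omega + alpha * W @ (y ** 2) # variance given neighbouring squared values
print(sigma2)                         # positive by construction
```

The analogy to time series ARCH is direct: the neighbouring squared observations play the role that lagged squared observations play in the temporal model.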
  3. By: Victor Aguirregabiria; Arvind Magesan (University of Calgary)
    Abstract: This paper extends the Euler Equation (EE) representation of dynamic decision problems to a general class of discrete choice models and shows that the advantages of this approach apply not only to the estimation of structural parameters but also to the computation of a solution and to the evaluation of counterfactual experiments. We use a choice probabilities representation of the discrete decision problem to derive marginal conditions of optimality with the same features as the standard EEs in continuous decision problems. These EEs imply a fixed point mapping in the space of conditional choice values, which we denote the Euler equation-value (EE-value) operator. We show that, in contrast to Euler equation operators in continuous decision models, this operator is a contraction. We present numerical examples that illustrate how solving the model by iterating in the EE-value mapping implies substantial computational savings relative to iterating in the Bellman equation (which requires a much larger number of iterations) or in the policy function (which involves a costly valuation step). We define a sample version of the EE-value operator and use it to construct a sequence of consistent estimators of the structural parameters, and to evaluate counterfactual experiments. The computational cost of evaluating this sample-based EE-value operator increases linearly with sample size, and provides an unbiased (in finite samples) and consistent estimator of the counterfactual. As such there is no curse of dimensionality in the consistent estimation of the model and in the evaluation of counterfactual experiments. We illustrate the computational gains of our methods using several Monte Carlo experiments.
    Date: 2016–05–24
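The abstract's key computational claim is that the EE-value operator is a contraction, so iterating it converges geometrically to the unique fixed point. A generic illustration of that logic with a linear contraction of the same flavour (all numbers hypothetical; this is not the authors' operator):

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed-point iteration of a contraction v -> u + beta * P v with beta < 1
# and P a stochastic matrix; by Banach's theorem the iterates converge to
# the unique fixed point, here also available in closed form as a check.
n, beta = 4, 0.95
u = rng.random(n)
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)

v = np.zeros(n)
for _ in range(2000):
    v_new = u + beta * P @ v
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

v_exact = np.linalg.solve(np.eye(n) - beta * P, u)  # closed-form fixed point
print(np.allclose(v, v_exact, atol=1e-8))           # iteration reaches the fixed point
```

The contraction property is what continuous-choice Euler equation operators lack, and it is what makes successive approximation a reliable solution method here.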
  4. By: Ryoko Ito
    Abstract: The dynamic conditional score (DCS) models with variants of Student's t innovation are gaining popularity in volatility modeling, and studies have found that they outperform GARCH-type models of comparable specifications. DCS is typically estimated by the method of maximum likelihood, but so far there is only limited asymptotic theory justifying the use of this estimator for non-Gaussian distributions. This paper develops asymptotic theory for Beta-t-GARCH, which is DCS with Student's t innovation and the benchmark volatility model of this class. We establish the necessary and sufficient condition for strict stationarity of the first-order Beta-t-GARCH using one simple moment equation, and show that its MLE is consistent and asymptotically normal under this condition. The results of this paper theoretically justify applying DCS with Student's t innovation to heavy-tailed data with a high degree of kurtosis, and performing standard statistical inference for model selection using the estimator. Since GARCH is Beta-t-GARCH with infinite degrees of freedom, our results imply that Beta-t-GARCH can capture tail sizes and degrees of kurtosis that are too large for GARCH.
    Keywords: robustness, score, consistency, asymptotic normality.
    JEL: C22 C58
    Date: 2016–01–24
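To show the score-driven mechanism the abstract refers to, here is a simulation of the closely related Beta-t-EGARCH (exponential-link) recursion, whose form we can state with confidence; the Beta-t-GARCH studied in the paper is built on the same bounded Student-t score idea. All parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

# Beta-t-EGARCH sketch: lam_t is log volatility, driven by the score u_t of
# the Student-t likelihood. The score is bounded, so single extreme
# observations cannot explode the filter (unlike squared returns in GARCH).
nu, omega, beta, alpha = 5.0, 0.0, 0.95, 0.05
T = 1000

lam = np.empty(T)
y = np.empty(T)
lam[0] = omega / (1.0 - beta)
for t in range(T):
    y[t] = np.exp(lam[t]) * rng.standard_t(nu)
    if t + 1 < T:
        z = y[t] ** 2 / (nu * np.exp(2.0 * lam[t]))
        b = z / (1.0 + z)                 # b_t follows a Beta(1/2, nu/2) law
        u = (nu + 1.0) * b - 1.0          # bounded score, u in [-1, nu]
        lam[t + 1] = omega + beta * lam[t] + alpha * u

print(np.isfinite(lam).all())             # the bounded score keeps the filter stable
```

The boundedness of the score is precisely the robustness-to-outliers property flagged in the keywords, and it is what makes asymptotic analysis with heavy-tailed innovations tractable.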
  5. By: Alessandro Spelta (Università Cattolica del Sacro Cuore; Dipartimento di Economia e Finanza, Università Cattolica del Sacro Cuore)
    Abstract: Many complex systems display fluctuations between alternative states in correspondence to tipping points. These critical shifts are usually associated with generic empirical phenomena such as strengthening correlations between entities composing the system. In finance, for instance, market crashes are the consequence of herding behaviors that make the units of the system strongly correlated, lowering their distances. Consequently, determining future distances between stocks can be a valuable starting point for predicting market downturns. This is the aim of the present work, which introduces a multi-way procedure for forecasting stock prices by decomposing a distance tensor. This multidimensional method avoids aggregation processes that could lead to the loss of crucial features of the system. The technique is applied to a basket of stocks composing the S&P500 composite index and to the index itself so as to demonstrate its ability to predict the large market shifts that arise in times of turbulence, such as the ongoing financial crisis.
    Keywords: Stock prices, Correlations, Tensor Decomposition, Forecast.
    JEL: C02 C63
    Date: 2016–05
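A sketch of the kind of object the method starts from, as we read the abstract: a stock-by-stock-by-time tensor of rolling correlation distances. Here we build such a tensor from simulated returns (not S&P500 data) and take a rank-1 SVD of an unfolding as a simple stand-in for the full multi-way decomposition.

```python
import numpy as np

rng = np.random.default_rng(7)

# Build a (stocks x stocks x windows) tensor of correlation distances from
# rolling windows of simulated returns; window and step sizes are arbitrary.
n_stocks, n_obs, win, step = 8, 300, 60, 30
returns = rng.normal(size=(n_obs, n_stocks))

slices = []
for s in range(0, n_obs - win + 1, step):
    C = np.corrcoef(returns[s:s + win].T)
    # correlation distance; clip guards against tiny negative rounding errors
    slices.append(np.sqrt(np.maximum(2.0 * (1.0 - C), 0.0)))
D = np.stack(slices, axis=2)              # stocks x stocks x windows

M = D.reshape(n_stocks * n_stocks, -1)    # mode-3 unfolding of the tensor
svals = np.linalg.svd(M, compute_uv=False)
share = svals[0] ** 2 / (svals ** 2).sum()  # variance captured by rank-1 part
print(D.shape, round(float(share), 2))
```

Keeping the data in tensor form, rather than averaging distances across windows or across stocks, is what preserves the joint cross-sectional and temporal structure the abstract emphasises.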
  6. By: Van Keilegom, Ingrid; Vanhems, Anne
    Abstract: We consider a semiparametric transformation model, in which the regression function has an additive nonparametric structure and the transformation of the response is assumed to belong to some parametric family. We suppose that endogeneity is present in the explanatory variables. Using a control function approach, we show that the proposed model is identified under suitable assumptions, and propose a profile estimation method for the transformation. The proposed estimator is shown to be asymptotically normal under certain regularity conditions. A simulation study shows that the estimator behaves well in practice. Finally, we give an empirical example using the U.K. Family Expenditure Survey.
    Keywords: Causal inference; Semiparametric regression; Transformation models; Profiling; Endogeneity; Instrumental variable; Control function; Additive models.
    Date: 2016–05
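A minimal sketch of the control function idea in a linear special case (the paper's setting is semiparametric with a transformed response; this stripped-down version and its data are purely illustrative): the first-stage residual of the endogenous regressor on the instrument enters the outcome equation as an extra control.

```python
import numpy as np

rng = np.random.default_rng(4)

# X is endogenous through the unobservable U; Z is a valid instrument.
n = 2000
Z = rng.normal(size=n)
U = rng.normal(size=n)                    # unobserved confounder
X = 0.8 * Z + 0.6 * U + rng.normal(size=n)
Y = 1.5 * X + U + rng.normal(size=n)      # true coefficient on X is 1.5

# Stage 1: control V = residual from regressing X on (1, Z).
A = np.column_stack([np.ones(n), Z])
V = X - A @ np.linalg.lstsq(A, X, rcond=None)[0]

# Stage 2: regress Y on (1, X, V); V absorbs the endogenous variation in X.
B = np.column_stack([np.ones(n), X, V])
beta_cf = np.linalg.lstsq(B, Y, rcond=None)[0][1]
beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0][1]

print(f"OLS: {beta_ols:.2f}, control function: {beta_cf:.2f}")
```

Naive OLS is biased upward here because X and U are positively correlated; including the first-stage residual removes that bias, which is the mechanism the paper extends to the nonparametric additive setting.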
  7. By: Huang, Ju-Chin
    Abstract: In this paper, hedonic price analysis under imperfect competition is studied. We demonstrate a means to simultaneously recover the price-cost markup and the marginal values of product attributes from hedonic price estimation under imperfect competition. Our theoretical results provide guidance for the empirical specification of the hedonic price model, increasing both the applicability and reliability of hedonic valuation methods. We conduct a Monte Carlo simulation to evaluate various specifications of hedonic price models under imperfect competition. An application to estimating marginal willingness to pay for characteristics of a ski resort is presented.
    Keywords: Hedonic Method, Imperfect Competition, Taxation, Price-Cost Markup, Monte Carlo Simulation, Consumer/Household Economics, Demand and Price Analysis, Environmental Economics and Policy, Q51, L10,
    Date: 2016
  8. By: Stan Hurn (QUT); Peter C B Phillips (Yale); Shuping Shi (Macquarie)
    Abstract: Causal relationships in econometrics are typically based on the concept of predictability and are established in terms of tests for Granger causality. These causal relationships are susceptible to change, especially during times of financial turbulence, making the real-time detection of instability an important practical issue. This paper develops a test for detecting changes in causal relationships based on a recursive rolling window, which is analogous to the procedure used in recent work on financial bubble detection. The limiting distribution of the test takes a simple form under the null hypothesis and is easy to implement in conditions of homoskedasticity, conditional heteroskedasticity and unconditional heteroskedasticity. Simulation experiments compare the efficacy of the proposed test with two other commonly used tests, the forward recursive and the rolling window tests. The results indicate that both the rolling and the recursive rolling approaches offer good finite sample performance in situations where there are one or two changes in the causal relationship over the sample period. The testing strategies are illustrated in an empirical application that explores the causal impact of the slope of the yield curve on output and inflation in the U.S. over the period 1985-2013.
    Keywords: Causality, Forward recursion, Hypothesis testing, Inflation, Output, Recursive rolling test, Rolling Window, Yield curve
    JEL: C12 C15 C32 G17
    Date: 2015–08–27
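A simplified sketch of the scanning idea behind the paper (our reading, not the authors' recursive rolling statistic or its limit theory): compute a Granger-causality Wald statistic over rolling windows of a simulated series in which the causal link switches on mid-sample, and see the statistic jump where the relationship changes.

```python
import numpy as np

rng = np.random.default_rng(5)

# DGP (hypothetical): x Granger-causes y only in the second half of the sample.
T, win = 400, 80
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    b = 0.8 if t >= T // 2 else 0.0       # causal link switches on at T/2
    y[t] = 0.3 * y[t - 1] + b * x[t - 1] + rng.normal()

def wald_stat(ys, xs):
    """Wald statistic for excluding lagged x from an AR(1) regression of y."""
    Y = ys[1:]
    R = np.column_stack([np.ones(len(Y)), ys[:-1], xs[:-1]])
    beta, *_ = np.linalg.lstsq(R, Y, rcond=None)
    resid = Y - R @ beta
    s2 = resid @ resid / (len(Y) - R.shape[1])
    cov = s2 * np.linalg.inv(R.T @ R)
    return beta[2] ** 2 / cov[2, 2]       # chi-square(1) under the null

stats_ = [wald_stat(y[s:s + win], x[s:s + win]) for s in range(T - win)]
first_half = max(stats_[: T // 2 - win])  # windows entirely in the null regime
second_half = max(stats_[T // 2:])        # windows entirely in the causal regime
print(round(first_half, 1), round(second_half, 1))
```

Windows covering the causal regime produce much larger statistics than windows in the null regime, which is the intuition behind dating a change in a causal relationship; the paper supplies the recursive rolling construction and the limit theory needed for valid real-time inference.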
  9. By: Muhammad Farid Ahmed; Stephen Satchell
    Abstract: We assume that log equity prices follow multi-state threshold autoregressions and generalize existing results for threshold autoregressive models, presented in Knight and Satchell (2012), on the existence of a stationary process and the conditions necessary for the existence of a mean and a variance; we also present formulae for these moments. Using a simulation study, we explore what these results imply for tests for detecting bubbles or market efficiency. We find that bubbles are easier to detect in processes where a stationary distribution does not exist. Furthermore, we explore how threshold autoregressive models with i.i.d. trigger variables may enable us to identify how often asset markets are inefficient. We find, unsurprisingly, that the fraction of time spent in an efficient state depends upon the full specification of the model; the notion of how efficient a market is turns out, in this context at least, to be a model-dependent concept. However, our methodology allows us to compare efficiency across different asset markets.
    Date: 2016–04–04
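A toy version of the setup as we read it (all parameters hypothetical): log prices follow a two-regime threshold autoregression with an i.i.d. trigger, behaving as a random walk (unpredictable, "efficient") in one regime and mean-reverting (predictable) in the other, so the fraction of time spent efficient is the probability the trigger stays on one side of the threshold.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-regime TAR for log prices p_t with an i.i.d. standard normal trigger q_t.
T, thresh = 10_000, 0.5
q = rng.normal(size=T)                    # i.i.d. trigger variable
p = np.zeros(T)
efficient = q < thresh
for t in range(1, T):
    if efficient[t]:
        p[t] = p[t - 1] + 0.01 * rng.normal()        # random walk: efficient
    else:
        p[t] = 0.6 * p[t - 1] + 0.01 * rng.normal()  # mean-reverting: predictable

frac = efficient.mean()
print(round(frac, 2))                     # close to P(q < 0.5), about 0.69
```

In this stylised model the "proportion of time the market is efficient" is pinned down entirely by the trigger distribution and threshold, which illustrates the abstract's point that the answer is model-dependent.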
  10. By: Melstrom, Richard T.; Jayasekera, Deshamithra H.W.
    Abstract: The role of unobserved site attributes is a growing concern in recreation demand modeling. One solution in random utility models (RUM) involves separating estimation into two stages, where the RUM model is estimated with alternative-specific constants (ASCs) in the first stage, and the estimated ASCs are regressed on the observed site attributes in the second stage. Prior work estimates the second stage with OLS and 2SLS regression. We present an application with censored regression in the second stage. We show OLS produces inconsistent parameters when there are unvisited sites with no estimable ASCs and that censored regression avoids this problem.
    Keywords: Random utility model, non-market valuation, recreational fishing, Research Methods/ Statistical Methods, C25, Q26, Q51,
    Date: 2016–05
  11. By: Kemper, Nathan; Nayga, Rodolfo M. Jr.; Popp, Jennie; Bazzani, Claudia
    Abstract: Choice experiments are now one of the most popular stated preference methods used by economists. A highly documented limitation of stated preference methods is the formation of hypothetical bias in the estimation of consumers’ willingness-to-pay (WTP) for a good or a service. Honesty oaths and consequentiality scripts are two ex ante approaches that show promise in their ability to reduce or eliminate hypothetical bias. We examine these approaches independently and together and measure their effectiveness by comparing the resulting WTP values. We also explore a potential connection between consequentiality, honesty oaths, and attribute non-attendance (ANA). We infer patterns of ANA resulting from our various treatments (i.e., consequentiality script only, honesty oath only, combined script and oath, inconsequential, and control) and examine the differences. Our results suggest that the combined ex ante approach of consequentiality script and honesty oath produced significantly lower WTP values than all other experimental treatments. Conditioning our data for both consequentiality and ANA resulted in significant improvements in model fit across all treatments. Results indicate that not accounting for ANA has important implications for welfare estimates. While we cannot fully explain the connection, the combination of the consequentiality script, honesty oath, and inferred ANA allowed us to better distinguish respondents who attend to attributes from those who ignore them.
    Keywords: consequentiality, honesty oath, attribute non-attendance, choice experiment, Marketing, Research Methods/ Statistical Methods,
    Date: 2016
  12. By: Léopold Simar (Institut de Statistique, Biostatistique et Sciences Actuarielles, Université Catholique de Louvain); Valentin Zelenyuk (School of Economics, The University of Queensland)
    Abstract: Applied researchers in the field of efficiency and productivity analysis often need to estimate, and draw inferences about, aggregate efficiency, such as industry efficiency or the aggregate efficiency of a group of distinct firms within an industry (e.g., public vs. private firms, regulated vs. unregulated firms, etc.). While there are approaches to obtain point estimates for such important measures, no asymptotic theory has been derived for them; this paper fills that gap in the literature. Specifically, we develop full asymptotic theory for aggregate efficiency measures both when the individual true efficiency scores being aggregated are observed and when they are unobserved and estimated via DEA or FDH. As a result, the developed theory opens a path for more accurate and theoretically better grounded statistical inference on aggregate efficiency estimates such as industry efficiency.
    Keywords: DEA, FDH, Efficiency, Aggregation, Industry Efficiency, Asymptotics, Limiting distribution, Consistency, Convergence, Jackknife, Bias correction
    Date: 2016–05
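A minimal sketch of the aggregation the abstract refers to, on one generic reading: industry-level efficiency as an output-share-weighted mean of firm-level efficiency scores. The scores and weights below are made up for illustration; the paper's contribution is the asymptotic theory for estimates of such aggregates, not this arithmetic.

```python
import numpy as np

# Hypothetical firm-level efficiency scores (e.g., from DEA or FDH) and firm
# outputs used as aggregation weights.
eff = np.array([0.9, 0.7, 0.6, 0.95])
output = np.array([10.0, 5.0, 2.0, 20.0])

weights = output / output.sum()           # output shares sum to 1
industry_eff = float(weights @ eff)       # aggregate (industry) efficiency
print(round(industry_eff, 3))             # -> 0.884
```

Because the weighted aggregate depends on every estimated score, its sampling distribution inherits the (slow, nonstandard) convergence of the DEA/FDH estimators, which is why dedicated asymptotic theory is needed for valid inference.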
  13. By: Atwood, Joseph; Joglekar, Alison
    Keywords: Simultaneous Equation Model, Limited Dependent Variable, Discrete Choice, Theil Correction, Agribusiness, Agricultural and Food Policy, Research and Development/Tech Change/Emerging Technologies, Research Methods/ Statistical Methods, Resource /Energy Economics and Policy,
    Date: 2016

This nep-ecm issue is ©2016 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at . For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject line, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.