nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒07‒05
sixteen papers chosen by
Sune Karlsson
Orebro University

  1. Bayesian Inference of Asymmetric Stochastic Conditional Duration Models By Zhongxian Men; Adam W. Kolkiewicz; Tony S. Wirjanto
  2. Inference in Semiparametric Binary Response Models with Interval Data By Yuanyuan Wan; Haiqing Xu
  3. Stochastic Conditional Duration Models with Mixture Processes By Tony S. Wirjanto; Adam W. Kolkiewicz; Zhongxian Men
  4. Mixed frequency structural models: estimation, and policy analysis By Claudia Foroni; Massimiliano Marcellino
  5. Semi-Parametric, Generalized Additive Vector Autoregressive Models of Spatial Price Dynamics By Guney, Selin; Goodwin, Barry K.
  6. Testing for equality of an increasing number of spectral density functions By Javier Hidalgo; Pedro Souza
  7. Cost-effective estimation of the population mean using prediction estimators By Fujii, Tomoki; van der Weide, Roy
  8. A multivariate extension of a vector of Poisson-Dirichlet processes By W. Zhu; Fabrizio Leisen
  9. Measuring Firm Performance using Nonparametric Quantile-type Distances By Daouia, Abdelaati; Simar, Léopold; Wilson, Paul
  10. A Threshold Stochastic Conditional Duration Model for Financial Transaction Data By Zhongxian Men; Tony S. Wirjanto; Adam W. Kolkiewicz
  11. A gamma-moment approach to monotonic boundaries estimation By Daouia, Abdelaati; Girard, Stéphane; Guillou, Armelle
  12. Evaluating the Importance of Multiple Imputations of Missing Data on Stochastic Frontier Analysis Efficiency Measures By Shaik, Saleem; Tokovenko, Oleksiy
  13. Underidentified SVAR models: A framework for combining short and long-run restrictions with sign-restrictions By Andrew Binning
  14. Measuring poverty dynamics with synthetic panels based on cross-sections By Dang, Hai-Anh; Lanjouw, Peter
  15. Tying up loose ends: A note on the impact of omitting MA residuals from panel energy demand models based on the Koyck lag transformation By David C Broadstock; Lester C Hunt
  16. Moderate deviations for importance sampling estimators of risk measures By Pierre Nyquist

  1. By: Zhongxian Men (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Tony S. Wirjanto (Department of Statistics & Actuarial Science, University of Waterloo, Canada)
    Abstract: This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and the latent log-duration process. Novel Markov Chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used for SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that the proposed models and the corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts of the fitted models. Applications to IBM transaction data allow comparison of our models and methods with those existing in the literature.
    Keywords: Stochastic Duration; Bayesian Inference; Markov Chain Monte Carlo; Leverage Effect; Acceptance-rejection; Slice Sampler
    Date: 2013–05
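As a rough illustration of the SCD model class described in this abstract, the following sketch simulates durations as d_t = exp(psi_t) * eps_t with a latent Gaussian AR(1) log-duration psi_t. The Exponential measurement innovation, the Gaussian-copula leverage device, and all parameter values are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from math import erf, sqrt

def simulate_scd(n, alpha=0.05, beta=0.9, sigma=0.2, rho=-0.3, seed=0):
    """Simulate durations d_t = exp(psi_t) * eps_t, where the latent
    log-duration follows psi_{t+1} = alpha + beta * psi_t + sigma * eta_t.
    Leverage: the Gaussian driver of eps_t is correlated (rho) with eta_t,
    and eps_t is mapped to Exponential(1) by the inverse-CDF transform.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    # standard normal CDF of the first coordinate, clipped for stability
    u = np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in z[:, 0]])
    u = np.clip(u, 1e-12, 1.0 - 1e-12)
    eps = -np.log1p(-u)                      # Exponential(1) innovations
    psi = np.empty(n)
    psi[0] = alpha / (1.0 - beta)            # unconditional mean of psi
    for t in range(n - 1):
        psi[t + 1] = alpha + beta * psi[t] + sigma * z[t, 1]
    return np.exp(psi) * eps
```

A negative rho shortens expected durations after a surprisingly long duration, which is one way the leverage effect mentioned in the abstract can be mimicked.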
  2. By: Yuanyuan Wan; Haiqing Xu
    Abstract: This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002, MT). In this partially identified model, we propose a new estimator based on MT's modified maximum score (MMS) method by introducing density weights to the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges to the identified set at a nearly cube-root-n rate. Further, we propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure.
    Keywords: Interval data, semiparametric binary response model, density weights, U-process
    JEL: C12 C14 C24
    Date: 2013–06–25
  3. By: Tony S. Wirjanto (Department of Statistics and Actuarial Science, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics and Actuarial Science, University of Waterloo, Canada); Zhongxian Men (Department of Statistics and Actuarial Science, University of Waterloo, Canada)
    Abstract: This paper studies a stochastic conditional duration (SCD) model with a mixture of distribution processes for financial assets' transaction data. Specifically, it imposes a mixture of two positive distributions on the innovations of the observed duration process, where the mixture component distributions can be Exponential, Gamma or Weibull. The model also allows for correlation between the observed durations and the logarithm of the latent conditionally expected durations in order to capture a leverage effect known to exist in the equity market. In addition, the proposed mixture SCD model is shown to be able to accommodate possibly heavy tails of the marginal distribution of durations. Novel Markov Chain Monte Carlo (MCMC) algorithms are developed for Bayesian inference of parameters and duration forecasting of these models. Simulation studies and empirical applications to two stock duration data sets are provided to assess the performance of the proposed mixture SCD models and the accompanying MCMC algorithms.
    Keywords: Stochastic conditional duration; Mixture of distributions; Bayesian inference; Markov Chain Monte Carlo; Leverage effect; Slice sampler
    Date: 2013–05
  4. By: Claudia Foroni (Norges Bank (Central Bank of Norway)); Massimiliano Marcellino (European University Institute, Bocconi University and CEPR)
    Abstract: In this paper we show analytically, with simulation experiments, and with actual data that a mismatch between the time scale of a DSGE model and that of the time series data used for its estimation generally creates identification problems, introduces estimation bias and distorts the results of policy analysis. On the constructive side, we prove that the use of mixed frequency data, combined with a proper estimation approach, can alleviate the temporal aggregation bias, mitigate the identification issues, and yield more reliable policy conclusions. The problems and the possible remedy are illustrated in the context of standard structural monetary policy models.
    Keywords: Structural VAR, DSGE models, temporal aggregation, mixed frequency data, estimation, policy analysis
    JEL: C32 C43 E32
    Date: 2013–06–11
  5. By: Guney, Selin; Goodwin, Barry K.
    Abstract: An extensive empirical literature addressing the behavior of prices over time and across spatially distinct markets has grown substantially over time. A fundamental axiom of economics, the "Law of One Price", underlies the arbitrage behavior thought to characterize such relationships. This literature has progressed from a simple consideration of correlation coefficients and linear regression models to classes of models that address particular time series properties of price data and consider nonlinear price linkages. In recent years, this literature has focused on models capable of accommodating structural change and regime switching behavior. This regime switching behavior has been addressed through the application of nonlinear time series models such as smooth and discrete threshold autoregressive models. The regime switching behavior arises because of unobservable transactions costs which may result in discrete trade/no trade regimes or smooth, continuous transitions among different states of the market. As the empirical literature has evolved, it has applied increasingly flexible models of regime switching. For example, Goodwin, Holt, and Prestemon (2012) applied smooth transition autoregressive models to consider regional linkages in markets for oriented strand board lumber products. Enders and Holt (2012) examined commodity price relationships using a series of overlapping smooth transition functions to capture structural changes and mean shifting behavior. This literature has also involved an evolution in the methods for statistically testing structural change and regime switching behaviors. Chow tests with known break points have evolved into tests of discrete and gradual mean shifting with unknown break points and variable speeds of adjustment among regimes. These tests address the widely recognized problems associated with nonstandard test statistics and parameters that may be unidentified under null hypotheses.
In this paper, we propose a new class of semi-parametric models that accommodate mean shifting behavior in a vector autoregressive modeling framework. We view this approach as a natural next step in the evolution of nonlinear time series models of spatial and regional price behavior. To this end, we consider recent advances in semiparametric modeling that have developed methods for additive models that consist of a mixture of parametric and nonparametric components. Our vector autoregressive models adopt the "Generalized Additive Models" (GAM) estimation procedures of Hastie and Tibshirani (1986) and Linton (2000). In particular, we use the backfitting and integration algorithms developed for GAM model estimation to incorporate a non-parametric mean shift in the linkages describing individual pairs and larger groups of market prices. Our empirical specification involves simple vector error correction models that relate price differences to lagged values of prices and price differentials. Our application is to daily data collected from a number of important corn and soybean markets at spatially distinct locations in North Carolina. These data have been previously utilized to evaluate regional price linkages and spatial market integration (see, for example, Goodwin and Piggott (2001)). We use generalized impulse response functions to evaluate the dynamics of regional price adjustments to localized shocks in individual markets. Implications for regional price adjustments and, in particular, adjustments during recent periods of high volatility, are discussed in the paper. Finally, we offer suggestions for further extensions of the semi-parametric approach.
    Keywords: Demand and Price Analysis, Research Methods/ Statistical Methods,
    Date: 2013
  6. By: Javier Hidalgo; Pedro Souza
    Abstract: Nowadays practitioners frequently face the problem of modelling large data sets. Relevant examples include spatio-temporal or panel data models with large N and T. In these cases, deciding on a particular dynamic model for each individual/population, which plays a crucial role in prediction and inference, can be a very onerous and complex task. The aim of this paper is thus to examine a nonparametric test for the equality of the linear dynamic models as the number of individuals increases without bound. The test has two main features: (a) there is no need to choose any bandwidth parameter and (b) the asymptotic distribution of the test is a normal random variable.
    Date: 2013–06
  7. By: Fujii, Tomoki; van der Weide, Roy
    Abstract: This paper considers the prediction estimator as an efficient estimator for the population mean. The study builds on an earlier study that proved that the prediction estimator based on the iteratively weighted least squares estimator outperforms the sample mean. The analysis finds that a certain moment condition must hold in general for the prediction estimator based on a Generalized Method of Moments estimator to be at least as efficient as the sample mean. In an application to cost-effective double sampling, the authors show how prediction estimators may be adopted to maximize statistical precision (minimize financial costs) under a budget constraint (statistical precision constraint). This approach is particularly useful when the outcome variable of interest is expensive to observe relative to observing its covariates.
    Date: 2013–06–01
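The double-sampling idea in this abstract can be illustrated with a minimal regression-based prediction estimator. The OLS fit below is a stand-in assumption for the paper's GMM-based estimator, and the function name is mine.

```python
import numpy as np

def prediction_estimator(x_full, x_sub, y_sub):
    """Estimate the population mean of y when the covariate x is observed
    for the full sample but y only for a (costlier-to-survey) subsample.
    Fit OLS with an intercept on the subsample and average the predictions
    over the full sample; with an intercept, subsample residuals average
    to zero, so no residual correction term is needed.
    A minimal sketch: the paper works with a GMM-based estimator instead."""
    X_sub = np.column_stack([np.ones(len(x_sub)), x_sub])
    beta, *_ = np.linalg.lstsq(X_sub, y_sub, rcond=None)
    X_full = np.column_stack([np.ones(len(x_full)), x_full])
    return float(np.mean(X_full @ beta))
```

When x predicts y well, this uses the cheap full-sample covariate information to reduce the variance relative to the subsample mean of y alone.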
  8. By: W. Zhu; Fabrizio Leisen
    Abstract: Recently, Leisen and Lijoi (2011) introduced a bivariate vector of random probability measures with Poisson-Dirichlet marginals, where the dependence is induced through a Lévy copula. In this paper the same approach is used to generalize such a vector to the multivariate setting. Some non-trivial results are proved in the multidimensional case, in particular the Laplace transform and the Exchangeable Partition Probability Function (EPPF). Finally, some numerical illustrations of the EPPF are provided.
    Keywords: Bayesian inference, Dirichlet process, Vectors of Poisson-Dirichlet processes, Multivariate Lévy measure, Partial exchangeability, Partition probability function
    Date: 2013–06
  9. By: Daouia, Abdelaati (TSE,UCL); Simar, Léopold (UCL); Wilson, Paul (University of Clemson)
    Abstract: When faced with multiple inputs X ∈ R^p_+ and outputs Y ∈ R^q_+, traditional quantile regression of Y conditional on X = x for measuring economic efficiency in the output (input) direction is thwarted by the absence of a natural ordering of Euclidean space for dimensions q (p) greater than one. Daouia and Simar (2007) used nonstandard conditional quantiles to address this problem, conditioning on Y ≥ y (X ≤ x) in the output (input) orientation, but the resulting quantiles depend on the a priori chosen direction. This paper uses a dimensionless transformation of the (p + q)-dimensional production process to develop an alternative formulation of distance from a realization of (X, Y) to the efficient support boundary, motivating a new, unconditional quantile frontier lying inside the joint support of (X, Y), but near the full, efficient frontier. The interpretation is analogous to univariate quantiles and corrects some of the disappointing properties of the conditional quantile-based approach. By contrast with the latter, our approach determines a unique partial-quantile frontier independent of the chosen orientation (input, output, hyperbolic or directional distance). We prove that both the resulting efficiency score and its estimator share desirable monotonicity properties. Simple arguments from extreme-value theory are used to derive the asymptotic distributional properties of the corresponding empirical efficiency scores (both full and partial). The usefulness of the quantile-type estimator is shown from infinitesimal and global robustness theory viewpoints via a comparison with the previous conditional quantile-based approach. A diagnostic tool is developed to find the appropriate quantile order; in the literature to date, this trimming order has been fixed a priori. The methodology is used to analyze the performance of U.S. credit unions, where outliers are likely to affect traditional approaches.
    Date: 2013–03
  10. By: Zhongxian Men (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Tony S. Wirjanto (Department of Statistics & Actuarial Science, University of Waterloo, Canada; School of Accounting and Finance, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics & Actuarial Science, University of Waterloo, Canada)
    Abstract: This paper proposes a threshold stochastic conditional duration (TSCD) model to capture the asymmetric property of financial transactions. The innovation of the observable duration equation is assumed to follow a threshold distribution with two component distributions switching between two regimes. The distributions in different regimes are assumed to be Exponential, Gamma or Weibull. To account for uncertainty in the unobserved threshold level, the observed durations are treated as self-exciting threshold variables. Adopting a Bayesian approach, we develop novel Markov Chain Monte Carlo algorithms to estimate all of the unknown parameters and latent states. To forecast the one-step ahead durations, we employ an auxiliary particle filter where the filter and prediction distributions of the latent states are approximated. The proposed model and the developed MCMC algorithms are illustrated by using both simulated and actual financial transaction data. For model selection, a Bayesian deviance information criterion is calculated to compare our model with other competing models in the literature. Overall, we find that the threshold SCD model performs better than the SCD model when a single positive distribution is assumed for the innovation of the duration equation.
    Keywords: Stochastic conditional duration; Threshold; Markov Chain Monte Carlo; Auxiliary particle filter; Deviance information criterion
    Date: 2013–05
  11. By: Daouia, Abdelaati (TSE,UCL); Girard, Stéphane (INRIA - Grenoble Rhône-Alpes); Guillou, Armelle (IRMA-Université de Strasbourg)
    Abstract: The estimation of optimal support boundaries under the monotonicity constraint is relatively unexplored and still in full development. This article examines a new extreme-value-based model which provides a valid alternative to complete envelopment frontier models, which often suffer from a lack of precision, and to purely stochastic ones, which are known to be sensitive to model misspecification. We provide different motivating applications, including the estimation of the minimal cost in production activity and the assessment of the reliability of nuclear reactors.
    Keywords: cost function, edge data, extreme-value index, free disposal hull, moment frontier
    JEL: C13 C14 D20
    Date: 2013–05
  12. By: Shaik, Saleem; Tokovenko, Oleksiy
    Abstract: The robustness of the multiple imputation of missing data on parameter coefficients and efficiency measures is evaluated using stochastic frontier analysis in the panel Bayesian context. Second, the implications of multiple imputations on stochastic frontier analysis technical efficiency measures under alternative distributional assumptions (half-normal, truncated-normal and exponential) are evaluated. Empirical estimates indicate differences in the between-variance and within-variance of parameter coefficients estimated from stochastic frontier analysis and generalized linear models. Within stochastic frontier analysis, the between-variance and within-variance of technical efficiency are different across the three alternative distributional assumptions. Finally, results from this study indicate that even though the between- and within-variance of the multiply imputed data is close to zero, the between- and within-variance of the production function parameters, as well as the technical efficiency measures, are different.
    Keywords: Agricultural and Food Policy, Research Methods/ Statistical Methods,
    Date: 2013
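The between- and within-variance decomposition this abstract refers to follows Rubin's standard combining rules for multiply imputed data, which can be sketched as follows (the function name is mine):

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine M multiply-imputed point estimates and their variances:
    within-imputation variance W = mean of the per-imputation variances,
    between-imputation variance B = sample variance of the point estimates,
    total variance T = W + (1 + 1/M) * B."""
    q = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    m = len(q)
    qbar = q.mean()          # combined point estimate
    w = v.mean()             # within-imputation variance
    b = q.var(ddof=1)        # between-imputation variance
    return qbar, w, b, w + (1.0 + 1.0 / m) * b
```

A large B relative to W signals that the imputation uncertainty, rather than ordinary sampling noise, dominates the total variance of the estimate.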
  13. By: Andrew Binning (Norges Bank (Central Bank of Norway))
    Abstract: I describe a new method for imposing zero restrictions (both short and long-run) in combination with conventional sign restrictions. In particular, I extend the Rubio-Ramírez et al. (2010) algorithm for applying short and long-run restrictions for exactly identified models to models that are underidentified. In turn this can be thought of as a unifying framework for short-run, long-run and sign restrictions. I demonstrate my algorithm with two examples. In the first example I estimate a VAR model using the Smets & Wouters (2007) dataset and impose sign and zero restrictions based on the impulse responses from their DSGE model. In the second example I estimate a BVAR model using the Mountford & Uhlig (2009) dataset and impose the same sign and zero restrictions they use to identify an anticipated government revenue shock.
    Keywords: SVAR, Identification, Impulse responses, Short-run restrictions, Long-run restrictions, Sign restrictions
    Date: 2013–06–10
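The sign-restriction ingredient of such algorithms is commonly implemented by rotating a Cholesky factor of the reduced-form covariance with random orthogonal matrices. The sketch below shows that step only; the paper's actual contribution, combining it with short- and long-run zero restrictions, is not reproduced here, and the function name is mine.

```python
import numpy as np

def draw_sign_identified_impact(sigma_u, sign_pattern, max_draws=10_000, seed=0):
    """Draw a candidate structural impact matrix satisfying sign restrictions.

    sigma_u      : reduced-form residual covariance (k x k)
    sign_pattern : k x k array of +1 / -1 / 0 (0 = unrestricted) on the
                   impact responses.
    Standard rotation approach: Gaussian draw -> QR -> sign check."""
    sigma_u = np.asarray(sigma_u, dtype=float)
    rng = np.random.default_rng(seed)
    P = np.linalg.cholesky(sigma_u)        # one admissible factorisation
    k = sigma_u.shape[0]
    for _ in range(max_draws):
        X = rng.standard_normal((k, k))
        Q, R = np.linalg.qr(X)
        Q = Q @ np.diag(np.sign(np.diag(R)))   # normalise the draw of Q
        A = P @ Q                              # candidate impact matrix
        if np.all((sign_pattern == 0) | (np.sign(A) == sign_pattern)):
            return A
    return None                                # no admissible draw found
```

Because Q is orthogonal, every accepted candidate satisfies A A' = sigma_u, so all draws are observationally equivalent factorisations distinguished only by the sign restrictions.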
  14. By: Dang, Hai-Anh; Lanjouw, Peter
    Abstract: Panel data conventionally underpin the analysis of poverty mobility over time. However, such data are not readily available for most developing countries. Far more common are the "snap-shots" of welfare captured by cross-section surveys. This paper proposes a method to construct synthetic panel data from cross sections which can provide point estimates of poverty mobility. In contrast to traditional pseudo-panel methods that require multiple rounds of cross-sectional data to study poverty at the cohort level, the proposed method can be applied to settings with as few as two survey rounds and also permits investigation at the more disaggregated household level. The procedure is implemented using cross-section survey data from several countries, spanning different income levels and geographical regions. Estimates fall within the 95 percent confidence interval -- or even one standard error in many cases -- of those based on actual panel data. The method is not only restricted to studying poverty mobility but can also accommodate investigation of other welfare outcome dynamics.
    Keywords: Statistical & Mathematical Sciences, Regional Economic Development, Poverty Lines, Rural Poverty Reduction, Science Education
    Date: 2013–06–01
  15. By: David C Broadstock (Research Institute of Economics and Management (RIEM), Southwestern University of Finance and Economics, Sichuan, China and Surrey Energy Economics Centre (SEEC), School of Economics, University of Surrey, UK.); Lester C Hunt (Surrey Energy Economics Centre (SEEC), University of Surrey, UK.)
    Abstract: Energy demand functions based on Koyck lag transformation result in an MA error process that is generally ignored in estimated panel data models. This note explores the implications of this assumption by estimating panel energy demand functions with asymmetric price responses and an MA process modelled explicitly. It is found that although the models with an MA term might be preferred statistically, they result in inferential problems implying that there might be a need to revisit the specification of panel energy demand functions used in a number of previous studies.
    Keywords: Koyck-lag transformation, Moving average errors, Panel data, Aggregate energy demand.
    JEL: C8 Q4
    Date: 2013–03
  16. By: Pierre Nyquist
    Abstract: Importance sampling has become an important tool for the computation of tail-based risk measures. Since such quantities are often determined mainly by rare events, standard Monte Carlo can be inefficient, and importance sampling provides a way to speed up computations. This paper considers moderate deviations for the weighted empirical process, the process analogue of the weighted empirical measure, arising in importance sampling. The moderate deviation principle is established as an extension of existing results. Using a delta method for large deviations established by Gao and Zhao (Ann. Statist., 2011) together with classical large deviation techniques, the moderate deviation principle for the weighted empirical process is extended to functionals of the weighted empirical process which correspond to risk measures. The main results are moderate deviation principles for importance sampling estimators of the quantile function of a distribution and Expected Shortfall.
    Date: 2013–06
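The basic mechanism the paper studies, a weighted empirical measure built from importance-sampling draws for a tail quantity, can be illustrated with a mean-shifted Gaussian proposal. The model, the shift, and the function name below are assumptions chosen for illustration, not taken from the paper.

```python
import numpy as np

def is_tail_probability(c, mu_shift, n=100_000, seed=0):
    """Importance-sampling estimate of P(X > c) for X ~ N(0, 1), using a
    mean-shifted N(mu_shift, 1) proposal. Each draw is reweighted by the
    likelihood ratio phi(y) / phi(y - mu_shift), which gives the weighted
    empirical measure underlying estimators of tail risk measures."""
    rng = np.random.default_rng(seed)
    y = rng.normal(mu_shift, 1.0, n)              # draws from the proposal
    w = np.exp(-mu_shift * y + 0.5 * mu_shift**2) # likelihood ratios
    return float(np.mean(w * (y > c)))
```

Shifting the proposal mean to the rare-event region (mu_shift near c) makes the indicator fire on roughly half the draws instead of almost never, which is the variance reduction that motivates importance sampling for tail-based risk measures.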

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.