
on Econometrics 
By:  Zhongxian Men (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Tony S. Wirjanto (Department of Statistics & Actuarial Science, University of Waterloo, Canada) 
Abstract:  This paper extends stochastic conditional duration (SCD) models for financial transaction data to allow for correlation between the innovations of the observed duration process and the latent log-duration process. Novel Markov Chain Monte Carlo (MCMC) algorithms are developed to fit the resulting SCD models under various distributional assumptions about the innovation of the measurement equation. Unlike the estimation methods commonly used for SCD models in the literature, we work with the original specification of the model, without subjecting the observation equation to a logarithmic transformation. Results of simulation studies suggest that our proposed models and corresponding estimation methodology perform quite well. We also apply an auxiliary particle filter technique to construct one-step-ahead in-sample and out-of-sample duration forecasts of the fitted models. An application to IBM transaction data allows a comparison of our models and methods to those existing in the literature. 
Keywords:  Stochastic Duration; Bayesian Inference; Markov Chain Monte Carlo; Leverage Effect; Acceptance-rejection; Slice Sampler 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:28_13&r=ecm 
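The correlated-innovation ("leverage") SCD setup and the particle-filter forecast described above can be sketched as follows. This is only an illustration: the Gaussian-copula link between the Exponential measurement innovation and the state innovation, all parameter values, and the use of a plain bootstrap particle filter (the paper uses an auxiliary particle filter, and the filter here also ignores the leverage term) are assumptions of this sketch, not the paper's algorithm.

```python
import numpy as np
from math import erf

def simulate_scd(n, mu=0.0, phi=0.9, sigma=0.2, rho=-0.3, seed=0):
    """Simulate durations d_t = exp(h_t) * eps_t with an AR(1) latent
    log-duration h_t. The Exponential(1) measurement innovation is tied to
    the state innovation through a Gaussian copula with correlation rho,
    one simple way to induce a leverage-type dependence (illustrative)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    h = np.empty(n)
    h[0] = mu
    for t in range(1, n):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma * z[t - 1, 1]
    # map the first normal to Exponential(1) through its CDF
    u = np.array([0.5 * (1.0 + erf(v / np.sqrt(2.0))) for v in z[:, 0]])
    eps = -np.log1p(-u)
    return np.exp(h) * eps

def forecast_duration(d, n_part=2000, mu=0.0, phi=0.9, sigma=0.2, seed=1):
    """One-step-ahead duration forecast with a bootstrap particle filter,
    assuming Exponential(1) measurement errors and known parameters."""
    rng = np.random.default_rng(seed)
    h = rng.normal(mu, sigma / np.sqrt(1.0 - phi**2), n_part)
    for obs in d:
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_part)
        logw = -h - obs * np.exp(-h)          # Exponential density of d_t | h_t
        w = np.exp(logw - logw.max())
        h = rng.choice(h, size=n_part, p=w / w.sum())   # resample
    h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_part)
    return float(np.exp(h).mean())            # E[eps] = 1
```

The forecast averages exp(h) over the propagated particles, since the Exponential innovation has unit mean.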
By:  Yuanyuan Wan; Haiqing Xu 
Abstract:  This paper studies the semiparametric binary response model with interval data investigated by Manski and Tamer (2002, MT). In this partially identified model, we propose a new estimator based on MT's modified maximum score (MMS) method, introducing density weights into the objective function, which allows us to develop asymptotic properties of the proposed set estimator for inference. We show that the density-weighted MMS estimator converges to the identified set at a nearly cube-root-n rate. Further, we propose an asymptotically valid inference procedure for the identified region based on subsampling. Monte Carlo experiments provide support for our inference procedure. 
Keywords:  Interval data, semiparametric binary response model, density weights, U-process 
JEL:  C12 C14 C24 
Date:  2013–06–25 
URL:  http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa492&r=ecm 
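A density-weighted maximum-score objective of the general kind described above can be sketched for point-valued data. Everything here is schematic: the leave-one-out KDE weights, the bandwidth, and the grid search over a unit-norm coefficient are assumptions of this illustration, not MT's or the authors' actual construction for interval data.

```python
import numpy as np

def loo_kde_weights(X, bw=0.5):
    """Leave-one-out Gaussian-product KDE at each observation, used as the
    density weight for that observation (schematic stand-in for the
    paper's weighting)."""
    n, p = X.shape
    diff = (X[:, None, :] - X[None, :, :]) / bw
    K = np.exp(-0.5 * (diff ** 2).sum(axis=-1))
    np.fill_diagonal(K, 0.0)
    return K.sum(axis=1) / ((n - 1) * (np.sqrt(2 * np.pi) * bw) ** p)

def weighted_max_score(y, X, n_grid=360):
    """Grid search over unit-norm beta in R^2 (scale is not identified in
    maximum-score settings), maximizing sum_i w_i (2 y_i - 1) sign(x_i' b)."""
    w = loo_kde_weights(X)
    angles = np.linspace(0.0, np.pi, n_grid, endpoint=False)
    best, best_score = None, -np.inf
    for a in angles:
        beta = np.array([np.cos(a), np.sin(a)])
        score = np.mean(w * (2 * y - 1) * np.sign(X @ beta))
        if score > best_score:
            best, best_score = beta, score
    return best
```

Because only the direction of beta is identified, the search is restricted to the unit half-circle.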
By:  Tony S. Wirjanto (Department of Statistics and Actuarial Science, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics and Actuarial Science, University of Waterloo, Canada); Zhongxian Men (Department of Statistics and Actuarial Science, University of Waterloo, Canada) 
Abstract:  This paper studies a stochastic conditional duration (SCD) model with a mixture of distribution processes for a financial asset's transaction data. Specifically, it imposes a mixture of two positive distributions on the innovations of the observed duration process, where the mixture component distributions could be either Exponential, Gamma or Weibull. The model also allows for correlation between the observed durations and the logarithm of the latent conditionally expected durations in order to capture a leverage effect known to exist in the equity market. In addition, the proposed mixture SCD model is shown to be able to accommodate possibly heavy tails of the marginal distribution of durations. Novel Markov Chain Monte Carlo (MCMC) algorithms are developed for Bayesian inference of parameters and duration forecasting of these models. Simulation studies and empirical applications to two stock duration data sets are provided to assess the performance of the proposed mixture SCD models and the accompanying MCMC algorithms. 
Keywords:  Stochastic conditional duration; Mixture of distributions; Bayesian inference; Markov Chain Monte Carlo; Leverage effect; Slice sampler 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:29_13&r=ecm 
By:  Claudia Foroni (Norges Bank (Central Bank of Norway)); Massimiliano Marcellino (European University Institute, Bocconi University and CEPR) 
Abstract:  In this paper we show analytically, with simulation experiments and with actual data that a mismatch between the time scale of a DSGE model and that of the time series data used for its estimation generally creates identification problems, introduces estimation bias and distorts the results of policy analysis. On the constructive side, we prove that the use of mixed frequency data, combined with a proper estimation approach, can alleviate the temporal aggregation bias, mitigate the identification issues, and yield more reliable policy conclusions. The problems and possible remedy are illustrated in the context of standard structural monetary policy models. 
Keywords:  Structural VAR, DSGE models, temporal aggregation, mixed frequency data, estimation, policy analysis 
JEL:  C32 C43 E32 
Date:  2013–06–11 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2013_15&r=ecm 
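The temporal aggregation bias discussed above can be illustrated in a much simpler setting than a DSGE model. In this sketch (a monthly AR(1) observed only as quarterly averages; the process and parameter values are illustrative, not the paper's), fitting an AR(1) to the aggregated series yields a markedly different persistence parameter, and the aggregated series in fact has an ARMA structure that the AR(1) fit ignores.

```python
import numpy as np

# Monthly AR(1) with coefficient 0.9, but the econometrician only sees
# quarterly averages.
rng = np.random.default_rng(0)
phi, n_months = 0.9, 120_000
x = np.empty(n_months)
x[0] = 0.0
for t in range(1, n_months):
    x[t] = phi * x[t - 1] + rng.standard_normal()

def ar1_ols(y):
    """OLS slope of y_t on y_{t-1} (no intercept; the series is mean zero)."""
    return float((y[:-1] @ y[1:]) / (y[:-1] @ y[:-1]))

phi_monthly = ar1_ols(x)
quarterly = x.reshape(-1, 3).mean(axis=1)   # temporal aggregation (averaging)
phi_quarterly = ar1_ols(quarterly)
```

With phi = 0.9, the quarterly fit lands near 0.8: a researcher calibrating a model at the wrong frequency recovers a distorted persistence parameter.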
By:  Guney, Selin; Goodwin, Barry K. 
Abstract:  An extensive empirical literature addressing the behavior of prices over time and across spatially distinct markets has grown substantially over time. A fundamental axiom of economics, the "Law of One Price", underlies the arbitrage behavior thought to characterize such relationships. This literature has progressed from a simple consideration of correlation coefficients and linear regression models to classes of models that address particular time series properties of price data and consider nonlinear price linkages. In recent years, this literature has focused on models capable of accommodating structural change and regime switching behavior. This regime switching behavior has been addressed through the application of nonlinear time series models such as smooth and discrete threshold autoregressive models. The regime switching behavior arises because of unobservable transactions costs, which may result in discrete trade/no-trade regimes or smooth, continuous transitions among different states of the market. As the empirical literature has evolved, it has applied increasingly flexible models of regime switching. For example, Goodwin, Holt, and Prestemon (2012) applied smooth transition autoregressive models to consider regional linkages in markets for oriented strand board lumber products. Enders and Holt (2012) examined commodity price relationships using a series of overlapping smooth transition functions to capture structural changes and mean shifting behavior. This literature has also involved an evolution in the methods for statistically testing structural change and regime switching behaviors. Chow tests with known break points have evolved into tests of discrete and gradual mean shifting with unknown break points and variable speeds of adjustment among regimes. These tests address the widely recognized problems associated with nonstandard test statistics and parameters that may be unidentified under null hypotheses. 
In this paper, we propose a new class of semiparametric models that accommodate mean shifting behavior in a vector autoregressive modeling framework. We view this approach as a natural next step in the evolution of nonlinear time series models of spatial and regional price behavior. To this end, we consider recent advances in semiparametric modeling that have developed methods for additive models that consist of a mixture of parametric and nonparametric components. Our vector autoregressive models adopt the "Generalized Additive Models" (GAM) estimation procedures of Hastie and Tibshirani (1986) and Linton (2000). In particular, we use the backfitting and integration algorithms developed for GAM model estimation to incorporate a nonparametric mean shift in the linkages describing individual pairs and larger groups of market prices. Our empirical specification involves simple vector error correction models that relate price differences to lagged values of prices and price differentials. Our application is to daily data collected from a number of important corn and soybean markets at spatially distinct locations in North Carolina. These data have previously been utilized to evaluate regional price linkages and spatial market integration (see, for example, Goodwin and Piggott (2001)). We use generalized impulse response functions to evaluate the dynamics of regional price adjustments to localized shocks in individual markets. Implications for regional price adjustments and, in particular, adjustments during recent periods of high volatility, are discussed in the paper. Finally, we offer suggestions for further extensions of the semiparametric approach. 
Keywords:  Demand and Price Analysis, Research Methods/Statistical Methods 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea13:151149&r=ecm 
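The backfitting algorithm that the GAM machinery above relies on can be sketched for a simple bivariate additive model. This is a generic textbook version (Nadaraya-Watson smoother, illustrative bandwidth and data), not the authors' vector error correction specification.

```python
import numpy as np

def kernel_smooth(x, y, bw=0.3):
    """Nadaraya-Watson smoother of y on x, evaluated at the sample points."""
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw) ** 2)
    return (K @ y) / K.sum(axis=1)

def backfit(y, x1, x2, n_iter=25):
    """Backfitting for the additive model y = a + f1(x1) + f2(x2) + e:
    repeatedly smooth the partial residuals against each covariate
    (Hastie and Tibshirani's GAM algorithm), centering each component so
    the decomposition is identified."""
    a = y.mean()
    f1, f2 = np.zeros_like(y), np.zeros_like(y)
    for _ in range(n_iter):
        f1 = kernel_smooth(x1, y - a - f2)
        f1 -= f1.mean()
        f2 = kernel_smooth(x2, y - a - f1)
        f2 -= f2.mean()
    return a, f1, f2
```

In the paper's setting, one of the additive components would be a nonparametric mean shift in an otherwise parametric VAR equation; the iteration is the same.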
By:  Javier Hidalgo; Pedro Souza 
Abstract:  Practitioners nowadays frequently face the problem of modelling large data sets. Relevant examples include spatiotemporal or panel data models with large N and T. In these cases, deciding on a particular dynamic model for each individual/population, which plays a crucial role in prediction and inference, can be a very onerous and complex task. The aim of this paper is thus to examine a nonparametric test for the equality of the linear dynamic models as the number of individuals increases without bound. The test has two main features: (a) there is no need to choose any bandwidth parameter and (b) the asymptotic distribution of the test is a normal random variable. 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:cep:stiecm:/2013/563&r=ecm 
By:  Fujii, Tomoki; van der Weide, Roy 
Abstract:  This paper considers the prediction estimator as an efficient estimator for the population mean. The study may be viewed as an extension of an earlier study, which proved that the prediction estimator based on the iteratively weighted least squares estimator outperforms the sample mean. The analysis finds that a certain moment condition must hold in general for the prediction estimator based on a Generalized Method of Moments estimator to be at least as efficient as the sample mean. In an application to cost-effective double sampling, the authors show how prediction estimators may be adopted to maximize statistical precision (or minimize financial cost) under a budget constraint (or a statistical precision constraint). This approach is particularly useful when the outcome variable of interest is expensive to observe relative to observing its covariates. 
Date:  2013–06–01 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:6509&r=ecm 
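The double-sampling idea above can be sketched with the classical linear regression estimator of a mean: y is observed only on a subsample, its covariate on the full sample. This linear version is an illustration of the gain over the sample mean; the paper works with general GMM-based prediction estimators, and all data-generating choices here are assumptions of the sketch.

```python
import numpy as np

def prediction_estimator(y_sub, x_sub, x_full):
    """Regression-type prediction estimator of the population mean of y:
    correct the subsample mean of y using the fitted slope on x and the
    covariate mean from the full sample."""
    b = np.cov(x_sub, y_sub)[0, 1] / np.var(x_sub)
    return float(y_sub.mean() + b * (x_full.mean() - x_sub.mean()))

# Monte Carlo comparison against the plain subsample mean.
rng = np.random.default_rng(0)
pred, naive = [], []
for _ in range(2000):
    x = rng.standard_normal(2000)
    y = 1.0 + 2.0 * x + rng.standard_normal(2000)
    sub = rng.choice(2000, size=200, replace=False)  # where y is observed
    pred.append(prediction_estimator(y[sub], x[sub], x))
    naive.append(y[sub].mean())
```

With a strong covariate, the prediction estimator's variance is a fraction of the subsample mean's, which is exactly the precision-per-dollar argument for double sampling.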
By:  W. Zhu; Fabrizio Leisen 
Abstract:  Recently, Leisen and Lijoi (2011) introduced a bivariate vector of random probability measures with Poisson-Dirichlet marginals, where the dependence is induced through a Lévy copula. In this paper the same approach is used to generalize such a vector to the multivariate setting. Some nontrivial results are proved in the multidimensional case: in particular, the Laplace transform and the Exchangeable Partition Probability Function (EPPF). Finally, some numerical illustrations of the EPPF are provided. 
Keywords:  Bayesian inference, Dirichlet process, Vectors of Poisson-Dirichlet processes, Multivariate Lévy measure, Partial exchangeability, Partition probability function 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws132220&r=ecm 
By:  Daouia, Abdelaati (TSE, UCL); Simar, Léopold (UCL); Wilson, Paul (Clemson University) 
Abstract:  When faced with multiple inputs X ∈ R^p_+ and outputs Y ∈ R^q_+, traditional quantile regression of Y conditional on X = x for measuring economic efficiency in the output (input) direction is thwarted by the absence of a natural ordering of Euclidean space for dimensions q (p) greater than one. Daouia and Simar (2007) used nonstandard conditional quantiles to address this problem, conditioning on Y ≥ y (X ≤ x) in the output (input) orientation, but the resulting quantiles depend on the a priori chosen direction. This paper uses a dimensionless transformation of the (p + q)-dimensional production process to develop an alternative formulation of distance from a realization of (X, Y) to the efficient support boundary, motivating a new, unconditional quantile frontier lying inside the joint support of (X, Y), but near the full, efficient frontier. The interpretation is analogous to univariate quantiles and corrects some of the disappointing properties of the conditional quantile-based approach. By contrast with the latter, our approach determines a unique partial-quantile frontier independent of the chosen orientation (input, output, hyperbolic or directional distance). We prove that both the resulting efficiency score and its estimator share desirable monotonicity properties. Simple arguments from extreme-value theory are used to derive the asymptotic distributional properties of the corresponding empirical efficiency scores (both full and partial). The usefulness of the quantile-type estimator is shown from infinitesimal and global robustness theory viewpoints via a comparison with the previous conditional quantile-based approach. A diagnostic tool is developed to find the appropriate quantile order; in the literature to date, this trimming order has been fixed a priori. The methodology is used to analyze the performance of U.S. credit unions, where outliers are likely to affect traditional approaches. 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:27253&r=ecm 
By:  Zhongxian Men (Department of Statistics & Actuarial Science, University of Waterloo, Canada); Tony S. Wirjanto (Department of Statistics & Actuarial Science, University of Waterloo, Canada; School of Accounting and Finance, University of Waterloo, Canada); Adam W. Kolkiewicz (Department of Statistics & Actuarial Science, University of Waterloo, Canada) 
Abstract:  This paper proposes a threshold stochastic conditional duration (TSCD) model to capture the asymmetric property of financial transactions. The innovation of the observable duration equation is assumed to follow a threshold distribution with two component distributions switching between two regimes. The distributions in different regimes are assumed to be Exponential, Gamma or Weibull. To account for uncertainty in the unobserved threshold level, the observed durations are treated as self-exciting threshold variables. Adopting a Bayesian approach, we develop novel Markov Chain Monte Carlo algorithms to estimate all of the unknown parameters and latent states. To forecast the one-step-ahead durations, we employ an auxiliary particle filter in which the filtering and prediction distributions of the latent states are approximated. The proposed model and the developed MCMC algorithms are illustrated using both simulated and actual financial transaction data. For model selection, a Bayesian deviance information criterion is calculated to compare our model with other competing models in the literature. Overall, we find that the threshold SCD model performs better than the SCD model when a single positive distribution is assumed for the innovation of the duration equation. 
Keywords:  Stochastic conditional duration; Threshold; Markov Chain Monte Carlo; Auxiliary particle filter; Deviance information criterion 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:rim:rimwps:30_13&r=ecm 
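The self-exciting threshold mechanism described above can be sketched as a simulator. The particular regime laws (Exponential vs. Weibull), the regime-specific state means, and all parameter values are illustrative assumptions, not the paper's estimates; the Bayesian estimation itself is not reproduced here.

```python
import numpy as np

def simulate_tscd(n, r=1.0, mu=(0.0, 0.5), phi=0.9, sigma=0.2, seed=0):
    """Simulate a self-exciting threshold SCD model: the regime at time t
    is set by whether the previous observed duration exceeded the
    threshold r, and each regime has its own state-equation mean and its
    own positive innovation distribution."""
    rng = np.random.default_rng(seed)
    d = np.empty(n)
    h, regime = 0.0, 0
    for t in range(n):
        m = mu[regime]
        h = m + phi * (h - m) + sigma * rng.standard_normal()
        eps = rng.exponential() if regime == 0 else rng.weibull(1.5)
        d[t] = np.exp(h) * eps
        regime = int(d[t] > r)   # self-exciting: the data pick the regime
    return d
```

Because the high regime pulls the latent mean up, durations following a threshold exceedance are systematically longer, which is the asymmetry the model is built to capture.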
By:  Daouia, Abdelaati (TSE, UCL); Girard, Stéphane (INRIA, Grenoble Rhône-Alpes); Guillou, Armelle (IRMA, Université de Strasbourg) 
Abstract:  The estimation of optimal support boundaries under the monotonicity constraint is relatively unexplored and still in full development. This article examines a new extreme-value based model which provides a valid alternative to complete envelopment frontier models, which often suffer from a lack of precision, and to purely stochastic ones, which are known to be sensitive to model misspecification. We provide different motivating applications, including the estimation of the minimal cost in production activity and the assessment of the reliability of nuclear reactors. 
Keywords:  cost function, edge data, extreme-value index, free disposal hull, moment frontier 
JEL:  C13 C14 D20 
Date:  2013–05 
URL:  http://d.repec.org/n?u=RePEc:tse:wpaper:27252&r=ecm 
By:  Shaik, Saleem; Tokovenko, Oleksiy 
Abstract:  The robustness of multiple imputation of missing data with respect to parameter coefficients and efficiency measures is evaluated using stochastic frontier analysis in a panel Bayesian context. Second, the implications of multiple imputation for stochastic frontier technical efficiency measures under alternative distributional assumptions (half-normal, truncated normal, and exponential) are evaluated. Empirical estimates indicate differences in the between-variance and within-variance of parameter coefficients estimated from stochastic frontier analysis and generalized linear models. Within stochastic frontier analysis, the between-variance and within-variance of technical efficiency differ across the three alternative distributional assumptions. Finally, results from this study indicate that even though the between- and within-variance of the multiply imputed data is close to zero, the between- and within-variance of the production function parameters, as well as the technical efficiency measures, are different. 
Keywords:  Agricultural and Food Policy, Research Methods/Statistical Methods 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:ags:aaea13:150792&r=ecm 
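The between/within variance decomposition the abstract refers to is the standard multiple-imputation one, which can be stated compactly with Rubin's combining rules. The stochastic-frontier application itself is not reproduced; this is just the generic combining step.

```python
import numpy as np

def rubin_combine(estimates, variances):
    """Combine M multiple-imputation point estimates and their sampling
    variances by Rubin's rules: the total variance is the mean
    within-imputation variance plus (1 + 1/M) times the between-imputation
    variance of the point estimates."""
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    m = len(q)
    within = u.mean()
    between = q.var(ddof=1)
    total = within + (1.0 + 1.0 / m) * between
    return q.mean(), within, between, total
```

In the paper's exercise, the "estimates" would be a frontier parameter or a technical efficiency score computed on each imputed data set, and the between-variance measures how much the imputations disagree.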
By:  Andrew Binning (Norges Bank (Central Bank of Norway)) 
Abstract:  I describe a new method for imposing zero restrictions (both short- and long-run) in combination with conventional sign restrictions. In particular I extend the Rubio-Ramírez et al. (2010) algorithm for applying short- and long-run restrictions in exactly identified models to models that are under-identified. This can be thought of as a unifying framework for short-run, long-run and sign restrictions. I demonstrate my algorithm with two examples. In the first example I estimate a VAR model using the Smets & Wouters (2007) dataset and impose sign and zero restrictions based on the impulse responses from their DSGE model. In the second example I estimate a BVAR model using the Mountford & Uhlig (2009) data set and impose the same sign and zero restrictions they use to identify an anticipated government revenue shock. 
Keywords:  SVAR, Identification, Impulse responses, Short-run restrictions, Long-run restrictions, Sign restrictions 
Date:  2013–06–10 
URL:  http://d.repec.org/n?u=RePEc:bno:worpap:2013_14&r=ecm 
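The sign-restriction half of this machinery can be sketched with the standard accept/reject scheme over random orthogonal rotations of the Cholesky factor. The zero-restriction step, which is the paper's contribution, would further constrain the columns of the rotation to particular subspaces and is deliberately omitted here; the covariance matrix and the checks below are illustrative.

```python
import numpy as np

def random_rotation(k, rng):
    """Draw Q uniformly from the orthogonal group via QR of a Gaussian
    matrix, with the sign normalization that makes the draw uniform."""
    Q, R = np.linalg.qr(rng.standard_normal((k, k)))
    return Q @ np.diag(np.sign(np.diag(R)))

def sign_identified_draws(sigma, satisfies, n_draws=5000, seed=0):
    """Accept/reject for sign restrictions on the impact matrix
    B = chol(sigma) @ Q: keep rotations whose impact responses pass the
    user-supplied check. Every accepted B reproduces sigma = B B'."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(sigma)
    out = []
    for _ in range(n_draws):
        B = L @ random_rotation(sigma.shape[0], rng)
        if satisfies(B):
            out.append(B)
    return out
```

Usage: with a 2-variable covariance matrix and the restriction that shock 1 moves both variables up on impact, the accepted set traces out the identified set of impact matrices.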
By:  Dang, HaiAnh; Lanjouw, Peter 
Abstract:  Panel data conventionally underpin the analysis of poverty mobility over time. However, such data are not readily available for most developing countries. Far more common are the "snapshots" of welfare captured by cross-section surveys. This paper proposes a method to construct synthetic panel data from cross sections which can provide point estimates of poverty mobility. In contrast to traditional pseudo-panel methods that require multiple rounds of cross-sectional data to study poverty at the cohort level, the proposed method can be applied to settings with as few as two survey rounds and also permits investigation at the more disaggregated household level. The procedure is implemented using cross-section survey data from several countries, spanning different income levels and geographical regions. Estimates fall within the 95 percent confidence interval (or even one standard error, in many cases) of those based on actual panel data. The method is not restricted to studying poverty mobility; it can also accommodate investigation of other welfare outcome dynamics. 
Keywords:  Statistical & Mathematical Sciences, Regional Economic Development, Poverty Lines, Rural Poverty Reduction, Science Education 
Date:  2013–06–01 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:6504&r=ecm 
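The two-round synthetic-panel idea can be sketched very crudely: project round-1 outcomes on time-invariant covariates, impute a round-1 outcome for each round-2 household, and tabulate joint poverty status. The paper's actual procedure additionally models the correlation of the household errors across rounds; the independent residual draws below are only a stand-in for that step, and all variable names and the linear projection are assumptions of this sketch.

```python
import numpy as np

def synthetic_panel_exit(x1, y1, x2, y2, z, n_rep=200, seed=0):
    """Estimate the share of round-2 households that were poor (welfare
    below z) in round 1 but not in round 2, using an imputed round-1
    outcome: fitted value from the round-1 regression plus a resampled
    round-1 residual (drawn independently of the round-2 error)."""
    rng = np.random.default_rng(seed)
    X1 = np.column_stack([np.ones_like(x1), x1])
    b, *_ = np.linalg.lstsq(X1, y1, rcond=None)
    resid = y1 - X1 @ b
    X2 = np.column_stack([np.ones_like(x2), x2])
    shares = []
    for _ in range(n_rep):
        y1_hat = X2 @ b + rng.choice(resid, size=len(x2))
        shares.append(np.mean((y1_hat < z) & (y2 >= z)))
    return float(np.mean(shares))
```

Because the residual draws here ignore the cross-round error correlation, this version tends to overstate mobility; handling that correlation is precisely what the paper's bounds and point estimates address.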
By:  David C Broadstock (Research Institute of Economics and Management (RIEM), Southwestern University of Finance and Economics, Sichuan, China and Surrey Energy Economics Centre (SEEC), School of Economics, University of Surrey, UK.); Lester C Hunt (Surrey Energy Economics Centre (SEEC), University of Surrey, UK.) 
Abstract:  Energy demand functions based on the Koyck lag transformation result in an MA error process that is generally ignored in estimated panel data models. This note explores the implications of this assumption by estimating panel energy demand functions with asymmetric price responses and an MA process modelled explicitly. It is found that although the models with an MA term might be preferred statistically, they give rise to inferential problems, implying that there might be a need to revisit the specification of panel energy demand functions used in a number of previous studies. 
Keywords:  Koyck lag transformation, Moving average errors, Panel data, Aggregate energy demand 
JEL:  C8 Q4 
Date:  2013–03 
URL:  http://d.repec.org/n?u=RePEc:sur:seedps:140&r=ecm 
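Why the Koyck transformation induces an MA error, and what ignoring it costs, can be shown in a one-equation simulation (parameter values illustrative; the note's panel setting and asymmetric price responses are not reproduced).

```python
import numpy as np

# The geometric-lag model y_t = b * sum_i lam^i x_{t-i} + e_t becomes,
# after the Koyck transformation,
#   y_t = lam * y_{t-1} + b * x_t + u_t,   u_t = e_t - lam * e_{t-1},
# so the transformed equation has an MA(1) error that is also correlated
# with the lagged dependent variable.
rng = np.random.default_rng(0)
n, lam, b = 50_000, 0.6, 1.0
x = rng.standard_normal(n)
e = rng.standard_normal(n)
y = np.empty(n)
y[0] = 0.0
for t in range(1, n):
    y[t] = lam * y[t - 1] + b * x[t] + e[t] - lam * e[t - 1]

# OLS on the Koyck equation, ignoring the MA(1) error:
X = np.column_stack([y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ coef
rho1 = float(np.corrcoef(resid[:-1], resid[1:])[0, 1])
```

The residuals show clear negative first-order autocorrelation, and the OLS coefficient on the lagged dependent variable is badly biased below lam, which is the inferential problem the note is pointing at.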
By:  Pierre Nyquist 
Abstract:  Importance sampling has become an important tool for the computation of tail-based risk measures. Since such quantities are often determined mainly by rare events, standard Monte Carlo can be inefficient, and importance sampling provides a way to speed up computations. This paper considers moderate deviations for the weighted empirical process, the process analogue of the weighted empirical measure, arising in importance sampling. The moderate deviation principle is established as an extension of existing results. Using a delta method for large deviations established by Gao and Zhao (Ann. Statist., 2011) together with classical large deviation techniques, the moderate deviation principle for the weighted empirical process is extended to functionals of the weighted empirical process which correspond to risk measures. The main results are moderate deviation principles for importance sampling estimators of the quantile function of a distribution and Expected Shortfall. 
Date:  2013–06 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1306.6588&r=ecm 
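A minimal instance of the importance sampling estimators the paper studies, for a Gaussian tail: sample from a mean-shifted proposal (exponential tilting) and reweight by the likelihood ratio to estimate a tail probability and the Expected Shortfall. This illustrates only the estimators, not the paper's moderate-deviation analysis; the choice of shift equal to the threshold is a common heuristic, assumed here.

```python
import numpy as np

def is_tail_and_es(q, n=200_000, seed=0):
    """Importance sampling for a standard-normal tail: draw from the
    proposal N(q, 1) and reweight by the likelihood ratio
    phi(z) / phi(z - q) = exp(-q z + q^2 / 2) to estimate
    p = P(X > q) and ES = E[X | X > q]."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n) + q                 # proposal draws
    w = np.exp(-q * z + 0.5 * q * q) * (z > q)     # weighted tail indicator
    p = float(w.mean())
    es = float((w * z).mean() / p)
    return p, es
```

At q = 3, plain Monte Carlo would see only about 1.3 hits per thousand draws; under the shifted proposal roughly half the draws land in the tail, so the same budget gives a far more precise estimate.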