
on Econometrics 
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Xu Cheng (Dept. of Economics, University of Pennsylvania) 
Abstract:  This paper determines the properties of standard generalized method of moments (GMM) estimators, tests, and confidence sets (CS's) in moment condition models in which some parameters are unidentified or weakly identified in part of the parameter space. The asymptotic distributions of GMM estimators are established under a full range of drifting sequences of true parameters and distributions. The asymptotic sizes (in a uniform sense) of standard GMM tests and CS's are established. The paper also establishes the correct asymptotic sizes of "robust" GMM-based Wald, t, and quasi-likelihood ratio tests and CS's whose critical values are designed to yield robustness to identification problems. The results of the paper are applied to a nonlinear regression model with endogeneity and a probit model with endogeneity and possibly weak instrumental variables. 
Keywords:  Asymptotic size, Confidence set, Generalized method of moments, GMM estimator, Identification, Nonlinear models, Test, Wald test, Weak identification 
JEL:  C12 C15 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1828&r=ecm 
By:  Paulo M.M. Rodrigues; Nazarii Salish 
Abstract:  Over recent years several methods to deal with high-frequency data (economic, financial and other) have been proposed in the literature. An interesting example is interval-valued time series described by the temporal evolution of high and low prices of an asset. In this paper a new class of threshold models capable of capturing asymmetric effects in interval-valued data is introduced, and new forecast loss functions and descriptive statistics of forecast quality are proposed. Least squares estimates of the threshold parameter and the regression slopes are obtained, and forecasts based on the proposed threshold model are computed. A new forecast procedure based on the combination of this model with the k nearest neighbors method is introduced. To illustrate this approach, we report an application to a weekly sample of S&P500 index returns. The results obtained are encouraging and compare very favorably to available procedures. 
JEL:  C12 C22 C52 C53 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:ptu:wpaper:w201128&r=ecm 
By:  Yonghui Zhang (School of Economics, Singapore Management University); Liangjun Su (School of Economics, Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University) 
Abstract:  This paper proposes a nonparametric test for common trends in semiparametric panel data models with fixed effects based on a measure of nonparametric goodness-of-fit (R^2). We first estimate the model under the null hypothesis of common trends by the method of profile least squares, and obtain the augmented residual which consistently estimates the sum of the fixed effect and the disturbance under the null. Then we run a local linear regression of the augmented residuals on a time trend and calculate the nonparametric R^2 for each cross-section unit. The proposed test statistic is obtained by averaging all cross-sectional nonparametric R^2's, which is close to zero under the null and deviates from zero under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications are conducted exploring the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption. 
Keywords:  Common trends, Local polynomial estimation, Nonparametric goodness-of-fit, Panel data, Profile least squares 
JEL:  C12 C14 C23 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1832&r=ecm 
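The test's core computation — a local linear regression of each unit's residuals on a scaled time trend, a per-unit nonparametric R^2, and a cross-sectional average — can be sketched as below. The Gaussian kernel, the bandwidth, and the synthetic residuals are illustrative assumptions; the profile-least-squares step that produces the augmented residuals is omitted.

```python
import numpy as np

def local_linear_fit(t, y, h):
    """Local linear regression of y on the trend t with Gaussian kernel weights."""
    fitted = np.empty_like(y)
    for i, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)          # kernel weights around t0
        X = np.column_stack([np.ones_like(t), t - t0])
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        fitted[i] = beta[0]                              # local intercept = fitted value
    return fitted

def nonparametric_r2(y, h=0.1):
    """Goodness-of-fit of a local linear regression of y on a scaled time trend."""
    t = np.arange(1, len(y) + 1) / len(y)
    resid = y - local_linear_fit(t, y, h)
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(0)
T, N = 100, 5
t = np.arange(1, T + 1) / T
# Averaged R^2 under the null (pure-noise residuals) versus under the
# alternative (unit-specific trends left in the residuals).
r2_null = np.mean([nonparametric_r2(rng.standard_normal(T)) for _ in range(N)])
r2_alt = np.mean([nonparametric_r2(3 * t**2 + rng.standard_normal(T)) for _ in range(N)])
```

As the abstract describes, the averaged statistic stays near zero when the residuals carry no trend and moves away from zero when unit-specific trends remain.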
By:  Lorenzo Pascual; Esther Ruiz; Diego Fresoli 
Abstract:  In this paper, we show how to simplify the construction of bootstrap prediction densities in multivariate VAR models by avoiding the backward representation. Bootstrap prediction densities are attractive because they incorporate parameter uncertainty without requiring any particular assumption about the error distribution. The main advantage of the new procedure is its simplicity, without losing the good performance of bootstrap procedures. Furthermore, by avoiding the backward representation, its asymptotic validity can be proved without relying on the assumption of Gaussian errors. The procedure proposed in this paper can also be implemented to obtain prediction densities in models without a backward representation, for example, models with MA components or GARCH disturbances. By comparing the finite sample performance of the proposed procedure with those of alternatives, we show that nothing is lost when using it. Finally, we implement the procedure to obtain prediction regions for US quarterly future inflation, unemployment and GDP growth. 
Keywords:  Non-Gaussian VAR models, Prediction cubes, Prediction density, Prediction regions, Prediction ellipsoids, Resampling methods 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws113426&r=ecm 
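The forward bootstrap scheme the abstract describes — resample residuals, re-estimate to capture parameter uncertainty, then simulate ahead, all without a backward representation — can be sketched for a univariate AR(1) stand-in (the paper's setting is multivariate VAR; the sample size, replication count B, and horizon H are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate data from an AR(1) and estimate the slope by least squares.
T, phi_true = 300, 0.5
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()
x, z = y[:-1], y[1:]
phi_hat = float(x @ z / (x @ x))
resid = z - phi_hat * x
resid -= resid.mean()                       # centre the residuals

# Forward bootstrap: re-estimate on resampled forward paths, then simulate ahead.
B, H = 500, 4
paths = np.empty((B, H))
for b in range(B):
    e = rng.choice(resid, size=T - 1, replace=True)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    xb, zb = yb[:-1], yb[1:]
    phi_b = float(xb @ zb / (xb @ xb))      # parameter uncertainty
    f = y[-1]
    for h in range(H):
        f = phi_b * f + rng.choice(resid)   # future-error uncertainty
        paths[b, h] = f

# Pointwise 90% bootstrap prediction intervals for horizons 1..H.
lower, upper = np.percentile(paths, [5, 95], axis=0)
```

Because the future paths start from the observed y[-1] and run forward only, no backward representation of the process is ever needed, which is what makes the scheme applicable to MA or GARCH disturbances.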
By:  Theodoridis, Konstantinos (Bank of England) 
Abstract:  Recent studies illustrate that under some conditions dynamic stochastic general equilibrium models can be expressed as structural vector autoregressive models of infinite order. Based on this mapping and the theoretical results about vector autoregressive models of infinite order, this paper proposes a minimum distance estimator that: A) matches the k-period responses of the whole vector of the observable variables described by the structural model – caused after a small perturbation to the entire vector of the structural errors – with those observed in the historical data, which have been recovered through the use of a structurally identified vector autoregressive model, and B) minimises the distance between the reduced-form error covariance matrix implied by the structural model and the one estimated in the data. This estimator encompasses those in the literature, and is asymptotically consistent, normally distributed and efficient. The J-type overidentifying restrictions statistic that results from this methodology can be used for the evaluation of the structural model. Finally, this study also develops the theory of the bootstrapped version of the estimator and the statistic introduced here. Monte Carlo simulation evidence based on a medium-scale DSGE model reveals very encouraging results for the proposed estimator when it is compared against modern – Bayesian maximum likelihood – and less modern – maximum likelihood and non-efficient IR matching – DSGE estimators. 
Keywords:  Minimum distance estimation; asymptotic efficiency; DSGE model estimation and evaluation; SVAR; IRFs 
JEL:  C50 C51 C52 
Date:  2011–10–31 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0439&r=ecm 
By:  Pesaran, M.H.; Pick, A.; Pranovich, M. 
Abstract:  This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for continuous and discrete break processes. Under continuous breaks, our approach recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for the weights in models with a single regressor and asymptotically for larger models. It is shown that in these cases the value of the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust weights is proposed. Monte Carlo experiments and an empirical application to the predictive power of the yield curve analyze the performance of our approach relative to other forecasting methods. 
JEL:  C22 C53 
Date:  2011–10–31 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:1163&r=ecm 
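The exponential smoothing weights the abstract says are recovered under continuous breaks can be sketched as below; the decay parameter lam and the toy break series are hypothetical choices, not values from the paper.

```python
import numpy as np

def exponential_weights(T, lam=0.95):
    """Weights proportional to lam^(T-t) for t = 1..T, normalised to sum to one:
    recent observations get the most weight, old ones decay geometrically."""
    w = lam ** np.arange(T - 1, -1, -1)
    return w / w.sum()

def weighted_forecast(y, lam=0.95):
    """One-step-ahead level forecast as a weighted average of past observations."""
    w = exponential_weights(len(y), lam)
    return float(w @ y)

# A series whose mean shifts upward halfway through the sample: the
# exponentially weighted forecast tracks the post-break level far more
# closely than the equal-weighted sample mean.
y = np.concatenate([np.zeros(50), np.ones(50)])
f_exp = weighted_forecast(y, lam=0.9)
f_flat = float(y.mean())
```

Down-weighting pre-break observations rather than discarding them is exactly the trade-off the optimal-weights analysis formalises.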
By:  Guofang Huang; Yingyao Hu 
Abstract:  This paper proposes a new semi-nonparametric maximum likelihood estimation method for estimating production functions. The method extends the literature on structural estimation of production functions, started by the seminal work of Olley and Pakes (1996), by relaxing the scalar-unobservable assumption about the proxy variables. The key additional assumption needed in the identification argument is the existence of two conditionally independent proxy variables. The assumption seems reasonable in many important cases. The new method is straightforward to apply, and a consistent estimate of the asymptotic covariance matrix of the structural parameters can be easily computed. 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:jhu:papers:583&r=ecm 
By:  Jonsson, Robert (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  A three-state non-homogeneous Markov chain (MC) of order m, denoted M(m), was previously introduced by the author. The model was used to analyze work resumption among sick-listed patients. It was demonstrated that wrong assumptions about the Markov order m and about homogeneity can seriously invalidate predictions of future health states. In this paper focus is on tests (estimation) of m and of homogeneity. When testing for Markov order it is suggested to test M(m) against M(m+1), with m chosen sequentially as 0, 1, 2, …, until the null hypothesis can’t be rejected. Two test statistics are used, one based on the Maximum Likelihood ratio (MLR) and one based on a chi-square criterion. More formal test strategies based on Akaike’s and Bayes’ information criteria are also considered. Tests of homogeneity are based on MLR statistics. The performance of the tests is evaluated in simulation studies. The tests are applied to rehabilitation data, where it is concluded that the rehabilitation process develops according to a non-homogeneous Markov chain of order 2, possibly changing to a homogeneous chain of order 1 towards the end of the period. 
Keywords:  Likelihood ratio; Test power; Bias of tests 
JEL:  C10 
Date:  2011–10–31 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_007&r=ecm 
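The sequential order-selection idea — compute a likelihood-ratio statistic for M(m) against M(m+1) from transition counts, raising m until the statistic stops being large — can be sketched as follows. The simulated three-state chain and sample size are illustrative assumptions, not the paper's data or exact test construction.

```python
import numpy as np

def loglik_order(seq, m, start=None):
    """Maximised log-likelihood of a Markov chain of order m fitted to seq,
    using transitions from index `start` onward (so two fits can share a sample)."""
    start = m if start is None else start
    counts = {}
    for i in range(start, len(seq)):
        hist = tuple(seq[i - m:i])
        counts.setdefault(hist, {})
        counts[hist][seq[i]] = counts[hist].get(seq[i], 0) + 1
    ll = 0.0
    for nxt in counts.values():
        n_h = sum(nxt.values())
        ll += sum(n * np.log(n / n_h) for n in nxt.values())
    return ll

def lr_statistic(seq, m):
    """2*(LL under order m+1 minus LL under order m), fitted on the same
    transitions; large values reject M(m) in favour of M(m+1)."""
    return 2.0 * (loglik_order(seq, m + 1) - loglik_order(seq, m, start=m + 1))

# Simulate a persistent first-order three-state chain: the LR for order 0 vs 1
# should be large, while the LR for order 1 vs 2 should be modest (chi-square noise).
rng = np.random.default_rng(1)
P = np.array([[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])
seq = [0]
for _ in range(2000):
    seq.append(int(rng.choice(3, p=P[seq[-1]])))
lr0, lr1 = lr_statistic(seq, 0), lr_statistic(seq, 1)
```

In the sequential procedure one would compare each statistic to a chi-square critical value with degrees of freedom given by the difference in parameter counts, stopping at the first m that is not rejected.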
By:  David F. Hendry (Economics Department and Institute for New Economic Thinking at the Oxford Martin School, University of Oxford, UK); Søren Johansen (Economics Department, University of Copenhagen and CREATES, Aarhus University, Denmark) 
Abstract:  Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x{t}, is embedded within the larger set of m+k candidate variables, (x{t},w{t}), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being underspecified because some w{t} are relevant. 
Keywords:  Model selection, theory retention 
Date:  2011–10–24 
URL:  http://d.repec.org/n?u=RePEc:kud:kuiedp:1125&r=ecm 
By:  Fabian Y. R. P. Bocart; Christian M. Hafner 
Abstract:  A new heteroskedastic hedonic regression model is suggested which takes into account time-varying volatility and is applied to a blue-chip art market. A nonparametric local likelihood estimator is proposed, which is more precise than the often-used dummy variables method. The empirical analysis reveals that errors are considerably non-Gaussian, and that a Student-t distribution with time-varying scale and degrees of freedom does well in explaining deviations of prices from their expectation. The art price index is a smooth function of time and has a variability that is comparable to the volatility of stock indices. 
Keywords:  Volatility, art markets, hedonic regression, semiparametric estimation 
JEL:  C14 C43 Z11 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011071&r=ecm 
By:  Chris D. Orme; Takashi Yamagata 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:man:sespap:1124&r=ecm 
By:  Prehn, Sören; Brümmer, Bernhard 
Abstract:  French (2011) shows analytically that the standard Anderson and van Wincoop (2003) gravity trade model is only correctly specified for disaggregate data; gravity trade model analysis should be done at the product level and the estimation results then reaggregated. If, however, gravity trade model analysis is to be done at the product level, then estimation issues in disaggregate gravity trade models also come to the fore. As is shown, previous estimators suffer from different statistical problems. This paper proposes a zero-inflated Poisson Quasi-Likelihood (PQL) estimator and a Gamma Two-Part Model (G2PM) as reliable alternatives. Estimated within a Generalised Estimating Equation (GEE) framework, both estimators are consistent and have more or less conservative test statistics. Further, for model selection the Quasi-Likelihood under the Independence Model Criterion (QIC) is recommended, since this statistic conforms with GEE approaches. Both estimators, PQL and G2PM, and the model selection technique QIC should become standard tools for disaggregate gravity trade model estimation. 
Keywords:  gravity model, excess zeros, Poisson Quasi-Likelihood, Gamma Two-Part Model, Generalised Estimating Equation approach 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:daredp:1107&r=ecm 
By:  Thomas Lux; Leonardo MoralesArias; Cristina Sattarhoff 
Abstract:  The volatility specification of the Markov-switching Multifractal (MSM) model is proposed as an alternative mechanism for realized volatility (RV). We estimate the RV-MSM model via Generalized Method of Moments and perform forecasting by means of best linear forecasts derived via the Levinson-Durbin algorithm. The out-of-sample performance of the RV-MSM is compared against other popular time series specifications usually employed to model the dynamics of RV, as well as other standard volatility models of asset returns. An intraday data set for five major international stock market indices is used to evaluate the various models out-of-sample. We find that the RV-MSM seems to improve upon forecasts of its baseline MSM counterparts and many other volatility models in terms of mean squared errors (MSE). While the more conventional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and criteria), the new RV-MSM model is often very close in its performance and in a non-negligible number of cases even dominates the RV-ARFIMA model. 
Keywords:  Realized volatility, multiplicative volatility models, long memory, international volatility forecasting 
JEL:  C20 G12 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:kie:kieliw:1737&r=ecm 
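The Levinson-Durbin recursion used above to derive best linear forecasts can be sketched in a few lines: it solves the Yule-Walker equations for the coefficients of the best linear predictor from the autocovariance sequence. The AR(1) check at the end is an illustrative example, not the paper's RV data.

```python
import numpy as np

def levinson_durbin(acov, p):
    """Solve the Yule-Walker equations for the order-p best linear predictor
    from autocovariances acov[0..p] via the Levinson-Durbin recursion.
    Returns the predictor coefficients and the prediction-error variance."""
    phi = np.zeros(p + 1)
    v = float(acov[0])                          # prediction-error variance, order 0
    for k in range(1, p + 1):
        kappa = (acov[k] - phi[1:k] @ acov[k - 1:0:-1]) / v   # reflection coefficient
        new_phi = phi.copy()
        new_phi[k] = kappa
        new_phi[1:k] = phi[1:k] - kappa * phi[k - 1:0:-1]
        phi = new_phi
        v *= 1.0 - kappa**2
    return phi[1:], v

def one_step_forecast(y, phi):
    """Best linear one-step forecast from the last len(phi) observations."""
    p = len(phi)
    return float(phi @ y[-1:-p - 1:-1])

# Exact autocovariances of an AR(1) with coefficient 0.6 and unit innovation
# variance: the recursion recovers the coefficient, a zero second coefficient,
# and the unit innovation variance.
a = 0.6
acov = a ** np.arange(3) / (1 - a**2)
phi, v = levinson_durbin(acov, 2)
```

In the paper's setting the autocovariances would come from the fitted RV-MSM model rather than a known AR(1), but the recursion is the same.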
By:  Bruno Feunou; Roméo Tedongap 
Abstract:  We develop a discrete-time affine stochastic volatility model with time-varying conditional skewness (SVS). Importantly, we disentangle the dynamics of conditional volatility and conditional skewness in a coherent way. Our approach allows current asset returns to be asymmetric conditional on current factors and past information, which we term contemporaneous asymmetry. Conditional skewness is an explicit combination of the conditional leverage effect and contemporaneous asymmetry. We derive analytical formulas for various return moments that are used for generalized method of moments estimation. Applying our approach to S&P500 index daily returns and option data, we show that one- and two-factor SVS models provide a better fit for both the historical and the risk-neutral distribution of returns, compared to existing affine generalized autoregressive conditional heteroskedasticity (GARCH) models. Our results are not due to an overparameterization of the model: the one-factor SVS models have the same number of parameters as their one-factor GARCH competitors. 
Keywords:  Asset Pricing; Econometric and statistical methods 
JEL:  C1 C5 G1 G12 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:1120&r=ecm 
By:  Dobromil Serwa (Narodowy Bank Polski; Warsaw School of Economics) 
Abstract:  This research proposes a new method to identify the differing states of the market with respect to lending to households. We use an econometric multiregime regression model where each regime is associated with a different economic state of the credit market (i.e. a normal regime or a boom regime). The credit market alternates between regimes when some specific variable increases above or falls below the estimated threshold level. A new method for estimating multiregime threshold regression models for dynamic panel data is also demonstrated. 
Keywords:  credit boom, threshold regression, dynamic panel 
JEL:  E51 C23 C51 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nbp:nbpmis:99&r=ecm 
By:  Mumtaz, Haroon (Bank of England) 
Abstract:  A large empirical literature has examined the transmission mechanism of structural shocks in great detail. The possible role played by changes in the volatility of shocks has largely been overlooked in vector autoregression based applications. This paper proposes an extended vector autoregression where the volatility of structural shocks is allowed to be timevarying and to have a direct impact on the endogenous variables included in the model. The proposed model is applied to US data to consider the potential impact of changes in the volatility of monetary policy shocks. The results suggest that while an increase in this volatility has a statistically significant impact on GDP growth and inflation, the relative contribution of these shocks to the forecast error variance of these variables is estimated to be small. 
Keywords:  Vector autoregression; stochastic volatility; particle filter. 
JEL:  E30 E32 
Date:  2011–10–31 
URL:  http://d.repec.org/n?u=RePEc:boe:boeewp:0437&r=ecm 
By:  Lawrence R. Thorne 
Abstract:  I report a new statistical distribution formulated to confront the infamous, long-standing computational/modeling challenge presented by highly skewed and/or leptokurtic ("fat-" or "heavy-tailed") data. The distribution is straightforward, flexible and effective. Even when working with far fewer data points than are routinely required, it models non-Gaussian data samples, from peak center through far tails, within the context of a single probability density function (PDF) that is valid over an extremely broad range of dispersions and probability densities. The distribution is a precision tool to characterize the great risk and the great opportunity inherent in fat-tailed data. 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1110.6553&r=ecm 
By:  Jonsson, Robert (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University) 
Abstract:  Markov chains (MCs) have been used to study how the health states of patients progress in time. With few exceptions the studies have been based on the questionable assumptions that the MC has order m = 1 and is homogeneous in time. In this paper a three-state non-homogeneous MC model is introduced that allows m to vary. It is demonstrated how wrong assumptions about homogeneity and about the value of m can invalidate predictions of future health states. This can in turn seriously bias a cost-benefit analysis when costs are attached to the predicted outcomes. The present paper only considers problems connected with model construction and estimation. Problems of testing for a proper value of m and of homogeneity are treated in a subsequent paper. Data on work resumption among sick-listed women and men are used to illustrate the theory. A non-homogeneous MC with m = 2 was well fitted to the data for both sexes. The essential difference between the rehabilitation processes for the two sexes was that men had a higher chance of moving from the intermediate health state to the state ‘healthy’, while women tended to remain in the intermediate state for a longer time. 
Keywords:  Rehabilitation; transition probability; prediction; Maximum Likelihood 
JEL:  C10 
Date:  2011–10–31 
URL:  http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_006&r=ecm 
By:  Tierney, Heather L.R. 
Abstract:  This paper presents three local nonparametric forecasting methods that are able to utilize the isolated periods of revised real-time PCE and core PCE for 62 vintages within a historic framework with respect to the nonparametric exclusion-from-core inflation persistence model. The flexibility provided by the kernel and window width permits the incorporation of the forecasted value into the appropriate time frame. For instance, a low inflation measure can be included in other low inflation time periods in order to form more optimal forecasts by combining values that are similar in terms of metric distance as opposed to chronological time. The most efficient nonparametric forecasting method is the third model, which uses the flexibility of nonparametrics to its utmost by making forecasts conditional on the forecasted value. 
Keywords:  Inflation Persistence; Real-Time Data; Monetary Policy; Nonparametrics; Forecasting 
JEL:  C53 C14 E52 
Date:  2011–11–01 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:34439&r=ecm 
By:  Adolfo Sachsida (IPEA-DIMAC / IBMEC-DF); Mario Jorge Cardoso de Mendonça (IPEA); Fabio Carlucci Walnut (UCB) 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:anp:en2010:204&r=ecm 
By:  Agostino Tarsitano; Rosetta Lombardo (Dipartimento di Economia e Statistica, Università della Calabria) 
Abstract:  Rank association is a fundamental tool for expressing dependence in cases in which data are arranged in order. Measures of rank correlation have accumulated in several contexts for more than a century, and we were able to cite more than thirty of these coefficients, from simple ones to relatively complicated definitions invoking one or more systems of weights. However, only a few of these can actually be considered admissible substitutes for Pearson’s correlation. The main drawback with the vast majority of coefficients is their “resistance to change”, which appears to be of limited value for the purposes of rank comparisons that are intrinsically robust. In this article, a new nonparametric correlation coefficient is defined that is based on the principle of maximization of a ratio of two ranks. In comparing it with existing rank correlations, it was found to have extremely high sensitivity to permutation patterns. We illustrate the potential improvement that our index can provide in economic contexts by comparing published results with those obtained through the use of this new index. The success that we have had suggests that our index may have important applications wherever the discriminatory power of the rank correlation coefficient should be particularly strong. 
Keywords:  Ordinal data, Nonparametric agreement, Economic applications 
JEL:  C14 A12 
Date:  2011–10 
URL:  http://d.repec.org/n?u=RePEc:clb:wpaper:201111&r=ecm 
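The abstract defines the new coefficient only informally ("maximization of a ratio of two ranks"), so it is not reproduced here. As background on the rank correlations it is compared against, a self-contained Spearman's rho — the Pearson correlation of the ranks — can be sketched as follows (ties are broken by position rather than averaged, a simplifying assumption):

```python
import numpy as np

def ranks(x):
    """Ranks 1..n, with ties broken by position (no tie averaging)."""
    order = np.argsort(x, kind="stable")
    r = np.empty(len(x))
    r[order] = np.arange(1, len(x) + 1)
    return r

def spearman_rho(x, y):
    """Pearson correlation of the ranks: 1 for concordant orderings,
    -1 for fully reversed orderings."""
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    return float(np.corrcoef(rx, ry)[0, 1])
```

Because it depends on the data only through the ranks, it is invariant to any monotone transformation of either variable — the robustness property the abstract argues can shade into "resistance to change".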