nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒11‒07
twenty-one papers chosen by
Sune Karlsson
Örebro University

  1. GMM Estimation and Uniform Subvector Inference with Possible Identification Failure By Donald W.K. Andrews; Xu Cheng
  2. Modeling and Forecasting Interval Time Series with Threshold Models: An Application to S&P500 Index Returns By Paulo M.M. Rodrigues; Nazarii Salish
  3. Testing for Common Trends in Semiparametric Panel Data Models with Fixed Effects By Yonghui Zhang; Liangjun Su; Peter C.B. Phillips
  4. Bootstrap forecast of multivariate VAR models without using the backward representation By Lorenzo Pascual; Esther Ruiz; Diego Fresoli
  5. An efficient minimum distance estimator for DSGE models By Theodoridis, Konstantinos
  6. Optimal Forecasts in the Presence of Structural Breaks By Pesaran, M.H.; Pick, A.; Pranovich, M.
  7. Estimating Production Functions with Robustness Against Errors in the Proxy Variables By Guofang Huang; Yingyao Hu
  8. Tests of Markov Order and Homogeneity in a Markov Chain By Jonsson, Robert
  9. The Properties of Model Selection when Retaining Theory Variables By David F. Hendry; Søren Johansen
  10. Econometric analysis of volatile art markets By Fabian Y. R. P. Bocart; Christian M. Hafner
  11. A Heteroskedasticity-Robust F-Test Statistic for Individual Effects By Chris D. Orme; Takashi Yamagata
  12. Estimation issues in disaggregate gravity trade models By Prehn, Sören; Brümmer, Bernhard
  13. A Markov-switching Multifractal Approach to Forecasting Realized Volatility By Thomas Lux; Leonardo Morales-Arias; Cristina Sattarhoff
  14. A Stochastic Volatility Model with Conditional Skewness By Bruno Feunou; Roméo Tedongap
  15. Identifying multiple regimes in the model of credit to households By Dobromil Serwa
  16. Estimating the impact of the volatility of shocks: a structural VAR approach By Mumtaz, Haroon
  17. Fat Tails Quantified and Resolved: A New Distribution to Reveal and Characterize the Risk and Opportunity Inherent in Leptokurtic Data By Lawrence R. Thorne
  18. A Markov Chain Model for Analysing the Progression of Patient’s Health States By Jonsson, Robert
  19. Forecasting and tracking real-time data revisions in inflation persistence By Tierney, Heather L.R.
  20. RICARDIAN EQUIVALENCE AND LUCAS CRITIQUE: AN ALTERNATIVE TEST OF RICARDIAN EQUIVALENCE USING SUPER EXOGENEITY TESTS IN SIMULATED SERIES By ADOLFO SACHSIDA; MARIO JORGE CARDOSO DE MENDONÇA; FABIO CARLUCCI WALNUT
  21. AN EXHAUSTIVE COEFFICIENT OF RANK CORRELATION By Agostino Tarsitano; Rosetta Lombardo

  1. By: Donald W.K. Andrews (Cowles Foundation, Yale University); Xu Cheng (Dept. of Economics, University of Pennsylvania)
    Abstract: This paper determines the properties of standard generalized method of moments (GMM) estimators, tests, and confidence sets (CS's) in moment condition models in which some parameters are unidentified or weakly identified in part of the parameter space. The asymptotic distributions of GMM estimators are established under a full range of drifting sequences of true parameters and distributions. The asymptotic sizes (in a uniform sense) of standard GMM tests and CS's are established. The paper also establishes the correct asymptotic sizes of "robust" GMM-based Wald, t, and quasi-likelihood ratio tests and CS's whose critical values are designed to yield robustness to identification problems. The results of the paper are applied to a nonlinear regression model with endogeneity and a probit model with endogeneity and possibly weak instrumental variables.
    Keywords: Asymptotic size, Confidence set, Generalized method of moments, GMM estimator, Identification, Nonlinear models, Test, Wald test, Weak identification
    JEL: C12 C15
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1828&r=ecm
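    As background for the abstract above, the following is a minimal two-step GMM sketch for a linear IV moment condition. It is illustrative only, not the paper's identification-robust procedure; the moment condition, the simulated data, and the instrument-strength coefficient (which governs how close the design is to weak identification) are all assumptions.

```python
# Minimal two-step GMM for the moment condition E[z_i (y_i - x_i*theta)] = 0.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))                     # instruments
x = 0.8 * z[:, :1] + rng.normal(size=(n, 1))    # endogenous regressor; 0.8 sets identification strength
y = x @ np.array([1.5]) + rng.normal(size=n)    # outcome, true theta = 1.5

def gbar(theta):
    """Sample average of the moments g_i(theta) = z_i * (y_i - x_i*theta)."""
    u = y - x @ np.atleast_1d(theta)
    return z.T @ u / n

def Q(theta, W):
    g = gbar(theta)
    return float(g @ W @ g)                     # GMM objective gbar' W gbar

theta1 = minimize(Q, x0=[0.0], args=(np.eye(2),)).x      # step 1: identity weights
u = y - x @ theta1
S = (z * u[:, None]).T @ (z * u[:, None]) / n             # moment variance estimate
theta2 = minimize(Q, x0=theta1, args=(np.linalg.inv(S),)).x  # step 2: efficient weights
print(theta2)                                   # close to 1.5 when identification is strong
```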
  2. By: Paulo M.M. Rodrigues; Nazarii Salish
    Abstract: Over recent years, several methods to deal with high-frequency data (economic, financial and other) have been proposed in the literature. An interesting example is interval-valued time series described by the temporal evolution of the high and low prices of an asset. In this paper, a new class of threshold models capable of capturing asymmetric effects in interval-valued data is introduced, and new forecast loss functions and descriptive statistics of forecast quality are proposed. Least squares estimates of the threshold parameter and the regression slopes are obtained, and forecasts based on the proposed threshold model are computed. A new forecast procedure based on the combination of this model with the k nearest neighbors method is also introduced. To illustrate this approach, we report an application to a weekly sample of S&P500 index returns. The results obtained are encouraging and compare very favorably to available procedures.
    JEL: C12 C22 C52 C53
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:ptu:wpaper:w201128&r=ecm
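    The least squares step described above can be illustrated with a grid search over candidate thresholds, here for a plain two-regime TAR(1) rather than the paper's interval-valued specification; the data and all names are illustrative.

```python
# Threshold estimation by least squares grid search on a simulated TAR(1).
import numpy as np

rng = np.random.default_rng(1)
T = 400
y = np.zeros(T)
for t in range(1, T):                          # two-regime AR(1), true threshold at 0
    phi = 0.7 if y[t-1] <= 0.0 else -0.3
    y[t] = phi * y[t-1] + rng.normal(scale=0.5)

x_lag, x_now = y[:-1], y[1:]

def ssr_at(c):
    """Regime-wise OLS of x_now on x_lag split at threshold c; returns total SSR."""
    total = 0.0
    for mask in (x_lag <= c, x_lag > c):
        if mask.sum() < 10:                    # guard against near-empty regimes
            return np.inf
        b = np.dot(x_lag[mask], x_now[mask]) / np.dot(x_lag[mask], x_lag[mask])
        total += np.sum((x_now[mask] - b * x_lag[mask]) ** 2)
    return total

grid = np.quantile(x_lag, np.linspace(0.15, 0.85, 71))  # trimmed candidate grid
c_hat = grid[np.argmin([ssr_at(c) for c in grid])]
print("estimated threshold:", c_hat)           # should be near 0
```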
  3. By: Yonghui Zhang (School of Economics, Singapore Management University); Liangjun Su (School of Economics, Singapore Management University); Peter C.B. Phillips (Cowles Foundation, Yale University)
    Abstract: This paper proposes a nonparametric test for common trends in semiparametric panel data models with fixed effects based on a measure of nonparametric goodness-of-fit (R^2). We first estimate the model under the null hypothesis of common trends by the method of profile least squares, and obtain the augmented residual which consistently estimates the sum of the fixed effect and the disturbance under the null. Then we run a local linear regression of the augmented residuals on a time trend and calculate the nonparametric R^2 for each cross section unit. The proposed test statistic is obtained by averaging all cross sectional nonparametric R^2's, which is close to zero under the null and deviates from zero under the alternative. We show that after appropriate standardization the test statistic is asymptotically normally distributed under both the null hypothesis and a sequence of Pitman local alternatives. We prove test consistency and propose a bootstrap procedure to obtain p-values. Monte Carlo simulations indicate that the test performs well in finite samples. Empirical applications are conducted exploring the commonality of spatial trends in UK climate change data and idiosyncratic trends in OECD real GDP growth data. Both applications reveal the fragility of the widely adopted common trends assumption.
    Keywords: Common trends, Local polynomial estimation, Nonparametric goodness-of-fit, Panel data, Profile least squares
    JEL: C12 C14 C23
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1832&r=ecm
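    The core of the statistic described above can be sketched as follows: for each cross-section unit, smooth the residuals on a time trend by local linear regression, compute a nonparametric R^2, and average across units. The profile-least-squares step and the standardization are omitted; the bandwidth and data are assumptions for illustration.

```python
# Averaged nonparametric R^2 from local linear regressions of residuals on a time trend.
import numpy as np

def local_linear_fit(t, r, h):
    """Local linear regression of r on t with a Gaussian kernel, bandwidth h."""
    fitted = np.empty_like(r)
    for j, t0 in enumerate(t):
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)
        X = np.column_stack([np.ones_like(t), t - t0])
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ r)
        fitted[j] = beta[0]                     # local intercept = fitted value at t0
    return fitted

def nonparametric_r2(r, h=0.1):
    t = np.linspace(0, 1, len(r))
    fit = local_linear_fit(t, r, h)
    return 1.0 - np.sum((r - fit) ** 2) / np.sum((r - r.mean()) ** 2)

rng = np.random.default_rng(2)
N, T = 20, 100
resid_null = rng.normal(size=(N, T))            # trend-free residuals under the null
trend = np.linspace(0, 1, T)
resid_alt = resid_null + rng.uniform(0.5, 2.0, size=(N, 1)) * trend**2  # idiosyncratic trends

print(np.mean([nonparametric_r2(r) for r in resid_null]))  # near zero under the null
print(np.mean([nonparametric_r2(r) for r in resid_alt]))   # clearly positive under the alternative
```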
  4. By: Lorenzo Pascual; Esther Ruiz; Diego Fresoli
    Abstract: In this paper, we show how to simplify the construction of bootstrap prediction densities in multivariate VAR models by avoiding the backward representation. Bootstrap prediction densities are attractive because they incorporate the parameter uncertainty without relying on any particular assumption about the error distribution. What is more, they make the construction of densities for more than one step ahead straightforward, even where the asymptotic form of the density is unknown. The main advantage of the new procedure is its simplicity, without losing the good performance of bootstrap procedures. Furthermore, by avoiding a backward representation, its asymptotic validity can be proved without relying on the assumption of Gaussian errors. The procedure proposed in this paper can also be implemented to obtain prediction densities in models without a backward representation as, for example, models with MA components or GARCH disturbances. By comparing the finite sample performance of the proposed procedure with those of alternatives, we show that nothing is lost when using it. Finally, we implement the procedure to obtain prediction regions for US quarterly future inflation, unemployment and GDP growth.
    Keywords: Non-Gaussian VAR models, Prediction cubes, Prediction density, Prediction regions, Prediction ellipsoids, Resampling methods
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws113426&r=ecm
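    A minimal sketch of the forward bootstrap scheme described above follows: rebuild bootstrap histories forwards from resampled residuals (no backward representation needed), re-estimate on each, then simulate future paths. A VAR(1) without intercept is assumed for brevity; the data and names are illustrative.

```python
# Forward bootstrap prediction densities for a VAR(1).
import numpy as np

rng = np.random.default_rng(3)
T, k, H, B = 200, 2, 4, 999
A_true = np.array([[0.5, 0.1], [0.0, 0.4]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = y[t-1] @ A_true.T + rng.normal(scale=0.5, size=k)

def ols_var1(y):
    X, Y = y[:-1], y[1:]
    return np.linalg.lstsq(X, Y, rcond=None)[0].T   # A_hat (k x k)

A_hat = ols_var1(y)
resid = y[1:] - y[:-1] @ A_hat.T
resid -= resid.mean(axis=0)                          # centred residuals

paths = np.empty((B, H, k))
for b in range(B):
    # 1. rebuild a bootstrap history forwards with resampled residuals
    yb = np.zeros_like(y)
    yb[0] = y[0]
    draws = resid[rng.integers(0, len(resid), size=T-1)]
    for t in range(1, T):
        yb[t] = yb[t-1] @ A_hat.T + draws[t-1]
    A_b = ols_var1(yb)                               # captures parameter uncertainty
    # 2. simulate the future from the observed last value with A_b
    last = y[-1]
    for h in range(H):
        last = last @ A_b.T + resid[rng.integers(0, len(resid))]
        paths[b, h] = last

# pointwise 90% prediction intervals per horizon and variable
print(np.percentile(paths, [5, 95], axis=0).round(2))
```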
  5. By: Theodoridis, Konstantinos (Bank of England)
    Abstract: Recent studies illustrate that under some conditions dynamic stochastic general equilibrium models can be expressed as structural vector autoregressive models of infinite order. Based on this mapping and the theoretical results about vector autoregressive models of infinite order this paper proposes a minimum distance estimator that: A) matches the k-period responses of the whole vector of the observable variables described by the structural model – caused after a small perturbation to the entire vector of the structural errors – with those observed in the historical data, which have been recovered through the use of a structurally identified vector autoregressive model, and B) minimises the distance between the reduced-form error covariance matrix implied by the structural model and the one estimated in the data. This estimator encompasses those in the literature, is asymptotically consistent, normally distributed and efficient. The J-type overidentifying restrictions statistic that results from this methodology can be used for the evaluation of the structural model. Finally, this study also develops the theory of the bootstrapped version of the estimator and the statistic introduced here. Monte Carlo simulation evidence based on a medium-scale DSGE model reveals very encouraging results for the proposed estimator when it is compared against modern – Bayesian maximum likelihood – and less modern – maximum likelihood and non-efficient IR matching – DSGE estimators.
    Keywords: Minimum distance estimation; asymptotic efficiency; DSGE model estimation and evaluation; SVAR; IRFs
    JEL: C50 C51 C52
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0439&r=ecm
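    The shape of such a minimum distance objective can be sketched schematically: stack model-implied impulse responses and the vech of the reduced-form covariance, and minimise a weighted distance to their SVAR-based data counterparts. Everything below (the toy scalar "model", the weighting matrix, the function names) is a placeholder, not the paper's exact estimator.

```python
# Schematic IRF-plus-covariance minimum distance objective.
import numpy as np
from scipy.optimize import minimize

def stack_moments(theta, k_periods, model_irf, model_cov):
    """Model-implied moments: IRFs for horizons 0..k-1 and vech(Sigma)."""
    irfs = np.concatenate([model_irf(theta, h).ravel() for h in range(k_periods)])
    sigma = model_cov(theta)
    vech = sigma[np.tril_indices_from(sigma)]
    return np.concatenate([irfs, vech])

def md_objective(theta, data_moments, W, k_periods, model_irf, model_cov):
    d = stack_moments(theta, k_periods, model_irf, model_cov) - data_moments
    return float(d @ W @ d)

# Toy illustration: a scalar AR(1) "structural model" with parameter rho.
model_irf = lambda th, h: np.array([[th[0] ** h]])   # IRF of a scalar AR(1)
model_cov = lambda th: np.array([[1.0]])             # unit shock variance
k = 8
data_m = stack_moments([0.6], k, model_irf, model_cov)  # stands in for SVAR output
W = np.eye(len(data_m))                              # identity weighting for the sketch
fit = minimize(md_objective, x0=[0.1],
               args=(data_m, W, k, model_irf, model_cov), bounds=[(-0.99, 0.99)])
print(fit.x)                                         # recovers rho near 0.6
```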
  6. By: Pesaran, M.H.; Pick, A.; Pranovich, M.
    Abstract: This paper considers the problem of forecasting under continuous and discrete structural breaks and proposes weighting observations to obtain optimal forecasts in the MSFE sense. We derive optimal weights for continuous and discrete break processes. Under continuous breaks, our approach recovers exponential smoothing weights. Under discrete breaks, we provide analytical expressions for the weights in models with a single regressor and asymptotically for larger models. It is shown that in these cases the value of the optimal weight is the same across observations within a given regime and differs only across regimes. In practice, where information on structural breaks is uncertain, a forecasting procedure based on robust weights is proposed. Monte Carlo experiments and an empirical application to the predictive power of the yield curve analyze the performance of our approach relative to other forecasting methods.
    JEL: C22 C53
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1163&r=ecm
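    The exponential smoothing case mentioned above has a simple weighted least squares reading: older observations get geometrically declining weight. The smoothing constant, the break design, and the single-regressor setup below are illustrative assumptions, not the paper's derived optimum.

```python
# Downweighting observations after a break via exponential smoothing weights.
import numpy as np

rng = np.random.default_rng(4)
T = 150
beta = np.where(np.arange(T) < 100, 1.0, 2.0)      # a discrete break at t = 100
x = rng.normal(size=T)
y = beta * x + rng.normal(scale=0.3, size=T)

lam = 0.95                                          # smoothing constant (assumed)
w = lam ** np.arange(T - 1, -1, -1)                 # weight 1 on the newest observation

beta_wls = np.sum(w * x * y) / np.sum(w * x * x)    # weighted least squares slope
beta_ols = np.sum(x * y) / np.sum(x * x)            # equal-weight benchmark
print(beta_wls, beta_ols)   # WLS tracks the post-break value 2.0 more closely
```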
  7. By: Guofang Huang; Yingyao Hu
    Abstract: This paper proposes a new semi-nonparametric maximum likelihood estimation method for estimating production functions. The method extends the literature on structural estimation of production functions, started by the seminal work of Olley and Pakes (1996), by relaxing the scalar-unobservable assumption about the proxy variables. The key additional assumption needed in the identification argument is the existence of two conditionally independent proxy variables. The assumption seems reasonable in many important cases. The new method is straightforward to apply, and a consistent estimate of the asymptotic covariance matrix of the structural parameters can be easily computed.
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:jhu:papers:583&r=ecm
  8. By: Jonsson, Robert (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: A three-state non-homogeneous Markov chain (MC) of order m, denoted M(m), was previously introduced by the author. The model was used to analyze work resumption among sick-listed patients. It was demonstrated that wrong assumptions about the Markov order m and about homogeneity can seriously invalidate predictions of future health states. In this paper focus is on tests (estimation) of m and of homogeneity. When testing for Markov order it is suggested to test M(m) against M(m+1) with m sequentially chosen as 0, 1, 2, …, until the null hypothesis can no longer be rejected. Two test statistics are used, one based on the maximum likelihood ratio (MLR) and one based on a chi-square criterion. More formal test strategies based on Akaike's and Bayes' information criteria are also considered. Tests of homogeneity are based on MLR statistics. The performance of the tests is evaluated in simulation studies. The tests are applied to rehabilitation data, where it is concluded that the rehabilitation process develops according to a non-homogeneous Markov chain of order 2, possibly changing to a homogeneous chain of order 1 towards the end of the period.
    Keywords: Likelihood ratio; Test power; Bias of tests
    JEL: C10
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_007&r=ecm
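    The sequential likelihood ratio test described above can be sketched compactly: compare maximised log likelihoods built from transition counts at orders m and m+1, for m = 0, 1, ..., stopping when the null is not rejected. A homogeneous three-state chain is assumed for simplicity (the paper treats the non-homogeneous case); the data are simulated.

```python
# Sequential LR test of Markov order from transition counts.
import numpy as np
from scipy.stats import chi2

def loglik_order(seq, m, S=3):
    """Maximised log likelihood of an order-m homogeneous Markov chain."""
    counts = {}
    for t in range(m, len(seq)):
        row = counts.setdefault(tuple(seq[t - m:t]), np.zeros(S))
        row[seq[t]] += 1
    ll = 0.0
    for row in counts.values():
        p = row[row > 0] / row.sum()
        ll += np.sum(row[row > 0] * np.log(p))
    return ll

def lr_test(seq, m, S=3):
    """LR test of H0: order m against H1: order m+1."""
    lr = 2.0 * (loglik_order(seq, m + 1, S) - loglik_order(seq, m, S))
    df = (S ** m) * (S - 1) ** 2          # parameters added by moving to order m+1
    return lr, chi2.sf(lr, df)

rng = np.random.default_rng(5)
P = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.2, 0.3, 0.5]])
seq = [0]
for _ in range(1500):                     # simulate an order-1 chain
    seq.append(rng.choice(3, p=P[seq[-1]]))

for m in (0, 1):
    lr, pval = lr_test(seq, m)
    print(f"H0: order {m} vs order {m+1}: LR = {lr:.1f}, p = {pval:.3f}")
# order 0 is rejected, order 1 is not: the chain has Markov order 1
```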
  9. By: David F. Hendry (Economics Department and Institute for New Economic Thinking at the Oxford Martin School, University of Oxford, UK); Søren Johansen (Economics Department, University of Copenhagen and CREATES, Aarhus University, Denmark)
    Abstract: Economic theories are often fitted directly to data to avoid possible model selection biases. We show that when a theory model specifying the correct set of m relevant exogenous variables, x_t, is embedded within the larger set of m+k candidate variables, (x_t, w_t), selection over the second set by statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant.
    Keywords: Model selection, theory retention
    Date: 2011–10–24
    URL: http://d.repec.org/n?u=RePEc:kud:kuiedp:1125&r=ecm
  10. By: Fabian Y. R. P. Bocart; Christian M. Hafner
    Abstract: A new heteroskedastic hedonic regression model is suggested which takes into account time-varying volatility, and it is applied to a blue-chip art market. A nonparametric local likelihood estimator is proposed that is more precise than the often-used dummy variables method. The empirical analysis reveals that errors are considerably non-Gaussian, and that a Student t distribution with time-varying scale and degrees of freedom does well in explaining deviations of prices from their expectation. The art price index is a smooth function of time and has a variability that is comparable to the volatility of stock indices.
    Keywords: Volatility, art markets, hedonic regression, semiparametric estimation
    JEL: C14 C43 Z11
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011-071&r=ecm
  11. By: Chris D. Orme; Takashi Yamagata
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:man:sespap:1124&r=ecm
  12. By: Prehn, Sören; Brümmer, Bernhard
    Abstract: French (2011) shows analytically that the standard Anderson and van Wincoop (2003) gravity trade model is only correctly specified for disaggregate data; gravity trade model analysis should be done at product level and estimation results then reaggregated. If gravity trade model analysis is to be done at product level, however, then estimation issues in disaggregate gravity trade models also come to the fore. As is shown, previous estimators suffer from different statistical problems. This paper proposes a zero-inflated Poisson Quasi-Likelihood (PQL) estimator and a Gamma Two-Part Model (G2PM) as reliable alternatives. Estimated within a Generalised Estimating Equation (GEE) framework, both estimators are consistent and have more or less conservative test statistics. Further, for model selection the Quasi-Likelihood under the Independence Model Criterion (QIC) is recommended, since this statistic conforms with GEE approaches. Both estimators, PQL and G2PM, and the model selection technique QIC should become standard tools for disaggregate gravity trade model estimation.
    Keywords: gravity model, excess zeros, Poisson Quasi-Likelihood, Gamma Two-Part Model, Generalised Estimating Equation Approach
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:zbw:daredp:1107&r=ecm
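    A hedged sketch of the GEE ingredient follows: a gravity-type count equation estimated by Poisson quasi-likelihood with an independence working correlation in statsmodels. The data, the grouping by exporter, and the covariates are illustrative; the paper's zero-inflated PQL and G2PM variants are not reproduced here.

```python
# Poisson quasi-likelihood gravity equation within a GEE framework.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n_pairs = 600
log_gdp = rng.normal(10, 1, size=n_pairs)
log_dist = rng.normal(7, 0.5, size=n_pairs)
exporter = rng.integers(0, 30, size=n_pairs)       # grouping variable for GEE
mu = np.exp(-3 + 0.8 * log_gdp - 0.6 * log_dist)
trade = rng.poisson(mu)                            # count outcome with many zeros at small mu

X = sm.add_constant(np.column_stack([log_gdp, log_dist]))
model = sm.GEE(trade, X, groups=exporter,
               family=sm.families.Poisson(),
               cov_struct=sm.cov_struct.Independence())
result = model.fit()
print(result.summary())
# QIC for comparing models/working correlations (recent statsmodels also reports QICu):
print("QIC:", result.qic())
```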
  13. By: Thomas Lux; Leonardo Morales-Arias; Cristina Sattarhoff
    Abstract: The volatility specification of the Markov-switching Multifractal (MSM) model is proposed as an alternative mechanism for realized volatility (RV). We estimate the RV-MSM model via Generalized Method of Moments and perform forecasting by means of best linear forecasts derived via the Levinson-Durbin algorithm. The out-of-sample performance of the RV-MSM is compared against other popular time series specifications usually employed to model the dynamics of RV, as well as other standard volatility models of asset returns. An intra-day data set for five major international stock market indices is used to evaluate the various models out-of-sample. We find that the RV-MSM seems to improve upon forecasts of its baseline MSM counterparts and many other volatility models in terms of mean squared errors (MSE). While the more conventional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and criteria), the new RV-MSM model often comes very close in performance and in a non-negligible number of cases even dominates the RV-ARFIMA model.
    Keywords: Realized volatility, multiplicative volatility models, long memory, international volatility forecasting
    JEL: C20 G12
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:kie:kieliw:1737&r=ecm
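    The best-linear-forecast step mentioned above can be sketched directly: given sample autocovariances of the RV series, the Levinson-Durbin recursion solves the Yule-Walker system for the linear predictor coefficients. The simulated AR(1) "RV" series and the predictor order are illustrative assumptions.

```python
# Levinson-Durbin recursion for best linear one-step forecasts.
import numpy as np

def levinson_durbin(gamma, p):
    """Solve the order-p Yule-Walker system given autocovariances gamma[0..p]."""
    phi = np.zeros(p)
    v = gamma[0]                                   # innovation variance
    for k in range(1, p + 1):
        acc = gamma[k] - np.dot(phi[:k-1], gamma[k-1:0:-1])
        kappa = acc / v                            # reflection coefficient
        phi[:k-1] = phi[:k-1] - kappa * phi[k-2::-1]
        phi[k-1] = kappa
        v *= (1.0 - kappa ** 2)
    return phi, v

rng = np.random.default_rng(7)
T, p = 2000, 5
rv = np.zeros(T)
for t in range(1, T):                              # toy persistent "RV" series
    rv[t] = 0.8 * rv[t-1] + rng.normal()

x = rv - rv.mean()
gamma = np.array([np.dot(x[:T-l], x[l:]) / T for l in range(p + 1)])
phi, v = levinson_durbin(gamma, p)
forecast = rv.mean() + np.dot(phi, x[-1:-p-1:-1])  # one-step best linear forecast
print(phi.round(3), round(forecast, 3))            # phi close to (0.8, 0, 0, 0, 0)
```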
  14. By: Bruno Feunou; Roméo Tedongap
    Abstract: We develop a discrete-time affine stochastic volatility model with time-varying conditional skewness (SVS). Importantly, we disentangle the dynamics of conditional volatility and conditional skewness in a coherent way. Our approach allows current asset returns to be asymmetric conditional on current factors and past information, what we term contemporaneous asymmetry. Conditional skewness is an explicit combination of the conditional leverage effect and contemporaneous asymmetry. We derive analytical formulas for various return moments that are used for generalized method of moments estimation. Applying our approach to S&P500 index daily returns and option data, we show that one- and two-factor SVS models provide a better fit for both the historical and the risk-neutral distribution of returns, compared to existing affine generalized autoregressive conditional heteroskedasticity (GARCH) models. Our results are not due to an overparameterization of the model: the one-factor SVS models have the same number of parameters as their one-factor GARCH competitors.
    Keywords: Asset Pricing; Econometric and statistical methods
    JEL: C1 C5 G1 G12
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:bca:bocawp:11-20&r=ecm
  15. By: Dobromil Serwa (Narodowy Bank Polski; Warsaw School of Economics)
    Abstract: This research proposes a new method to identify the differing states of the market with respect to lending to households. We use an econometric multi-regime regression model where each regime is associated with a different economic state of the credit market (i.e. a normal regime or a boom regime). The credit market alternates between regimes when some specific variable increases above or falls below the estimated threshold level. A new method for estimating multi-regime threshold regression models for dynamic panel data is also demonstrated.
    Keywords: credit boom, threshold regression, dynamic panel
    JEL: E51 C23 C51
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:nbp:nbpmis:99&r=ecm
  16. By: Mumtaz, Haroon (Bank of England)
    Abstract: A large empirical literature has examined the transmission mechanism of structural shocks in great detail. The possible role played by changes in the volatility of shocks has largely been overlooked in vector autoregression based applications. This paper proposes an extended vector autoregression where the volatility of structural shocks is allowed to be time-varying and to have a direct impact on the endogenous variables included in the model. The proposed model is applied to US data to consider the potential impact of changes in the volatility of monetary policy shocks. The results suggest that while an increase in this volatility has a statistically significant impact on GDP growth and inflation, the relative contribution of these shocks to the forecast error variance of these variables is estimated to be small.
    Keywords: Vector autoregression; stochastic volatility; particle filter.
    JEL: E30 E32
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:boe:boeewp:0437&r=ecm
  17. By: Lawrence R. Thorne
    Abstract: I report a new statistical distribution formulated to confront the infamous, long-standing computational/modeling challenge presented by highly skewed and/or leptokurtic ("fat-" or "heavy-tailed") data. The distribution is straightforward, flexible and effective. Even when working with far fewer data points than are routinely required, it models non-Gaussian data samples, from peak center through far tails, within the context of a single probability density function (PDF) that is valid over an extremely broad range of dispersions and probability densities. The distribution is a precision tool to characterize the great risk and the great opportunity inherent in fat-tailed data.
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1110.6553&r=ecm
  18. By: Jonsson, Robert (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: Markov chains (MCs) have been used to study how the health states of patients are progressing in time. With few exceptions the studies have been based on the questionable assumptions that the MC has order m=1 and is homogeneous in time. In this paper a three-state non-homogeneous MC model is introduced that allows m to vary. It is demonstrated how wrong assumptions about homogeneity and about the value of m can invalidate predictions of future health states. This can in turn seriously bias a cost-benefit analysis when costs are attached to the predicted outcomes. The present paper only considers problems connected with model construction and estimation. Problems of testing for a proper value of m and of homogeneity are treated in a subsequent paper. Data of work resumption among sick-listed women and men are used to illustrate the theory. A non-homogeneous MC with m = 2 was well fitted to data for both sexes. The essential difference between the rehabilitation processes for the two sexes was that men had a higher chance to move from the intermediate health state to the state ‘healthy’, while women tended to remain in the intermediate state for a longer time.
    Keywords: Rehabilitation; transition probability; prediction; Maximum Likelihood
    JEL: C10
    Date: 2011–10–31
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2011_006&r=ecm
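    The prediction mechanics behind the abstract above are simple to sketch: with period-specific transition matrices, the distribution over health states h periods ahead is the current distribution propagated by the product of those matrices. The three states and the matrices below are illustrative, not the paper's estimates.

```python
# State-distribution prediction from a non-homogeneous Markov chain.
import numpy as np

P_early = np.array([[0.6, 0.3, 0.1],    # sick -> (sick, partial, healthy)
                    [0.2, 0.5, 0.3],
                    [0.0, 0.1, 0.9]])
P_late = np.array([[0.5, 0.3, 0.2],     # transitions change later in rehabilitation
                   [0.1, 0.4, 0.5],
                   [0.0, 0.05, 0.95]])

start = np.array([1.0, 0.0, 0.0])       # patient starts in state 'sick'
dist = start @ P_early @ P_early @ P_late @ P_late
print(dist.round(3))                    # predicted state distribution after 4 periods

# Wrongly assuming homogeneity (using P_early throughout) biases the prediction,
# which is the abstract's warning:
print((start @ np.linalg.matrix_power(P_early, 4)).round(3))
```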
  19. By: Tierney, Heather L.R.
    Abstract: This paper presents three local nonparametric forecasting methods that are able to utilize the isolated periods of revised real-time PCE and core PCE for 62 vintages within a historic framework with respect to the nonparametric exclusion-from-core inflation persistence model. The flexibility provided by the kernel and window width permits the incorporation of the forecasted value into the appropriate time frame. For instance, a low inflation measure can be included in other low inflation time periods in order to form better forecasts by combining values that are similar in terms of metric distance as opposed to chronological time. The most efficient nonparametric forecasting method is the third model, which uses the flexibility of nonparametrics to its utmost by making forecasts conditional on the forecasted value.
    Keywords: Inflation Persistence; Real-Time Data; Monetary Policy; Nonparametrics; Forecasting
    JEL: C53 C14 E52
    Date: 2011–11–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:34439&r=ecm
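    The idea of pooling observations by metric distance rather than chronological time, as described above, is the familiar kernel regression principle: weight past observations by how close their conditioning value is to the current one. The Nadaraya-Watson sketch below uses a Gaussian kernel; the window width, the data, and all names are illustrative.

```python
# Nadaraya-Watson forecast: average past outcomes weighted by metric closeness.
import numpy as np

def nw_forecast(x_hist, y_hist, x0, h):
    """Gaussian-kernel weighted average of y_hist, weighted by |x_hist - x0| / h."""
    w = np.exp(-0.5 * ((x_hist - x0) / h) ** 2)
    return np.sum(w * y_hist) / np.sum(w)

rng = np.random.default_rng(8)
infl = rng.normal(2.0, 1.0, size=300)                 # toy inflation measure
persist = 0.5 + 0.2 * np.sin(infl) + rng.normal(scale=0.05, size=300)

x0 = 1.2                                              # a "low inflation" conditioning value
print(nw_forecast(infl, persist, x0, h=0.3))          # local forecast near x0
print(persist.mean())                                 # global average, for contrast
```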
  20. By: ADOLFO SACHSIDA (IPEA-DIMAC / IBMEC-DF); MARIO JORGE CARDOSO DE MENDONÇA (IPEA); FABIO CARLUCCI WALNUT (UCB)
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:anp:en2010:204&r=ecm
  21. By: Agostino Tarsitano; Rosetta Lombardo (Dipartimento di Economia e Statistica, Università della Calabria)
    Abstract: Rank association is a fundamental tool for expressing dependence in cases in which data are arranged in order. Measures of rank correlation have been accumulated in several contexts for more than a century and we were able to cite more than thirty of these coefficients, from simple ones to relatively complicated definitions invoking one or more systems of weights. However, only a few of these can actually be considered to be admissible substitutes for Pearson’s correlation. The main drawback with the vast majority of coefficients is their “resistance-to-change” which appears to be of limited value for the purposes of rank comparisons that are intrinsically robust. In this article, a new nonparametric correlation coefficient is defined that is based on the principle of maximization of a ratio of two ranks. In comparing it with existing rank correlations, it was found to have extremely high sensitivity to permutation patterns. We have illustrated the potential improvement that our index can provide in economic contexts by comparing published results with those obtained through the use of this new index. The success that we have had suggests that our index may have important applications wherever the discriminatory power of the rank correlation coefficient should be particularly strong.
    Keywords: Ordinal data, Nonparametric agreement, Economic applications
    JEL: C14 A12
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:clb:wpaper:201111&r=ecm

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.