nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒07‒14
fifteen papers chosen by
Sune Karlsson
Örebro University

  1. The limiting properties of the QMLE in a general class of asymmetric volatility models By Christian M. Dahl; Emma M. Iglesias
  2. Jackknifing stock return predictions By Benjamin Chiquoine; Erik Hjalmarsson
  3. Block Kalman filtering for large-scale DSGE models By Strid, Ingvar; Walentin, Karl
  4. Semiparametric estimation of outbreak regression By Frisén, Marianne; Andersson, Eva; Pettersson, Kjell
  5. Short and long run causality measures: theory and inference By Jean-Marie Dufour; Abderrahim Taamouti
  6. A Monte Carlo Study of the Necessary and Sufficient Conditions for Weak Separability By Hjertstrand, Per
  7. Efficient Prediction of Excess Returns By Jon Faust; Jonathan H. Wright
  8. Asymptotic properties of the Bernstein density copula for dependent data By Taoufik Bouezmarni; Jeroen V. K. Rombouts; Abderrahim Taamouti
  9. Semiparametric surveillance of outbreaks By Frisén, Marianne; Andersson, Eva
  10. Instrumental Variables in Models with Multiple Outcomes: The General Unordered Case By Heckman, James J.; Urzua, Sergio; Vytlacil, Edward
  11. Getting PPP Right: Identifying Mean-Reverting Real Exchange Rates in Panels By Georgios Chortareas; George Kapetanios
  12. Building composite leading indexes in a dynamic factor model framework: a new proposal By Massimiliano Serati; Gianni Amisano
  13. The Nature of Occupational Unemployment Rates in the United States: Hysteresis or Structural? By Candelon, Bertrand; Dupuy, Arnaud; Gil-Alana, Luis A.
  14. Economists, Incentives, Judgement and Empirical Work By Dave Colander
  15. Calibration and IV Estimation of a Wage Outcome Equation in a Dynamic Environment By Belzil, Christian; Hansen, Jörgen

  1. By: Christian M. Dahl; Emma M. Iglesias (School of Economics and Management, University of Aarhus, Denmark)
    Abstract: In this paper we analyze the limiting properties of the estimated parameters in a general class of asymmetric volatility models which are closely related to the traditional exponential GARCH model. The new representation has three main advantages over the traditional EGARCH: (1) It allows a much more flexible representation of the conditional variance function. (2) It is possible to provide a complete characterization of the asymptotic distribution of the QML estimator based on the new class of nonlinear volatility models, something which has proven very difficult even for the traditional EGARCH. (3) It can produce asymmetric news impact curves where, contrary to the traditional EGARCH, the resulting variances do not excessively exceed the ones associated with the standard GARCH model, irrespective of the sign of an impact of moderate size. Furthermore, the new class of models considered can create a wide array of news impact curves, providing the researcher with a richer choice set than the traditional EGARCH. In a Monte Carlo experiment we show the good finite sample performance of our asymptotic theoretical results and compare them with those obtained from a parametric and a residual-based bootstrap. Finally, we provide an empirical illustration.
    Keywords: Asymmetric volatility models; Asymmetric news impact curves; Quasi maximum likelihood estimation; Asymptotic Theory; Bootstrap
    JEL: C12 C13 C15 C22 C51 C52 E43
    Date: 2008–07–04
    URL: http://d.repec.org/n?u=RePEc:aah:create:2008-38&r=ecm
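    Sketch: the paper's new volatility representation is not reproduced in this digest, so the following Python fragment only illustrates the estimation problem it studies: Gaussian QML for the traditional EGARCH(1,1) that the new class generalizes. Parameter names and starting values are illustrative assumptions, not the authors' specification.

      import numpy as np
      from scipy.optimize import minimize

      def egarch_qml(r):
          # Gaussian QML for log s2_t = w + b*log s2_{t-1}
          #                            + a*(|z_{t-1}| - sqrt(2/pi)) + g*z_{t-1},
          # with z_t = r_t / s_t; r is a demeaned return series.
          def negloglik(theta):
              w, b, a, g = theta
              logs2 = np.empty(len(r))
              logs2[0] = np.log(np.var(r))
              for t in range(1, len(r)):
                  z = r[t - 1] * np.exp(-0.5 * logs2[t - 1])
                  logs2[t] = (w + b * logs2[t - 1]
                              + a * (abs(z) - np.sqrt(2 / np.pi)) + g * z)
              return 0.5 * np.sum(logs2 + r ** 2 * np.exp(-logs2))
          start = [0.0, 0.9, 0.1, -0.05]   # illustrative starting values
          return minimize(negloglik, start, method="Nelder-Mead").x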
  2. By: Benjamin Chiquoine; Erik Hjalmarsson
    Abstract: We show that the general bias reducing technique of jackknifing can be successfully applied to stock return predictability regressions. Compared to standard OLS estimation, the jackknifing procedure delivers virtually unbiased estimates with mean squared errors that generally dominate those of the OLS estimates. The jackknifing method is very general, as well as simple to implement, and can be applied to models with multiple predictors and overlapping observations. Unlike most previous work on inference in predictive regressions, no specific assumptions regarding the data generating process for the predictors are required. A set of Monte Carlo experiments shows that the method works well in finite samples, and the empirical section finds that out-of-sample forecasts based on the jackknifed estimates tend to outperform those based on the plain OLS estimates. The improved forecasting ability also translates into economically relevant welfare gains for an investor who uses the predictive regression, with jackknifed estimates, to time the market.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:932&r=ecm
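    Sketch: a minimal version of the split-sample jackknife idea for a one-predictor return regression, assuming the m-subsample bias-correction form theta_J = (m/(m-1))*theta_full - sum(theta_i)/(m*(m-1)); the authors' implementation (multiple predictors, overlapping observations) is richer than this.

      import numpy as np

      def jackknife_slope(y, x, m=2):
          # Split-sample jackknife for the OLS slope in y_{t+1} = a + b*x_t + e;
          # the combination below removes the O(1/n) bias term of plain OLS.
          def slope(yy, xx):
              xd = xx - xx.mean()
              return np.dot(xd, yy) / np.dot(xd, xd)
          full = slope(y, x)
          subs = [slope(ys, xs) for ys, xs in
                  zip(np.array_split(y, m), np.array_split(x, m))]
          return m / (m - 1) * full - np.sum(subs) / (m * (m - 1))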
  3. By: Strid, Ingvar (Stockholm School of Economics); Walentin, Karl (Research Department, Central Bank of Sweden)
    Abstract: In this paper block Kalman filters for Dynamic Stochastic General Equilibrium models are presented and evaluated. Our approach is based on the simple idea of writing down the Kalman filter recursions in block form and appropriately sequencing the operations of the prediction step of the algorithm. It is argued that block filtering is the only viable serial algorithmic approach to significantly reduce Kalman filtering time in the context of large DSGE models. For the largest model we evaluate, the block filter reduces the computation time by roughly a factor of 2. Block filtering compares favourably with the more general method for faster Kalman filtering outlined by Koopman and Durbin (2000) and, furthermore, the two approaches are largely complementary.
    Keywords: Kalman filter; DSGE model; Bayesian estimation; Computational speed; Algorithm; Fortran; Matlab
    JEL: C10 C60
    Date: 2008–06–01
    URL: http://d.repec.org/n?u=RePEc:hhs:rbnkwp:0224&r=ecm
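    Sketch: the covariance prediction step P+ = T P T' + Q written in block form for a lower block-triangular transition matrix, the kind of structure the abstract refers to; the partitioning and names are illustrative, not the authors' exact algorithm.

      import numpy as np

      def block_predict(P11, P12, P22, T11, T21, T22, Q11, Q22):
          # Covariance prediction P+ = T P T' + Q for a lower block-triangular
          # transition T = [[T11, 0], [T21, T22]]: exogenous driving processes
          # (block 1) feed endogenous states (block 2). The zero block lets us
          # skip the corresponding dense products and reuse intermediates.
          A = T11 @ P11
          B = T21 @ P11 + T22 @ P12.T
          P11n = A @ T11.T + Q11
          P12n = A @ T21.T + (T11 @ P12) @ T22.T
          P22n = B @ T21.T + (T21 @ P12 + T22 @ P22) @ T22.T + Q22
          return P11n, P12n, P22n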
  4. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Andersson, Eva (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Pettersson, Kjell (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: A regression may be constant for small values of the independent variable (for example, time), after which a monotonic increase starts. Such an “outbreak” regression is of interest, for example, in the study of the outbreak of an epidemic disease. We give the least squares estimators for this outbreak regression without assuming a parametric regression function. It is shown that the least squares estimators are also the maximum likelihood estimators for distributions in the regular exponential family, such as the Gaussian or Poisson distribution. The approach is thus semiparametric. The method is applied to Swedish data on influenza, and its properties are demonstrated by a simulation study. The consistency of the estimator is proved.
    Keywords: Constant Base-line; Monotonic change; Exponential family
    JEL: C10
    Date: 2008–02–04
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2007_013&r=ecm
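    Sketch: under a Gaussian likelihood the abstract's least squares problem is a shape-constrained regression, and a nondecreasing isotonic fit (pool-adjacent-violators) nests the constant-then-increasing outbreak shape; the onset heuristic below is an illustrative assumption, not the paper's estimator.

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      def outbreak_fit(y):
          # Least squares under a nondecreasing shape constraint; the
          # constant-then-increasing outbreak shape is the special case
          # with a flat initial stretch at the baseline level.
          mu = IsotonicRegression(increasing=True).fit_transform(
              np.arange(len(y)), np.asarray(y, dtype=float))
          rises = mu > mu[0]
          onset = int(np.argmax(rises)) if rises.any() else None
          return mu, onset   # fitted means and first index above the baseline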
  5. By: Jean-Marie Dufour; Abderrahim Taamouti
    Abstract: The concept of causality introduced by Wiener (1956) and Granger (1969) is defined in terms of predictability one period ahead. This concept can be generalized by considering causality at a given horizon h, and causality up to any given horizon h [Dufour and Renault (1998)]. This generalization is motivated by the fact that, in the presence of an auxiliary variable vector Z, it is possible that a variable Y does not cause variable X at horizon 1, but causes it at horizon h > 1. In this case, there is an indirect causality transmitted by Z. Another related problem consists in measuring the importance of causality between two variables. Existing causality measures have been defined only for horizon 1 and fail to capture indirect causal effects. This paper proposes a generalization of such measures for any horizon h. We propose nonparametric and parametric measures of unidirectional and instantaneous causality at any horizon h. Parametric measures are defined in the context of autoregressive processes of unknown order and expressed in terms of impulse response coefficients. Noting that causality measures typically involve complex functions of model parameters in VAR and VARMA models, we propose a simple method to evaluate these measures based on simulating a large sample from the process of interest. We also describe asymptotically valid nonparametric confidence intervals, using a bootstrap technique. Finally, the proposed measures are applied to study causality relations at different horizons between macroeconomic, monetary and financial variables in the U.S. The results show a strong effect of nonborrowed reserves on the federal funds rate one month ahead; the effect of real gross domestic product on the federal funds rate is economically important for the first three months; the effect of the federal funds rate on the gross domestic product deflator is economically weak one month ahead; and the federal funds rate causes real gross domestic product up to 16 months ahead.
    Keywords: Time series, Granger causality, Indirect causality, Multiple horizon causality, Causality measure, Predictability, Autoregressive model, Vector autoregression, VAR, Bootstrap, Monte Carlo, Macroeconomics, Money, Interest rates, Output, Inflation
    JEL: C1 C12 C15 C32 C51 C53 E3 E4 E52
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we083720&r=ecm
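    Sketch: one plug-in version of a horizon-h causality measure, assuming the log-ratio form C(y -> x | z, h) = ln[sigma2_h(x | x, z) / sigma2_h(x | x, y, z)] and using statsmodels' VAR-implied forecast error variances rather than the authors' simulation-based evaluation; the lag order p is an illustrative choice.

      import numpy as np
      from statsmodels.tsa.api import VAR

      def causality_measure(x, y, z, h, p=4):
          # Log ratio of the h-step forecast error variance of x from a VAR
          # on (x, z) versus a VAR on (x, y, z); zero when y carries no
          # extra predictive content for x at horizon h.
          def fev_x(data):
              res = VAR(data).fit(p)
              return res.mse(h)[h - 1][0, 0]   # x is column 0
          restricted = fev_x(np.column_stack([x, z]))
          unrestricted = fev_x(np.column_stack([x, y, z]))
          return np.log(restricted / unrestricted)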
  6. By: Hjertstrand, Per (Department of Economics, Lund University)
    Abstract: Weak separability plays an important role in many different fields in economic theory. In this paper we investigate the properties of newly developed nonparametric revealed preference tests for weak separability by means of Monte Carlo experiments. A main finding is that the size properties of the tests for weak separability proposed by Swofford and Whitney (A revealed preference test for weakly separable utility maximization with incomplete adjustment. Journal of Econometrics 60, 235-249, 1994) and Fleissig and Whitney (A New PC-Based Test for Varian's Weak Separability Conditions, Journal of Business and Economic Statistics 21, 133-143, 2003) are good in many of the settings considered. As a further source of information, we also perform sensitivity analysis on the nonparametric revealed preference tests when measurement errors are added to the data.
    Keywords: GARP; LP test; Monte Carlo simulations; NONPAR; Weak separability.
    JEL: C14 C15 C43
    Date: 2008–01–14
    URL: http://d.repec.org/n?u=RePEc:hhs:lunewp:2008_010&r=ecm
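    Sketch: the building block of these revealed preference tests is Varian's GARP check, shown compactly below for T price-quantity observations; the Swofford-Whitney and Fleissig-Whitney separability tests evaluated in the paper add further structure on top of this.

      import numpy as np

      def satisfies_garp(P, X):
          # P, X: (T, n) prices and quantities. E[i, j] = p_i . x_j, so bundle
          # i is directly revealed preferred to j when E[i, i] >= E[i, j].
          E = P @ X.T
          R = E.diagonal()[:, None] >= E
          for k in range(len(E)):          # Warshall transitive closure
              R = R | (R[:, [k]] & R[[k], :])
          strict = E.diagonal()[:, None] > E
          # GARP: no i, j with i revealed preferred to j and p_j.x_j > p_j.x_i
          return not np.any(R & strict.T)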
  7. By: Jon Faust; Jonathan H. Wright
    Abstract: It is well known that augmenting a standard linear regression model with variables that are correlated with the error term but uncorrelated with the original regressors will increase asymptotic efficiency of the original coefficients. We argue that in the context of predicting excess returns, valid augmenting variables exist and are likely to yield substantial gains in estimation efficiency and, hence, predictive accuracy. The proposed augmenting variables are ex post measures of an unforecastable component of excess returns: ex post errors from macroeconomic survey forecasts and the surprise components of asset price movements around macroeconomic news announcements. These "surprises" cannot be used directly in forecasting--they are not observed at the time that the forecast is made--but can nonetheless improve forecasting accuracy by reducing parameter estimation uncertainty. We derive formal results about the benefits and limits of this approach and apply it to standard examples of forecasting excess bond and equity returns. We find substantial improvements in out-of-sample forecast accuracy for standard excess bond return regressions; gains for forecasting excess stock returns are much smaller.
    JEL: C22 C53 G14
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14169&r=ecm
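    Sketch: a toy Monte Carlo of the efficiency argument, assuming a scalar predictor and an artificial "surprise" variable w that is correlated with the regression error but independent of the regressor; the augmented slope should display a visibly smaller sampling standard deviation. All numbers are invented for illustration.

      import numpy as np

      def mc_augmented(n=200, reps=2000, rho=0.8, seed=0):
          # y = 1.0*x + e with e = rho*w + noise: w is correlated with the
          # error but independent of x, so adding w leaves the slope on x
          # consistent while shrinking its sampling variance by ~(1 - rho^2).
          rng = np.random.default_rng(seed)
          plain, aug = [], []
          for _ in range(reps):
              x = rng.standard_normal(n)
              w = rng.standard_normal(n)   # stand-in for an ex post surprise
              y = x + rho * w + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
              plain.append(np.linalg.lstsq(x[:, None], y, rcond=None)[0][0])
              X = np.column_stack([x, w])
              aug.append(np.linalg.lstsq(X, y, rcond=None)[0][0])
          return np.std(plain), np.std(aug)   # second should be ~40% smaller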
  8. By: Taoufik Bouezmarni; Jeroen V. K. Rombouts; Abderrahim Taamouti
    Abstract: Copulas are extensively used for dependence modeling. In many cases the data do not reveal how the dependence can be modeled using a particular parametric copula. Nonparametric copulas do not share this problem, since they are entirely data based. This paper proposes nonparametric estimation of the density copula for α-mixing data using Bernstein polynomials. We study the asymptotic properties of the Bernstein density copula: we provide the exact asymptotic bias and variance, and we establish uniform strong consistency and asymptotic normality.
    Keywords: Nonparametric estimation, Copula, Bernstein polynomial, α-mixing, Asymptotic properties, Boundary bias
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we083619&r=ecm
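    Sketch: a histogram-weighted Bernstein copula density estimator of the general form studied, assuming pseudo-observations already mapped to (0,1) by ranks; the order k and the cell-frequency weights are illustrative simplifications, not the paper's tuning for mixing data.

      import numpy as np
      from scipy.stats import binom

      def bernstein_copula_density(u, v, U, V, k=10):
          # U, V: pseudo-observations in (0,1), e.g. ranks/(n+1).
          # c_hat(u, v) = k^2 * sum_ij w_ij * B_{k-1,i}(u) * B_{k-1,j}(v),
          # with w_ij the fraction of (U, V) points in cell (i, j) of a
          # k x k grid and B the Binomial(k-1, .) pmf; integrates to one.
          iu = np.minimum((U * k).astype(int), k - 1)
          iv = np.minimum((V * k).astype(int), k - 1)
          w = np.zeros((k, k))
          np.add.at(w, (iu, iv), 1.0 / len(U))
          Bu = binom.pmf(np.arange(k), k - 1, u)
          Bv = binom.pmf(np.arange(k), k - 1, v)
          return k ** 2 * Bu @ w @ Bv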
  9. By: Frisén, Marianne (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University); Andersson, Eva (Statistical Research Unit, Department of Economics, School of Business, Economics and Law, Göteborg University)
    Abstract: The detection of a change from a constant level to a monotonically increasing (or decreasing) regression is of special interest for the detection of outbreaks of, for example, epidemics. A maximum likelihood ratio statistic for the sequential surveillance of an “outbreak” situation is derived. The method is semiparametric in the sense that the regression model is nonparametric while the distribution belongs to the regular exponential family. The method is evaluated with respect to timeliness and predicted value in a simulation study that imitates influenza outbreaks in Sweden. To illustrate its performance, the method is applied to six years of Swedish influenza data. The advantage of this semiparametric surveillance method, which does not rely on an estimated baseline, is illustrated by a Monte Carlo study. The proposed method successively accumulates information, in contrast to the commonly used approach in which the current observation is compared to a baseline; the advantage of this accumulation is illustrated.
    Keywords: Monitoring; Change-points; Generalised likelihood; Ordered regression; Robust regression; Exponential family
    JEL: C10
    Date: 2008–02–04
    URL: http://d.repec.org/n?u=RePEc:hhs:gunsru:2007_011&r=ecm
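    Sketch: a Gaussian special case of likelihood ratio surveillance for a constant-to-increasing mean, assuming known variance and using an isotonic fit as the alternative; the threshold and variance are placeholders, and the paper's exponential-family treatment is more general than this.

      import numpy as np
      from sklearn.isotonic import IsotonicRegression

      def outbreak_alarm(y, sigma2=1.0, threshold=5.0):
          # At each time s, compare the constant-mean fit (in control) with
          # the nondecreasing isotonic fit (outbreak) on ALL data observed so
          # far -- information accumulates instead of each observation being
          # compared to a baseline. Alarm when the log likelihood ratio
          # exceeds the (placeholder) threshold.
          iso = IsotonicRegression(increasing=True)
          for s in range(2, len(y) + 1):
              ys = np.asarray(y[:s], dtype=float)
              rss0 = np.sum((ys - ys.mean()) ** 2)
              rss1 = np.sum((ys - iso.fit_transform(np.arange(s), ys)) ** 2)
              if (rss0 - rss1) / (2.0 * sigma2) > threshold:
                  return s - 1    # first alarm index
          return None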
  10. By: Heckman, James J. (University of Chicago); Urzua, Sergio (Northwestern University); Vytlacil, Edward (Yale University)
    Abstract: This paper develops the method of local instrumental variables for models with multiple, unordered treatments when treatment choice is determined by a nonparametric version of the multinomial choice model. Responses to interventions are permitted to be heterogeneous in a general way and agents are allowed to select a treatment (e.g., participate in a program) with at least partial knowledge of the idiosyncratic response to the treatments. We define treatment effects in a general model with multiple treatments as differences in counterfactual outcomes that would have been observed if the agent faced different choice sets. We show how versions of local instrumental variables can identify the corresponding treatment parameters. Direct application of local instrumental variables identifies the marginal treatment effect of one option versus the next best alternative without requiring knowledge of any structural parameters from the choice equation or any large support assumptions. Using local instrumental variables to identify other treatment parameters requires either large support assumptions or knowledge of the latent index function of the multinomial choice model.
    Keywords: treatment effects, multinomial, nonparametric
    JEL: C31
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3565&r=ecm
  11. By: Georgios Chortareas (University of Athens); George Kapetanios (Queen Mary, University of London)
    Abstract: Recent advances in testing for the validity of Purchasing Power Parity (PPP) focus on the time series properties of real exchange rates in panel frameworks. One weakness of such tests, however, is that they fail to inform the researcher as to which cross-section units are stationary. As a consequence, a reservation about PPP analyses based on such tests is that a small number of real exchange rates in a given panel may drive the results. In this paper we examine the PPP hypothesis focusing on the stationarity of the real exchange rates in up to 25 OECD countries. We introduce a methodology that, when applied to a set of established panel unit-root tests, allows the identification of the real exchange rates that are stationary. We apply procedures that account for cross-sectional dependence. Our results reveal evidence of mean-reversion that is significantly stronger than that obtained in the existing literature, strengthening the case for PPP. Moreover, our approach can be used to provide half-life estimates for the mean-reverting real exchange rates. We find that the half-lives are shorter than the literature consensus, and therefore that the PPP puzzle is less pronounced than initially thought.
    Keywords: PPP, Panel unit root tests, Real exchange rates, Half-lives, PPP puzzle
    JEL: C12 C15 C23 F31
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp629&r=ecm
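    Sketch: a simplified sequential identification loop in the spirit of the paper's proposal, but using univariate ADF p-values in place of its panel statistics and cross-sectional-dependence corrections; the 5% cutoff and the inputs are illustrative.

      import numpy as np
      from statsmodels.tsa.stattools import adfuller

      def identify_stationary(panel, names, alpha=0.05):
          # panel: (T, N) array of real exchange rates. Repeatedly test the
          # remaining series, classify the strongest rejection as stationary,
          # drop it, and re-examine the rest; stop at the first non-rejection.
          remaining = dict(zip(names, np.asarray(panel, dtype=float).T))
          stationary = []
          while remaining:
              pvals = {k: adfuller(v, autolag="AIC")[1]
                       for k, v in remaining.items()}
              best = min(pvals, key=pvals.get)
              if pvals[best] > alpha:
                  break
              stationary.append(best)
              del remaining[best]
          return stationary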
  12. By: Massimiliano Serati (Cattaneo University (LIUC)); Gianni Amisano (Brescia University)
    Abstract: One of the most problematic aspects in the work of policy makers and practitioners is having efficient forecasting tools combining two seemingly incompatible features: ease of use and completeness of the information set underlying the forecasts. Econometric literature provides different answers to these needs: Dynamic Factor Models (DFMs) optimally exploit the information coming from large datasets; composite leading indexes represent an immediate and flexible tool to anticipate the future evolution of a phenomenon. Curiously, the recent DFM literature has either ignored the construction of leading indexes or made unsatisfactory choices as regards the criteria for aggregating the index components and identifying the factors that feed the index. This paper fills the gap and proposes a multi-step procedure for building composite leading indexes within a DFM framework. Once the target economic variable has been selected and a DFM estimated on a large target-oriented dataset, we identify the common factor shocks through sign restrictions on the impact multipliers and simulate the structural form of the model. The Forecast Error Variance Decompositions obtained over a k-steps-ahead simulation horizon define k sets of weights, one per leading horizon, for aggregating the factors into composite leading indexes. This procedure is used in a very preliminary empirical exercise aimed at forecasting nominal crude oil prices. The results seem encouraging and support the validity of the proposal: we generate a wide range of horizon-specific leading indexes with appreciable forecasting performance.
    Date: 2008–03
    URL: http://d.repec.org/n?u=RePEc:liu:liucec:212&r=ecm
  13. By: Candelon, Bertrand (Maastricht University); Dupuy, Arnaud (ROA, Maastricht University); Gil-Alana, Luis A. (University of Navarra)
    Abstract: This paper provides new evidence on the nature of occupational differences in unemployment dynamics, which is relevant for the debate between the structuralist and hysteresis hypotheses. We develop a procedure that permits us to test for the presence of a structural break at an unknown date. Our approach allows the investigation of a broader range of persistence than the 0/1 paradigm about the order of integration, usually implemented for testing the hypothesis of hysteresis in occupational unemployment. In almost all occupations, we find support for both the structuralist and the hysteresis hypotheses, but stress the importance of estimating the degree of persistence of seasonal shocks along with the degree of long-run persistence on raw data, without applying seasonal filters. Indeed, hysteresis appears to be underestimated when data are first adjusted using traditional seasonal filters.
    Keywords: occupational unemployment, structuralist, hysteresis, structural break, fractional integration
    JEL: E24 C22 J62
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3571&r=ecm
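    Sketch: a standard log-periodogram (GPH) estimate of the fractional integration order d, one simple way to gauge persistence between the 0/1 endpoints the abstract criticizes; the authors' own procedure also handles seasonal persistence and breaks. The bandwidth m = n^0.5 is a conventional, illustrative choice.

      import numpy as np

      def gph_d(x, power=0.5):
          # Regress log I(lambda_j) on -2*log(2*sin(lambda_j/2)) over the
          # first m ~ n^power Fourier frequencies; the slope estimates d.
          x = np.asarray(x, dtype=float)
          n = len(x)
          m = int(n ** power)
          lam = 2 * np.pi * np.arange(1, m + 1) / n
          I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
          reg = -2 * np.log(2 * np.sin(lam / 2))
          reg = reg - reg.mean()
          return float(np.dot(reg, np.log(I)) / np.dot(reg, reg))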
  14. By: Dave Colander
    Abstract: This paper asks: Why has the “general-to-specific” cointegrated VAR approach as developed in Europe had only limited success in the US as a tool for doing empirical macroeconomics, where what might be called a “theory comes first” approach dominates? The reason this paper highlights is the incompatibility of the European approach with the US focus on the journal publication metric for advancement. Specifically, the European “general-to-specific” cointegrated VAR approach requires researcher judgment to be part of the analysis, and the US focus on a journal publication metric discourages such research methods. The US “theory comes first” approach fits much better with the journal publication metric.
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:mdl:mdlpap:0806&r=ecm
  15. By: Belzil, Christian (Ecole Polytechnique, Paris); Hansen, Jörgen (Concordia University)
    Abstract: We consider an artificial population of forward-looking heterogeneous agents making decisions between schooling, employment, employment with training and household production, according to a behavioral model calibrated to a large set of stylized facts. Some of these agents are subject to policy interventions (a higher education subsidy) that vary in their generosity. We evaluate the capacity of Instrumental Variable (IV) methods to recover the population Local Average Treatment Effect (LATE) and analyze the economic implications of using a strong instrument within a dynamic economic model. We also examine the performance of two sampling designs that may be used to improve classical linear IV: a Regression-Discontinuity (RD) design and an age-based sampling design targeting early-career wages. Finally, we investigate the capacity of IV to estimate alternative "causal" parameters. The failure of classical linear IV is spectacular. IV fails to recover the true LATE, even in the static version of the model. In some cases, the estimates lie outside the support of the population distribution of returns to schooling and are nearly twice as large as the population LATE. The trade-off between the statistical power of the instrument and the dynamic self-selection caused by the policy shock implies that access to a "strong instrument" is not necessarily desirable. There appears to be no obvious realistic sampling design that can guarantee IV accuracy. Finally, IV also fails to estimate the reduced-form marginal effect of schooling on the wages of those affected by the experiment. Within a dynamic setting, IV is deprived of any “causal” substance.
    Keywords: dynamic discrete choice, dynamic programming, treatment effects, weak instruments, instrumental variable, returns to schooling
    JEL: B4 C1 C3
    Date: 2008–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3528&r=ecm
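    Sketch: a static toy model fixing the benchmark the paper shows IV missing. In this clean setting the Wald/IV estimand mechanically equals the mean return among compliers (those shifted by the subsidy), which already differs from the population average return; the paper's point is that even this equivalence breaks down under dynamic self-selection. All distributions and the subsidy size are invented for illustration.

      import numpy as np

      def iv_vs_late(n=100_000, subsidy=1.0, seed=1):
          # Heterogeneous returns b; schooling chosen when b - c + subsidy*z > 0.
          rng = np.random.default_rng(seed)
          b = rng.normal(0.5, 0.4, n)                 # individual returns
          c = rng.normal(0.0, 1.0, n)                 # schooling costs
          z = rng.integers(0, 2, n)                   # subsidy eligibility
          s = (b - c + subsidy * z > 0).astype(float)
          y = b * s + rng.standard_normal(n)          # log-wage outcome
          wald = ((y[z == 1].mean() - y[z == 0].mean())
                  / (s[z == 1].mean() - s[z == 0].mean()))
          compliers = (b - c + subsidy > 0) & (b - c <= 0)
          return wald, b[compliers].mean(), b.mean()  # IV, true LATE, ATE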

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.