nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒07‒01
seven papers chosen by
Sune Karlsson
Orebro University

  1. Sequential Estimation of Shape Parameters in Multivariate Dynamic Models By Dante Amengual; Gabriele Fiorentini; Enrique Sentana
  2. Maximum likelihood estimation of time series models: the Kalman filter and beyond By Tommaso Proietti; Alessandra Luati
  3. LATE again, without Monotonicity By Clément de Chaisemartin
  4. Continuous-Time Linear Models By John H. Cochrane
  5. Screening for Collusion: A Spatial Statistics Approach By Pim Heijnen; Marco A. Haan; Adriaan R. Soetevent
  6. Estimating the Latent Effect of Unemployment Benefits on Unemployment Duration By Lo, Simon M.S.; Stephan, Gesine; Wilke, Ralf
  7. Evaluating Macroeconomic Forecasts: A Concise Review of Some Recent Developments By Philip Hans Franses; Michael McAleer; Rianne Legerstee

  1. By: Dante Amengual (CEMFI, Centro de Estudios Monetarios y Financieros); Gabriele Fiorentini (Università di Firenze and RCEA); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros)
    Abstract: Sequential maximum likelihood and GMM estimators of distributional parameters obtained from the standardised innovations of multivariate conditionally heteroskedastic dynamic regression models evaluated at Gaussian PML estimators preserve the consistency of mean and variance parameters while allowing for realistic distributions. We assess the efficiency of those estimators, and obtain moment conditions leading to sequential estimators as efficient as their joint maximum likelihood counterparts. We also obtain standard errors for the quantiles required in VaR and CoVaR calculations, and analyse the effects on these measures of distributional misspecification. Finally, we illustrate the small sample performance of these procedures through Monte Carlo simulations.
    Keywords: Elliptical distributions, Efficient estimation, Systemic risk, Value at risk.
    JEL: C13 C32 G11
    Date: 2012–02
    URL: http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2012_1201&r=ecm
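    As background for the VaR quantiles mentioned above (a standard textbook definition, not the paper's derivation): in a univariate conditionally heteroskedastic model y_t = mu_t(theta) + sigma_t(theta) eps_t with standardised innovation eps_t, the alpha-level Value at Risk, reported as a positive loss, is
      \[
        \mathrm{VaR}_t(\alpha) \;=\; -\bigl(\mu_t(\theta) + \sigma_t(\theta)\,q_\alpha\bigr),
      \]
    where q_alpha is the alpha-quantile of the innovation distribution. The paper provides standard errors for such quantiles when the shape parameters governing q_alpha are estimated sequentially.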
  2. By: Tommaso Proietti; Alessandra Luati
    Abstract: The purpose of this chapter is to provide a comprehensive treatment of likelihood inference for state space models. These are a class of time series models relating an observable time series to quantities called states, which are characterized by a simple temporal dependence structure, typically a first-order Markov process. The states sometimes have a substantive economic interpretation: key estimation problems in economics concern latent variables such as the output gap, potential output, the non-accelerating-inflation rate of unemployment (NAIRU), core inflation, and so forth. Time-varying volatility, which is quintessential to finance, is also an important feature in macroeconomics. In the multivariate framework, relevant features can be common to different series, meaning that the driving forces of a particular feature and/or the transmission mechanism are the same. The likelihood of a linear Gaussian state space model can be evaluated by the Kalman filter; the objective of this chapter is to review this algorithm and to discuss maximum likelihood inference, starting from the linear Gaussian case and then considering extensions to nonlinear and non-Gaussian frameworks.
    Keywords: Time series models; Unobserved components;
    JEL: C13 C22
    Date: 2012–04–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:39600&r=ecm
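    The likelihood evaluation referred to above can be illustrated with the simplest state space model, the local level model. The following Python sketch (illustrative only, not the chapter's code) computes the Gaussian log-likelihood by the Kalman filter's prediction error decomposition:

      import numpy as np

      def local_level_loglik(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
          # Local level model:  y_t = alpha_t + eps_t,  alpha_{t+1} = alpha_t + eta_t,
          # with eps_t ~ N(0, sigma2_eps) and eta_t ~ N(0, sigma2_eta).
          # Returns the Gaussian log-likelihood via the prediction error decomposition.
          a, p = a0, p0                      # predicted state mean and variance
          loglik = 0.0
          for yt in y:
              v = yt - a                     # one-step-ahead prediction error
              f = p + sigma2_eps             # prediction error variance
              loglik += -0.5 * (np.log(2 * np.pi) + np.log(f) + v ** 2 / f)
              k = p / f                      # Kalman gain
              a = a + k * v                  # updated state (= next period's prediction)
              p = p * (1 - k) + sigma2_eta   # next period's prediction variance
          return loglik

    Maximising this function over (sigma2_eps, sigma2_eta), for instance by passing its negative to scipy.optimize.minimize, yields the maximum likelihood estimates the chapter discusses.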
  3. By: Clément de Chaisemartin
    Abstract: Monotonicity is not necessary for the Wald ratio to identify a Local Average Treatment Effect. Under random assignment and the exclusion restriction, if for every value of the potential outcomes there are more compliers than defiers, the Wald ratio identifies the average treatment effect within a subpopulation of compliers. I use a simple Roy selection model to show that this "less defiers than compliers" condition is substantially weaker than monotonicity. This condition has two implications that are testable from the data, and it is closely related to those testable implications. Similarly, the local monotonicity condition in Huber & Mellace (2012) is not necessary for their identification results to hold and can also be replaced by a substantially weaker condition.
    Keywords: local average treatment effect, instrumental variable, monotonicity, local monotonicity, defiers
    JEL: C21 C26
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:crs:wpaper:2012-12&r=ecm
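    For reference, the Wald ratio discussed above is, for a binary instrument Z and a binary treatment D,
      \[
        W \;=\; \frac{E[Y \mid Z=1] - E[Y \mid Z=0]}{E[D \mid Z=1] - E[D \mid Z=0]}.
      \]
    Under random assignment, the exclusion restriction and monotonicity (no defiers), W equals the average treatment effect among compliers (Imbens and Angrist, 1994); the paper shows that monotonicity can be replaced by the weaker requirement that compliers outnumber defiers at every value of the potential outcomes.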
  4. By: John H. Cochrane
    Abstract: I translate familiar discrete-time time-series concepts to their continuous-time equivalents. I cover lag operators, ARMA models, the relation between levels and differences, integration and cointegration, and the Hansen-Sargent prediction formulas.
    JEL: C01 C5 C58 E17 G17
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:18181&r=ecm
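    A textbook example of the discrete-to-continuous translation described above (illustrative, not taken from the paper): the continuous-time counterpart of an AR(1) is the Ornstein-Uhlenbeck process
      \[
        dx_t = -\phi\, x_t\, dt + \sigma\, dB_t,
      \]
    and data sampled at interval \Delta then follow a discrete AR(1),
      \[
        x_{t+\Delta} = e^{-\phi\Delta}\, x_t + \varepsilon_{t+\Delta},
        \qquad
        \varepsilon_{t+\Delta} \sim N\!\Bigl(0,\ \tfrac{\sigma^2}{2\phi}\bigl(1 - e^{-2\phi\Delta}\bigr)\Bigr).
      \]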
  5. By: Pim Heijnen (University of Groningen); Marco A. Haan (University of Groningen); Adriaan R. Soetevent (University of Amsterdam)
    Abstract: We develop a method to screen for local cartels. We first test whether there is statistical evidence of clustering of outlets that score high on some characteristic that is consistent with collusive behavior. If so, we determine in a second step the most suspicious regions where further antitrust investigation would be warranted. We apply our method to build a variance screen for the Dutch gasoline market.
    Keywords: collusion; variance screen; spatial statistics; K-function
    JEL: C11 D40 L12 L41
    Date: 2012–06–18
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20120058&r=ecm
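    The two-step logic described above can be sketched in Python as follows (a rough illustration under simplifying assumptions: unit-area study window, no edge correction, a single radius; not the authors' implementation). A K-function statistic for the high-scoring outlets is compared with its distribution under random relabelling:

      import numpy as np

      def k_function(points, r):
          # Naive Ripley's K at radius r for points in a unit-area window
          # (no edge correction; purely illustrative).
          n = len(points)
          d = np.sqrt(((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1))
          np.fill_diagonal(d, np.inf)
          return (d < r).sum() / (n * n)     # area = 1, intensity = n

      def clustering_pvalue(xy, flagged, r, n_perm=999, seed=0):
          # Random-labelling permutation test: are the flagged outlets more
          # clustered than equally many outlets drawn at random from all locations?
          rng = np.random.default_rng(seed)
          k_obs = k_function(xy[flagged], r)
          k_perm = np.empty(n_perm)
          for b in range(n_perm):
              idx = rng.choice(len(xy), size=int(flagged.sum()), replace=False)
              k_perm[b] = k_function(xy[idx], r)
          return (1 + (k_perm >= k_obs).sum()) / (n_perm + 1)

      # Example with simulated data (hypothetical inputs):
      # xy = np.random.rand(500, 2)               # outlet locations
      # flagged = np.random.rand(500) < 0.1       # outlets with suspiciously high scores
      # print(clustering_pvalue(xy, flagged, r=0.05))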
  6. By: Lo, Simon M.S. (Lingnan University); Stephan, Gesine (Institute for Employment Research (IAB), Nuremberg); Wilke, Ralf (University of York)
    Abstract: We estimate the effect of a shortening of unemployment benefit entitlements on unemployment duration. Previous studies of the same or related problems have not taken into account that the competing risks duration model is not identified, and we shed first light on the question of whether this non-identification problem precludes informative results. It turns out that the identification bounds for the parameters of interest are very wide in the absence of strong assumptions. We suggest an assumption on the dependence structure between risks that is milder than what conventional duration models assume. Under this assumption, the identification bounds are tighter and become informative about the direction of the treatment effect. We find evidence that unemployed workers with higher pre-unemployment earnings are more likely to enter full-time employment and, in particular, subsidized self-employment.
    Keywords: dependent censoring, partial identification, difference-in-differences, unemployment duration, unemployment benefits
    JEL: C34 C41 J64
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp6650&r=ecm
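    The non-identification referred to above is the classical competing risks problem: with two latent durations T1 and T2, only Y = min(T1, T2) and the identity of the minimum are observed. Without assumptions on the dependence between risks, the marginal distribution F_1(t) = Pr(T1 <= t) is only partially identified; the familiar worst-case (Peterson-type) bounds are
      \[
        \Pr(Y \le t,\ T_1 \le T_2) \;\le\; F_1(t) \;\le\; \Pr(Y \le t),
      \]
    which can indeed be very wide. The milder dependence assumption proposed in the paper serves to tighten such bounds.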
  7. By: Philip Hans Franses (Econometric Institute Erasmus School of Economics, Erasmus University Rotterdam); Michael McAleer (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute, The Netherlands, Department of Quantitative Economics, Complutense University of Madrid, and Institute of Economic Research, Kyoto University.); Rianne Legerstee (Econometric Institute, Erasmus School of Economics, Erasmus University Rotterdam and Tinbergen Institute The Netherlands)
    Abstract: Macroeconomic forecasts are frequently produced, widely published, intensively discussed and comprehensively used. The formal evaluation of such forecasts has a long research history. Recently, a new angle to the evaluation of forecasts has been addressed, and in this review we analyse some recent developments from that perspective. The literature on forecast evaluation predominantly assumes that macroeconomic forecasts are generated from econometric models. In practice, however, most macroeconomic forecasts, such as those from the IMF, World Bank, OECD, Federal Reserve Board, Federal Open Market Committee (FOMC) and the ECB, are typically based on econometric model forecasts jointly with human intuition. This seemingly inevitable combination renders most of these forecasts biased and, as such, their evaluation becomes non-standard. In this review, we consider the evaluation of two forecasts in three situations: (i) the two forecasts are generated from two distinct econometric models; (ii) one forecast is generated from an econometric model and the other is obtained as a combination of a model and intuition; and (iii) the two forecasts are generated from two distinct (but unknown) combinations of different models and intuition. It is shown that alternative tools are needed to compare and evaluate the forecasts in each of these three situations. These alternative techniques are illustrated by comparing the forecasts from the (econometric) Staff of the Federal Reserve Board and the FOMC on inflation, unemployment and real GDP growth. It is shown that the FOMC does not forecast significantly better than the Staff, and that the intuition of the FOMC does not add significantly in forecasting the actual values of the economic fundamentals. This would seem to belie the purported expertise of the FOMC.
    Keywords: Macroeconomic forecasts, econometric models, human intuition, biased forecasts, forecast performance, forecast evaluation, forecast comparison.
    JEL: C22 C51 C52 C53 E27 E37
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:ucm:doicae:1214&r=ecm
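    The point of departure for such comparisons is the standard test of equal predictive accuracy between two forecast series. A minimal Python sketch of the classic Diebold-Mariano test is given below (squared-error loss and a simple truncated HAC variance; illustrative only, and the review argues that this standard approach needs adjustment when forecasts blend models with intuition):

      import numpy as np
      from scipy import stats

      def diebold_mariano(y, f1, f2, h=1):
          # Classic Diebold-Mariano test of equal predictive accuracy under
          # squared-error loss; the variance of the mean loss differential uses a
          # rectangular HAC estimator truncated at h-1 lags (h = forecast horizon).
          y, f1, f2 = np.asarray(y), np.asarray(f1), np.asarray(f2)
          d = (y - f1) ** 2 - (y - f2) ** 2   # loss differential
          T = len(d)
          u = d - d.mean()
          gammas = [u @ u / T] + [u[k:] @ u[:-k] / T for k in range(1, h)]
          var_mean = (gammas[0] + 2 * sum(gammas[1:])) / T
          dm = d.mean() / np.sqrt(var_mean)
          pval = 2 * (1 - stats.norm.cdf(abs(dm)))
          return dm, pval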

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.