nep-ecm New Economics Papers
on Econometrics
Issue of 2011‒03‒12
twelve papers chosen by
Sune Karlsson
Örebro University

  1. Difference based Ridge and Liu type Estimators in Semiparametric Regression Models By Esra Akdeniz Duran; Wolfgang Karl Härdle; Maria Osipenko
  2. Testing for change in mean of heteroskedastic time series By Mohamed Boutahar
  3. Bayesian Model Choice of Grouped t-copula By Xiaolin Luo; Pavel V. Shevchenko
  4. Record statistics for biased random walks, with an application to financial data By Gregor Wergen; Miro Bogner; Joachim Krug
  5. Multilevel Modelling with Spatial Effects By Bernard Fingleton; Luisa Corrado
  6. Exact maximum-likelihood method to detect patterns in real networks By Tiziano Squartini; Diego Garlaschelli
  7. On Not Evaluating Economic Models by Forecast Outcomes By Jennifer L. Castle; David F. Hendry
  8. Econometric Estimation of Distance Functions and Associated Measures of Productivity and Efficiency Change By C.J. O’Donnell
  9. Sensitivity of Matching-Based Program Evaluations to the Availability of Control Variables By Lechner, Michael; Wunsch, Conny
  10. Measuring Uncertainty and Disagreement in the European Survey and Professional Forecasters By Cristina Conflitti
  11. Can We Predict Recessions? By Don Harding; Adrian Pagan
  12. Exogenous Treatment and Endogenous Factors: Vanishing of Omitted Variable Bias on the Interaction Term By Olena Nizalova; Irina Murtazashvili

  1. By: Esra Akdeniz Duran; Wolfgang Karl Härdle; Maria Osipenko
    Abstract: We consider a difference based ridge regression estimator and a Liu type estimator of the regression parameters in the partial linear semiparametric regression model, y = Xβ + f + ε. Both estimators are analysed and compared in the sense of mean-squared error. We consider the case of independent errors with equal variance and give conditions under which the proposed estimators are superior to the unbiased difference based estimation technique. We extend the results to account for heteroscedasticity and autocovariance in the error terms. Finally, we illustrate the performance of these estimators with an application to the determinants of electricity consumption in Germany.
    Keywords: Difference based estimator, Differencing estimator, Differencing matrix, Liu estimator, Liu type estimator, Multicollinearity, Ridge regression estimator, Semiparametric model
    JEL: C14 C51
    Date: 2011–03
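    A minimal numerical sketch of the difference-based ridge idea on synthetic data (illustrative only; the data-generating process, the first-order differencing, and the ridge parameter k below are my own choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic partial linear model y = X beta + f(t) + eps (invented data).
n, p = 200, 3
t = np.sort(rng.uniform(0.0, 1.0, n))
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 1] + 0.05 * rng.normal(size=n)  # near-collinear columns
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + np.sin(2 * np.pi * t) + 0.3 * rng.normal(size=n)

# First-order differencing (rows ordered by t) approximately removes the
# smooth component f, since f(t_i) - f(t_{i-1}) is small for smooth f.
Dy = np.diff(y)
DX = np.diff(X, axis=0)

# Difference-based ridge estimator: (DX'DX + k I)^{-1} DX'Dy. The ridge
# penalty k stabilises the near-collinear design (k = 1 is an ad hoc choice).
k = 1.0
beta_ridge = np.linalg.solve(DX.T @ DX + k * np.eye(p), DX.T @ Dy)
print(beta_ridge)
```

    The individual coefficients on the two collinear columns are poorly identified, but the coefficient on the well-conditioned column and the identified sum of the collinear pair are recovered.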
  2. By: Mohamed Boutahar (GREQAM)
    Abstract: In this paper we consider a Lagrange Multiplier-type (LM) test to detect a change in the mean of a time series with heteroskedasticity of unknown form. We derive the limiting distribution of the test under the null, and prove its consistency against the alternative of either an abrupt or a smooth change in the mean. We also perform Monte Carlo simulations to analyze the size distortion and the power of the proposed test, and conclude that the test performs well for moderate sample sizes. We finally carry out an empirical application using the daily closing level of the S&P 500 stock index, in order to illustrate the usefulness of the proposed test.
    Date: 2011–02
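    For intuition, a toy self-normalised CUSUM statistic for a mean change (a crude illustration of the testing problem, not the paper's LM test; the scaling below ignores serial dependence):

```python
import numpy as np

def cusum_stat(x):
    """Max absolute cumulative deviation from the mean, crudely scaled."""
    dev = np.cumsum(x - x.mean())
    scale = np.sqrt(np.sum((x - x.mean()) ** 2))  # ignores serial dependence
    return np.max(np.abs(dev)) / scale

rng = np.random.default_rng(3)
no_break = rng.normal(size=500)                       # constant mean
with_break = np.concatenate([rng.normal(size=250),    # mean shifts by 1.0
                             rng.normal(loc=1.0, size=250)])
print(cusum_stat(no_break), cusum_stat(with_break))   # small vs. large
```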
  3. By: Xiaolin Luo; Pavel V. Shevchenko
    Abstract: One of the most popular copulas for modeling dependence structures is the t-copula. Recently the grouped t-copula was generalized to allow each group to have one member only, so that a priori grouping is not required and the dependence modeling is more flexible. This paper describes a Markov chain Monte Carlo (MCMC) method under the Bayesian inference framework for estimating and choosing t-copula models. Using historical data on foreign exchange (FX) rates as a case study, we find that Bayesian model choice criteria overwhelmingly favor the generalized t-copula. In addition, all the criteria agree on the second most likely model, and these inferences are consistent with classical likelihood ratio tests. Finally, we demonstrate the impact of model choice on the conditional Value-at-Risk for portfolios of six major FX rates.
    Date: 2011–03
  4. By: Gregor Wergen; Miro Bogner; Joachim Krug
    Abstract: We consider the occurrence of record-breaking events in random walks with asymmetric jump distributions. The statistics of records in symmetric random walks was previously analyzed by Majumdar and Ziff and is well understood. Unlike the case of symmetric jump distributions, in the asymmetric case the statistics of records depends on the choice of the jump distribution. We compute the record rate $P_n(c)$, defined as the probability for the $n$th value to be larger than all previous values, for a Gaussian jump distribution with standard deviation $\sigma$ that is shifted by a constant drift $c$. For small drift, in the sense of $c/\sigma \ll n^{-1/2}$, the correction to $P_n(c)$ grows proportionally to $\arctan(\sqrt{n})$ and saturates at the value $\frac{c}{\sqrt{2}\sigma}$. For large $n$ the record rate approaches a constant, which is approximately given by $1-(\sigma/(\sqrt{2\pi}c))\exp(-c^2/2\sigma^2)$ for $c/\sigma \gg 1$. These asymptotic results carry over to other continuous jump distributions with finite variance. As an application, we compare our analytical results to the record statistics of 366 daily stock prices from the Standard & Poor's 500 index. The biased random walk accounts quantitatively for the increase in the number of upper records due to the overall trend in the stock prices, and after detrending the number of upper records is in good agreement with the symmetric random walk. However, the number of lower records in the detrended data is significantly reduced by a mechanism that remains to be identified.
    Date: 2011–03
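    The record rate is easy to check by Monte Carlo (a sketch; the walk length, drift value, and number of trials are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def record_rate(n, c, sigma=1.0, trials=20000):
    """Probability that the n-th value of a Gaussian random walk with
    drift c and step width sigma exceeds all earlier values."""
    steps = rng.normal(loc=c, scale=sigma, size=(trials, n))
    walks = np.cumsum(steps, axis=1)
    return np.mean(walks[:, -1] > np.max(walks[:, :-1], axis=1))

r_sym = record_rate(100, 0.0)   # decays like 1/sqrt(pi*n), here about 0.056
r_bias = record_rate(100, 0.5)  # a positive drift lifts the rate substantially
print(r_sym, r_bias)
```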
  5. By: Bernard Fingleton (Department of Economics, University of Strathclyde.); Luisa Corrado (Faculty of Economics, University of Cambridge)
    Abstract: In multilevel modelling, interest in modelling the nested structure of hierarchical data has been accompanied by increasing attention to different forms of spatial interactions across different levels of the hierarchy. Neglecting such interactions is likely to create problems of inference, which typically assumes independence. In this paper we review approaches to multilevel modelling with spatial effects, and attempt to connect the two literatures, discussing the advantages and limitations of various approaches.
    Keywords: Multilevel Modelling, Spatial Effects, Fixed Effects, Random Effects, IGLS, FGS2SLS.
    JEL: C21 C31 R0
    Date: 2011–02
  6. By: Tiziano Squartini; Diego Garlaschelli
    Abstract: In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, their generation is still problematic. The existing approaches are either computationally demanding and beyond analytic control, or analytically accessible but highly approximate. Here we propose a solution to this long-standing problem by introducing an exact and fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property is as short as that required to compute the same property on the single original network. Our method reveals that the null behavior of various correlation properties is different from what was previously believed, and is highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them.
    Date: 2011–02–28
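    In the binary undirected case, the maximum-likelihood condition of such null models fixes one multiplier x_i per node so that expected degrees match observed ones. A small damped fixed-point sketch (my own toy implementation and degree sequence, not the authors' code):

```python
import numpy as np

def fit_multipliers(k, iters=5000):
    """Solve k_i = sum_{j != i} x_i x_j / (1 + x_i x_j) for x > 0
    by a damped fixed-point iteration."""
    k = np.asarray(k, float)
    x = k / np.sqrt(k.sum())  # common starting guess
    for _ in range(iters):
        xx = np.outer(x, x)
        # denom_i = sum_{j != i} x_j / (1 + x_i x_j)
        denom = (x / (1 + xx)).sum(axis=1) - x / (1 + x * x)
        x = 0.5 * x + 0.5 * k / denom
    return x

k = [1, 2, 2, 1]                  # degree sequence of a 4-node path graph
x = fit_multipliers(k)
p = np.outer(x, x) / (1 + np.outer(x, x))
np.fill_diagonal(p, 0.0)          # no self-loops
print(p.sum(axis=1))              # expected degrees match the observed ones
```

    With the connection probabilities p in hand, the expectation of any topological property under the null model follows analytically, which is the point of the method.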
  7. By: Jennifer L. Castle; David F. Hendry
    Abstract: Even in scientific disciplines, forecast failures occur. Four possible states of nature (a model is good or bad, and it forecasts well or badly) are examined using a forecast-error taxonomy, which traces the many possible sources of forecast errors. This analysis shows that a valid model can forecast badly, and a poor model can forecast successfully. Delineating the main causes of forecast failure reveals transformations that can correct failure without altering the ‘quality’ of the model in use. We conclude that judging a model by the accuracy of its forecasts is more like fools’ gold than a gold standard.
    Keywords: Model evaluation, forecast failure, model selection
    JEL: C52
    Date: 2011
  8. By: C.J. O’Donnell (CEPA - School of Economics, The University of Queensland)
    Abstract: The economically-relevant characteristics of multi-input multi-output production technologies can be represented using distance functions. The econometric approach to estimating these functions typically involves factoring out one of the outputs or inputs and estimating the resulting equation using maximum likelihood methods. A problem with this approach is that the outputs or inputs that are not factored out may be correlated with the composite error term. Fernandez, Koop and Steel (2000, p. 58) have developed a Bayesian solution to this so-called ‘endogeneity’ problem. O'Donnell (2007) has adapted the approach to the estimation of directional distance functions. This paper shows how the approach can be used to estimate Shephard (1953) distance functions and an associated index of total factor productivity (TFP) change. The TFP index is a new multiplicatively-complete index that satisfies most, if not all, economically-relevant tests and axioms from index number theory. The fact that it is multiplicatively-complete means it can be exhaustively decomposed into a measure of technical change and various measures of efficiency change. The decomposition can be implemented without the use of price data and without making any assumptions concerning either the optimising behaviour of firms or the degree of competition in product markets. The methodology is illustrated using state-level quantity data on U.S. agricultural inputs and outputs over the period 1960-2004. Results are summarised in terms of the characteristics (e.g., means) of estimated probability densities for measures of TFP change, technical change and output-oriented measures of efficiency change.
    Date: 2011–03
  9. By: Lechner, Michael (University of St. Gallen); Wunsch, Conny (University of St. Gallen)
    Abstract: Based on new, exceptionally informative and large German linked employer-employee administrative data, we investigate whether the omission of important control variables in matching estimation leads to biased impact estimates of typical active labour market programs for the unemployed. Such biases would lead to false policy conclusions about the cost-effectiveness of these expensive policies. Using newly developed Empirical Monte Carlo Study methods, we find that besides standard personal characteristics, information on individual health and on the firm characteristics of the last employer is particularly important for selection correction. Moreover, it is important to account for past performance on the labour market in a very detailed and flexible way. Information on job search behaviour, on the timing of unemployment and program start, as well as detailed regional characteristics are also relevant.
    Keywords: training, job search assistance, matching estimation, active labour market policies
    JEL: J68
    Date: 2011–03
  10. By: Cristina Conflitti
    Abstract: Survey data on expectations and economic forecasts play an important role in providing insight into how economic agents form their forecasts, what factors affect the accuracy of those forecasts, and why agents disagree in making them. Uncertainty is also important for understanding many areas of economic behavior. Several approaches to measuring uncertainty and disagreement have been proposed, but a lack of direct observations on both leads to ambiguous definitions of the two concepts. Using data from the European Survey of Professional Forecasters (SPF), which provides point forecasts and probability density forecasts, we consider several measures of uncertainty and disagreement at both the aggregate and the individual level. We avoid distributional assumptions by not imposing any functional form on the individual probability densities, instead approximating each histogram by a piecewise linear function. We extend earlier work to the European context for three macroeconomic variables: GDP, inflation and unemployment, and we analyze how these measures perform across different forecasting horizons. Looking at point estimates while disregarding the individual probability information mismeasures both disagreement and uncertainty. Comparing the three variables, uncertainty and disagreement are higher for GDP and inflation than for unemployment, at both short and long horizons. Beyond this, it is difficult to find common behavior of uncertainty and disagreement across variables: the results do not support the view that if uncertainty or disagreement is relatively high for one variable, it is also high for the others.
    Date: 2010–11
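    The histogram-based calculation can be sketched as follows (invented bins and probabilities; mass is treated as uniform within each bin, one simple version of the piecewise-linear approximation described above):

```python
import numpy as np

edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])  # outcome bins (e.g. inflation, %)
probs = np.array([                            # one probability row per forecaster
    [0.1, 0.6, 0.2, 0.1],
    [0.0, 0.3, 0.5, 0.2],
    [0.2, 0.5, 0.3, 0.0],
])

mids = 0.5 * (edges[:-1] + edges[1:])
widths = np.diff(edges)
means = probs @ mids
# Uniform mass within a bin adds width^2 / 12 to the second moment.
variances = probs @ (mids ** 2 + widths ** 2 / 12.0) - means ** 2

disagreement = means.var()      # dispersion of individual point forecasts
uncertainty = variances.mean()  # average individual forecast uncertainty
print(disagreement, uncertainty)
```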
  11. By: Don Harding (La Trobe University); Adrian Pagan (QUT and UTS)
    Abstract: The fact that the Global Financial Crisis, and the Great Recession it ushered in, was largely unforeseen has led to the common opinion that macroeconomic models and analysis are deficient in some way. Of course, it has probably always been true that businessmen, journalists and politicians agree on the proposition that economists can’t forecast recessions. Yet we see an enormous published literature presenting results which suggest it is possible to do so, either with some new model or some new estimation method, e.g. Kaufman (2010), Galvao (2006), Dueker (2005), Wright (2006) and Moneta (2005). Moreover, there seems to be no shortage of papers still emerging that make claims along these lines. So a question that naturally arises is how one is to reconcile the existence of an expanding literature on predicting recessions with the scepticism noted above.
    Keywords: Global Financial Crisis, Great Recession
    Date: 2010–12–09
  12. By: Olena Nizalova (Kyiv School of Economics, Kyiv Economic Institute); Irina Murtazashvili (University of Pittsburgh)
    Abstract: Whether interested in the differential impact of a particular factor in various institutional settings or in the heterogeneous effect of a policy or random experiment, the empirical researcher confronts a problem if the factor of interest is correlated with an omitted variable. This paper presents the circumstances under which it is possible to arrive at a consistent estimate of this effect. We find that if the source of heterogeneity and the omitted variable are jointly independent of the policy or treatment, then the OLS estimate of the coefficient on the interaction term between the treatment and the endogenous factor is consistent.
    Keywords: treatment effect; heterogeneity; policy evaluation; random experiments; omitted variable bias
    JEL: C21
    Date: 2011–03
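    The result is easy to illustrate by simulation (a sketch with invented coefficients, not the authors' setup: y = 1 + T + 2x + 0.5·Tx + q + ε, where q is omitted, x is correlated with q, and the treatment T is randomized):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200_000
q = rng.normal(size=n)                        # omitted variable
x = q + rng.normal(size=n)                    # endogenous factor: corr(x, q) > 0
T = rng.integers(0, 2, size=n).astype(float)  # randomized treatment
y = 1 + T + 2 * x + 0.5 * T * x + q + rng.normal(size=n)

# OLS of y on (1, T, x, T*x), with q omitted.
Z = np.column_stack([np.ones(n), T, x, T * x])
coefs, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(coefs)
# The coefficient on x absorbs omitted-variable bias (about 2.5 instead of 2),
# but the interaction coefficient stays near its true value of 0.5.
```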

This nep-ecm issue is ©2011 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at For comments please write to the director of NEP, Marco Novarese at <>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.