
on Econometrics 
By:  Esra Akdeniz Duran; Wolfgang Karl Härdle; Maria Osipenko 
Abstract:  We consider a difference-based ridge regression estimator and a Liu-type estimator of the regression parameters in the partial linear semiparametric regression model, y = Xβ + f + ε. Both estimators are analysed and compared in the sense of mean squared error. We consider the case of independent errors with equal variance and give conditions under which the proposed estimators are superior to the unbiased difference-based estimation technique. We extend the results to account for heteroscedasticity and autocovariance in the error terms. Finally, we illustrate the performance of these estimators with an application to the determinants of electricity consumption in Germany. 
Keywords:  Difference-based estimator; Differencing estimator; Differencing matrix; Liu estimator; Liu-type estimator; Multicollinearity; Ridge regression estimator; Semiparametric model 
JEL:  C14 C51 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2011014&r=ecm 
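The difference-based ridge idea described above is straightforward to sketch numerically: ordering the data by the nonparametric covariate and first-differencing removes the smooth component f, and a ridge penalty then stabilises the coefficients under multicollinearity. A minimal simulation sketch (the data, the differencing order, and the ridge constant k are all illustrative choices, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a partial linear model y = X beta + f(t) + eps, with t sorted
# so that first differencing (approximately) removes the smooth part f.
n, p = 200, 3
t = np.sort(rng.uniform(0, 1, n))
X = rng.normal(size=(n, p))
X[:, 2] = X[:, 1] + 0.05 * rng.normal(size=n)   # near-collinear columns
beta = np.array([1.0, 2.0, -1.0])
f = np.sin(2 * np.pi * t)                        # smooth nuisance function
y = X @ beta + f + 0.3 * rng.normal(size=n)

# First-order differencing matrix D ((n-1) x n): (D y)_i = y_{i+1} - y_i.
D = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
Xd, yd = D @ X, D @ y

# Difference-based ridge estimator: (Xd'Xd + k I)^{-1} Xd' yd.
k = 1.0
beta_ridge = np.linalg.solve(Xd.T @ Xd + k * np.eye(p), Xd.T @ yd)
beta_ols = np.linalg.solve(Xd.T @ Xd, Xd.T @ yd)  # unbiased difference-based estimator
print(beta_ridge, beta_ols)
```

At k = 0 the ridge estimator reduces to the unbiased difference-based estimator; a positive k trades bias for variance, which is exactly the mean-squared-error comparison the paper formalises.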
By:  Mohamed Boutahar (GREQAM) 
Abstract:  In this paper we consider a Lagrange Multiplier-type (LM) test to detect a change in the mean of time series with heteroskedasticity of unknown form. We derive the limiting distribution under the null, and prove the consistency of the test against the alternative of either abrupt or smooth changes in the mean. We also perform some Monte Carlo simulations to analyze the size distortion and the power of the proposed test. We conclude that for moderate sample sizes the test performs well. We finally carry out an empirical application using the daily closing level of the S&P 500 stock index, in order to illustrate the usefulness of the proposed test. 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1102.5431&r=ecm 
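As a rough illustration of detecting a change in mean, the sketch below uses a standardized CUSUM-type statistic; this is a simplified stand-in in the spirit of the paper's LM test, not its exact statistic (in particular, it does not implement the correction for heteroskedasticity of unknown form):

```python
import numpy as np

rng = np.random.default_rng(1)

def cusum_stat(x):
    """Standardized CUSUM statistic for a change in mean.

    Large values indicate that partial sums of the demeaned series drift
    further than a constant-mean series would allow.
    """
    n = len(x)
    s = np.cumsum(x - x.mean())
    return np.max(np.abs(s)) / (x.std(ddof=1) * np.sqrt(n))

stable = rng.normal(0, 1, 500)                       # constant mean
shifted = np.concatenate([rng.normal(0, 1, 250),
                          rng.normal(1.5, 1, 250)])  # abrupt mean change
print(cusum_stat(stable), cusum_stat(shifted))
```

The statistic stays small for the constant-mean series and becomes large after the break; the paper's contribution is to make this kind of test valid when the error variance changes over time in an unknown way.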
By:  Xiaolin Luo; Pavel V. Shevchenko 
Abstract:  One of the most popular copulas for modeling dependence structures is the t-copula. Recently the grouped t-copula was generalized to allow each group to have only one member, so that a priori grouping is not required and the dependence modeling is more flexible. This paper describes a Markov chain Monte Carlo (MCMC) method under the Bayesian inference framework for estimating and choosing t-copula models. Using historical data of foreign exchange (FX) rates as a case study, we found that Bayesian model choice criteria overwhelmingly favor the generalized t-copula. In addition, all the criteria also agree on the second most likely model, and these inferences are all consistent with classical likelihood ratio tests. Finally, we demonstrate the impact of model choice on the conditional Value-at-Risk for portfolios of six major FX rates. 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1103.0606&r=ecm 
By:  Gregor Wergen; Miro Bogner; Joachim Krug 
Abstract:  We consider the occurrence of record-breaking events in random walks with asymmetric jump distributions. The statistics of records in symmetric random walks was previously analyzed by Majumdar and Ziff and is well understood. Unlike the case of symmetric jump distributions, in the asymmetric case the statistics of records depends on the choice of the jump distribution. We compute the record rate $P_n(c)$, defined as the probability for the $n$th value to be larger than all previous values, for a Gaussian jump distribution with standard deviation $\sigma$ that is shifted by a constant drift $c$. For small drift, in the sense of $c/\sigma \ll n^{-1/2}$, the correction to $P_n(c)$ grows proportional to $\arctan(\sqrt{n})$ and saturates at the value $\frac{c}{\sqrt{2} \sigma}$. For large $n$ the record rate approaches a constant, which is approximately given by $1-(\sigma/\sqrt{2\pi}c)\exp(-c^2/2\sigma^2)$ for $c/\sigma \gg 1$. These asymptotic results carry over to other continuous jump distributions with finite variance. As an application, we compare our analytical results to the record statistics of 366 daily stock prices from the Standard & Poor's 500 index. The biased random walk accounts quantitatively for the increase in the number of upper records due to the overall trend in the stock prices, and after detrending the number of upper records is in good agreement with the symmetric random walk. However, the number of lower records in the detrended data is significantly reduced by a mechanism that remains to be identified. 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1103.0893&r=ecm 
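The record rate $P_n(c)$ is easy to approximate by direct simulation, which provides a check on the asymptotics quoted in the abstract. A minimal Monte Carlo sketch (the sample size, drift value, and number of trials are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def record_rate(n, c, sigma=1.0, trials=20000):
    """Monte Carlo estimate of P_n(c): the probability that the n-th value
    of a Gaussian random walk with drift c exceeds all previous values."""
    steps = rng.normal(c, sigma, size=(trials, n))
    walks = np.cumsum(steps, axis=1)
    is_record = walks[:, -1] > walks[:, :-1].max(axis=1)
    return is_record.mean()

n = 100
p0 = record_rate(n, 0.0)   # symmetric case: decays like 1/sqrt(pi*n) ~ 0.056
pc = record_rate(n, 0.5)   # positive drift raises the record rate markedly
print(p0, pc)
```

For zero drift the estimate sits near the universal symmetric-walk value, while even a moderate drift pushes the record rate toward the drift-dominated constant described in the abstract.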
By:  Bernard Fingleton (Department of Economics, University of Strathclyde.); Luisa Corrado (Faculty of Economics, University of Cambridge) 
Abstract:  In multilevel modelling, interest in modelling the nested structure of hierarchical data has been accompanied by increasing attention to different forms of spatial interactions across different levels of the hierarchy. Neglecting such interactions is likely to create problems of inference, which typically assumes independence. In this paper we review approaches to multilevel modelling with spatial effects, and attempt to connect the two literatures, discussing the advantages and limitations of various approaches. 
Keywords:  Multilevel Modelling, Spatial Effects, Fixed Effects, Random Effects, IGLS, FGS2SLS 
JEL:  C21 C31 R0 
Date:  2011–02 
URL:  http://d.repec.org/n?u=RePEc:str:wpaper:1105&r=ecm 
By:  Tiziano Squartini; Diego Garlaschelli 
Abstract:  In order to detect patterns in real networks, randomized graph ensembles that preserve only part of the topology of an observed network are systematically used as fundamental null models. However, their generation is still problematic. The existing approaches are either computationally demanding and beyond analytic control, or analytically accessible but highly approximate. Here we propose a solution to this longstanding problem by introducing an exact and fast method that allows one to obtain expectation values and standard deviations of any topological property analytically, for any binary, weighted, directed or undirected network. Remarkably, the time required to obtain the expectation value of any property is as short as that required to compute the same property on the single original network. Our method reveals that the null behavior of various correlation properties is different from what was previously believed, and highly sensitive to the particular network considered. Moreover, our approach shows that important structural properties (such as the modularity used in community detection problems) are currently based on incorrect expressions, and provides the exact quantities that should replace them. 
Date:  2011–02–28 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2011/07&r=ecm 
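For the simplest case, a binary undirected network with given degrees, the kind of exact analytical null model described here amounts to solving p_ij = x_i x_j / (1 + x_i x_j) so that expected degrees match the observed ones; expectations of topological properties then follow analytically from the p_ij. A sketch on a toy graph (the damped fixed-point update and the starting values are my own choices for illustration, not necessarily the authors' algorithm):

```python
import numpy as np

# Adjacency matrix of a small undirected network (5 nodes, 5 links).
A = np.array([[0, 1, 1, 1, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
k = A.sum(axis=1)                        # observed degrees: [3, 2, 2, 2, 1]

# Maximum-entropy null model preserving expected degrees:
# p_ij = x_i x_j / (1 + x_i x_j), with <k_i> = sum_{j != i} p_ij = k_i.
x = k / np.sqrt(k.sum())                 # heuristic starting point
for _ in range(5000):
    xx = np.outer(x, x)
    # s_i = sum_{j != i} x_j / (1 + x_i x_j)
    s = (x[None, :] / (1 + xx)).sum(axis=1) - x / (1 + x**2)
    x = 0.5 * x + 0.5 * k / s            # damped fixed-point update

p = np.outer(x, x) / (1 + np.outer(x, x))
np.fill_diagonal(p, 0.0)

# Expectations of topological properties now follow analytically,
# e.g. the expected number of links sum_{i<j} p_ij.
exp_links = p.sum() / 2
print(p.sum(axis=1), exp_links)
```

Once the p_ij are available, the expected value of any property that is a function of the adjacency matrix can be computed directly, without sampling a single randomized network; that is the speed advantage claimed in the abstract.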
By:  Jennifer L. Castle; David F. Hendry 
Abstract:  Even in scientific disciplines, forecast failures occur. Four possible states of nature (a model is good or bad, and it forecasts well or badly) are examined using a forecast-error taxonomy, which traces the many possible sources of forecast errors. This analysis shows that a valid model can forecast badly, and a poor model can forecast successfully. Delineating the main causes of forecast failure reveals transformations that can correct failure without altering the ‘quality’ of the model in use. We conclude that judging a model by the accuracy of its forecasts is more like fools’ gold than a gold standard. 
Keywords:  Model evaluation, forecast failure, model selection 
JEL:  C52 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:oxf:wpaper:538&r=ecm 
By:  C.J. O’Donnell (CEPA  School of Economics, The University of Queensland) 
Abstract:  The economically-relevant characteristics of multi-input multi-output production technologies can be represented using distance functions. The econometric approach to estimating these functions typically involves factoring out one of the outputs or inputs and estimating the resulting equation using maximum likelihood methods. A problem with this approach is that the outputs or inputs that are not factored out may be correlated with the composite error term. Fernandez, Koop and Steel (2000, p. 58) have developed a Bayesian solution to this so-called ‘endogeneity’ problem. O'Donnell (2007) has adapted the approach to the estimation of directional distance functions. This paper shows how the approach can be used to estimate Shephard (1953) distance functions and an associated index of total factor productivity (TFP) change. The TFP index is a new multiplicatively-complete index that satisfies most, if not all, economically-relevant tests and axioms from index number theory. The fact that it is multiplicatively-complete means it can be exhaustively decomposed into a measure of technical change and various measures of efficiency change. The decomposition can be implemented without the use of price data and without making any assumptions concerning either the optimising behaviour of firms or the degree of competition in product markets. The methodology is illustrated using state-level quantity data on U.S. agricultural inputs and outputs over the period 1960–2004. Results are summarised in terms of the characteristics (e.g., means) of estimated probability densities for measures of TFP change, technical change and output-oriented measures of efficiency change. 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:qld:uqcepa:61&r=ecm 
By:  Lechner, Michael (University of St. Gallen); Wunsch, Conny (University of St. Gallen) 
Abstract:  Based on new, exceptionally informative and large German linked employer-employee administrative data, we investigate whether the omission of important control variables in matching estimation leads to biased impact estimates of typical active labour market programs for the unemployed. Such biases would lead to false policy conclusions about the cost-effectiveness of these expensive policies. Using newly developed Empirical Monte Carlo Study methods, we find that, besides standard personal characteristics, information on individual health and firm characteristics of the last employer is particularly important for selection correction. Moreover, it is important to account for past performance on the labour market in a very detailed and flexible way. Information on job search behaviour, timing of unemployment and program start, as well as detailed regional characteristics, is also relevant. 
Keywords:  training, job search assistance, matching estimation, active labour market policies 
JEL:  J68 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5553&r=ecm 
By:  Cristina Conflitti 
Abstract:  Survey data on expectations and economic forecasts play an important role in providing better insights into how economic agents make their own forecasts, what factors affect the accuracy of these forecasts and why agents disagree in making them. Uncertainty is also important for a better understanding of many areas of economic behavior. Several approaches to measuring uncertainty and disagreement have been proposed, but a lack of direct observations and information on uncertainty and disagreement leads to ambiguous definitions of these two concepts. Using data from the European Survey of Professional Forecasters (SPF), which provides forecast point estimates and probability density forecasts, we consider several measures of uncertainty and disagreement at both the aggregate and the individual level. We overcome the problem associated with distributional assumptions on probability density forecasts by using an approach that does not assume any functional form for the individual probability densities but instead approximates the histogram by a piecewise linear function. We extend earlier work to the European context for three macroeconomic variables: GDP, inflation and unemployment. Moreover, we analyze how these measures perform with respect to different forecasting horizons. Looking at point estimates while disregarding the individual probability information yields misestimates of disagreement and uncertainty. Comparing the three macroeconomic variables of interest, uncertainty and disagreement are higher for GDP and inflation than for unemployment, at both short and long horizons. Beyond this, it is difficult to find common behavior of uncertainty and disagreement across the variables: the results do not support the view that, if uncertainty or disagreement is relatively high for one of the variables, it is the same for the others. 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/64795&r=ecm 
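The histogram-based moment calculations behind such measures can be illustrated in a few lines. For simplicity the sketch treats probability mass as uniform within each bin (a piecewise-constant simplification of the piecewise-linear approximation described above), takes uncertainty as the average individual forecast variance and disagreement as the cross-forecaster dispersion of point means (one common convention; the paper considers several measures), and uses made-up histogram data:

```python
import numpy as np

# Each forecaster reports a histogram: probabilities over common bins.
edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # bin edges (e.g. inflation in %)
mids = 0.5 * (edges[:-1] + edges[1:])
probs = np.array([[0.1, 0.6, 0.2, 0.1],       # forecaster 1
                  [0.0, 0.3, 0.5, 0.2],       # forecaster 2
                  [0.2, 0.5, 0.3, 0.0]])      # forecaster 3

means = probs @ mids
# Variance of a mixture of uniforms on the bins: E[X^2] adds the
# within-bin variance width^2/12 to the squared midpoints.
within = (edges[1:] - edges[:-1])**2 / 12.0
variances = probs @ (mids**2 + within) - means**2

uncertainty = variances.mean()   # average individual forecast variance
disagreement = means.var()       # dispersion of individual point means
print(means, uncertainty, disagreement)
```

Working from the full histograms rather than from point forecasts alone is what separates uncertainty (within-forecaster spread) from disagreement (between-forecaster spread), which is the distinction the abstract emphasises.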
By:  Don Harding (La Trobe University); Adrian Pagan (QUT and UTS) 
Abstract:  The fact that the Global Financial Crisis, and the Great Recession it ushered in, was largely unforeseen has led to the common opinion that macroeconomic models and analysis are deficient in some way. Of course it has probably always been true that businessmen, journalists and politicians have agreed on the proposition that economists can’t forecast recessions. Yet we see an enormous published literature that presents results which suggest it is possible to do so, either with some new model or some new estimation method, e.g. Kaufman (2010), Galvao (2006), Dueker (2005), Wright (2006) and Moneta (2005). Moreover, there seems to be no shortage of papers still emerging that make claims along these lines. So a question that naturally arises is how one is to reconcile the existence of an expanding literature on predicting recessions with the scepticism noted above. 
Keywords:  Global Financial Crisis, Great Recession 
Date:  2010–12–09 
URL:  http://d.repec.org/n?u=RePEc:qut:auncer:2010_16&r=ecm 
By:  Olena Nizalova (Kyiv School of Economics, Kyiv Economic Institute); Irina Murtazashvili (University of Pittsburgh) 
Abstract:  Whether interested in the differential impact of a particular factor in various institutional settings or in the heterogeneous effect of a policy or random experiment, the empirical researcher confronts a problem if the factor of interest is correlated with an omitted variable. This paper presents the circumstances under which it is possible to arrive at a consistent estimate of this effect. We find that if the source of heterogeneity and the omitted variable are jointly independent of the policy or treatment, then the OLS estimate of the coefficient on the interaction term between the treatment and the endogenous factor is consistent. 
Keywords:  treatment effect; heterogeneity; policy evaluation; random experiments; omitted variable bias 
JEL:  C21 
Date:  2011–03 
URL:  http://d.repec.org/n?u=RePEc:kse:dpaper:37&r=ecm 
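The consistency claim is easy to check by simulation: with a randomized treatment T independent of both the endogenous factor W and the omitted variable V, OLS that omits V still recovers the interaction coefficient, even though the coefficient on W itself is biased. A sketch (all parameter values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 100_000
T = rng.integers(0, 2, n)            # randomized treatment, independent of (W, V)
V = rng.normal(size=n)               # omitted variable
W = 0.8 * V + rng.normal(size=n)     # endogenous factor, correlated with V
# True model: interaction coefficient on T*W is 2.0; V enters with weight 1.5.
y = 1.0 + 0.5 * T + 1.0 * W + 2.0 * T * W + 1.5 * V + rng.normal(size=n)

# OLS of y on (1, T, W, T*W), omitting V entirely.
Z = np.column_stack([np.ones(n), T, W, T * W])
coef = np.linalg.lstsq(Z, y, rcond=None)[0]
print(coef)  # coefficient on W is biased away from 1, but T*W recovers ~2
```

The omitted-variable bias loads onto the main effect of W (whose coefficient drifts well away from its true value of 1), while the interaction coefficient stays centred on its true value, which is the mechanism behind the paper's result.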