
on Econometrics 
By:  Cees Diks (University of Amsterdam); Valentyn Panchenko (University of New South Wales); Dick van Dijk (Erasmus University Rotterdam) 
Abstract:  We propose new scoring rules based on partial likelihood for assessing the relative out-of-sample predictive accuracy of competing density forecasts over a specific region of interest, such as the left tail in financial risk management. By construction, existing scoring rules based on weighted likelihood or censored normal likelihood favor density forecasts with more probability mass in the given region, rendering predictive accuracy tests biased towards such densities. Our novel partial-likelihood-based scoring rules do not suffer from this problem, as illustrated by means of Monte Carlo simulations and an empirical application to daily S&P 500 index returns. 
Keywords:  density forecast evaluation; scoring rules; weighted likelihood ratio scores; partial likelihood; risk management 
JEL:  C12 C22 C52 C53 
Date:  2008–05–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20080050&r=ecm 
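The bias this abstract describes can be seen numerically. The sketch below (illustrative densities and weight function, not the authors' partial-likelihood rule) compares the expected weighted logarithmic score S(f; y) = w(y) log f(y) with w(y) = 1{y < -1} for the true density N(0,1) against a fatter N(0, 1.5^2) forecast, when the data actually follow N(0,1):

```python
import math

def norm_pdf(y, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def expected_weighted_score(pdf, threshold=-1.0, lo=-8.0, hi=8.0, n=4000):
    """E[w(Y) log f(Y)] under Y ~ N(0,1), with w(y) = 1{y < threshold}."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i * h
        if y < threshold:
            weight = 0.5 if i in (0, n) else 1.0  # trapezoid end weights
            total += weight * norm_pdf(y) * math.log(pdf(y))
    return total * h

score_true = expected_weighted_score(lambda y: norm_pdf(y, 0.0, 1.0))  # correct N(0,1)
score_wide = expected_weighted_score(lambda y: norm_pdf(y, 0.0, 1.5))  # fatter N(0,1.5^2)

# The misspecified, fatter-tailed forecast receives the higher weighted score
# even though the data-generating density is N(0,1) -- the bias the paper targets.
print(score_true < score_wide)
```

The misspecified forecast wins under this scoring rule precisely because it puts more mass in the left-tail region, which is the distortion the proposed partial-likelihood scores are designed to remove.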
By:  Jan J. J. Groen; George Kapetanios 
Abstract:  This paper revisits a number of data-rich prediction methods that are widely used in macroeconomic forecasting, such as factor models, Bayesian ridge regression, and forecast combinations, and compares these methods with a lesser-known alternative: partial least squares regression. In this method, linear, orthogonal combinations of a large number of predictor variables are constructed such that the linear combinations maximize the covariance between the target variable and each of the common components constructed from the predictor variables. We provide a theorem showing that when the data comply with a factor structure, principal components and partial least squares regressions provide asymptotically similar results. We also argue that forecast combinations can be interpreted as a restricted form of partial least squares regression. Monte Carlo experiments confirm our theoretical result that principal components and partial least squares regressions are asymptotically similar when the data have a factor structure. These experiments also indicate that when there is no factor structure in the data, partial least squares regression outperforms both principal components and Bayesian ridge regressions. Finally, we apply partial least squares, principal components, and Bayesian ridge regressions to a large panel of monthly U.S. macroeconomic and financial data to forecast CPI inflation, core CPI inflation, industrial production, unemployment, and the federal funds rate across different subperiods. The results indicate that partial least squares regression usually has the best out-of-sample performance compared with the two other data-rich prediction methods. 
Keywords:  Time-series analysis; Economic forecasting; Business cycles; Econometric models 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:fip:fednsr:327&r=ecm 
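The core construction in the abstract, a component whose weights maximize covariance with the target, can be sketched for a single PLS component (stdlib only; the toy data are illustrative, not from the paper):

```python
import math

def pls_one_component(X, y):
    """One-component partial least squares fit: weights proportional to
    cov(x_j, y), then least squares of y on the resulting component."""
    n, p = len(X), len(X[0])
    # Center predictors and target.
    xbar = [sum(row[j] for row in X) / n for j in range(p)]
    ybar = sum(y) / n
    Xc = [[X[i][j] - xbar[j] for j in range(p)] for i in range(n)]
    yc = [yi - ybar for yi in y]
    # Weight vector: the covariance-maximizing direction.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(wj * wj for wj in w))
    w = [wj / norm for wj in w]
    # Component scores and least-squares fit of y on the component.
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    beta = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return [ybar + beta * ti for ti in t]

X = [[1.0, 1.0], [2.0, -1.0], [3.0, 1.0], [4.0, -1.0]]
y = [2.0, 4.0, 6.0, 8.0]
fitted = pls_one_component(X, y)
```

Further components would be extracted the same way from the residual (deflated) predictors; the digest's point is that under a factor structure these components behave asymptotically like principal components.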
By:  Byeong U. Park (Seoul National University); Leopold Simar (Universite Catholique de Louvain and Toulouse School of Economics); Valentin Zelenyuk (Kyiv School of Economics and Kyiv Economics Institute) 
Abstract:  In this paper we propose a very flexible estimator in the context of truncated regression that does not require parametric assumptions. To do this, we adapt the theory of local maximum likelihood estimation. We provide the asymptotic results and illustrate the performance of our estimator on simulated and real data sets. Our estimator performs as well as the fully parametric estimator when the assumptions for the latter hold but, as expected, much better when they do not (provided the curse of dimensionality is not an issue). Overall, our estimator exhibits a fair degree of robustness to various deviations from linearity in the regression equation and to deviations from the specification of the error term. The approach should therefore prove very useful in practical applications, where the parametric form of the regression or of the distribution is rarely known. 
Keywords:  Nonparametric Truncated Regression, Local Likelihood 
JEL:  C14 C24 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:kse:dpaper:7&r=ecm 
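The idea behind local maximum likelihood for truncated regression can be sketched as follows: at a point x0, maximize a kernel-weighted truncated-normal log-likelihood. The crude grid search, fixed scale, and simulated data below are simplifications for illustration, not the authors' estimator:

```python
import math
import random

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def local_mle(xs, ys, x0, bandwidth=0.5, trunc=0.0, sigma=1.0):
    """Local-constant MLE of the regression level m(x0) when only y > trunc
    is observed; Gaussian kernel weights, grid search over candidate levels."""
    weights = [phi((x - x0) / bandwidth) for x in xs]
    best_m, best_ll = None, -float("inf")
    for k in range(401):                  # grid over m in [-2, 6]
        m = -2.0 + 0.02 * k
        ll = 0.0
        for w, y in zip(weights, ys):
            dens = phi((y - m) / sigma) / sigma          # normal density
            surv = 1.0 - Phi((trunc - m) / sigma)        # truncation correction
            ll += w * (math.log(dens) - math.log(surv))
        if ll > best_ll:
            best_m, best_ll = m, ll
    return best_m

# Simulated truncated sample: y = 1 + x + noise, observed only when y > 0.
random.seed(0)
xs, ys = [], []
while len(xs) < 400:
    x = random.uniform(0.0, 2.0)
    y = 1.0 + x + random.gauss(0.0, 1.0)
    if y > 0.0:
        xs.append(x)
        ys.append(y)

m_hat = local_mle(xs, ys, x0=1.0)  # true level at x0 = 1 is 2
```

The paper's estimator localizes the full likelihood (level and scale) and derives its asymptotics; this sketch only conveys the mechanics of kernel-weighting a truncated likelihood.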
By:  Luati, Alessandra; Proietti, Tommaso 
Abstract:  The paper establishes the conditions under which the generalised least squares estimator of the regression parameters is equivalent to the weighted least squares estimator. The equivalence conditions have interesting applications in local polynomial regression and kernel smoothing. Specifically, they make it possible to derive the optimal kernel associated with a particular covariance structure of the measurement error, where optimality is to be understood in the Gauss-Markov sense. For local polynomial regression, it is shown that there is a class of covariance structures, associated with non-invertible moving average processes of given orders, which yield the Epanechnikov and the Henderson kernels as the optimal kernels. 
Keywords:  Local polynomial regression; Epanechnikov kernel; Non-invertible moving average processes 
JEL:  C13 C14 C22 
Date:  2008–05–30 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:8910&r=ecm 
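For context, local polynomial regression with the Epanechnikov kernel K(u) = 0.75(1 - u^2) on |u| <= 1 is just a weighted least squares problem at each point; the sketch below solves the local linear case as a 2x2 system (illustrative only, not the paper's GLS/WLS equivalence derivation):

```python
def epanechnikov(u):
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def local_linear_fit(xs, ys, x0, bandwidth):
    """Fitted value at x0 from the weighted normal equations for
    y ~ a + b * (x - x0); the intercept a estimates m(x0)."""
    s0 = s1 = s2 = t0 = t1 = 0.0
    for x, y in zip(xs, ys):
        w = epanechnikov((x - x0) / bandwidth)
        d = x - x0
        s0 += w
        s1 += w * d
        s2 += w * d * d
        t0 += w * y
        t1 += w * d * y
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det

xs = [0.1 * i for i in range(11)]      # 0.0, 0.1, ..., 1.0
ys = [2.0 * x + 1.0 for x in xs]       # exactly linear data
fit = local_linear_fit(xs, ys, x0=0.5, bandwidth=0.3)
```

On exactly linear data the local linear fit reproduces the line regardless of the kernel, so `fit` equals 2.0 up to rounding; the paper's contribution is identifying when such kernel weights coincide with Gauss-Markov-optimal GLS weights.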
By:  Leopold Simar (Universite Catholique de Louvain and Toulouse School of Economics); Valentin Zelenyuk (Kyiv School of Economics and Kyiv Economics Institute) 
Abstract:  In this paper we extend the work of Simar (2007), introducing noise in nonparametric frontier models. We develop an approach that synthesizes the best features of the two main methods in the estimation of production efficiency. Specifically, our approach first allows for statistical noise, similar to Stochastic Frontier Analysis (in an even more flexible way), and second, it allows modelling multiple-input-multiple-output technologies without imposing parametric assumptions on the production relationship, similar to what is done in nonparametric methods (DEA, FDH, etc.). The methodology is based on the theory of local maximum likelihood estimation and extends recent work of Park, Kumbhakar, Simar and Tsionas (2007) and Park, Simar and Zelenyuk (2006). Our method is suitable for modelling and estimating marginal effects on the inefficiency level jointly with marginal effects of inputs. The approach is robust to heteroskedasticity and to various (unknown) distributions of statistical noise and inefficiency, despite assuming simple anchorage models. The method also improves on DEA/FDH estimators by making them quite robust to statistical noise and especially to outliers, which were the main problems of the original DEA/FDH. The procedure performs well in various simulated cases and is also illustrated on some real data sets. 
Keywords:  Stochastic Frontier, Nonparametric Frontier, Local Maximum Likelihood 
JEL:  C13 C14 C2 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:kse:dpaper:8&r=ecm 
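As a reference point for what the abstract's approach makes robust, here is the deterministic FDH (free disposal hull) efficiency score in its simplest single-input/single-output, output-oriented form, on made-up data (a baseline sketch, not the paper's estimator):

```python
def fdh_output_efficiency(inputs, outputs):
    """Output-oriented FDH: each unit is compared with the best observed
    output among units using no more input. Scores >= 1; 1.0 is efficient."""
    scores = []
    for xi, yi in zip(inputs, outputs):
        y_best = max(yj for xj, yj in zip(inputs, outputs) if xj <= xi)
        scores.append(y_best / yi)
    return scores

inputs = [1.0, 2.0, 3.0]
outputs = [1.0, 3.0, 2.0]
scores = fdh_output_efficiency(inputs, outputs)
print(scores)  # [1.0, 1.0, 1.5]
```

Because FDH envelops every observation, a single outlier or noisy point shifts the frontier; the paper's local-likelihood machinery addresses exactly that fragility.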
By:  Basu, A; Polsky, D; Manning, W G 
Abstract:  Under the assumption of no unmeasured confounders, a large literature exists on methods that can be used to estimate average treatment effects (ATE) from observational data, spanning regression models, propensity score adjustments using stratification, weighting or regression, and combinations of the two, as in doubly-robust estimators. However, comparisons of these alternative methods are sparse in the context of data generated via nonlinear models where treatment effects are heterogeneous, as in the case of healthcare cost data. In this paper, we compare the performance of alternative regression and propensity-score-based estimators in estimating average treatment effects on outcomes that are generated via nonlinear models. Using simulations, we find that in moderate-size samples (n = 5000), balancing on estimated propensity scores balances the covariate means across treatment arms but fails to balance higher-order moments and covariances amongst covariates, raising concern about its use in nonlinear outcome-generating mechanisms. We also find that, besides inverse-probability weighting (IPW) with propensity scores, no one estimator is consistent under all data-generating mechanisms. The IPW estimator is itself prone to inconsistency due to misspecification of the model for estimating propensity scores. Even when it is consistent, the IPW estimator is usually extremely inefficient. Thus care should be taken before naively applying any one estimator to estimate ATE in these data. We develop a recommendation for an algorithm which may help applied researchers to arrive at the optimal estimator. We illustrate the application of this algorithm and the performance of alternative methods on a cost dataset on breast cancer treatment. 
Keywords:  Propensity score, Nonlinear regression, Average treatment effect, Healthcare costs 
JEL:  C01 C21 I10 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:08/11&r=ecm 
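The IPW estimator discussed in the abstract weights each outcome by the inverse probability of the treatment actually received. The sketch below uses the true propensity score to stay self-contained; in practice it must be estimated, which is exactly the misspecification risk the authors highlight:

```python
import math
import random

def ipw_ate(treat, y, pscore):
    """Inverse-probability-weighted ATE estimate."""
    n = len(y)
    treated = sum(t * yi / e for t, yi, e in zip(treat, y, pscore)) / n
    control = sum((1 - t) * yi / (1 - e) for t, yi, e in zip(treat, y, pscore)) / n
    return treated - control

# Simulated observational data with a known treatment effect of 2.
random.seed(1)
treat, y, pscore = [], [], []
for _ in range(4000):
    x = random.uniform(0.0, 1.0)
    e = 1.0 / (1.0 + math.exp(-(x - 0.5)))            # true propensity score
    t = 1 if random.random() < e else 0
    outcome = 1.0 + 2.0 * t + x + random.gauss(0.0, 1.0)
    treat.append(t); y.append(outcome); pscore.append(e)

ate_hat = ipw_ate(treat, y, pscore)  # should be close to 2
```

With propensity scores near 0 or 1 the weights explode, which is one source of the extreme inefficiency the abstract reports.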
By:  Hammad Qureshi (Department of Economics, Ohio State University) 
Abstract:  Level vector autoregressive (VAR) models are used extensively in empirical macroeconomic research. However, estimated level VAR models may contain explosive roots, which is at odds with the widespread consensus among macroeconomists that roots are at most unity. This paper investigates the frequency of explosive roots in estimated level VAR models in the presence of stationary and nonstationary variables. Monte Carlo simulations based on datasets from Christiano, Eichenbaum & Evans (1999, 2005) and Eichenbaum & Evans (1995) reveal that the frequency of explosive roots exceeds 40% in the presence of unit roots. Even when all the variables are stationary, the frequency of explosive roots is substantial. Furthermore, this frequency increases significantly, to as much as 100%, when the estimated level VAR coefficients are corrected for small-sample bias. 
Keywords:  Level VAR Models, Explosive Roots, Bias Correction 
JEL:  F31 
Date:  2008–02 
URL:  http://d.repec.org/n?u=RePEc:osu:osuewp:0802&r=ecm 
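The explosive-root check behind the abstract reduces, for a bivariate VAR(1) y_t = A y_{t-1} + e_t, to asking whether the spectral radius of A exceeds one. A minimal sketch with illustrative coefficient matrices (not the paper's estimates):

```python
import math

def spectral_radius_2x2(A):
    """Largest eigenvalue modulus of a 2x2 matrix via trace and determinant."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4.0 * det
    if disc >= 0.0:
        r = math.sqrt(disc)
        return max(abs((tr + r) / 2.0), abs((tr - r) / 2.0))
    return math.sqrt(det)  # complex pair: common modulus is sqrt(det)

stable = [[0.5, 0.2], [0.1, 0.4]]
explosive = [[1.1, 0.0], [0.0, 0.5]]
print(spectral_radius_2x2(stable))     # ~0.6  -> stationary
print(spectral_radius_2x2(explosive))  # ~1.1  -> explosive root
```

For higher-order VARs the same check is applied to the companion matrix of the stacked system.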
By:  Klein, T.J. (Tilburg University, Center for Economic Research) 
Abstract:  A fundamental identification problem in program evaluation arises when idiosyncratic gains from participation and the treatment decision depend on each other. Imbens and Angrist (1994) were the first to exploit a monotonicity condition in order to identify a local average treatment effect parameter using instrumental variables. More recently, Heckman and Vytlacil (1999) suggested estimating a variety of treatment effect parameters using a local version of this approach. However, identification hinges on the same monotonicity assumption, which is fundamentally untestable. We investigate the sensitivity of the respective estimates to reasonable departures from monotonicity that are likely to be encountered in practice. Approximations to the respective bias terms are derived. In an empirical application, the bias is calculated and bias-corrected estimates are obtained. The accuracy of the approximation is investigated in a Monte Carlo study. 
Keywords:  Program evaluation; heterogeneity; identification; dummy endogenous variable; selection on unobservables; instrumental variables; monotonicity; nonseparable index selection model 
JEL:  C21 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200845&r=ecm 
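Under the monotonicity condition of Imbens and Angrist (1994), the local average treatment effect is the Wald ratio: the instrument's effect on the outcome divided by its effect on treatment take-up. A minimal sketch on made-up data (the paper studies what happens when monotonicity fails, which this sketch assumes away):

```python
def mean_given(z_values, values, z):
    sel = [v for zv, v in zip(z_values, values) if zv == z]
    return sum(sel) / len(sel)

def late_wald(z, d, y):
    """LATE as reduced form over first stage."""
    dy = mean_given(z, y, 1) - mean_given(z, y, 0)   # effect of Z on Y
    dd = mean_given(z, d, 1) - mean_given(z, d, 0)   # effect of Z on D
    return dy / dd

z = [0, 0, 0, 0, 1, 1, 1, 1]                  # instrument
d = [0, 0, 1, 1, 0, 1, 1, 1]                  # treatment taken
y = [1.0, 1.0, 3.0, 3.0, 1.0, 3.0, 3.0, 3.0]  # outcome
print(late_wald(z, d, y))  # 2.0: compliers gain 2 from treatment
```

When defiers exist, the ratio mixes complier and defier effects, producing the bias terms the paper approximates.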
By:  Blanc, J.P.C.; Hertog, D. den (Tilburg University, Center for Economic Research) 
Abstract:  In this paper, a general method is described to determine uncertainty intervals for performance measures of Markov chains, given an uncertainty region for the parameters of the Markov chains. We investigate the effects of uncertainties in the transition probabilities on the limiting distributions, on the state probabilities after n steps, on mean sojourn times in transient states, and on absorption probabilities for absorbing states. We show that the uncertainty effects can be calculated by solving linear programming problems in the case of interval uncertainty for the transition probabilities, and by second-order cone optimization in the case of ellipsoidal uncertainty. Many examples are given, especially Markovian queueing examples, to illustrate the theory. 
Keywords:  Markov chain; Interval uncertainty; Ellipsoidal uncertainty; Linear Programming; Second Order Cone Optimization. 
JEL:  C61 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200850&r=ecm 
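The paper's theme can be illustrated on the smallest possible case: a two-state chain with P(1→2) = p and P(2→1) = q has limiting probability pi_1 = q / (p + q). Under interval uncertainty the extremes of this ratio occur at corners of the box, so corner enumeration stands in here for the general linear programming approach (a sketch with illustrative intervals, not the paper's method):

```python
def pi1(p, q):
    """Limiting probability of state 1 for a two-state chain."""
    return q / (p + q)

def pi1_bounds(p_lo, p_hi, q_lo, q_hi):
    """Uncertainty interval for pi_1 over box uncertainty in (p, q)."""
    corners = [pi1(p, q) for p in (p_lo, p_hi) for q in (q_lo, q_hi)]
    return min(corners), max(corners)

lo, hi = pi1_bounds(0.1, 0.3, 0.2, 0.4)
print(lo, hi)  # 0.4 0.8
```

For larger chains the extremes need not sit at box corners row by row in such a transparent way, which is why the paper casts the problem as linear programs (interval uncertainty) and second-order cone programs (ellipsoidal uncertainty).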
By:  Vázquez, Miguel; SánchezÚbeda, Eugenio F.; Berzosa, Ana; Barquín, Julián 
Abstract:  We propose in this paper a model for the description of electricity spot prices, which we use to describe the dynamics of forward curves. The spot price model is based on a long-term/short-term decomposition, where the price is thought of as made up of two factors: a long-term equilibrium level and short-term movements around the equilibrium. We use a nonparametric approach to model the equilibrium level of power prices, and a mean-reverting process with GARCH volatility to describe the dynamics of the short-term component. The model is then used to derive the expression for the short-term dynamics of the forward curve implicit in spot prices. The rationale for the approach is that information concerning forward prices is not available in most power markets, and direct modeling of the forward curve is a difficult task. Moreover, power derivatives are typically written on forward contracts, usually based on average prices of forward contracts, which makes it difficult to obtain analytical expressions for the forward curves. The model of forward prices allows for the valuation of power derivatives, as well as the calculation of the volatilities and correlations required in risk management activities. Finally, the methodology is demonstrated in the context of the Spanish wholesale market. 
Keywords:  Forward curves; Power markets; GARCH volatility; Nonparametric regression 
JEL:  C32 D81 D84 C14 C15 
Date:  2008–02 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:8932&r=ecm 
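The short-term component described in the abstract can be sketched as a mean-reverting AR(1) deviation around the long-term level, with GARCH(1,1) conditional variance. All parameter values below are illustrative, not estimates from the Spanish market data:

```python
import math
import random

def simulate_short_term(n, ar=0.8, omega=0.02, alpha=0.1, beta=0.85, seed=42):
    """Mean-reverting short-term price deviation with GARCH(1,1) volatility."""
    random.seed(seed)
    x = 0.0
    h = omega / (1.0 - alpha - beta)   # start at the long-run variance
    path = []
    for _ in range(n):
        eps = math.sqrt(h) * random.gauss(0.0, 1.0)
        x = ar * x + eps                          # mean reversion toward 0
        h = omega + alpha * eps * eps + beta * h  # GARCH(1,1) variance update
        path.append(x)
    return path

path = simulate_short_term(2000)
```

Mean reversion keeps the simulated deviation fluctuating around zero (the long-term equilibrium), while the GARCH recursion produces the volatility clustering typical of power prices.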