
on Econometrics 
By:  Min Seong Kim (Department of Economics, Ryerson University, Toronto, Canada); Yixiao Sun (Department of Economics, University of California, San Diego)
Abstract:  This paper studies robust inference for linear panel models with fixed effects in the presence of heteroskedasticity and spatiotemporal dependence of unknown forms. We propose a bivariate kernel covariance estimator that is flexible enough to nest existing estimators as special cases for certain choices of bandwidths. For distributional approximations, we embed the level of smoothing and the sample size in two different limiting sequences. In the first case, where the level of smoothing increases with the sample size, the proposed covariance estimator is consistent and the associated Wald statistic converges to a chi-square distribution. We show that our covariance estimator improves upon existing estimators in terms of robustness and efficiency. In the second case, where the level of smoothing is fixed, the covariance estimator has a random limit, and we show by asymptotic expansion that the limiting distribution of the Wald statistic depends on the bandwidth parameters, the kernel function, and the number of restrictions being tested. As this distribution is nonstandard, we establish the validity of a convenient F-approximation to it. For bandwidth selection, we employ and optimize a modified asymptotic mean square error criterion. The flexibility of our estimator and the proposed bandwidth selection procedure make our estimator adaptive to the dependence structure. This adaptiveness effectively automates the selection of covariance estimators. Simulation results show that our proposed testing procedure works reasonably well in finite samples.
Keywords:  Adaptiveness, HAC estimator, F-approximation, Fixed-smoothing asymptotics, Increasing-smoothing asymptotics, Panel data, Optimal bandwidth, Robust inference, Spatiotemporal dependence
JEL:  C13 C14 C23 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:rye:wpaper:wp029&r=ecm 
By:  Jiti Gao; Peter C.B. Phillips 
Abstract:  A system of multivariate semiparametric nonlinear time series models is studied with possible dependence structures and nonstationarities in the parametric and nonparametric components. The parametric regressors may be endogenous while the nonparametric regressors are assumed to be strictly exogenous. The parametric regressors may be stationary or nonstationary and the nonparametric regressors are nonstationary integrated time series. Semiparametric least squares (SLS) estimation is considered and its asymptotic properties are derived. Due to endogeneity in the parametric regressors, SLS is not consistent for the parametric component and a semiparametric instrumental variable (SIV) method is proposed instead. Under certain regularity conditions, the SIV estimator of the parametric component is shown to have a limiting normal distribution. The rate of convergence in the parametric component depends on the properties of the regressors. The conventional √n rate may apply even when nonstationarity is involved in both sets of regressors. 
Keywords:  Endogeneity; integrated process; nonstationarity; partial linear model; simultaneity; vector semiparametric regression.
JEL:  C23 C25 
Date:  2011–09–05 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201117&r=ecm 
By:  Jiti Gao; Dag Tjøstheim; Jiying Yin 
Abstract:  This paper treats estimation in a class of new nonlinear threshold autoregressive models with both a stationary and a unit root regime. The existing literature on nonstationary threshold models has focused on models where the nonstationarity can be removed by differencing and/or where the threshold variable is stationary. This is not the case for the process we consider, and nonstandard estimation problems are the result. This paper proposes a parameter estimation method for such nonlinear threshold autoregressive models using the theory of null recurrent Markov chains. Under certain assumptions, we show that the ordinary least squares (OLS) estimators of the parameters involved are asymptotically consistent. Furthermore, it can be shown that the OLS estimator of the coefficient parameter involved in the stationary regime can still be asymptotically normal, while the OLS estimator of the coefficient parameter involved in the nonstationary regime has a nonstandard asymptotic distribution. In the limit, the rate of convergence in the stationary regime is asymptotically proportional to n^{1/4}, whereas it is n in the nonstationary regime. The proposed theory and estimation method are illustrated by both simulated data and a real data example.
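To make the regime-wise OLS idea concrete, here is a minimal sketch of fitting a two-regime threshold autoregression by separate least squares in each regime. This is a simplified stand-in for the authors' stationary/unit-root setup, not their estimator; the function name and the fixed threshold at zero are illustrative assumptions.

```python
def tar_ols(y, threshold=0.0):
    """Regime-wise OLS for a two-regime threshold autoregression
        y_t = rho_1 * y_{t-1} + e_t  if y_{t-1} <= threshold
        y_t = rho_2 * y_{t-1} + e_t  if y_{t-1} > threshold.
    Each slope is estimated by OLS on the observations in its regime."""
    below = [(y[t - 1], y[t]) for t in range(1, len(y)) if y[t - 1] <= threshold]
    above = [(y[t - 1], y[t]) for t in range(1, len(y)) if y[t - 1] > threshold]

    def rho(pairs):
        # No-intercept OLS slope: sum(x*y) / sum(x*x)
        num = sum(x * yy for x, yy in pairs)
        den = sum(x * x for x, _ in pairs)
        return num / den

    return rho(below), rho(above)
```

On a series that is (noiselessly) a unit root above the threshold, the upper-regime estimate is exactly one; the paper's contribution is the asymptotic theory for this kind of estimator when one regime is nonstationary.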
Keywords:  Autoregressive process; null-recurrent process; semiparametric model; threshold time series; unit root structure.
JEL:  C14 C22 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201121&r=ecm 
By:  Heinen, Florian; Willert, Juliane 
Abstract:  We consider the detection of a change in persistence of a long range dependent time series. The usual approach is to use one-shot tests to detect a change in persistence a posteriori in a historical data set. However, as breaks can occur at any given time and data arrive steadily, it is desirable to detect a change in persistence as soon as possible. We propose the use of a MOSUM-type test which allows sequential application whenever new data arrive. We derive the asymptotic distribution of the test statistic and prove consistency. We further study the finite sample behavior of the test and provide an empirical application.
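The moving-sum (MOSUM) idea behind such tests can be sketched as follows. This is the generic i.i.d.-flavoured statistic only, not the authors' long-range-dependent version, and the function name and standardization are illustrative assumptions.

```python
import math

def mosum_stats(x, h):
    """Moving-sum (MOSUM) statistics: for each window of length h,
    the standardized gap between the window sum and its expected
    share h * mean of the series. Large absolute values hint at a
    change in the series' behaviour inside that window."""
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    stats = []
    for t in range(h, n + 1):
        window_sum = sum(x[t - h:t])
        stats.append((window_sum - h * mean) / (sd * math.sqrt(h)))
    return stats
```

In a sequential setting one would recompute only the newest window statistic as each observation arrives and compare it with a critical boundary.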
Keywords:  Change in persistence, long range dependency, MOSUM test 
JEL:  C12 C22 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:han:dpaper:dp479&r=ecm 
By:  Jia Chen; Jiti Gao; Degui Li 
Abstract:  In this paper, we consider semiparametric estimation in a partially linear single-index panel data model with fixed effects. Without taking differences explicitly, we propose using a semiparametric minimum average variance estimation (SMAVE) based on a dummy-variable method to remove the fixed effects and obtain consistent estimators for both the parameters and the unknown link function. As both the cross section size and the time series length tend to infinity, we not only establish an asymptotically normal distribution for the estimators of the parameters in the single index and the linear component of the model, but also obtain an asymptotically normal distribution for the nonparametric local linear estimator of the unknown link function. The asymptotically normal distributions of the proposed estimators are similar to those obtained in the random effects case. In addition, we study several partially linear single-index dynamic panel data models. The methods and results are augmented by simulation studies and illustrated by an application to a cigarette-demand data set in the US from 1963–1992.
Keywords:  Fixed effects, local linear smoothing, panel data, semiparametric estimation, single-index models
JEL:  C13 C14 C23 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201114&r=ecm 
By:  Jiti Gao; Maxwell King 
Abstract:  This paper considers a class of parametric models with nonparametric autoregressive errors. A new test is proposed and studied to deal with the parametric specification of the nonparametric autoregressive errors with either stationarity or nonstationarity. Such a test procedure avoids the misspecification that can arise from the need to parametrically specify the form of the errors. In other words, we propose estimating the form of the errors and testing for stationarity or nonstationarity simultaneously. We establish asymptotic distributions of the proposed test. Both the setting and the results differ from earlier work on testing for unit roots in parametric time series regression. We provide both simulated and real-data examples to show that the proposed nonparametric unit-root test works in practice.
Keywords:  Autoregressive process; nonlinear time series; nonparametric method; random walk; semiparametric model; unit root test. 
JEL:  C12 C14 C22 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201120&r=ecm 
By:  Jiti Gao; Degui Li; Dag Tjøstheim 
Abstract:  This paper establishes a suite of uniform consistency results for nonparametric kernel density and regression estimators when the time series regressors concerned are nonstationary nullrecurrent Markov chains. Under suitable conditions, certain rates of convergence are also obtained for the proposed estimators. Our results can be viewed as an extension of some wellknown uniform consistency results for the stationary time series case to the nonstationary time series case. 
Keywords:  β-null recurrent Markov chain, nonparametric estimation, rate of convergence, uniform consistency
JEL:  C13 C14 C22 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201113&r=ecm 
By:  Alberto Abadie; Guido W. Imbens; Fanying Zheng 
Abstract:  Following the work by White (1980a,b; 1982), it is common in empirical work in economics to report standard errors that are robust against general misspecification. In a regression setting, these standard errors are valid for the parameter that in the population minimizes the squared difference between the conditional expectation and the linear approximation, averaged over the population distribution of the covariates. In nonlinear settings a similar interpretation applies. In this note we discuss an alternative parameter that corresponds to the approximation to the conditional expectation based on minimization of the squared difference averaged over the sample, rather than the population, distribution of a subset of the variables. We argue that in some cases this may be a more interesting parameter. We derive the asymptotic variance for this parameter, which is generally smaller than the White robust variance, and we propose a consistent estimator for the asymptotic variance.
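For reference, the White (HC0) robust variance that the note takes as its baseline can be computed as follows for a simple regression; this is the textbook estimator, not the authors' alternative, and the function name is an illustrative choice.

```python
def ols_with_robust_se(x, y):
    """Simple regression y = a + b*x with the White (HC0)
    heteroskedasticity-robust standard error for the slope:
        Var(b) = sum((x_i - xbar)^2 * e_i^2) / (sum((x_i - xbar)^2))^2."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - a - b * xi for xi, yi in zip(x, y)]
    var_b = sum(((xi - xbar) ** 2) * (ei ** 2)
                for xi, ei in zip(x, resid)) / sxx ** 2
    return a, b, var_b ** 0.5
```

The note's proposal replaces the population-averaged target of this variance with one averaged over the sample distribution of a subset of the variables, which is why its variance can be smaller.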
JEL:  C01 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17442&r=ecm 
By:  Lennart F. Hoogerheide (VU University Amsterdam); Francesco Ravazzolo (Norges Bank); Herman K. van Dijk (Erasmus University Rotterdam, VU University Amsterdam)
Abstract:  Patton and Timmermann (2011, 'Forecast Rationality Tests Based on Multi-Horizon Bounds', <I>Journal of Business & Economic Statistics</I>, forthcoming) propose a set of useful tests for forecast rationality or optimality under squared error loss, including an easily implemented test based on a regression that only involves (long-horizon and short-horizon) forecasts and no observations on the target variable. We propose an extension, a simulation-based procedure that takes into account the presence of errors in parameter estimates. This procedure can also be applied in the field of 'backtesting' models for Value-at-Risk. Applications to simple AR and ARCH time series models show that its power in detecting certain misspecifications is larger than the power of well-known tests for correct Unconditional Coverage and Conditional Coverage.
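The Unconditional Coverage benchmark mentioned at the end is typically Kupiec's likelihood-ratio test; a minimal sketch follows. This is the standard benchmark test, not the authors' simulation-based procedure, and the function name is an illustrative choice.

```python
import math

def kupiec_uc_lr(violations, alpha):
    """Kupiec's likelihood-ratio test of correct Unconditional Coverage:
    under H0 the VaR violation indicator is i.i.d. Bernoulli(alpha).
    Returns the LR statistic, asymptotically chi-square(1) under H0."""
    n = len(violations)
    x = sum(violations)
    pi_hat = x / n
    if pi_hat in (0.0, 1.0):
        # Boundary case: the unrestricted log-likelihood is 0.
        ll1 = 0.0
    else:
        ll1 = x * math.log(pi_hat) + (n - x) * math.log(1 - pi_hat)
    ll0 = x * math.log(alpha) + (n - x) * math.log(1 - alpha)
    return -2.0 * (ll0 - ll1)
```

A statistic above the chi-square(1) critical value 3.84 rejects correct coverage at the 5% level; the paper's point is that its procedure detects misspecifications this test misses.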
Keywords:  Value-at-Risk; backtest; optimal revision; forecast rationality
JEL:  C12 C52 C53 G32 
Date:  2011–09–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110131&r=ecm 
By:  Chaohua Dong; Jiti Gao 
Abstract:  Two types of Brownian motion functionals, time-homogeneous and time-inhomogeneous, are expanded in terms of orthonormal bases in their respective Hilbert spaces. Different time horizons are treated from the point of view of applicability, and the degree to which the truncated series approximate the corresponding full series is established. An asymptotic theory is developed. Both the proposed expansions and the asymptotic theory are applied to establish consistent estimators in a class of time series econometric models.
Keywords:  Asymptotic theory; Brownian motion; econometric estimation; series expansion.
JEL:  C14 C32 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201119&r=ecm 
By:  Prono, Todd 
Abstract:  A new method is proposed for estimating linear triangular models, where identification results from the structural errors following a bivariate and diagonal GARCH(1,1) process. The associated estimator is a GMM estimator shown to have the usual √T-asymptotics. A Monte Carlo study of the estimator is provided, as is an empirical application of estimating market betas from the CAPM. These market beta estimates are found to be statistically distinct from their OLS counterparts and to display expanded cross-sectional variation, the latter feature offering promise for their ability to provide improved pricing of cross-sectional expected returns.
Keywords:  Measurement error; triangular models; factor models; heteroskedasticity; identification; many moments; GMM 
JEL:  C32 C13 G12 C3 
Date:  2011–09–19 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:33593&r=ecm 
By:  Yu-Chin Hsu (Department of Economics, University of Missouri-Columbia); Jason Abrevaya
Abstract:  Nonlinearity and heterogeneity complicate the estimation and interpretation of partial effects. This paper provides a systematic characterization of the various partial effects in nonlinear panel-data models that might be of interest to empirical researchers. The estimation and interpretation of the partial effects depend upon (i) whether the distribution of unobserved heterogeneity is treated as fixed or allowed to vary with covariates and (ii) whether one is interested in particular covariate values or an average over such values. The characterization covers partial-effects concepts already in the literature but also includes new concepts for partial effects. A simple panel-probit design highlights that the different partial effects can be quantitatively very different. An empirical application to panel data on health satisfaction is used to illustrate the partial-effects concepts and proposed estimation methods.
Keywords:  Nonlinear panel data models; partial effects; correlated random effects.
JEL:  C01 C33 
Date:  2011–09–07 
URL:  http://d.repec.org/n?u=RePEc:umc:wpaper:1110&r=ecm 
By:  Degui Li; Zudi Lu; Oliver Linton 
Abstract:  Local linear fitting is a popular nonparametric method in statistical and econometric modelling. Lu and Linton (2007) established the pointwise asymptotic distribution for the local linear estimator of a nonparametric regression function under the condition of near epoch dependence. In this paper, we further investigate the uniform consistency of this estimator. The uniform strong and weak consistencies with convergence rates for the local linear fitting are established under mild conditions. Furthermore, general results regarding uniform convergence rates for nonparametric kernelbased estimators are provided. The results of this paper will be of wide potential interest in time series semiparametric modelling. 
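The local linear estimator whose uniform consistency the paper studies can be sketched in a few lines for scalar regressors; the Gaussian kernel and the function name are illustrative assumptions, and the paper's contribution is the asymptotic theory, not the fitting recipe itself.

```python
import math

def local_linear(x, y, x0, h):
    """Local linear estimate of m(x0) for data (x_i, y_i): weighted
    least squares of y on (1, x - x0) with Gaussian kernel weights
    K((x_i - x0)/h); the fitted intercept estimates m(x0)."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in x]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, x))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, x))
    t0 = sum(wi * yi for wi, yi in zip(w, y))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, x, y))
    # Solve the 2x2 weighted normal equations; the first component
    # of the solution is m_hat(x0).
    det = s0 * s2 - s1 ** 2
    return (s2 * t0 - s1 * t1) / det
```

A useful sanity check: local linear fitting reproduces exactly any function that is linear in x, regardless of the bandwidth.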
Keywords:  α-mixing, local linear fitting, near epoch dependence, convergence rates, uniform consistency
JEL:  C13 C14 C22 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201116&r=ecm 
By:  Elena-Ivona Dumitrescu (LEO - Laboratoire d'économie d'Orléans - CNRS : UMR6221 - Université d'Orléans); Christophe Hurlin (LEO - Laboratoire d'économie d'Orléans - CNRS : UMR6221 - Université d'Orléans); Jaouad Madkour (LEO - Laboratoire d'économie d'Orléans - CNRS : UMR6221 - Université d'Orléans)
Abstract:  This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and High Density Regions, potentially discontinuous and/or asymmetric. Using a simple J-statistic, based on the moments defined by the orthonormal polynomials associated with the Binomial distribution, this new approach presents many advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes, our GMM test has good small-sample properties. These results are corroborated by an empirical application on the S&P500 and Nikkei stock market indexes. It confirms that using this GMM test leads to major consequences for the ex-post evaluation of interval forecasts produced by linear versus nonlinear models.
Keywords:  Interval forecasts, High Density Region, GMM. 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00618467&r=ecm 
By:  Jushan Bai (Department of Economics, Columbia University; CEMA, Central University of Finance and Economics); Shuzhong Shi (Department of Finance, Guanghua School of Management) 
Abstract:  Estimating covariance matrices is an important part of portfolio selection, risk management, and asset pricing. This paper reviews recent developments in estimating high-dimensional covariance matrices, where the number of variables can be greater than the number of observations. The limitations of the sample covariance matrix are discussed. Several new approaches are presented, including the shrinkage method, the observable and latent factor method, the Bayesian approach, and the random matrix theory approach. For each method, the construction of covariance matrices is given. The relationships among these methods are discussed.
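As a minimal illustration of the shrinkage method the survey reviews, one can linearly shrink the sample covariance toward a simple diagonal target. The target, the function name, and the user-chosen intensity are illustrative assumptions; Ledoit-Wolf-style rules instead estimate the intensity from the data.

```python
def shrink_covariance(data, delta):
    """Linear shrinkage of the sample covariance matrix S toward a
    diagonal target (same variances, zero covariances):
        Sigma_hat = (1 - delta) * S + delta * diag(S),
    where delta in [0, 1] is the shrinkage intensity."""
    n = len(data)          # observations (rows)
    p = len(data[0])       # variables (columns)
    means = [sum(row[j] for row in data) / n for j in range(p)]
    S = [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data) / n
          for j in range(p)] for i in range(p)]
    # Diagonal entries are untouched; off-diagonal entries are shrunk.
    return [[((1 - delta) * S[i][j]) if i != j else S[i][j]
             for j in range(p)] for i in range(p)]
```

When n < p the sample covariance is singular; shrinking toward a well-conditioned target of this kind is one standard way to restore invertibility for portfolio weights.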
Keywords:  Factor analysis, Principal components, Singular value decomposition, Random matrix theory, Empirical Bayes, Shrinkage method, Optimal portfolios, CAPM, APT, GMM 
JEL:  C33 
Date:  2011–11 
URL:  http://d.repec.org/n?u=RePEc:cuf:wpaper:516&r=ecm 
By:  Badunenko, Oleg; Henderson, Daniel J.; Kumbhakar, Subal C. 
Abstract:  In this paper we compare two flexible estimators of technical efficiency in a cross-sectional setting: the nonparametric kernel SFA estimator of Fan, Li and Weersink (1996) and the nonparametric bias-corrected DEA estimator of Kneip, Simar and Wilson (2008). We assess the finite sample performance of each estimator via Monte Carlo simulations and empirical examples. We find that the reliability of efficiency scores critically hinges upon the ratio of the variation in efficiency to the variation in noise. These results should be a valuable resource to both academic researchers and practitioners.
Keywords:  Bootstrap; Nonparametric kernel; Technical efficiency 
JEL:  C14 
Date:  2011–09–16 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:33467&r=ecm 
By:  Gabriel NúñezAntonio; Eduardo GutiérrezPeña 
Abstract:  The analysis of short longitudinal series of circular data can be problematic and has, to some extent, not been fully developed. In this paper we present a Bayesian analysis of a model for such data. The model is based on a radial projection onto the circle of a particular bivariate normal distribution. Inferences about the parameters of the model are based on samples from the corresponding joint posterior density, which are obtained using a Metropolis-within-Gibbs scheme after the introduction of suitable latent variables. The procedure is illustrated using both a simulated data set and a real-data set previously analyzed in the literature.
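The radial-projection construction at the heart of the model is easy to simulate: draw from a bivariate normal and keep only the angle. The sketch below assumes an identity covariance and an invented function name, and it illustrates the projected normal distribution only, not the authors' longitudinal model or MCMC scheme.

```python
import math
import random

def sample_projected_normal(mu1, mu2, n, seed=0):
    """Draw n angles from a projected normal distribution: sample
    (x1, x2) ~ N((mu1, mu2), I) and radially project onto the unit
    circle via theta = atan2(x2, x1)."""
    rng = random.Random(seed)
    return [math.atan2(mu2 + rng.gauss(0, 1), mu1 + rng.gauss(0, 1))
            for _ in range(n)]
```

A mean vector far from the origin yields angles tightly concentrated around its direction; a mean near the origin yields nearly uniform angles, which is what makes the latent-variable Gibbs treatment attractive for inference.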
Keywords:  Circular data, Longitudinal data, Gibbs sampler, Latent variables, Mixed-effects linear models, Projected normal distribution
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws112720&r=ecm 
By:  Pipat Wongsaart; Jiti Gao 
Abstract:  A crucially important advantage of the semiparametric regression approach to the nonlinear autoregressive conditional duration (ACD) model developed in Wongsaart et al. (2011), i.e. the so-called Semiparametric ACD (SEMI-ACD) model, is the fact that its estimation method does not require a parametric assumption on the conditional distribution of the standardized duration process and, therefore, the shape of the baseline hazard function. The research in this paper complements that of Wongsaart et al. (2011) by introducing a nonparametric procedure to test the parametric density function of the ACD error through the use of the SEMI-ACD-based residual. The hypothetical structure of the test is useful, not only for establishing a better parametric ACD model, but also for the specification testing of a number of financial market microstructure hypotheses, especially those related to information asymmetry in finance. The testing procedure introduced in this paper differs in many ways from those discussed in the existing literature, for example Aït-Sahalia (1996), Gao and King (2004) and Fernandes and Grammig (2005). We show theoretically and experimentally the statistical validity of our testing procedure, while demonstrating its usefulness and practicality using datasets from the New York and Australian Stock Exchanges.
Keywords:  Duration model, hazard rates and random measures, nonparametric kernel testing. 
JEL:  C14 C41 F31 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:201118&r=ecm 
By:  Siem Jan Koopman (VU University Amsterdam); Marcel Scharth (VU University Amsterdam) 
Abstract:  We develop a systematic framework for the joint modelling of returns and multiple daily realised measures. We assume a linear state space representation for the log realised measures, which are noisy and biased estimates of the log integrated variance, at least due to Jensen's inequality. We incorporate filtering methods for the estimation of the latent log volatility process. The endogeneity between daily returns and realised measures leads us to develop a consistent two-step estimation method for all parameters in our specification. This method is computationally straightforward even when the stochastic volatility model contains non-Gaussian return innovations and leverage effects. The empirical results reveal that measurement errors become significantly smaller after filtering and that the forecasts from our model outperform those from a set of recently developed alternatives.
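The filtering step can be illustrated with the simplest possible linear state space model, a local level observed with noise, which loosely mirrors "noisy realised measure = latent log volatility + measurement error". This is a generic Kalman filter sketch under invented names and a random-walk state, not the authors' specification.

```python
def kalman_local_level(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for a local-level state space model:
        y_t = a_t + eps_t,      eps_t ~ N(0, r)  (noisy observation)
        a_t = a_{t-1} + eta_t,  eta_t ~ N(0, q)  (latent level)
    Returns the filtered state estimates E[a_t | y_1..y_t]."""
    a, p = a0, p0  # diffuse-ish prior on the initial state
    filtered = []
    for yt in y:
        p = p + q                 # predict: state variance grows by q
        k = p / (p + r)           # Kalman gain
        a = a + k * (yt - a)      # update with the prediction error
        p = (1 - k) * p
        filtered.append(a)
    return filtered
```

With a small measurement variance r the filter tracks the observations closely; with a large r it smooths heavily, which is exactly the trade-off filtering exploits to reduce measurement error in realised measures.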
Keywords:  Kalman filter; leverage; realised volatility; simulated maximum likelihood 
JEL:  C22 
Date:  2011–09–20 
URL:  http://d.repec.org/n?u=RePEc:dgr:uvatin:20110132&r=ecm 
By:  Mattéo Luciani; David Veredas 
Abstract:  Realized volatilities, when observed through time, share the following stylized facts: co-movements, clustering, long-memory, dynamic volatility, skewness and heavy-tails. We propose a simple dynamic factor model that captures these stylized facts and that can be applied to vast panels of volatilities as it does not suffer from the curse of dimensionality. It is an enhanced version of Bai and Ng (2004) in the following respects: i) we allow for long-memory in both the idiosyncratic and the common components, ii) the common shocks are conditionally heteroskedastic, and iii) the idiosyncratic and common shocks are skewed and heavy-tailed. Estimation of the factors, the idiosyncratic components and the parameters is straightforward: principal components and low dimension maximum likelihood estimations. A thorough Monte Carlo study shows the usefulness of the approach, and an application to 90 daily realized volatilities, pertaining to the S&P100, from January 2001 to December 2008 reveals, among other things, the following findings: i) All the volatilities have long-memory, more than half in the nonstationary range, which increases during financial turmoil. ii) Tests and criteria point towards one dynamic common factor driving the co-movements. iii) The factor has larger long-memory than the assets' volatilities, suggesting that long-memory is a market characteristic. iv) The volatility of the realized volatility is not constant and is common to all. v) A forecasting horse race against univariate short- and long-memory models and short-memory dynamic factor models shows that our model delivers superior short-, medium-, and long-run predictions, in particular in periods of stress.
Keywords:  realized volatilities; vast dimensions; factor models; long-memory; forecasting
JEL:  C32 C51 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:eca:wpaper:2013/97304&r=ecm 
By:  Calhoun, Gray 
Abstract:  This paper weakens the size and moment conditions needed for typical block bootstrap methods (i.e. the moving blocks, circular blocks, and stationary bootstraps) to be valid for the sample mean of near-epoch-dependent functions of mixing processes; they are consistent under the weakest conditions that ensure the original process obeys a Central Limit Theorem (those of de Jong, 1997, Econometric Theory). In doing so, this paper extends de Jong's method of proof, a blocking argument, to hold with random and unequal block lengths. This paper also proves that bootstrapped partial sums satisfy a Functional CLT under the same conditions.
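The moving-blocks bootstrap whose validity the paper studies works mechanically as follows; this is the standard resampling recipe only (under an invented function name and fixed block length), not the paper's proof technique.

```python
import random

def moving_block_bootstrap_mean(x, block_len, reps, seed=0):
    """Moving-blocks bootstrap for the sample mean: resample
    overlapping blocks of length block_len with replacement, glue
    them together to length n, and record each pseudo-series' mean.
    The spread of the returned means estimates the sampling
    variability of the mean under serial dependence."""
    rng = random.Random(seed)
    n = len(x)
    starts = range(n - block_len + 1)  # all admissible block starts
    means = []
    for _ in range(reps):
        series = []
        while len(series) < n:
            s = rng.choice(starts)
            series.extend(x[s:s + block_len])
        means.append(sum(series[:n]) / n)
    return means
```

Keeping contiguous blocks (rather than resampling single observations) preserves the short-range dependence structure within each block, which is what makes the method valid for dependent data.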
Keywords:  Resampling; Time Series; Near Epoch Dependence; Functional Central Limit Theorem 
JEL:  C12 C15 
Date:  2011–09–23 
URL:  http://d.repec.org/n?u=RePEc:isu:genres:34313&r=ecm 
By:  Qi Gao (The School of Public Finance and Taxation, Southwestern University of Finance and Economics); Jingping Gu (Department of Economics, University of Arkansas); Paula Hernandez-Verme (Department of Economics & Finance, University of Guanajuato, UCEA-Campus Marfil)
Abstract:  In this paper, we propose a new semiparametric varying coefficient model which extends the existing semiparametric varying coefficient models to allow for a time trend regressor with a smooth coefficient function. We propose to use the local linear method to estimate the coefficient functions, and we provide the asymptotic theory describing the asymptotic distribution of the local linear estimator. We present an application to evaluate credit rationing in the U.S. credit market. Using U.S. monthly data (1952.1–2008.1) and inflation as the underlying state variable, we find that credit is not rationed for levels of inflation that are either very low or very high; for the remaining values of inflation, we find that credit is rationed and the Mundell-Tobin effect holds.
Keywords:  nonstationarity, semiparametric smooth coefficients, nonlinearity, credit rationing 
JEL:  C14 C22 E44 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:cuf:wpaper:515&r=ecm 
By:  Adrian Pagan (School of Economics, University of Sydney); Don Harding (School of Economics and Finance, La Trobe University) 
Abstract:  Economic events such as expansions and recessions in economic activity, bull and bear markets in stock prices, and financial crises have long attracted substantial interest. In recent times there has been a focus upon predicting such events and constructing Early Warning Systems for them. Econometric analysis of such recurrent events is, however, in its infancy. One can represent the events as a set of binary indicators. However, they differ from the binary random variables studied in microeconometrics, being constructed from some (possibly) continuous data. The lecture discusses what difference this makes to their econometric analysis. It sets out a framework which deals with how the binary variables are constructed, what an appropriate estimation procedure would be, and the implications for their prediction. An example based on Turkish business cycles is used throughout the lecture.
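The "binary indicators constructed from continuous data" point can be made concrete with a stripped-down turning-point rule in the spirit of the BBQ algorithm: a date is a peak (trough) if it is the local maximum (minimum) within a symmetric window. The window size and function name are illustrative assumptions, and real BBQ implementations add censoring and minimum phase/cycle-length rules omitted here.

```python
def turning_points(y, window=2):
    """BBQ-style dating of turning points: t is a peak if y_t is the
    strict maximum of y over [t - window, t + window], and a trough
    if it is the strict minimum. Returns (peaks, troughs) as lists
    of time indices; the binary expansion/recession indicator is then
    derived from the alternating peak/trough sequence."""
    peaks, troughs = [], []
    for t in range(window, len(y) - window):
        segment = y[t - window:t + window + 1]
        if y[t] == max(segment) and segment.count(y[t]) == 1:
            peaks.append(t)
        elif y[t] == min(segment) and segment.count(y[t]) == 1:
            troughs.append(t)
    return peaks, troughs
```

Because the indicator is a deterministic function of the underlying series, its serial dependence is inherited from that series, which is precisely why standard binary-choice econometrics does not apply directly.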
Keywords:  Business and Financial Cycles, Binary Time Series, BBQ Algorithm 
JEL:  C22 E32 E37 
Date:  2011–09–19 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201133&r=ecm 
By:  Chris Tofallis 
Abstract:  Beta is a widely used quantity in investment analysis. We review the common interpretations that are applied to beta in finance and show that the standard method of estimation, least squares regression, is inconsistent with these interpretations. We present the case for an alternative beta estimator which is more appropriate, as well as being easier to understand and to calculate. Unlike regression, the line fit we propose treats both variables in the same way. Remarkably, it provides a slope that is precisely the ratio of the volatility of the investment's rate of return to the volatility of the market index rate of return (or the equivalent excess rates of return). Hence, this line fitting method gives an alternative beta which corresponds exactly to the relative volatility of an investment, which is one of the usual interpretations attached to beta.
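The contrast between the two betas is simple to compute: the OLS slope is the correlation times the volatility ratio, while the symmetric-fit slope described above is the volatility ratio itself (with the sign of the correlation). The function name is an illustrative choice; the formulas follow from standard moment algebra.

```python
import math

def betas(r_asset, r_market):
    """Return (beta_ols, beta_alt) for an asset against the market:
        beta_ols = corr * sd(asset) / sd(market)   (regression slope)
        beta_alt = sign(corr) * sd(asset) / sd(market)
    so |beta_ols| <= |beta_alt|, with equality only when |corr| = 1."""
    n = len(r_asset)
    ma = sum(r_asset) / n
    mm = sum(r_market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(r_asset, r_market)) / n
    var_a = sum((a - ma) ** 2 for a in r_asset) / n
    var_m = sum((m - mm) ** 2 for m in r_market) / n
    corr = cov / math.sqrt(var_a * var_m)
    ratio = math.sqrt(var_a / var_m)
    return corr * ratio, math.copysign(ratio, corr)
```

The two estimates coincide only under perfect correlation; otherwise OLS shrinks beta toward zero relative to the pure volatility ratio, which is the inconsistency with the "relative volatility" interpretation that the paper highlights.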
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1109.4422&r=ecm 
By:  Ravallion, Martin 
Abstract:  Randomized control trials are sometimes used to estimate the aggregate benefit from some policy or program. To address the potential bias from selective take-up, the randomization is used as an instrumental variable for treatment status. Does this (popular) method of impact evaluation help reduce the bias when take-up depends on unobserved gains from take-up? Such "essential heterogeneity" is known to invalidate the instrumental variable estimator of mean causal impact, though one still obtains another parameter of interest, namely mean impact amongst those treated. However, if essential heterogeneity is the only problem, then the naïve (ordinary least squares) estimator also delivers this parameter; there is no gain from using randomization as an instrumental variable. On allowing the heterogeneity to also alter counterfactual outcomes, the instrumental variable estimator may well be more biased for mean impact than the naïve estimator. Examples are given for various stylized programs, including a training program that attenuates the gains from higher latent ability, an insurance program that compensates for losses from unobserved risky behavior, and a microcredit scheme that attenuates the gains from access to other sources of credit. Practitioners need to think carefully about the likely behavioral responses to social experiments in each context.
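The estimator under discussion, randomization as an instrument for take-up, reduces in the binary case to the Wald ratio below. This is a minimal textbook sketch with an invented function name; the paper's point is about what this ratio identifies (or fails to identify) under essential heterogeneity, not about the computation.

```python
def wald_iv(y, d, z):
    """Wald/IV estimate using random assignment z as an instrument
    for treatment take-up d:
        beta_iv = (E[y|z=1] - E[y|z=0]) / (E[d|z=1] - E[d|z=0])."""
    def mean(v):
        return sum(v) / len(v)
    y1 = mean([yi for yi, zi in zip(y, z) if zi == 1])
    y0 = mean([yi for yi, zi in zip(y, z) if zi == 0])
    d1 = mean([di for di, zi in zip(d, z) if zi == 1])
    d0 = mean([di for di, zi in zip(d, z) if zi == 0])
    return (y1 - y0) / (d1 - d0)
</imports>```

Under full compliance this reduces to the simple treatment-control difference in means; with selective take-up it identifies mean impact among the treated (compliers), which is the abstract's point that the "other parameter of interest" is what survives.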
Keywords:  Poverty Monitoring & Analysis, Disease Control & Prevention, Poverty Impact Evaluation, Scientific Research & Science Parks, Science Education
Date:  2011–09–01 
URL:  http://d.repec.org/n?u=RePEc:wbk:wbrwps:5804&r=ecm 
By:  Sergey Ivashchenko 
Abstract:  This article compares the properties of different nonlinear Kalman filters: the well-known Unscented Kalman Filter (UKF), the Central Difference Kalman Filter (CDKF) and the lesser-known Quadratic Kalman Filter (QKF). A small financial DSGE model is repeatedly estimated by maximum quasi-likelihood methods with the different filters on data generated by the model. Errors in parameter estimation serve as the measure of filter quality. The result is that the QKF has a clear advantage in quality over the CDKF and UKF, at some cost in speed.
Keywords:  DSGE, QKF, CDKF, UKF, quadratic approximation, Kalman filtering 
JEL:  C13 C32 E32 
Date:  2011–09–20 
URL:  http://d.repec.org/n?u=RePEc:eus:wpaper:ec0711&r=ecm 
By:  Charles F. Manski; John V. Pepper 
Abstract:  Researchers have long used repeated cross-sectional observations of homicide rates and sanctions to examine the deterrent effect of the adoption and implementation of death penalty statutes. The empirical literature, however, has failed to achieve consensus. A fundamental problem is that the outcomes of counterfactual policies are not observable. Hence, the data alone cannot identify the deterrent effect of capital punishment. How then should research proceed? It is tempting to impose assumptions strong enough to yield a definitive finding, but strong assumptions may be inaccurate and yield flawed conclusions. Instead, we study the identifying power of relatively weak assumptions restricting variation in treatment response across places and time. The results are findings of partial identification that bound the deterrent effect of capital punishment. By successively adding stronger identifying assumptions, we seek to make transparent how assumptions shape inference. We perform empirical analysis using state-level data in the United States in 1975 and 1977. Under the weakest restrictions, there is substantial ambiguity: we cannot rule out the possibility that having a death penalty statute substantially increases or decreases homicide. This ambiguity is reduced when we impose stronger assumptions, but inferences are sensitive to the maintained restrictions. Combining the data with some assumptions implies that the death penalty increases homicide, but other assumptions imply that the death penalty deters it.
JEL:  C21 K14 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:17455&r=ecm 
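The "weakest restrictions" case in the abstract corresponds to Manski-style no-assumptions bounds, whose logic can be sketched in a few lines. The numbers below are made up purely for illustration; the only maintained assumption is a known logical range for the outcome.

```python
def worst_case_ate_bounds(y_treated, y_control, p_treated, y_lo, y_hi):
    """No-assumptions bounds on the average treatment effect E[Y(1)] - E[Y(0)].

    y_treated: mean outcome among the treated (e.g. death-penalty states)
    y_control: mean outcome among the untreated
    p_treated: fraction of observations that are treated
    [y_lo, y_hi]: known logical range of the outcome (e.g. a homicide rate)
    """
    p = p_treated
    # E[Y(1)] is observed for the treated; for the rest it can be anywhere in [y_lo, y_hi]
    ey1_lo = p * y_treated + (1 - p) * y_lo
    ey1_hi = p * y_treated + (1 - p) * y_hi
    # symmetrically for E[Y(0)]
    ey0_lo = (1 - p) * y_control + p * y_lo
    ey0_hi = (1 - p) * y_control + p * y_hi
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

# Hypothetical homicide rates per 100,000, outcome range assumed to be [0, 30]
lo, hi = worst_case_ate_bounds(8.0, 7.0, 0.5, 0.0, 30.0)
```

With these toy inputs the bound is roughly [-14.5, 15.5], straddling zero: exactly the "substantial ambiguity" the abstract describes, which only stronger assumptions can narrow.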
By:  Fengler, Matthias; Hin, LinYee 
Abstract:  When studying the economic content of cross sections of option price data, researchers either explicitly or implicitly view the discrete ensemble of observed option prices as a realization from a smooth surface defined across exercise prices and expiry dates. Yet despite adopting a surface perspective for estimation, it is common practice to infer the option pricing function, for each expiry date separately, slice by slice. In this paper, we suggest a semi-nonparametric estimator for the entire call price surface based on a tensor-product B-spline. To enforce no-arbitrage constraints in the strike and calendar dimensions, we establish sufficient no-arbitrage conditions on the control net of the tensor-product (TP) B-spline. Since these conditions are independent of the degrees of the underlying polynomials, the estimator can be parametrized with TP B-splines of arbitrary order. As an example, we estimate a smooth call price surface from S&P 500 option quotes. From this estimate we obtain families of state price densities, empirical pricing kernels, and a local volatility surface. 
Keywords:  option pricing function, no-arbitrage constraints, state price density, pricing kernel, implied volatility, local volatility, semi-nonparametric estimation, B-splines 
JEL:  C14 G13 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:usg:econwp:2011:36&r=ecm 
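An unconstrained tensor-product B-spline surface over strikes and maturities can be fit with off-the-shelf tools, as sketched below using SciPy's `RectBivariateSpline` on synthetic toy prices. Note the sketch deliberately omits the paper's substantive contribution, the no-arbitrage conditions on the control net; it only shows the tensor-product form being constrained.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Synthetic call prices C(K, T) on a strike-maturity grid: intrinsic value
# plus a smooth time-value hump, purely for illustration (not market data).
strikes = np.linspace(80.0, 120.0, 21)
maturities = np.array([0.25, 0.5, 1.0, 2.0])
S0 = 100.0
K, T = np.meshgrid(strikes, maturities, indexing="ij")
prices = np.maximum(S0 - K, 0.0) + 4.0 * np.sqrt(T) * np.exp(-((K - S0) ** 2) / 200.0)

# Cubic-in-strike, quadratic-in-maturity tensor-product B-spline surface;
# s=0 (the default) interpolates the grid values exactly.
surface = RectBivariateSpline(strikes, maturities, prices, kx=3, ky=2)

c = surface(105.0, 0.75)[0, 0]   # call price evaluated off the grid
```

Derivatives of such a surface in the strike direction give state price densities (via Breeden-Litzenberger), which is why arbitrage-consistent smoothing of the surface matters.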
By:  Igor Prünster; Matteo Ruggiero 
Abstract:  We propose a flexible stochastic framework for modeling the market share dynamics over time in a multiple markets setting, where firms interact within and between markets. Firms undergo stochastic idiosyncratic shocks, which contract their shares, and compete to consolidate their position by acquiring new ones in both the market where they operate and in new markets. The model parameters can meaningfully account for phenomena such as barriers to entry and exit, fixed and sunk costs, costs of expanding to new sectors with different technologies, and competitive advantage among firms. The construction is obtained in a Bayesian framework by means of a collection of nonparametric hierarchical mixtures, which induce the dependence between markets and provide a generalization of the Blackwell-MacQueen Pólya urn scheme, which in turn is used to generate a partially exchangeable dynamical particle system. A Markov chain Monte Carlo algorithm is provided for simulating trajectories of the system, by means of which we perform a simulation study for transitions to different economic regimes. Moreover, it is shown that the infinite-dimensional properties of the system, when appropriately transformed and rescaled, are those of a collection of interacting Fleming-Viot diffusions. 
Keywords:  Bayesian Nonparametrics; Gibbs sampler; interacting Pólya urns; particle system; species sampling models; market dynamics; interacting Fleming-Viot processes 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:cca:wpaper:217&r=ecm 
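The single-urn Blackwell-MacQueen scheme that the paper's hierarchical mixtures generalize can be simulated in a few lines. This is the standard one-market urn, not the paper's multi-market, partially exchangeable construction.

```python
import random

def blackwell_macqueen(n, alpha, base_draw, rng=random.Random(0)):
    """Draw n values from a Blackwell-MacQueen Pólya urn.

    Each new draw repeats an earlier value with probability proportional to
    its current frequency (reinforcement), or draws fresh from the base
    measure with probability proportional to the concentration alpha --
    inducing the clustering into "species" (here, firm/market types)."""
    values = []
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            values.append(base_draw(rng))          # new species from base measure
        else:
            values.append(rng.choice(values))      # reinforce an existing species
    return values

draws = blackwell_macqueen(100, alpha=2.0, base_draw=lambda r: r.gauss(0.0, 1.0))
```

The concentration parameter alpha governs how often new species appear; with alpha = 2 the expected number of distinct values in 100 draws is on the order of ten, far fewer than 100.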
By:  Katarina Juselius; Niels Framroze; Finn Tarp 
Abstract:  Studies of aid effectiveness abound in the literature, often with opposing conclusions. Since most time-series studies use data from the exact same publicly available databases, our claim here is that such differences in results must be due to the use of different econometric models and methods. To investigate this we perform a comprehensive study of the long-run effect of foreign aid (ODA) on a set of key macroeconomic variables in 36 sub-Saharan African countries from the mid-1960s to 2007. We use a well-specified cointegrated VAR (CVAR) model as our statistical benchmark. It represents a much-needed general-to-specific approach which can provide broad confidence intervals within which empirically relevant claims should fall. Based on stringent statistical testing, our results provide broad support for a positive long-run impact of ODA flows on the macroeconomy. For example, we find a positive effect of ODA on investment in 33 of the 36 included countries, but hardly any evidence supporting the view that aid has been harmful. From a methodological point of view, our study documents the importance of transparency in results reporting, in particular when the statistical null does not correspond to a natural economic null hypothesis. Our study identifies three reasons for econometrically unsatisfactory results in the literature: failure to adequately account for unit roots and breaks; imposing seemingly innocuous but invalid data transformations; and imposing aid endogeneity/exogeneity without testing. 
Keywords:  foreign aid, Africa, transmission channels, unit roots, Cointegrated VAR 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:unu:wpaper:wp201151&r=ecm 
By:  Juan Angel Lafuente; Rafaela Pérez; Jesús Ruiz 
Abstract:  This paper proposes an estimation method for persistent and transitory monetary shocks using the monetary policy model proposed in Andolfatto et al. [Journal of Monetary Economics, 55 (2008), pp. 406-422]. The contribution of the paper is threefold: a) to deal with non-Gaussian innovations, we consider a convenient reformulation of the state-space representation that enables us to use the Kalman filter as an optimal estimation algorithm; in this formulation the state equation allows expectations to play a significant role in explaining the future time evolution of monetary shocks; b) it offers the possibility to perform maximum likelihood estimation for all the parameters involved in the monetary policy; and c) as a consequence, we can estimate the conditional probability that a regime change has occurred in the current period given an observed monetary shock. Empirical evidence on US monetary policymaking is provided through the lens of a Taylor rule, suggesting that the Fed's policy was implemented in accordance with the macroeconomic conditions after the Great Moderation. The use of the particle filter produces similar quantitative and qualitative findings. However, our procedure has a much lower computational cost. 
Keywords:  Kalman filter, Non-normality, Particle filter, Monetary policy 
JEL:  C4 F3 
Date:  2011–09 
URL:  http://d.repec.org/n?u=RePEc:cte:wbrepe:wb113108&r=ecm 
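The building block that a state-space reformulation like the one above makes available is the standard linear Kalman predict/update recursion, with a per-observation log-likelihood term that is summed for maximum likelihood estimation. The sketch below is that generic recursion, not the paper's specific monetary-shock model.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update step of the linear Kalman filter for
        x_t = A x_{t-1} + w_t,  w ~ N(0, Q)   (latent state)
        y_t = C x_t + v_t,      v ~ N(0, R)   (observation)
    Returns the filtered state, its covariance, and the log-likelihood
    contribution of y -- the ingredient summed for ML estimation."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    innov = y - C @ x_pred                   # one-step forecast error
    x_new = x_pred + K @ innov
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    ll = -0.5 * (innov @ np.linalg.solve(S, innov)
                 + np.log(np.linalg.det(S)) + len(y) * np.log(2.0 * np.pi))
    return x_new, P_new, ll

# Scalar example: diffuse-ish prior, one observation
x, P, ll = kalman_step(np.array([0.0]), np.array([[1.0]]), np.array([1.0]),
                       np.array([[1.0]]), np.array([[1.0]]),
                       np.array([[0.1]]), np.array([[0.1]]))
```

A particle filter replaces the closed-form predict/update above with simulation, which is why it handles non-Gaussianity directly but at the much higher computational cost the abstract notes.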