
on Econometrics 
By:  Zhengyu Zhang (Center for Econometric Study, Shanghai Academy of Social Sciences) 
Abstract:  Su and Jin (2010) develop a profile quasi-maximum likelihood based estimation procedure for the partially linear spatial autoregressive (PLSAR) model. More recently, Su (2011) proposes a semiparametric GMM estimator for this model. However, both can be computationally challenging for applied researchers and are not easy to implement in practice. In this article, we propose a computationally simple estimator for the PLSAR model in the presence of either heteroscedastic or spatially correlated error terms. This estimator blends the essential features of both the GMM estimator for the linear SAR model and the pairwise difference estimator for the conventional partially linear model. The limiting distribution of the proposed estimator is established and a consistent estimator of its asymptotic covariance matrix is provided. Monte Carlo studies indicate that our estimator is attractive, particularly when one is interested in estimating the finite-dimensional parameters in the model. 
Keywords:  Spatial autoregression, Partially linear model, Pairwise difference 
JEL:  C13 C14 C21 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:cuf:wpaper:457&r=ecm 
By:  Tatyana Krivobokova (Georg-August-University Göttingen) 
Abstract:  There are two popular smoothing parameter selection methods for spline smoothing. First, criteria that approximate the average mean squared error of the estimator (e.g. generalized cross validation) are widely used. Alternatively, the maximum likelihood paradigm can be employed under the assumption that the underlying function to be estimated is a realization of some stochastic process. In this article the asymptotic properties of both smoothing parameter estimators are studied and compared in the frequentist and stochastic framework for penalized spline smoothing. Consistency and asymptotic normality of the estimators are proved and small sample properties are discussed. A simulation study and a real data example illustrate the theoretical findings. 
Keywords:  Maximum likelihood; Mean squared error minimizer; Penalized splines; Smoothing splines 
Date:  2011–08–02 
URL:  http://d.repec.org/n?u=RePEc:got:gotcrc:085&r=ecm 
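The generalized cross validation criterion discussed in the abstract above can be sketched in a few lines. This is a minimal illustration only: a centered moving average stands in for the penalized-spline smoother of the paper (an assumption made for simplicity, since its smoother matrix has an easy trace), and the window width plays the role of the smoothing parameter.

```python
import math
import random

def moving_average_smooth(y, window):
    # A simple linear smoother: centered moving average, truncated at the edges.
    n = len(y)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

def gcv_score(y, window):
    # Generalized cross validation: GCV = (RSS/n) / (1 - tr(S)/n)^2.
    # For this smoother the i-th diagonal entry of the smoother matrix S is
    # 1 / (number of points averaged at i), so tr(S) is easy to compute.
    n = len(y)
    fit = moving_average_smooth(y, window)
    rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fit))
    half = window // 2
    trace = sum(1.0 / (min(n, i + half + 1) - max(0, i - half)) for i in range(n))
    return (rss / n) / (1.0 - trace / n) ** 2

def select_window(y, candidates):
    # The GCV choice minimizes the criterion; candidates should exclude
    # window = 1, for which the smoother interpolates the data exactly.
    return min(candidates, key=lambda w: gcv_score(y, w))
```

For example, on a noisy sine wave `select_window(y, [3, 5, 9, 15, 25])` picks the width whose fit best trades off residual error against effective degrees of freedom.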
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Xu Cheng (Dept. of Economics, University of Pennsylvania); Patrik Guggenberger (Dept. of Economics, UCSD) 
Abstract:  This paper provides a set of results that can be used to establish the asymptotic size and/or similarity in a uniform sense of confidence sets and tests. The results are generic in that they can be applied to a broad range of problems. They are most useful in scenarios where the test statistic's pointwise limit distribution is discontinuous in the model parameters. The results are illustrated in three examples. These are: (i) the conditional likelihood ratio test of Moreira (2003) for linear instrumental variables models with instruments that may be weak, extended to the case of heteroskedastic errors; (ii) the grid bootstrap confidence interval of Hansen (1999) for the sum of the AR coefficients in a kth-order autoregressive model with unknown innovation distribution; and (iii) the standard quasi-likelihood ratio test in a nonlinear regression model where identification is lost when the coefficient on the nonlinear regressor is zero. 
Keywords:  Asymptotically similar, Asymptotic size, Autoregressive model, Confidence interval, Nonlinear regression, Test, Weak instruments 
JEL:  C12 C22 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1813&r=ecm 
By:  Andrzej Kociecki (Narodowy Bank Polski) 
Abstract:  The paper presents the problem of identification in parametric models from the algebraic point of view. We argue that this is not just another perspective but the proper one: using our approach we can see the very nature of the identification problem, which is slightly different from what the literature suggests. In practice it means that in many models we can unambiguously estimate parameters that have been thought to be unidentifiable. This is illustrated for the Simultaneous Equations Model (SEM), where our analysis leads to the conclusion that existing identification conditions, although correct, rest on an inappropriate premise: that only the structural parameters in one-to-one correspondence with the reduced form parameters are identified. We show that this is not true. In fact there are other structural parameters which are identified but cannot be uniquely recovered from the reduced form parameters. Although we apply our theory only to the SEM, it can be used in many standard econometric models. 
Keywords:  Identification; Group theory; Orbits; Orbit representatives; Simultaneous Equations Model; Maximal Invariant 
JEL:  C10 C30 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:nbp:nbpmis:88&r=ecm 
By:  Jing Tian; Heather M. Anderson 
Abstract:  This paper proposes two new weighting schemes that average forecasts using different estimation windows to account for structural change. The first lets the weights reflect the probability of each time point being the most recent break point, using the reversed ordered CUSUM test statistics to capture this intuition. The second weighting method simply imposes heavier weights on forecasts that use more recent information. The proposed combination forecasts are evaluated using Monte Carlo techniques, and we compare them with forecasts based on other methods that try to account for structural change, including average forecasts weighted by past forecasting performance and techniques that first estimate a break point and then forecast using the post-break data. Simulation results show that our proposed weighting methods often outperform the others in the presence of structural breaks. An empirical application based on a NAIRU Phillips curve model for the United States indicates that it is possible to outperform the random walk forecasting model when we employ forecasting methods that account for break uncertainty. 
Keywords:  Forecasting with structural breaks, Parameter shifts, Break uncertainty, Structural break tests, Choice of estimation sample, Forecast combinations, NAIRU Phillips curve 
JEL:  C22 C53 E37 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:msh:ebswps:20118&r=ecm 
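The second weighting scheme described above (heavier weights on forecasts estimated on more recent data) can be sketched directly. The geometric down-weighting and the decay value are illustrative assumptions, not the paper's exact specification.

```python
def recency_weights(window_starts, decay=0.9):
    # A forecast whose estimation sample starts later (i.e. uses more recent
    # information) gets a geometrically larger weight. `decay` in (0, 1)
    # controls how fast older windows are down-weighted; weights sum to one.
    latest = max(window_starts)
    raw = [decay ** (latest - s) for s in window_starts]
    total = sum(raw)
    return [r / total for r in raw]

def combine_forecasts(forecasts, window_starts, decay=0.9):
    # Weighted average of the window-specific forecasts.
    w = recency_weights(window_starts, decay)
    return sum(wi * fi for wi, fi in zip(w, forecasts))
```

With three forecasts from windows starting at periods 0, 5 and 10, the combination leans heavily on the forecast from the most recent window while retaining some insurance against a spuriously estimated break date.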
By:  Donald W.K. Andrews (Cowles Foundation, Yale University); Patrik Guggenberger (Dept. of Economics, UCSD) 
Abstract:  This paper introduces a new confidence interval (CI) for the autoregressive parameter (AR) in an AR(1) model that allows for conditional heteroskedasticity of general form and AR parameters that are less than or equal to unity. The CI is a modification of Mikusheva's (2007a) modification of Stock's (1991) CI that employs the least squares estimator and a heteroskedasticity-robust variance estimator. The CI is shown to have correct asymptotic size and to be asymptotically similar (in a uniform sense). It does not require any tuning parameters. No existing procedures have these properties. Monte Carlo simulations show that the CI performs well in finite samples in terms of coverage probability and average length, for innovations with and without conditional heteroskedasticity. 
Keywords:  Asymptotically similar, Asymptotic size, Autoregressive model, Conditional heteroskedasticity, Confidence interval, Hybrid test, Subsampling test, Unit root 
JEL:  C12 C15 C22 
Date:  2011–08 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1812&r=ecm 
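The two ingredients named in the abstract above, the least squares estimator and a heteroskedasticity-robust variance estimator, can be sketched for a bare AR(1). The no-intercept specification is a simplification for illustration; this is not the paper's modified confidence interval, only its building blocks.

```python
import random

def ar1_ols_robust(y):
    # Least squares estimate of rho in y_t = rho * y_{t-1} + u_t, together
    # with an Eicker-White heteroskedasticity-robust standard error:
    # sandwich form (X'X)^-1 X' diag(u^2) X (X'X)^-1 for a single regressor.
    x, z = y[:-1], y[1:]
    sxx = sum(xi * xi for xi in x)
    rho = sum(xi * zi for xi, zi in zip(x, z)) / sxx
    resid = [zi - rho * xi for xi, zi in zip(x, z)]
    meat = sum((xi * ui) ** 2 for xi, ui in zip(x, resid))
    se = meat ** 0.5 / sxx
    return rho, se
```

On a simulated stationary AR(1) with rho = 0.5 and 3000 observations, the estimate lands close to the truth with a robust standard error of roughly the usual 1/sqrt(n) order.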
By:  Eiji Kurozumi; Kohei Aono 
Abstract:  This paper proposes new point estimates for predictive regressions. Our estimates are easily obtained by the least squares and instrumental variable methods. The estimates, which we call plug-in estimates, have nice asymptotic properties such as median unbiasedness and approximate normality of the associated t-statistics. In addition, the plug-in estimates are shown to have good finite sample properties via Monte Carlo simulations. Using the new estimates, we investigate U.S. stock returns and find that some variables, which have not been statistically detected as useful predictors in the literature, are able to predict stock returns. Because of their nice properties, our methods complement the existing statistical tests for predictability in investigating the relations between stock returns and economic variables. 
Keywords:  unit root, near unit root, bias, median unbiased, stock return 
JEL:  C13 C22 
Date:  2011–05 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11192&r=ecm 
By:  Tinkl, Fabian; Reichert, Katja 
Abstract:  We generalize the score test for time-varying copula parameters proposed by Abegaz and Naik-Nimbalkar (2008) to a setting where copulas with more than one parameter can be tested for time variation in at least one parameter. Next, we model the daily log returns of the Commerzbank stock using copula-based Markov chain models. We find evidence that the copula-based Markov chain models perform worse than standard GARCH models when daily stock returns are modeled. Thus we do not see any advantage of this model type for modeling daily returns from financial data. 
Keywords:  Dynamic copula models, Markov chains, Score test, GARCH models 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:092011&r=ecm 
By:  Maksym Obrizan (Kyiv School of Economics, Kyiv Economic Institute) 
Abstract:  Depression as a major mental illness among older adults has attracted a lot of research attention. However, the problem of sample selection, inevitable in most health surveys, has been largely ignored. To fill this gap, this paper formally models selection into the sample jointly with a discrete outcome variable for depression. A Bayesian model of sample selection is developed from a multivariate probit by (i) allowing missing depression status for non-selected respondents, and (ii) using the Cholesky factorization of the inverse variance matrix to avoid a Metropolis-Hastings step in the Gibbs sampler. Non-selected respondents are found to be less likely to suffer from depression. 
Keywords:  Multivariate probit model; Sample selection; Bayesian methods; Gibbs sampler 
JEL:  C11 C35 I1 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:kse:dpaper:41&r=ecm 
By:  Jouchi Nakajima; Toshiaki Watanabe 
Abstract:  This paper applies the time-varying parameter vector autoregressive model to the Japanese economy. Both the parameters and the volatilities, which are assumed to follow a random-walk process, are estimated using a Bayesian method with MCMC. A recursive structure is assumed for identification, and the reversible jump MCMC is used for the ordering of variables. The empirical result reveals the time-varying structure of the Japanese economy and monetary policy during the period from 1981 to 2008 and provides evidence that the ordering of variables may change with the introduction of the zero interest rate policy. 
Keywords:  Bayesian inference, Monetary policy, Reversible jump Markov chain Monte Carlo, Stochastic volatility, Time-varying parameter VAR 
JEL:  C11 C15 E52 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11196&r=ecm 
By:  Francesca Di Iorio (Universita' di Napoli Federico II); Stefano Fachin (Universita' di Roma "La Sapienza") 
Abstract:  We develop a sieve bootstrap range test for poolability of cointegrating regressions in dependent panels and evaluate its performance by simulation. Although slightly undersized, the test has good power even when only a single unit of the panel is heterogeneous. 
Keywords:  Poolability, Panel cointegration, Sieve bootstrap 
JEL:  C23 C15 E2 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:sas:wpaper:20112&r=ecm 
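The sieve bootstrap that underlies the test above can be sketched at its simplest: fit an autoregressive sieve to a residual series, resample the fitted innovations i.i.d., and rebuild a bootstrap series. Fixing the sieve order at one is a simplification for illustration; in practice the order grows with the sample size, and the paper's range test built on top of this resampling is not reproduced here.

```python
import random

def sieve_bootstrap_series(e, seed=0):
    # AR(1) sieve: fit e_t = phi * e_{t-1} + v_t by least squares,
    # recentre the fitted innovations, then draw them with replacement
    # to generate a bootstrap replicate of the same length as `e`.
    rng = random.Random(seed)
    x, z = e[:-1], e[1:]
    phi = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
    v = [zi - phi * xi for xi, zi in zip(x, z)]
    vbar = sum(v) / len(v)
    centred = [vi - vbar for vi in v]
    boot = [e[0]]
    for _ in range(len(e) - 1):
        boot.append(phi * boot[-1] + rng.choice(centred))
    return boot
```

Each call with a different seed yields an independent bootstrap replicate whose serial dependence mimics that of the original residuals.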
By:  Barnes, Michelle L.; GumbauBrisa, Fabià; Lie, Denny; Olivei, Giovanni P. 
Abstract:  We illustrate the importance of placing model-consistent restrictions on expectations in the estimation of forward-looking Euler equations. In two-stage limited-information settings where first-stage estimates are used to proxy for expectations, parameter estimates can differ substantially depending on whether these restrictions are imposed. This is shown in an application to the New Keynesian Phillips Curve (NKPC), first in a Monte Carlo exercise and then on actual data. The closed-form (CF) estimates require by construction that expectations of inflation be model-consistent at all points in time, while the difference-equation (DE) estimates impose no model discipline on expectations. Between these two polar extremes there is a wide range of alternative DE specifications, based on the same dynamic relationship, that explicitly impose model restrictions on expectations for a finite number of periods. In our application, these last estimates quickly converge to the CF estimates and illustrate that the DE estimates in Cogley and Sbordone (2008) are not robust to imposing modest model requirements on expectations. In particular, our estimates show that the NKPC is not purely forward-looking, and thus that time-varying trend inflation is insufficient to explain inflation persistence. 
Keywords:  time-varying trend inflation; forward-looking Euler equation; New Keynesian Phillips curve; model-consistent expectations; closed form 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:syd:wpaper:2123/7708&r=ecm 
By:  Klein, Ingo 
Abstract:  There are several procedures for constructing a skewed distribution. One of them splits the value of a scale parameter between the two halves of a symmetric distribution; Fechner proposed this procedure in his famous book Kollektivmaßlehre (1897), p. 295ff. A similar proposal comes from Fernandez et al. (1995). We consider the very general approach of Arellano-Valle et al. (2005) to splitting a scale parameter and show that this technique of generating skewed distributions incorporates a well-defined parameter of skewness. It is well-defined in the sense that the skewness parameter is compatible with the ordering <_2 of van Zwet (1964), which is the strongest ordering in the hierarchy of orderings discussed by Oja (1981). For this family of skewed distributions we show that the measure proposed by Arnold & Groeneveld (1995) is a measure of skewness in the sense of Oja (1981). In the special case considered by Fechner (1897) this measure and the skewness parameter coincide. 
Keywords:  Skewness, Skewness to the right, Skewness ordering, Measure of skewness 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:zbw:iwqwdp:082011&r=ecm 
By:  Edda Claus (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); Chew Lian Chua (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne); G. C. Lim (Melbourne Institute of Applied Economic and Social Research, The University of Melbourne) 
Abstract:  This paper proposes a framework for constructing indexes of activity that links two strands of the index literature: traditional business cycle analysis and the latent variable approach. To illustrate the method, we apply the framework to Australian regional data, namely to two resource-rich and two service-based states. The results reveal differences in the evolution and drivers of economic activity across the four states. We also demonstrate the value of the index in a broader context by using a structural vector autoregression (SVAR) approach to analyse the effects of shocks from the US and from China. This Index-SVAR approach facilitates a richer analysis because the unique feature of the index method proposed here allows impulse responses to be traced back to the components. 
Keywords:  Regional economic activity, coincident indicators, dynamic latent factor model 
JEL:  C43 E32 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:iae:iaewps:wp2011n15&r=ecm 
By:  Marco Rocco (Banca d'Italia) 
Abstract:  Extreme value theory is concerned with the study of the asymptotic distribution of extreme events, that is to say events which are rare in frequency and huge in magnitude relative to the majority of observations. Statistical methods derived from this theory have been increasingly employed in finance, especially in the context of risk measurement. The aim of the present study is twofold. The first part delivers a critical review of the theoretical underpinnings of extreme value theory. The second part provides a survey of some major applications of extreme value theory to finance, namely its use to test different distributional assumptions for the data, Value-at-Risk and Expected Shortfall calculations, asset allocation under safety-first type constraints and the study of contagion and dependence across markets under stress conditions. 
Keywords:  extreme value theory, risk management, fat-tailed distributions, Value-at-Risk, systemic risk, asset allocation 
JEL:  C10 C16 G10 G20 G21 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:bdi:opques:qef_99_11&r=ecm 
By:  Vladimir Filimonov; Didier Sornette 
Abstract:  We present a simple transformation of the log-periodic power law formula of the Johansen-Ledoit-Sornette model of financial bubbles that reduces it to a function of only three nonlinear parameters. The transformation significantly decreases the complexity of the fitting procedure and improves its stability tremendously, because the modified cost function now has good smoothness properties with, in general, a single minimum when the model is appropriate to the empirical data. We complement the approach with an additional subordination procedure that slaves two of the nonlinear parameters to the most crucial one, the critical time $t_c$, defined as the end of the bubble and the most probable time for a crash to occur. This further decreases the complexity of the search and provides an intuitive representation of the results of the calibration. With our proposed methodology, metaheuristic searches are no longer necessary and one can resort solely to rigorous controlled local search algorithms, leading to a dramatic increase in efficiency. Empirical tests on the Shanghai Composite index (SSE) from January 2007 to March 2008 illustrate our findings. 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1108.0099&r=ecm 
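The reduction to three nonlinear parameters can be illustrated concretely. In the standard LPPL formula, ln p(t) = A + B(t_c - t)^m + C(t_c - t)^m cos(omega ln(t_c - t) - phi), substituting C1 = C cos(phi) and C2 = C sin(phi) makes A, B, C1, C2 enter linearly, so for any trial (t_c, m, omega) they solve in closed form. The sketch below follows that linearization; the specific solver and test values are illustrative, not the paper's calibration procedure.

```python
import math

def lppl(t, tc, m, omega, A, B, C1, C2):
    # Log-periodic power law in the linearized parameterization
    # C1 = C*cos(phi), C2 = C*sin(phi). Requires t < tc.
    dt = tc - t
    f = dt ** m
    lg = math.log(dt)
    return A + B * f + C1 * f * math.cos(omega * lg) + C2 * f * math.sin(omega * lg)

def fit_linear_params(ts, logp, tc, m, omega):
    # For trial nonlinear parameters (tc, m, omega), the four linear
    # parameters have a closed-form least squares solution; we solve the
    # 4x4 normal equations by Gaussian elimination with partial pivoting.
    rows = []
    for t in ts:
        dt = tc - t
        f = dt ** m
        lg = math.log(dt)
        rows.append([1.0, f, f * math.cos(omega * lg), f * math.sin(omega * lg)])
    k = 4
    a = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * y for r, y in zip(rows, logp)) for i in range(k)]
    for c in range(k):                       # forward elimination
        p = max(range(c, k), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            fac = a[r][c] / a[c][c]
            for cc in range(c, k):
                a[r][cc] -= fac * a[c][cc]
            b[r] -= fac * b[c]
    sol = [0.0] * k
    for r in range(k - 1, -1, -1):           # back substitution
        sol[r] = (b[r] - sum(a[r][cc] * sol[cc] for cc in range(r + 1, k))) / a[r][r]
    return sol                               # [A, B, C1, C2]
```

An outer search then only has to move (t_c, m, omega), calling `fit_linear_params` at each trial point, which is what makes controlled local search feasible.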
By:  Alicia Rambaldi (School of Economics, The University of Queensland); Prasada Rao (School of Economics, The University of Queensland) 
Abstract:  Hedonic housing price indices are computed from estimated hedonic pricing models. The commonly used time dummy hedonic model and the rolling window hedonic model fail to account for changing consumer preferences over hedonic characteristics, and typically these models do not account for the presence of spatial correlation in prices reflecting the role of locational characteristics. This paper develops a class of models with time-varying hedonic coefficients and spatially correlated errors, provides an assessment of their predictive performance compared to the commonly used hedonic models, and constructs and compares the corresponding price index series. Alternative weighting systems, plutocratic versus democratic, are considered for the class of hedonic imputed price indices. Accounting for seasonality in house sales data, monthly chained indices and annual chained indices based on averages of year-on-year monthly indexes are presented. The empirical results are based on property sales data for Brisbane, Australia over the period 1985 to 2005. On the basis of the root mean square prediction error criterion, the time-varying parameter model with spatial errors is found to be the best performing model and the rolling-window model the worst. 
Date:  2011 
URL:  http://d.repec.org/n?u=RePEc:qld:uq2004:432&r=ecm 
By:  Toshiaki Watanabe 
Abstract:  This article applies the realized GARCH model, which combines the GARCH model with realized volatility (RV), to quantile forecasts of financial returns such as Value-at-Risk and expected shortfall. This model has certain advantages in the application to quantile forecasts because it can adjust the bias of RV caused by microstructure noise and non-trading hours, and it enables us to estimate the parameters of the return distribution jointly with the other parameters. Student's t and skewed Student's t distributions as well as the normal distribution are used for the return distribution. The EGARCH model is used for comparison. The main results for the S&P 500 stock index are: (1) the realized GARCH model with the skewed Student's t distribution performs better than that with the normal or Student's t distribution and than the EGARCH model using daily returns only, and (2) the performance does not improve if the realized kernel, which takes account of microstructure noise, is used instead of the plain realized volatility, implying that the realized GARCH model can adjust the bias of RV caused by microstructure noise. 
Keywords:  Expected shortfall, GARCH, Realized volatility, Skewed Student's t distribution, Value-at-Risk 
JEL:  C52 C53 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd11195&r=ecm 
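Given a one-day-ahead volatility forecast from any of the models above, the quantile forecasts themselves are simple in the normal case (one of the return distributions the article considers). The sketch below takes the volatility forecast as given rather than producing it from a realized GARCH fit, and the skewed Student's t case would require that distribution's quantile and partial expectation instead.

```python
from statistics import NormalDist

def value_at_risk(sigma, alpha=0.01, mu=0.0):
    # VaR at level alpha under a normal return distribution, reported as a
    # positive loss number: VaR = -(mu + sigma * z_alpha).
    return -(mu + sigma * NormalDist().inv_cdf(alpha))

def expected_shortfall(sigma, alpha=0.01, mu=0.0):
    # Normal-case expected shortfall (average loss beyond the VaR):
    # ES = sigma * phi(z_alpha) / alpha - mu.
    z = NormalDist().inv_cdf(alpha)
    return sigma * NormalDist().pdf(z) / alpha - mu
```

With sigma = 1 and alpha = 0.05 this gives the familiar VaR of about 1.645 standard deviations and an ES of about 2.063, with ES always exceeding VaR at the same level.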
By:  GumbauBrisa, Fabià; Lie, Denny; Olivei, Giovanni P. 
Abstract:  In their 2010 comment (which we refer to as CS10), Cogley and Sbordone argue that: (i) our estimates are not entirely closed form, and hence are arbitrary; (ii) we cannot guarantee that our estimates are valid, while their estimates (Cogley and Sbordone 2008, henceforth CS08) always are; and (iii) the estimates in CS08, in terms of goodness of fit, are just as good as other, much different estimates in our paper. We show in this reply that the exact closed-form estimates are virtually the same as the "quasi" closed-form estimates. Our estimates are consistent with the implicit assumptions underlying the first-stage VAR used to form expectations, while the estimates in CS08 are not. As a result, the estimates in CS08 point towards model misspecification. We also rebut the goodness-of-fit comparisons in CS10 and provide a more credible exercise illustrating that our estimates outperform those of CS08. 
Keywords:  time-varying trend inflation; forward-looking Euler equation; New Keynesian Phillips curve; model-consistent expectations; closed form 
Date:  2011–06 
URL:  http://d.repec.org/n?u=RePEc:syd:wpaper:2123/7707&r=ecm 
By:  Wanfeng Yan; Reda Rebib; Ryan Woodard; Didier Sornette 
Abstract:  Financial markets are well known for their dramatic dynamics and consequences that affect much of the world's population. Consequently, much research has aimed at understanding, identifying and forecasting crashes and rebounds in financial markets. The Johansen-Ledoit-Sornette (JLS) model provides an operational framework for understanding and diagnosing financial bubbles from rational expectations and was recently extended to negative bubbles and rebounds. Using the JLS model, we develop an alarm index based on an advanced pattern recognition method with the aim of detecting bubbles and forecasting market crashes and rebounds. Testing our methodology on 10 major global equity markets, we show quantitatively that our alarm performs much better than chance in forecasting market crashes and rebounds. We use the derived signal to develop elementary trading strategies that produce statistically better performance than a simple buy-and-hold strategy. 
Date:  2011–07 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1108.0077&r=ecm 