
New Economics Papers on Econometrics 
By:  Suzukawa, Akio 
Abstract:  Parametric estimation of cause-specific hazard functions in a competing risks model is considered. An approximate likelihood procedure for estimating parameters of cause-specific hazard functions based on competing risks data subject to right censoring is proposed. In an assumed parametric model that may have been misspecified, an estimator of a parameter is said to be consistent if it converges in probability to the pseudo-true value of the parameter as the sample size becomes large. Under censorship, the ordinary maximum likelihood method does not necessarily give consistent estimators. The proposed approximate likelihood procedure is consistent even if the parametric model is misspecified. An asymptotic distribution of the approximate maximum likelihood estimator is obtained, and the efficiency of the estimator is discussed. Datasets from a simulation experiment, an electrical appliance test and a pneumatic tire test are used to illustrate the procedure. 
Keywords:  Aalen-Johansen estimator, cause-specific cumulative incidence function, Censored data, Kaplan-Meier estimator 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:hok:dpaper:231&r=ecm 
By:  Sokbae 'Simon' Lee (Institute for Fiscal Studies and Seoul National University); Myung Hwan Seo; Youngki Shin 
Abstract:  <p>In this article, we develop a general method for testing threshold effects in regression models, using sup-likelihood-ratio (LR)-type statistics. Although the sup-LR-type test statistic has been considered in the literature, our method for establishing the asymptotic null distribution is new and nonstandard. The standard approach in the literature for obtaining the asymptotic null distribution requires that there exist a certain quadratic approximation to the objective function. The article provides an alternative, novel method that can be used to establish the asymptotic null distribution, even when the usual quadratic approximation is intractable. We illustrate the usefulness of our approach in the examples of maximum score estimation, maximum likelihood estimation, quantile regression, and maximum rank correlation estimation. We establish consistency and local power properties of the test. We provide some simulation results and also an empirical application to tipping in racial segregation. This article has supplementary materials online.</p> 
Date:  2010–12 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:36/10&r=ecm 
By:  Gianluca Cubadda (Faculty of Economics, University of Rome "Tor Vergata"); Barbara Guardabascio (Faculty of Economics, University of Rome "Tor Vergata") 
Abstract:  This paper considers methods for forecasting macroeconomic time series in a framework where the number of predictors, N, is too large to apply traditional regression models but not sufficiently large to resort to statistical inference based on double asymptotics. Our interest is motivated by a body of empirical research suggesting that popular data-rich prediction methods perform best when N ranges from 20 to 50. In order to accomplish our goal, we examine the conditions under which partial least squares and principal component regression provide consistent estimates of a stable autoregressive distributed lag model as only the number of observations, T, diverges. We show both by simulations and empirical applications that the proposed methods compare well to models that are widely used in macroeconomic forecasting. 
Keywords:  Partial least squares; principal component regression; dynamic factor models; data-rich forecasting methods; dimension-reduction techniques. 
Date:  2010–12–09 
URL:  http://d.repec.org/n?u=RePEc:rtv:ceisrp:176&r=ecm 
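The two dimension-reduction devices compared in the abstract above can be sketched in a few lines. This is a minimal, hypothetical illustration on simulated one-factor data (not the paper's estimator or data): principal component regression extracts factors via an SVD of the standardized predictors, while a one-component partial least squares step weights predictors by their covariance with the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-factor data set: N = 30 predictors all loading on a
# single common factor f; the target is f plus noise.
T, N = 200, 30
f = rng.standard_normal(T)
X = np.outer(f, rng.uniform(0.5, 1.5, N)) + 0.5 * rng.standard_normal((T, N))
y = f + 0.3 * rng.standard_normal(T)
Xs = (X - X.mean(0)) / X.std(0)          # standardize predictors

def pcr_fit(Xs, y, k=1):
    # Principal component regression: OLS of y on the first k principal components.
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = Xs @ Vt[:k].T                     # estimated factors
    D = np.column_stack([np.ones(len(y)), F])
    beta = np.linalg.lstsq(D, y, rcond=None)[0]
    return D @ beta

def pls1_fit(Xs, y):
    # One-component partial least squares: weight vector proportional to X'y.
    w = Xs.T @ y
    w /= np.linalg.norm(w)
    t = Xs @ w
    D = np.column_stack([np.ones(len(y)), t])
    beta = np.linalg.lstsq(D, y, rcond=None)[0]
    return D @ beta

yhat_pcr = pcr_fit(Xs, y)
yhat_pls = pls1_fit(Xs, y)
```

With a single strong factor, both compressions recover essentially the same fitted values, which is the regime (moderate N, growing T) the paper studies.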
By:  Juan Carlos Escanciano (Indiana University) 
Abstract:  A new estimator for linear models with endogenous regressors and strictly exogenous instruments is proposed. The new estimator, called the Integrated Instrumental Variables (IIV) estimator, only requires minimal assumptions to identify the true parameters, thereby providing a potentially robust alternative to classical Instrumental Variables (IV) methods when instruments and endogenous variables are uncorrelated or only weakly correlated (i.e. under weak identification) but are nonlinearly dependent. The IIV estimator is simple to compute, as it can be written as a weighted least squares estimator, and it does not require solving an ill-posed problem or the subsequent regularization. Monte Carlo evidence suggests that the IIV estimator can be a valuable alternative to IV and optimal IV in finite samples under weak identification. An application to estimating the elasticity of intertemporal substitution highlights the merits of the proposed approach over classical IV methods. 
Date:  2010–02 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2010001&r=ecm 
By:  Roger Koenker (Institute for Fiscal Studies and University of Illinois) 
Abstract:  <p>Additive models for conditional quantile functions provide an attractive framework for nonparametric regression applications focused on features of the response beyond its central tendency. Total variation roughness penalties can be used to control the smoothness of the additive components much as squared Sobolev penalties are used for classical L<sub>2</sub> smoothing splines. We describe a general approach to estimation and inference for additive models of this type. We focus attention primarily on selection of smoothing parameters and on the construction of confidence bands for the nonparametric components. Both pointwise and uniform confidence bands are introduced; the uniform bands are based on the Hotelling (1939) tube approach. Some simulation evidence is presented to evaluate finite sample performance and the methods are also illustrated with an application to modeling childhood malnutrition in India.</p> 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:ifs:cemmap:33/10&r=ecm 
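The total-variation machinery in the abstract above rests on the check (pinball) loss of quantile regression, which is easy to verify numerically: minimizing the average check loss over a constant yields the sample quantile. A minimal sketch (simulated data, no penalty term):

```python
import numpy as np

def pinball(u, tau):
    # Koenker-Bassett check function: rho_tau(u) = u * (tau - 1{u < 0})
    return u * (tau - (u < 0))

rng = np.random.default_rng(1)
x = rng.standard_normal(5000)

# The tau-th sample quantile minimizes the average check loss over constants c.
tau = 0.75
grid = np.linspace(-3, 3, 1201)
losses = [pinball(x - c, tau).mean() for c in grid]
c_star = grid[int(np.argmin(losses))]
```

Replacing the constant by an additive spline expansion, and adding a total-variation penalty on each component, gives the class of estimators the paper studies.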
By:  Luca Fanelli (luca.fanelli@unibo.it) 
Abstract:  This paper proposes a testing strategy for the null hypothesis that a multivariate linear rational expectations (LRE) model has a unique stable solution (determinacy) against the alternative of multiple stable solutions (indeterminacy). Under a proper set of identification restrictions, determinacy is investigated by a misspecification-type approach in which the result of the overidentifying restrictions test obtained from the estimation of the LRE model through a version of the generalized method of moments is combined with the result of a likelihood-based test for the cross-equation restrictions that the LRE model places on its finite-order reduced form under determinacy. This approach (i) circumvents the nonstandard inferential problem that a purely likelihood-based approach implies because of the presence of nuisance parameters that appear under the alternative but not under the null, (ii) does not involve inequality parametric restrictions or nonstandard asymptotic distributions, and (iii) gives rise to a joint test which is consistent against indeterminacy almost everywhere in the space of nuisance parameters, i.e. except for a set of measure zero which gives rise to minimum state variable solutions, and is also consistent against dynamic misspecification of the LRE model. Monte Carlo simulations show that the testing strategy delivers reasonable size coverage and power in finite samples. An empirical illustration focuses on the determinacy/indeterminacy of a New Keynesian monetary business cycle model for the US. 
Keywords:  Determinacy, Indeterminacy, Maximum likelihood, Generalized method of moments, LRE model, Identification, Instrumental variables, VAR, VARMA 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:100&r=ecm 
By:  Manuel Wiesenfarth (GeorgAugustUniversity Göttingen); Tatyana Krivobokova (GeorgAugustUniversity Göttingen); Stephan Klasen (GeorgAugustUniversity Göttingen) 
Abstract:  This article proposes a simple and fast approach to build simultaneous confidence bands for smooth curves that (i) enter an additive model, (ii) are spatially heterogeneous, (iii) are estimated from heteroscedastic data, and (iv) are combinations of (i)-(iii). Such smooth curves are estimated using the mixed model representation of penalized splines, which allows for estimation of complex models from the corresponding likelihood. Based on these penalized spline estimators, the volume-of-tube formula is applied, resulting in simultaneous confidence bands with good small-sample properties that are obtained instantly, i.e. without using the bootstrap. Finite sample properties are studied in simulations, and an application to undernutrition in Kenya shows the practical relevance of the approach. The method is implemented in the R package AdaptFitOS. 
Keywords:  Additive model; Confidence band; Heteroscedasticity; Locally adaptive smoothing; Mixed model; Penalized splines; Varying variance 
Date:  2010–12–07 
URL:  http://d.repec.org/n?u=RePEc:got:gotcrc:050&r=ecm 
By:  Suzukawa, Akio 
Abstract:  Extreme value copulas are the limiting copulas of componentwise maxima. A bivariate extreme value copula can be represented by a convex function called the Pickands dependence function. In this paper we consider nonparametric estimation of the Pickands dependence function. Several estimators have been proposed. They can be classified into two types: Pickands-type estimators and Capéraà-Fougères-Genest-type estimators. We propose a new class of estimators, which contains these two types of estimators. Asymptotic properties of the estimators are investigated, and their asymptotic efficiencies are discussed under Marshall-Olkin copulas. 
Keywords:  Bivariate exponential distribution, Extreme value distribution, Pickands dependence function 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:hok:dpaper:230&r=ecm 
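The two estimator families named in the abstract above have simple sample versions. A minimal sketch on simulated independent uniforms (for which the true dependence function is A(t) = 1); the endpoint corrections used in practice are omitted:

```python
import numpy as np

rng = np.random.default_rng(2)

# Pseudo-sample with uniform margins; independence corresponds to A(t) = 1.
n = 5000
U, V = rng.uniform(size=(2, n))
E, F = -np.log(U), -np.log(V)            # unit exponential margins

def pickands_estimator(E, F, t):
    # Pickands-type estimator: A_n(t) = n / sum_i min(E_i/(1-t), F_i/t)
    xi = np.minimum(E / (1 - t), F / t)
    return len(E) / xi.sum()

def cfg_estimator(E, F, t):
    # Caperaa-Fougeres-Genest-type estimator (no endpoint correction here):
    # log A_n(t) = -gamma - mean(log xi_i(t)), gamma = Euler-Mascheroni constant
    xi = np.minimum(E / (1 - t), F / t)
    gamma = 0.5772156649015329
    return np.exp(-gamma - np.mean(np.log(xi)))

a_pick = pickands_estimator(E, F, 0.5)
a_cfg = cfg_estimator(E, F, 0.5)
```

The paper's new class nests both: one is a harmonic-mean-type and the other a geometric-mean-type functional of the same variables xi_i(t).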
By:  Carter, Andrew V; Steigerwald, Douglas G 
Abstract:  In Cho and White (2007) "Testing for Regime Switching" the authors obtain the asymptotic null distribution of a quasi-likelihood ratio (QLR) statistic. The statistic is designed to test the null hypothesis of one regime against the alternative of Markov switching between two regimes. Likelihood ratio statistics are used because the test involves nuisance parameters that are not identified under the null hypothesis, together with other nonstandard features. Cho and White focus on a quasi-likelihood, which ignores certain serial correlation properties but allows for a tractable factorization of the likelihood. While the majority of their paper focuses on asymptotic behavior under the null hypothesis, Theorem 1(b) states that the quasi-maximum likelihood estimator (QMLE) is consistent under the alternative hypothesis. Consistency of the QMLE requires that the expected quasi-log-likelihood attain a global maximum at the population parameter values. This requirement holds for some Markov regime-switching processes but, as we show below, not for an autoregressive process as analyzed in Cho and White. 
Keywords:  consistency, Markov regime switching, quasi-maximum likelihood 
Date:  2010–10–26 
URL:  http://d.repec.org/n?u=RePEc:cdl:ucsbec:1665690&r=ecm 
By:  Luis F. Martins; Paulo M.M. Rodrigues 
Abstract:  In this paper we propose an approach to detect persistence changes in fractionally integrated models based on recursive forward and backward estimation of the Breitung and Hassler (2002) test. This procedure generalises to fractionally integrated processes the approaches of Leybourne, Kim, Smith and Newbold (2003) and Leybourne and Taylor (2003), which are ADF- and seasonal-unit-root-type tests, respectively, for the conventional integer-valued context. Asymptotic results are derived and the performance of the new procedures is evaluated in a Monte Carlo exercise. The finite sample size and power performance of the procedures is very encouraging and compares very favourably to available tests, such as those recently proposed by Hassler and Scheithauer (2009) and Sibbertsen and Kruse (2007). We also apply the test statistics introduced to several world inflation rates and find evidence of a change in persistence in most series. 
JEL:  C20 C22 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:ptu:wpaper:w201030&r=ecm 
By:  Tommaso, Proietti; Stefano, Grassi 
Abstract:  We apply a recent methodology, Bayesian stochastic model specification search (SMSS), for the selection of the unobserved components (level, slope, seasonal cycles, trading-day effects) that are stochastically evolving over time. SMSS hinges on two basic ingredients: the non-centered representation of the unobserved components and the reparameterization of the hyperparameters representing standard deviations as regression parameters with unrestricted support. The choice of the prior and the conditional independence structure of the model enable the definition of a very efficient MCMC estimation strategy based on Gibbs sampling. We illustrate that the methodology can be applied quite successfully to discriminate between stochastic and deterministic trends, and between fixed and evolving seasonal and trading-day effects. 
Keywords:  Seasonality; Structural time series models; Variable selection. 
JEL:  C22 C11 C01 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:27305&r=ecm 
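The non-centered representation mentioned in the abstract above is easy to see on the simplest case, a local level (random walk) component; the sketch below is an illustrative toy, not the paper's full model. In the non-centered form the standard deviation enters linearly, like a regression coefficient with unrestricted support, which is what lets SMSS treat "is this component stochastic?" as a variable-selection problem.

```python
import numpy as np

rng = np.random.default_rng(3)
T, sigma_eta, mu0 = 100, 0.3, 1.0
eta = rng.standard_normal(T)             # standardized level disturbances

# Centered parameterization: mu_t = mu_{t-1} + sigma_eta * eta_t
mu_c = np.empty(T)
prev = mu0
for t in range(T):
    prev = prev + sigma_eta * eta[t]
    mu_c[t] = prev

# Non-centered parameterization: mu_t = mu0 + sigma_eta * mu_tilde_t,
# where mu_tilde_t = sum of standardized shocks. sigma_eta now multiplies
# a regressor (mu_tilde) and may take any real value; sigma_eta = 0
# collapses the component to a deterministic level.
mu_tilde = np.cumsum(eta)
mu_nc = mu0 + sigma_eta * mu_tilde
```

Both parameterizations generate the same path, so the transformation changes the inference problem, not the model.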
By:  Pei Pei (Indiana University Bloomington) 
Abstract:  This paper theoretically and empirically analyzes backtesting portfolio VaR with estimation risk in an intrinsically multivariate framework. For the first time in the literature, it takes into account the estimation of portfolio weights in forecasting portfolio VaR and its impact on backtesting. It shows that the estimation risk from estimating the portfolio weights, as well as that from estimating the multivariate dynamic model of asset returns, makes the existing methods developed in a univariate framework inapplicable. It proposes a general theory to quantify estimation risk applicable to the present problem, and suggests to practitioners a simple but effective way to carry out valid inference that overcomes the effect of estimation risk in backtesting portfolio VaR. A simulation exercise illustrates our theoretical findings. In an application, a portfolio of three stocks is considered. 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2010010&r=ecm 
By:  Sylvia FrühwirthSchnatter (Department of Applied Statistics, Johannes Kepler University Linz, Austria); Andrea Weber; Rudolf WinterEbmer 
Abstract:  This paper analyzes patterns in the earnings development of young labor market entrants over their life cycle. We identify four distinctly different types of transition patterns between discrete earnings states in a large administrative data set. Further, we investigate the effects of labor market conditions at the time of entry on the probability of belonging to each transition type. To estimate our statistical model we use a model-based clustering approach. The statistical challenge in our application comes from the difficulty of extending distance-based clustering approaches to the problem of identifying groups of similar time series in a panel of discrete-valued time series. We use Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter (2010), which is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to analyze group membership we present an extension to this approach by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule, using a multinomial logit model. 
Keywords:  Labor Market Entry Conditions, Transition Data, Markov Chain Monte Carlo, Multinomial Logit, Panel Data, Auxiliary Mixture Sampler, Bayesian Statistics 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:jku:econwp:2010_11&r=ecm 
By:  Cristina Danciulescu (Indiana University  Bloomington) 
Abstract:  The purpose of this paper is to develop a new and simple backtesting procedure that extends the previous work into the multivariate framework. We propose to use the multivariate Portmanteau statistic of Ljung-Box type to jointly test for the absence of autocorrelations and cross-correlations in the vector of hit sequences for different positions, business lines or financial institutions. Simulation exercises illustrate that this shift to a multivariate hits dimension delivers a test that significantly increases the power of traditional backtesting methods in capturing systemic risk: the build-up of positive and significant cross-correlations between hits, which translates into the simultaneous realization of large losses at several business lines or banks. Our multivariate procedure also addresses an operational risk issue. The proposed technique provides a simple solution to the Value-at-Risk (VaR) aggregation problem: the institution's global VaR measure being either smaller or larger than the sum of the individual trading lines' VaRs, leading the institution to be either under- or over-exposed to risk by maintaining excessively high or low capital levels. An application using Profit and Loss and VaR data collected from two major international banks illustrates how our proposed testing approach performs in a realistic environment. Results from experiments we conducted using the banks' data suggest that the proposed multivariate testing procedure is a more powerful tool in detecting systemic risk if it is combined with multivariate risk modeling, i.e. if covariances are modeled in the VaR forecasts. 
Date:  2010–04 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2010004&r=ecm 
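A multivariate Portmanteau statistic of the kind the abstract above proposes can be sketched directly. This is a generic Hosking/Ljung-Box-type statistic applied to simulated hit indicators, not the paper's exact test; the data-generating step (independent Bernoulli hits) is purely illustrative.

```python
import numpy as np

def multivariate_portmanteau(H, m):
    """Hosking-type multivariate Ljung-Box statistic for a T x K series H:
    tests absence of auto- and cross-correlations up to lag m. Under
    independence it is asymptotically chi-squared with K^2 * m degrees of
    freedom."""
    T, K = H.shape
    Z = H - H.mean(0)
    G0_inv = np.linalg.inv((Z.T @ Z) / T)  # inverse lag-0 covariance
    Q = 0.0
    for k in range(1, m + 1):
        Gk = (Z[k:].T @ Z[:-k]) / T        # lag-k cross-covariance matrix
        Q += np.trace(Gk.T @ G0_inv @ Gk @ G0_inv) / (T - k)
    return T * T * Q

rng = np.random.default_rng(4)
# Hit sequences for K = 3 business lines: indicator of a VaR violation,
# simulated here as independent Bernoulli(0.05) draws (real hits would be
# 1{loss > VaR forecast}).
H = (rng.uniform(size=(1000, 3)) < 0.05).astype(float)
Q = multivariate_portmanteau(H, m=5)
```

Rejection signals either serial dependence within a hit sequence (the classical univariate failure) or cross-correlation between hits, the systemic-risk channel the paper emphasizes.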
By:  ChiaLin Chang; Philip Hans Franses; Michael McAleer (University of Canterbury) 
Abstract:  Macroeconomic forecasts are often based on the interaction between econometric models and experts. A forecast that is based only on an econometric model is replicable and may be unbiased, whereas a forecast that is not based only on an econometric model, but also incorporates an expert’s touch, is nonreplicable and is typically biased. In this paper we propose a methodology to analyze the qualities of combined nonreplicable forecasts. One part of the methodology seeks to retrieve a replicable component from the nonreplicable forecasts, and compares this component against the actual data. A second part modifies the estimation routine due to the assumption that the difference between a replicable and a nonreplicable forecast involves a measurement error. An empirical example to forecast economic fundamentals for Taiwan shows the relevance of the methodological approach. 
Keywords:  Combined forecasts; efficient estimation; generated regressors; replicable forecasts; nonreplicable forecasts; expert’s intuition 
JEL:  C53 C22 E27 E37 
Date:  2010–12–01 
URL:  http://d.repec.org/n?u=RePEc:cbt:econwp:10/74&r=ecm 
By:  Long Kang (The Options Clearing Corporation); Simon H. Babbs (The Options Clearing Corporation) 
Abstract:  We introduce a multivariate GARCH-Copula model to describe joint dynamics of overnight and daytime returns for multiple assets. The conditional mean and variance of individual overnight and daytime returns depend on their previous realizations through a variant of GARCH specification, and two Student’s t copulas describe joint distributions of both returns respectively. We employ both constant and time-varying correlation matrices for the t copulas and with the time-varying case the dependence structure of both returns depends on their previous dependence structures through a DCC specification. We estimate the model by a two-step procedure, where marginal distributions are estimated in the first step and copulas in the second. We apply our model to overnight and daytime returns of SPDR ETFs of nine major sectors and briefly illustrate its use in risk management and asset allocation. Our empirical results show higher mean, lower variance, fatter tails and lower correlations for overnight returns than daytime returns. Daytime returns are significantly negatively correlated with previous overnight returns. Moreover, daytime returns depend on previous overnight returns in both conditional variance and correlation matrix (through a DCC specification). Most of our empirical findings are consistent with the asymmetric information argument in the market microstructure literature. With respect to econometric modelling, our results show a DCC specification for correlation matrices of t copulas significantly improves the fit of data and enables the model to account for time-varying dependence structure. 
Date:  2010–06 
URL:  http://d.repec.org/n?u=RePEc:inu:caeprp:2010008&r=ecm 
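The DCC recursion that drives the time-varying dependence in the abstract above can be written down compactly. This is a minimal sketch of the standard Engle-style DCC(1,1) filter on simulated standardized residuals, with hypothetical parameter values; it is not the paper's full GARCH-copula estimator.

```python
import numpy as np

def dcc_filter(eps, a=0.05, b=0.90):
    """DCC(1,1) correlation recursion for standardized residuals eps (T x K):
    Q_t = (1 - a - b) * S + a * eps_{t-1} eps_{t-1}' + b * Q_{t-1},
    R_t = normalized Q_t (unit diagonal)."""
    T, K = eps.shape
    S = np.corrcoef(eps.T)               # unconditional correlation target
    Q = S.copy()
    R = np.empty((T, K, K))
    for t in range(T):
        if t > 0:
            e = eps[t - 1][:, None]
            Q = (1 - a - b) * S + a * (e @ e.T) + b * Q
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)        # rescale to a correlation matrix
    return R

rng = np.random.default_rng(5)
eps = rng.standard_normal((500, 3))      # placeholder standardized residuals
R = dcc_filter(eps)
```

In the paper's two-step scheme, eps would be the probability-integral transforms from the fitted marginal GARCH models, mapped to the t-copula scale before running the recursion.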
By:  Dominique Guegan (CES - Centre d'économie de la Sorbonne, CNRS UMR 8174, Université Panthéon-Sorbonne Paris I; Paris School of Economics); Bertrand Hassani (CES - Centre d'économie de la Sorbonne, Université Panthéon-Sorbonne Paris I; BPCE); Cédric Naud (BPCE) 
Abstract:  Operational risk quantification requires dealing with data sets which often present extreme values that have a tremendous impact on capital computations (VaR). In order to take these effects into account we use extreme value distributions to model the tail of the loss distribution function. We focus on the Generalized Pareto Distribution (GPD) and use an extension of the peaks-over-threshold method to estimate the threshold above which the GPD is fitted. The threshold is approximated using a bootstrap method, and the EM algorithm is used to estimate the parameters of the distribution fitted below the threshold. We show the impact of the estimation procedure on the computation of the capital requirement (through the VaR), considering other estimation methods used in extreme value theory. Our work also points out the importance of the regulators' choice of the information set used to compute the capital requirement, and we exhibit some inconsistencies with the current rules. 
Keywords:  Operational risk, Generalized Pareto distribution, Pickands estimate, Hill estimate, Expectation-Maximization algorithm, Monte Carlo simulations, VaR. 
Date:  2010–11 
URL:  http://d.repec.org/n?u=RePEc:hal:cesptp:halshs00544342_v1&r=ecm 
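The peaks-over-threshold step described in the abstract above reduces to: pick a high threshold, fit a GPD to the exceedances, and invert the tail formula for the VaR. A minimal sketch on simulated lognormal "losses" (not real operational-risk data), using simple moment estimators for the GPD in place of the paper's bootstrap threshold selection and EM/MLE machinery:

```python
import numpy as np

rng = np.random.default_rng(6)
# Illustrative heavy-ish-tailed loss sample; real OR losses are far messier.
losses = rng.lognormal(mean=0.0, sigma=1.0, size=20000)

# Peaks-over-threshold: fix a high threshold u (here the empirical 95%
# quantile; the paper estimates it by bootstrap) and take exceedances.
u = np.quantile(losses, 0.95)
exc = losses[losses > u] - u

# GPD moment estimators (valid for shape xi < 1/2), standing in for MLE:
# mean = sigma/(1-xi), var = mean^2/(1-2*xi)  =>  solve for (xi, sigma).
m, s2 = exc.mean(), exc.var()
xi = 0.5 * (1.0 - m * m / s2)            # shape
sigma = m * (1.0 - xi)                   # scale

# POT tail quantile: VaR_p = u + (sigma/xi) * ((n/Nu * (1-p))**(-xi) - 1)
n, Nu, p = len(losses), len(exc), 0.99
var_p = u + sigma / xi * ((n / Nu * (1 - p)) ** (-xi) - 1.0)
```

For quantiles inside the observed range the POT estimate should track the empirical quantile; its value shows up at the extreme levels (99.9%) required for operational-risk capital, where empirical quantiles are unusable.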
By:  Kevin Denny (School of Economics, University College Dublin); Veruska Oppedisano (Department of Economics, University College London) 
Abstract:  This paper estimates the marginal effect of class size on the educational attainment of high school students. We control for the potential endogeneity of class size in two ways: using a conventional instrumental variable approach, based on changes in cohort size, and an alternative method where identification is based on restrictions on higher moments. The data are drawn from the Program for International Student Assessment (PISA) collected in 2003 for the United States and the United Kingdom. Using either method, or the two in conjunction, leads to the conclusion that increases in class size lead to improvements in students' mathematics scores. Only the results for the United Kingdom are statistically significant. 
Keywords:  class sizes, educational production 
Date:  2010–12–02 
URL:  http://d.repec.org/n?u=RePEc:ucd:wpaper:201051&r=ecm 
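The conventional instrumental-variable strategy in the abstract above is standard enough to sketch. A minimal simulated example (hypothetical coefficients, nothing to do with the PISA data): class size x is correlated with the unobserved error, a cohort-size-style instrument z shifts x but not the error, and the just-identified IV estimator removes the OLS bias.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# Simulated design: z is the instrument, u the structural error, and the
# regressor x is endogenous because it loads on u.
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + 0.6 * u + rng.standard_normal(n)
beta_true = -0.5
y = 1.0 + beta_true * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS is biased by cov(x, u) != 0; just-identified IV: beta = (Z'X)^{-1} Z'y
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)
```

The paper's second strategy (identification through restrictions on higher moments) replaces z with moment conditions built from the non-normality of the errors, which is not shown here.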