
New Economics Papers on Econometrics 
By:  Cizek, P. (Tilburg University, Center for Economic Research) 
Abstract:  A new class of robust regression estimators is proposed that forms an alternative to traditional robust one-step estimators and that achieves the √n rate of convergence irrespective of the initial estimator under a wide range of distributional assumptions. The proposed reweighted least trimmed squares (RLTS) estimator employs data-dependent weights determined from an initial robust fit. Just like many existing one- and two-step robust methods, the RLTS estimator preserves the robust properties of the initial robust estimate. However, contrary to existing methods, the first-order asymptotic behavior of RLTS is independent of the initial estimate even if errors exhibit heteroscedasticity, asymmetry, or serial correlation. Moreover, we derive the asymptotic distribution of RLTS and show that it is asymptotically efficient for normally distributed errors. A simulation study documents the benefits of these theoretical properties in finite samples. 
Keywords:  asymptotic efficiency; breakdown point; least trimmed squares 
JEL:  C13 C21 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:201091&r=ecm 
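The two-step logic behind reweighted trimmed-squares estimation can be sketched in a few lines: an initial least trimmed squares fit is obtained by concentration steps from random starts, and a single reweighting step then refits by OLS on the observations whose standardized residuals from the initial fit are small. This is only an illustrative toy (hard-rejection weights, a MAD scale, and ad hoc tuning constants of my choosing), not the RLTS estimator of the paper:

```python
import numpy as np

def lts_fit(X, y, h, n_starts=50, seed=0):
    """Crude least trimmed squares: random elemental starts + C-steps."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    best_obj, best_beta = np.inf, None
    for _ in range(n_starts):
        idx = rng.choice(n, p, replace=False)
        beta = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
        for _ in range(30):  # concentration steps
            keep = np.argsort((y - X @ beta) ** 2)[:h]
            new = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
            if np.allclose(new, beta):
                break
            beta = new
        obj = np.sort((y - X @ beta) ** 2)[:h].sum()
        if obj < best_obj:
            best_obj, best_beta = obj, beta
    return best_beta

def reweighted_ls(X, y, beta0, cutoff=2.5):
    """One reweighting step: OLS on points with small standardized residuals."""
    r = y - X @ beta0
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))  # MAD scale
    keep = np.abs(r) <= cutoff * scale
    return np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]

# toy data: y = 1 + 2x, with 20% bad leverage points
rng = np.random.default_rng(1)
n = 200
x = rng.normal(0, 1, n)
y = 1 + 2 * x + rng.normal(0, 0.5, n)
x[:40], y[:40] = 10 + rng.normal(0, 0.1, 40), rng.normal(0, 0.5, 40)
X = np.column_stack([np.ones(n), x])

beta = reweighted_ls(X, y, lts_fit(X, y, h=n // 2, seed=2))
```

Despite the 40 contaminated points, the two-step fit recovers the slope of the clean data, which plain OLS would not.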
By:  Almut E. D. Veraart (CREATES, School of Economics and Management, Aarhus University) 
Abstract:  This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas the jump-robust statistics turn out not to be as jump robust as the asymptotic theory would suggest in the presence of a highly active jump process. In an empirical study on high-frequency data from the Standard & Poor’s Depository Receipt (SPY), we investigate the impact of jumps on inference on volatility by realised variance in practice. 
Keywords:  Realised variance, realised multipower variation, truncated realised variance, inference, stochastic volatility, jumps, price 
JEL:  C10 C14 G10 
Date:  2010–09–18 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201065&r=ecm 
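The realised measures compared above can be written down in a few lines: realised variance sums squared returns, bipower variation multiplies adjacent absolute returns, and truncated realised variance discards returns above a threshold. A minimal sketch on a constant-volatility path with a single inserted jump (path length, volatility, jump size and truncation level are all illustrative choices, not values from the paper):

```python
import numpy as np

def realised_variance(r):
    """RV: sum of squared returns; estimates IV plus jump variation."""
    return np.sum(r ** 2)

def bipower_variation(r):
    """BV: jump-robust estimator of integrated variance."""
    mu1 = np.sqrt(2.0 / np.pi)
    return mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))

def truncated_rv(r, u):
    """Truncated RV: discard returns exceeding the threshold u."""
    return np.sum(r[np.abs(r) <= u] ** 2)

# constant-volatility diffusion plus one jump
rng = np.random.default_rng(0)
n, sigma = 2000, 0.20                    # 2000 returns, 20% "daily" vol
r = rng.normal(0, sigma / np.sqrt(n), n)
iv = sigma ** 2                          # true integrated variance = 0.04
r[1000] += 0.06                          # a single price jump

rv = realised_variance(r)
bv = bipower_variation(r)
trv = truncated_rv(r, 4 * sigma / np.sqrt(n))
```

Here RV picks up the jump, while BV and truncated RV stay close to the integrated variance.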
By:  Mogstad, Magne (Statistics Norway); Wiswall, Matthew (New York University) 
Abstract:  The linear IV estimator, in which the dependent variable is a linear function of a potentially endogenous regressor, is a major workhorse in empirical economics. When this regressor takes on multiple values, the linear specification restricts the marginal effects to be constant across all margins. This paper investigates the problems caused by the linearity restriction in IV estimation, and discusses possible remedies. We first examine the biases due to nonlinearity in the commonly used tests for nonzero treatment effects, selection bias, and instrument validity. Next, we consider three applications where theory suggests a nonlinear relationship, yet previous research has used linear IV estimators. We find that relaxing the linearity restriction in the IV estimation changes the qualitative conclusions about the relevant economic theory and the effectiveness of different policies. 
Keywords:  linear model, variable treatment intensity, nonlinearity, instrumental variables 
JEL:  C31 C14 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5216&r=ecm 
By:  Heckman, James J. (University of Chicago); Schmierer, Daniel (University of Chicago) 
Abstract:  This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971, 1974), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model. 
Keywords:  correlated random coefficient models, instrumental variables 
JEL:  C31 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp5205&r=ecm 
By:  Ole E. Barndorff-Nielsen (The T.N. Thiele Centre for Mathematics in Natural Science, Department of Mathematical Sciences, University of Aarhus, and CREATES); David G. Pollard (AHL Research, Man Research Laboratory); Neil Shephard (Oxford-Man Institute, University of Oxford) 
Abstract:  Motivated by features of low latency data in financial econometrics we study in detail integer-valued Lévy processes as the basis of price processes for high frequency econometrics. We propose using models built out of the difference of two subordinators. We apply these models in practice to low latency data for a variety of different types of futures contracts. 
Keywords:  futures markets, high frequency econometrics, low latency data, negative binomial, Skellam, tempered stable 
JEL:  C01 C14 C32 
Date:  2010–09–23 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201066&r=ecm 
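The simplest member of this model class, the Skellam process, is the difference of two independent Poisson processes, which yields integer tick moves by construction. A minimal sketch (intensities, tick size and path length are illustrative, not calibrated to the paper's futures data):

```python
import numpy as np

def skellam_increments(lam_up, lam_down, dt, n, seed=0):
    """Integer price moves as the difference of two Poisson subordinators."""
    rng = np.random.default_rng(seed)
    return rng.poisson(lam_up * dt, n) - rng.poisson(lam_down * dt, n)

# one trading "day": 10,000 steps, symmetric up/down intensities
dx = skellam_increments(lam_up=100.0, lam_down=100.0, dt=1e-4, n=10_000, seed=3)
price = 500 + np.cumsum(dx)   # price stays on the integer tick grid
```

Each increment has mean (λ⁺ − λ⁻)dt = 0 and variance (λ⁺ + λ⁻)dt = 0.02, and the simulated path never leaves the integer grid, matching the low-latency feature that motivates the model.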
By:  Tim Bollerslev (Department of Economics, Duke University, and NBER and CREATES); Viktor Todorov (Department of Finance, Kellogg School of Management, Northwestern University) 
Abstract:  We provide a new framework for estimating the systematic and idiosyncratic jump tail risks in financial asset prices. The theory underlying our estimates is based on in-fill asymptotic arguments for directly identifying the systematic and idiosyncratic jumps, together with conventional long-span asymptotics and Extreme Value Theory (EVT) approximations for consistently estimating the tail decay parameters and asymptotic tail dependencies. On implementing the new estimation procedures with a panel of high-frequency intraday prices for a large cross-section of individual stocks and the aggregate S&P 500 market portfolio, we find that the distributions of the systematic and idiosyncratic jumps are both generally heavy-tailed and not necessarily symmetric. Our estimates also point to the existence of strong dependencies between the market-wide jumps and the corresponding systematic jump tails for all of the stocks in the sample. We also show how the jump tail dependencies deduced from the high-frequency data together with the day-to-day temporal variation in the volatility are able to explain the “extreme” dependencies vis-à-vis the market portfolio. 
Keywords:  Extreme events, jumps, high-frequency data, jump tails, nonparametric estimation, stochastic volatility, systematic risks, tail dependence. 
JEL:  C13 C14 G10 G12 
Date:  2010–09–10 
URL:  http://d.repec.org/n?u=RePEc:aah:create:201064&r=ecm 
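A standard EVT ingredient for estimating tail decay parameters of the kind discussed above is the Hill estimator, built from the largest order statistics of a sample. The sketch below is a generic textbook version checked on exact Pareto draws, not the authors' procedure (which applies EVT to jumps identified from high-frequency data):

```python
import numpy as np

def hill_tail_index(x, k):
    """Hill estimator of the tail index alpha from the k largest |x|."""
    xs = np.sort(np.abs(x))[::-1]
    return 1.0 / np.mean(np.log(xs[:k] / xs[k]))

# sanity check on exact Pareto data with tail index 3
rng = np.random.default_rng(0)
x = 1.0 + rng.pareto(3.0, 100_000)   # classical Pareto(alpha=3), x >= 1
alpha_hat = hill_tail_index(x, k=2_000)
```

The choice of k trades bias against variance; here k = 2% of the sample works because the data are exactly Pareto in the tail.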
By:  Ma, Jun (Department of Economics, Finance and Legal Studies, University of Alabama); Nelson, Charles R. (Department of Economics, University of Washington) 
Keywords:  ARMA, unobserved components, state space, GARCH, zero-information-limit condition 
JEL:  C12 C22 C33 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:ihs:ihsesp:256&r=ecm 
By:  Borak, Szymon; Misiorek, Adam; Weron, Rafal 
Abstract:  Many of the concepts in theoretical and empirical finance developed over the past decades – including the classical portfolio theory, the Black-Scholes-Merton option pricing model or the RiskMetrics variance-covariance approach to VaR – rest upon the assumption that asset returns follow a normal distribution. But this assumption is not justified by empirical data! Rather, the empirical observations exhibit excess kurtosis, more colloquially known as fat tails or heavy tails. This chapter is intended as a guide to heavy-tailed models. We first describe the historically oldest heavy-tailed model – the stable laws. Next, we briefly characterize their recent lighter-tailed generalizations, the so-called truncated and tempered stable distributions. Then we study the class of generalized hyperbolic laws, which – like tempered stable distributions – can be classified somewhere between infinite variance stable laws and the Gaussian distribution. Finally, we provide numerical examples. 
Keywords:  Heavy-tailed distribution; Stable distribution; Tempered stable distribution; Generalized hyperbolic distribution; Asset return; Random number generation; Parameter estimation 
JEL:  C16 C13 G32 C15 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25494&r=ecm 
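For the random number generation the chapter covers, the classical route for stable laws is the Chambers-Mallows-Stuck transformation. A minimal sketch for the symmetric case (β = 0, unit scale); sample sizes and the α values are illustrative:

```python
import numpy as np

def symmetric_stable(alpha, size, seed=0):
    """Chambers-Mallows-Stuck simulation of symmetric alpha-stable draws."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos((1 - alpha) * u) / w) ** ((1 - alpha) / alpha))

gauss = symmetric_stable(2.0, 200_000)   # alpha = 2 reduces to N(0, 2)
heavy = symmetric_stable(1.5, 200_000)   # alpha = 1.5: infinite variance
```

For α = 2 the formula collapses to 2 sin(U)√W, a Gaussian with variance 2, which gives a convenient correctness check; for α < 2 the extreme draws are orders of magnitude larger, as the infinite-variance tails require.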
By:  Michael C. Münnix; Rudi Schäfer; Thomas Guhr 
Abstract:  We present two statistical causes for the distortion of correlations on high-frequency financial data. We demonstrate that the asynchrony of trades as well as the decimalization of stock prices has a large impact on the decline of the correlation coefficients towards smaller return intervals (Epps effect). These distortions depend on the properties of the time series and are of purely statistical origin. We are able to present parameter-free compensation methods, which we validate in a model setup. Furthermore, the compensation methods are applied to high-frequency empirical data from the NYSE's TAQ database. A major fraction of the Epps effect can be compensated. The contribution of the presented causes is particularly high for stocks that are traded at low prices. 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:arx:papers:1009.6157&r=ecm 
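The asynchrony mechanism behind the Epps effect is easy to reproduce in a toy model: two correlated latent price paths observed only at their own random trade times via previous-tick interpolation show a measured correlation that collapses at fine return intervals. All parameters below (trade probability, correlation, strides) are illustrative, and this reproduces only the effect, not the paper's compensation methods:

```python
import numpy as np

def last_tick(latent, traded):
    """Previous-tick interpolation: carry the last traded price forward."""
    idx = np.where(traded, np.arange(latent.size), 0)
    return latent[np.maximum.accumulate(idx)]

rng = np.random.default_rng(0)
n, rho = 200_000, 0.8
z = rng.normal(size=(2, n))
p1 = np.cumsum(z[0])
p2 = np.cumsum(rho * z[0] + np.sqrt(1 - rho ** 2) * z[1])

# asynchronous trading: each asset trades in ~10% of grid steps
t1 = rng.random(n) < 0.1; t1[0] = True
t2 = rng.random(n) < 0.1; t2[0] = True
q1, q2 = last_tick(p1, t1), last_tick(p2, t2)

def corr_at(stride):
    a, b = np.diff(q1[::stride]), np.diff(q2[::stride])
    return np.corrcoef(a, b)[0, 1]

fine, coarse = corr_at(1), corr_at(100)
```

At a sampling interval much longer than the typical inter-trade time, the measured correlation approaches the latent ρ = 0.8; at the finest interval it is an order of magnitude smaller, purely because most observed prices are stale.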
By:  Laura Trinchera (University of Macerata); Giorgio Russolillo (University of Naples) 
Abstract:  Nowadays there is a preeminent need to measure very complex phenomena like poverty, progress, well-being, etc. As is well known, the main feature of a composite indicator is that it summarizes complex and multidimensional issues. Thanks to its features, Structural Equation Modeling seems to be a useful tool for building systems of composite indicators. Among the several methods that have been developed to estimate Structural Equation Models we focus on the PLS Path Modeling approach (PLS-PM), because of the key role that estimation of the latent variables (i.e. the composite indicators) plays in the estimation process. In this work, first we present Structural Equation Models and PLS-PM. Then we provide a suite of statistical methodologies for handling categorical indicators in PLS-PM. In particular, in order to take categorical indicators into account, we propose to use a modified version of the PLS-PM algorithm recently presented by Russolillo [2009]. This new approach provides a quantification of the categorical indicators in such a way that the weight of each quantified indicator is coherent with the explicative ability of the corresponding categorical indicator. To conclude, an application involving data taken from a paper by Russet [1964] will be presented. 
Keywords:  PLS Path Modeling, Categorical Indicators, Structural Equation Modeling, Composite Indicators 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:mcr:wpaper:wpaper00030&r=ecm 
By:  Estrella Gómez Herrera (Department of Economic Theory and Economic History, University of Granada.) 
Abstract:  The gravity equation has been traditionally used to predict trade flows across countries. However, several problems related with its empirical application still remain unsolved. In this paper, I provide a survey of the recent literature concerning the specification and estimation methods of this equation. In addition, I compare the performance of two widely used estimators, panel OLS and Poisson Pseudo Maximum Likelihood (PPML), for a dataset covering 80% of world trade. 
Keywords:  International trade, Gravity model, Estimation methods 
Date:  2010–09–01 
URL:  http://d.repec.org/n?u=RePEc:gra:wpaper:10/05&r=ecm 
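The PPML estimator compared in the survey amounts to a Poisson regression used as a pseudo-likelihood: it is consistent whenever the conditional mean is log-linear, regardless of the error distribution, and it handles zero trade flows naturally. A minimal sketch via iteratively reweighted least squares on simulated gravity-style data (the regressor names, coefficients and safeguards are my illustrative choices):

```python
import numpy as np

def ppml(X, y, n_iter=50, tol=1e-10):
    """Poisson pseudo-maximum likelihood via IRLS."""
    beta = np.linalg.lstsq(X, np.log1p(y), rcond=None)[0]  # cheap start
    for _ in range(n_iter):
        eta = np.clip(X @ beta, -20, 20)   # guard against overflow
        mu = np.exp(eta)
        z = eta + (y - mu) / mu            # working response
        Xw = X * mu[:, None]               # weighted design, W = diag(mu)
        new = np.linalg.solve(X.T @ Xw, Xw.T @ z)
        if np.max(np.abs(new - beta)) < tol:
            return new
        beta = new
    return beta

# simulated "gravity-style" data with a log-linear conditional mean
rng = np.random.default_rng(0)
n = 2_000
X = np.column_stack([np.ones(n), rng.normal(0, 1, (n, 3))])
true_beta = np.array([1.0, 1.0, 1.0, -1.0])  # const, ln GDP_i, ln GDP_j, ln dist
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = ppml(X, y)
```

Unlike log-linear OLS, this fits the levels equation directly, so observations with y = 0 need no ad hoc transformation.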
By:  Angelo Mele (University of Illinois, Urbana-Champaign) 
Abstract:  In this paper, I develop and estimate a dynamic model of strategic network formation with heterogeneous agents. The main theoretical result is the existence of a unique stationary equilibrium, which characterizes the probability of observing a specific network in the data. As a consequence, the structural parameters can be estimated using only one observation of the network at a single point in time. The estimation is challenging, since the exact evaluation of the likelihood function is computationally infeasible even for very small networks. To overcome this problem, I propose a Bayesian Markov Chain Monte Carlo algorithm that avoids the direct evaluation of the likelihood. This method drastically reduces the computational burden of estimating the posterior distribution and allows inference in high dimensional models. I present an application to the study of segregation in school friendship networks, using data from Add Health, which contains the actual social network of each student in a representative sample of US schools. My results suggest that for White students, the value of a same-race friend decreases with the fraction of White students in the school. This relationship is of opposite sign for African American students. The model is used to study how different desegregation policies may affect the structure of the network in equilibrium. I find an inverted U-shaped relationship between the fraction of students belonging to a racial group and the expected equilibrium segregation levels. These results suggest that such policies should be carefully designed in order to be effective. 
Keywords:  Social Networks, Bayesian Estimation, Markov Chain Monte Carlo 
JEL:  D85 C15 C73 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:net:wpaper:1016&r=ecm 
By:  Thomas Siedler; Bettina Sonnenberg 
Abstract:  During the last two decades, laboratory experiments have come into increasing prominence and constitute a popular method of research to examine behavioral outcomes and social preferences. However, it has been debated whether results from these experiments can be extrapolated to the real world and whether, for example, sample selection into the experiment might constitute a major shortcoming of this methodology. This note discusses potential benefits of combining experimental methods and representative datasets as a means to overcome some of the limitations of lab experiments. We also outline how large representative surveys can serve as reference data for researchers collecting 
Keywords:  experiments, survey, representativity 
JEL:  C01 C52 C8 C9 D0 D6 D81 D84 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:rsw:rswwps:rswwps146&r=ecm 
By:  Candelon Bertrand; Dumitrescu ElenaIvona; Hurlin Christophe (METEOR) 
Abstract:  This paper introduces a new generation of Early Warning Systems (EWS) which takes into account dynamics within a system composed of binary variables. We elaborate on Kauppi and Saikkonen (2008), which allows us to consider several dynamic specifications and to use an exact maximum likelihood estimation method. Applied to predict currency crises for fifteen countries, this new EWS turns out to exhibit significantly better predictive abilities than existing models, both in- and out-of-sample. 
Keywords:  financial economics and financial management 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:dgr:umamet:2010047&r=ecm 
By:  Tsyplakov, Alexander 
Abstract:  This essay aims to provide a straightforward and sufficiently accessible demonstration of some known estimation procedures for the stochastic volatility model. It reviews the important related concepts, gives informal derivations of the methods, and can be useful as a cookbook for a novice. The exposition is confined to the classical (non-Bayesian) framework and discrete-time formulations. 
Keywords:  stochastic volatility 
JEL:  C13 C53 C15 C22 
Date:  2010–09–28 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:25511&r=ecm 
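The discrete-time stochastic volatility model in question is easy to simulate, and the standard linearization log r_t² = h_t + log ε_t² turns it into a linear state space with an AR(1) state. A toy sketch with a deliberately crude moment estimator of the persistence φ (the autocorrelation ratio acf(2)/acf(1) of log r²); the parameter values are illustrative and this is not one of the essay's full estimation procedures:

```python
import numpy as np

def simulate_sv(T, mu=-1.0, phi=0.95, sigma_eta=0.3, seed=0):
    """Basic SV model: r_t = exp(h_t/2) eps_t,
       h_t = mu + phi (h_{t-1} - mu) + sigma_eta eta_t."""
    rng = np.random.default_rng(seed)
    h = np.empty(T)
    h[0] = mu + sigma_eta / np.sqrt(1 - phi ** 2) * rng.normal()
    for t in range(1, T):
        h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
    return np.exp(h / 2) * rng.normal(size=T)

def acf(x, k):
    x = x - x.mean()
    return np.dot(x[:-k], x[k:]) / np.dot(x, x)

r = simulate_sv(T=200_000)
x = np.log(r ** 2 + 1e-12)       # linearized measurement equation
phi_hat = acf(x, 2) / acf(x, 1)  # acf ratio identifies the AR(1) persistence
```

The ratio works because acf_x(k) = φ^k times a common attenuation factor coming from the log ε² measurement noise; more efficient versions (QML via the Kalman filter, GMM on more moments) are what a real cookbook would use.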
By:  Lawrence E. Blume; William A. Brock; Steven N. Durlauf; Yannis M. Ioannides 
Abstract:  While interest in social determinants of individual behavior has led to a rich theoretical literature and many efforts to measure these influences, a mature "social econometrics" has yet to emerge. This chapter provides a critical overview of the identification of social interactions. We consider linear and discrete choice models as well as social network structures. We also consider experimental and quasi-experimental methods. In addition to describing the state of the identification literature, we indicate areas where additional research is especially needed and suggest some directions that appear to be especially promising. 
Keywords:  social interactions, social networks, identification 
JEL:  C21 C23 C31 C35 C72 Z13 
Date:  2010 
URL:  http://d.repec.org/n?u=RePEc:tuf:tuftec:0754&r=ecm 
By:  Petr Gapko (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic; Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic); Martin Šmíd (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic) 
Abstract:  One of the biggest risks arising from financial operations is the risk of counterparty default, commonly known as “credit risk”. Left unmanaged, credit risk would, with a high probability, result in the failure of a bank. In our paper, we will focus on credit risk quantification methodology. We will demonstrate that the current regulatory standards for credit risk management are at best imperfect, despite the fact that the regulatory framework for credit risk measurement is more developed than systems for measuring other risks, e.g. market risks or operational risk. Generalizing the well-known KMV model, which stands behind Basel II, we build a model of a loan portfolio involving dynamics of the common factor, influencing the borrowers’ assets, which we allow to be non-normal. We show how the parameters of our model may be estimated by means of past mortgage delinquency rates. We give statistical evidence that the non-normal model is much more suitable than the one assuming a normal distribution of the risk factors. We point out how the assumption that risk factors follow a normal distribution can be dangerous. Especially during volatile periods comparable to the current crisis, the normal distribution based methodology can underestimate the impact of changes in tail losses caused by underlying risk factors. 
Keywords:  Credit Risk, Mortgage, Delinquency Rate, Generalized Hyperbolic Distribution, Normal Distribution 
JEL:  G21 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:fau:wpaper:wp2010_23&r=ecm 
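The danger of the normality assumption is easy to illustrate in the one-factor Vasicek/KMV-type setup the paper generalizes: the portfolio default rate conditional on the common factor Z is Φ((Φ⁻¹(PD) − √ρ Z)/√(1−ρ)), and swapping a heavier-tailed factor for the Gaussian one inflates the extreme quantiles of the loss rate. A sketch with illustrative PD, correlation and a unit-variance Student-t(4) factor (not the paper's generalized hyperbolic specification or its estimates):

```python
import numpy as np
from statistics import NormalDist

N = NormalDist()

def default_rates(pd, rho, z):
    """Conditional default rate in a one-factor Vasicek/KMV-type model."""
    c = N.inv_cdf(pd)
    inner = (c - np.sqrt(rho) * z) / np.sqrt(1 - rho)
    return np.array([N.cdf(v) for v in inner])

rng = np.random.default_rng(0)
m, pd_, rho = 100_000, 0.02, 0.15
z_norm = rng.normal(size=m)
z_t = rng.standard_t(4, size=m) / np.sqrt(2.0)  # unit-variance Student-t(4)

dr_norm = default_rates(pd_, rho, z_norm)
dr_t = default_rates(pd_, rho, z_t)
q_norm = np.percentile(dr_norm, 99.9)
q_t = np.percentile(dr_t, 99.9)
```

Both factor choices reproduce the unconditional PD on average, but the heavy-tailed factor produces a substantially larger 99.9% default-rate quantile, which is exactly the tail-loss underestimation the abstract warns about.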
By:  Arthur Grimes (Motu Economic and Public Policy Research; and University of Waikato); Chris Young (Motu Economic and Public Policy Research) 
Abstract:  We propose a new method to estimate a repeat-sales house price index. Our unbalanced panel method employs an OLS panel regression to estimate the (log) house price as a function of time fixed effects and house-specific fixed effects. Comparisons are made across three repeat-sales methods using actual data, and using simulated data with both stationary and nonstationary relative price innovations. The unbalanced panel method comprehensively utilises all sale information on a house rather than splitting sales into distinct pairs. It is the simplest of the methods to implement, and possesses superior properties to the other two methods under a wide range of data generation processes. 
Keywords:  Repeat-Sales, House Price Index, Case-Shiller, Unbalanced Panel 
JEL:  C43 R32 
Date:  2010–09 
URL:  http://d.repec.org/n?u=RePEc:mtu:wpaper:10_10&r=ecm 
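The unbalanced panel idea, log price regressed on time fixed effects plus house fixed effects using every sale, can be sketched with a within (demeaning-by-house) transformation. All simulation settings below (number of houses, periods, sales per house, noise level) are illustrative, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(0)
H, T = 300, 8
index = np.concatenate([[0.0], np.cumsum(rng.normal(0.02, 0.03, T - 1))])
house_fe = rng.normal(12.0, 0.5, H)

# each house sells 2-4 times at random dates
house, period, logp = [], [], []
for i in range(H):
    for t in rng.choice(T, size=rng.integers(2, 5), replace=False):
        house.append(i); period.append(t)
        logp.append(house_fe[i] + index[t] + rng.normal(0, 0.05))
house, period, logp = map(np.array, (house, period, logp))

# time dummies for t = 1..T-1 (t = 0 is the base period)
D = (period[:, None] == np.arange(1, T)[None, :]).astype(float)

def demean(a):
    """Within transformation: demean by house (absorbs house fixed effects)."""
    out = a.astype(float).copy()
    for i in range(H):
        m = house == i
        out[m] -= out[m].mean(axis=0)
    return out

beta = np.linalg.lstsq(demean(D), demean(logp), rcond=None)[0]  # the index
```

Because every sale of a house enters the regression, a house sold three times contributes all three observations instead of being broken into sale pairs, which is the comprehensiveness the abstract describes.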