nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒10‒09
eighteen papers chosen by
Sune Karlsson
Orebro University

  1. Reweighted Least Trimmed Squares: An Alternative to One-Step Estimators By Cizek, P.
  2. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps? By Almut E. D. Veraart
  3. Linearity in Instrumental Variables Estimation: Problems and Solutions By Mogstad, Magne; Wiswall, Matthew
  4. Tests of Hypotheses Arising in the Correlated Random Coefficient Model By Heckman, James J.; Schmierer, Daniel
  5. Integer-valued Lévy processes and low latency financial econometrics By Ole E. Barndorff-Nielsen; David G. Pollard; Neil Shephard
  6. Jump Tails, Extreme Dependencies, and the Distribution of Stock Returns By Tim Bollerslev; Viktor Todorov
  7. Valid Inference for a Class of Models Where Standard Inference Performs Poorly: Including Nonlinear Regression, ARMA, GARCH, and Unobserved Components By Ma, Jun; Nelson, Charles R.
  8. Models for Heavy-tailed Asset Returns By Borak, Szymon; Misiorek, Adam; Weron, Rafal
  9. Statistical causes for the Epps effect in microstructure noise By Michael C. Münnix; Rudi Schäfer; Thomas Guhr
  10. On the use of Structural Equation Models and PLS Path Modeling to build composite indicators By Laura Trinchera; Giorgio Russolillo
  11. Are estimation techniques neutral to estimate gravity equations? By Estrella Gómez Herrera
  12. A Structural Model of Segregation in Social Networks By Angelo Mele
  13. Experiments, Surveys and the Use of Representative Samples as Reference Data By Thomas Siedler; Bettina Sonnenberg
  14. Currency Crises Early Warning Systems: why they should be Dynamic By Candelon Bertrand; Dumitrescu Elena-Ivona; Hurlin Christophe
  15. Revealing the arcane: an introduction to the art of stochastic volatility models By Tsyplakov, Alexander
  16. Identification of Social Interactions By Lawrence E. Blume; William A. Brock; Steven N. Durlauf; Yannis M. Ioannides
  17. Modeling a Distribution of Mortgage Credit Losses By Petr Gapko; Martin Šmíd
  18. A Simple Repeat Sales House Price Index: Comparative Properties Under Alternative Data Generation Processes. By Arthur Grimes; Chris Young

  1. By: Cizek, P. (Tilburg University, Center for Economic Research)
    Abstract: A new class of robust regression estimators is proposed that forms an alternative to traditional robust one-step estimators and that achieves the √n rate of convergence irrespective of the initial estimator under a wide range of distributional assumptions. The proposed reweighted least trimmed squares (RLTS) estimator employs data-dependent weights determined from an initial robust fit. Just like many existing one- and two-step robust methods, the RLTS estimator preserves robust properties of the initial robust estimate. However, contrary to existing methods, the first-order asymptotic behavior of RLTS is independent of the initial estimate even if errors exhibit heteroscedasticity, asymmetry, or serial correlation. Moreover, we derive the asymptotic distribution of RLTS and show that it is asymptotically efficient for normally distributed errors. A simulation study documents the benefits of these theoretical properties in finite samples.
    Keywords: asymptotic efficiency;breakdown point;least trimmed squares
    JEL: C13 C21
    Date: 2010
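The reweighting idea behind RLTS can be illustrated in a stripped-down form. The sketch below is not the authors' estimator: it substitutes a Theil–Sen fit for the initial robust (least trimmed squares) estimate, handles only one regressor, and applies a single hard-rejection reweighting step with the MAD as the residual scale.

```python
import statistics

def theil_sen(x, y):
    """Initial robust fit: median of pairwise slopes (Theil-Sen),
    standing in here for a least trimmed squares estimate."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    b = statistics.median(slopes)
    a = statistics.median(yi - b * xi for xi, yi in zip(x, y))
    return a, b

def reweighted_fit(x, y, cutoff=2.5):
    """One reweighting step: keep points whose initial residual is
    small relative to a robust scale, then refit by least squares."""
    a0, b0 = theil_sen(x, y)
    resid = [yi - (a0 + b0 * xi) for xi, yi in zip(x, y)]
    # Robust residual scale via the median absolute deviation (MAD).
    scale = statistics.median(abs(r) for r in resid) / 0.6745 or 1e-12
    keep = [(xi, yi) for xi, yi, r in zip(x, y, resid)
            if abs(r) <= cutoff * scale]
    xs, ys = zip(*keep)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
         / sum((xi - mx) ** 2 for xi in xs))
    return my - b * mx, b
```

With clean points on a line plus one gross outlier, the reweighted fit recovers the line while the outlier receives zero weight.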
  2. By: Almut E. D. Veraart (CREATES, School of Economics and Management Aarhus University)
    Abstract: This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we examine the impact of the jump activity, the size of the jumps in the price, and the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas the jump-robust statistics turn out not to be as jump robust as the asymptotic theory would suggest in the presence of a highly active jump process. In an empirical study on high frequency data from the Standard & Poor’s Depository Receipt (SPY), we investigate the impact of jumps on inference on volatility by realised variance in practice.
    Keywords: Realised variance, realised multipower variation, truncated realised variance, inference, stochastic volatility, jumps, price
    JEL: C10 C14 G10
    Date: 2010–09–18
  3. By: Mogstad, Magne (Statistics Norway); Wiswall, Matthew (New York University)
    Abstract: The linear IV estimator, in which the dependent variable is a linear function of a potentially endogenous regressor, is a major workhorse in empirical economics. When this regressor takes on multiple values, the linear specification restricts the marginal effects to be constant across all margins. This paper investigates the problems caused by the linearity restriction in IV estimation, and discusses possible remedies. We first examine the biases due to nonlinearity in the commonly used tests for non-zero treatment effects, selection bias, and instrument validity. Next, we consider three applications where theory suggests a nonlinear relationship, yet previous research has used linear IV estimators. We find that relaxing the linearity restriction in the IV estimation changes the qualitative conclusions about the relevant economic theory and the effectiveness of different policies.
    Keywords: linear model, variable treatment intensity, nonlinearity, instrumental variables
    JEL: C31 C14
    Date: 2010–09
  4. By: Heckman, James J. (University of Chicago); Schmierer, Daniel (University of Chicago)
    Abstract: This paper examines the correlated random coefficient model. It extends the analysis of Swamy (1971, 1974), who pioneered the uncorrelated random coefficient model in economics. We develop the properties of the correlated random coefficient model and derive a new representation of the variance of the instrumental variable estimator for that model. We develop tests of the validity of the correlated random coefficient model against the null hypothesis of the uncorrelated random coefficient model.
    Keywords: correlated random coefficient models, instrumental variables
    JEL: C31
    Date: 2010–09
  5. By: Ole E. Barndorff-Nielsen (The T.N. Thiele Centre for Mathematics in Natural Science, Department of Mathematical Sciences, University of Aarhus, and CREATES); David G. Pollard (AHL Research, Man Research Laboratory); Neil Shephard (Oxford-Man Institute, University of Oxford)
    Abstract: Motivated by features of low latency data in financial econometrics we study in detail integer-valued Lévy processes as the basis of price processes for high frequency econometrics. We propose using models built out of the difference of two subordinators. We apply these models in practice to low latency data for a variety of different types of futures contracts.
    Keywords: futures markets, high frequency econometrics, low latency data, negative binomial, Skellam, tempered stable
    JEL: C01 C14 C32
    Date: 2010–09–23
  6. By: Tim Bollerslev (Department of Economics, Duke University, and NBER and CREATES); Viktor Todorov (Department of Finance, Kellogg School of Management, Northwestern University)
    Abstract: We provide a new framework for estimating the systematic and idiosyncratic jump tail risks in financial asset prices. The theory underlying our estimates is based on in-fill asymptotic arguments for directly identifying the systematic and idiosyncratic jumps, together with conventional long-span asymptotics and Extreme Value Theory (EVT) approximations for consistently estimating the tail decay parameters and asymptotic tail dependencies. On implementing the new estimation procedures with a panel of high-frequency intraday prices for a large cross-section of individual stocks and the aggregate S&P 500 market portfolio, we find that the distributions of the systematic and idiosyncratic jumps are both generally heavy-tailed and not necessarily symmetric. Our estimates also point to the existence of strong dependencies between the market-wide jumps and the corresponding systematic jump tails for all of the stocks in the sample. We also show how the jump tail dependencies deduced from the high-frequency data together with the day-to-day temporal variation in the volatility are able to explain the “extreme” dependencies vis-a-vis the market portfolio.
    Keywords: Extreme events, jumps, high-frequency data, jump tails, non-parametric estimation, stochastic volatility, systematic risks, tail dependence.
    JEL: C13 C14 G10 G12
    Date: 2010–09–10
  7. By: Ma, Jun (Department of Economics, Finance and Legal Studies, University of Alabama); Nelson, Charles R. (Department of Economics, University of Washington)
    Keywords: ARMA, unobserved components, state space, GARCH, zero-information-limit-condition
    JEL: C12 C22 C33
    Date: 2010–09
  8. By: Borak, Szymon; Misiorek, Adam; Weron, Rafal
    Abstract: Many of the concepts in theoretical and empirical finance developed over the past decades – including the classical portfolio theory, the Black-Scholes-Merton option pricing model or the RiskMetrics variance-covariance approach to VaR – rest upon the assumption that asset returns follow a normal distribution. But this assumption is not justified by empirical data! Rather, the empirical observations exhibit excess kurtosis, more colloquially known as fat tails or heavy tails. This chapter is intended as a guide to heavy-tailed models. We first describe the historically oldest heavy-tailed model – the stable laws. Next, we briefly characterize their recent lighter-tailed generalizations, the so-called truncated and tempered stable distributions. Then we study the class of generalized hyperbolic laws, which – like tempered stable distributions – can be classified somewhere between infinite variance stable laws and the Gaussian distribution. Finally, we provide numerical examples.
    Keywords: Heavy-tailed distribution; Stable distribution; Tempered stable distribution; Generalized hyperbolic distribution; Asset return; Random number generation; Parameter estimation
    JEL: C16 C13 G32 C15
    Date: 2010–09
  9. By: Michael C. Münnix; Rudi Schäfer; Thomas Guhr
    Abstract: We present two statistical causes for the distortion of correlations on high-frequency financial data. We demonstrate that both the asynchrony of trades and the decimalization of stock prices have a large impact on the decline of the correlation coefficients towards smaller return intervals (Epps effect). These distortions depend on the properties of the time series and are of purely statistical origin. We are able to present parameter-free compensation methods, which we validate in a model setup. Furthermore, the compensation methods are applied to high-frequency empirical data from the NYSE's TAQ database. A major fraction of the Epps effect can be compensated. The contribution of the presented causes is particularly high for stocks that are traded at low prices.
    Date: 2010–09
  10. By: Laura Trinchera (University of Macerata); Giorgio Russolillo (University of Naples)
    Abstract: Nowadays there is a pre-eminent need to measure very complex phenomena like poverty, progress, well-being, etc. As is well known, the main feature of a composite indicator is that it summarizes complex and multidimensional issues. Thanks to its features, Structural Equation Modeling seems to be a useful tool for building systems of composite indicators. Among the several methods that have been developed to estimate Structural Equation Models we focus on the PLS Path Modeling approach (PLS-PM), because of the key role that estimation of the latent variables (i.e. the composite indicators) plays in the estimation process. In this work, first we present Structural Equation Models and PLS-PM. Then we provide a suite of statistical methodologies for handling categorical indicators in PLS-PM. In particular, in order to take categorical indicators into account, we propose to use a modified version of the PLS-PM algorithm recently presented by Russolillo [2009]. This new approach provides a quantification of the categorical indicators in such a way that the weight of each quantified indicator is coherent with the explicative ability of the corresponding categorical indicator. To conclude, an application involving data taken from a paper by Russet [1964] will be presented.
    Keywords: PLS Path Modeling,Categorical Indicators,Structural Equation Modeling,Composite Indicators
    Date: 2010–09
  11. By: Estrella Gómez Herrera (Department of Economic Theory and Economic History, University of Granada.)
    Abstract: The gravity equation has been traditionally used to predict trade flows across countries. However, several problems related with its empirical application still remain unsolved. In this paper, I provide a survey of the recent literature concerning the specification and estimation methods of this equation. In addition, I compare the performance of two widely extended estimators, panel OLS and Poisson Pseudo Maximum Likelihood (PPML), for a dataset covering 80% of world trade.
    Keywords: International trade, Gravity model, Estimation methods
    Date: 2010–09–01
  12. By: Angelo Mele (University of Illinois, Urbana-Champaign)
    Abstract: In this paper, I develop and estimate a dynamic model of strategic network formation with heterogeneous agents. The main theoretical result is the existence of a unique stationary equilibrium, which characterizes the probability of observing a specific network in the data. As a consequence, the structural parameters can be estimated using only one observation of the network at a single point in time. The estimation is challenging, since the exact evaluation of the likelihood function is computationally infeasible even for very small networks. To overcome this problem, I propose a Bayesian Markov Chain Monte Carlo algorithm that avoids the direct evaluation of the likelihood. This method drastically reduces the computational burden of estimating the posterior distribution and allows inference in high dimensional models. I present an application to the study of segregation in school friendship networks, using data from Add Health, which contains the actual social network of each student in a representative sample of US schools. My results suggest that for White students, the value of a same-race friend decreases with the fraction of whites in the school. This relationship is of opposite sign for African American students. The model is used to study how different desegregation policies may affect the structure of the network in equilibrium. I find an inverted U-shaped relationship between the fraction of students belonging to a racial group and the expected equilibrium segregation levels. These results suggest that such policies should be carefully designed in order to be effective.
    Keywords: Social Networks, Bayesian Estimation, Markov Chain Monte Carlo
    JEL: D85 C15 C73
    Date: 2010–09
  13. By: Thomas Siedler; Bettina Sonnenberg
    Abstract: During the last two decades, laboratory experiments have come into increasing prominence and constitute a popular method of research to examine behavioral outcomes and social preferences. However, it has been debated whether results from these experiments can be extrapolated to the real world and whether, for example, sample selection into the experiment might constitute a major shortcoming of this methodology. This note discusses potential benefits of combining experimental methods and representative datasets as a means to overcome some of the limitations of lab experiments. We also outline how large representative surveys can serve as reference data for researchers collecting experimental data.
    Keywords: experiments, survey, representativity
    JEL: C01 C52 C8 C9 D0 D6 D81 D84
    Date: 2010
  14. By: Candelon Bertrand; Dumitrescu Elena-Ivona; Hurlin Christophe (METEOR)
    Abstract: This paper introduces a new generation of Early Warning Systems (EWS) which takes into account dynamics within a system composed of binary variables. We elaborate on Kauppi and Saikkonen (2008), which allows us to consider several dynamic specifications and to use an exact maximum likelihood estimation method. Applied to predict currency crises for fifteen countries, this new EWS turns out to exhibit significantly better predictive abilities than existing models, both in-sample and out-of-sample.
    Keywords: financial economics and financial management
    Date: 2010
  15. By: Tsyplakov, Alexander
    Abstract: This essay aims to provide a straightforward and accessible demonstration of some known procedures for stochastic volatility models. It reviews the important related concepts, gives informal derivations of the methods, and can be useful as a cookbook for a novice. The exposition is confined to the classical (non-Bayesian) framework and discrete-time formulations.
    Keywords: stochastic volatility
    JEL: C13 C53 C15 C22
    Date: 2010–09–28
  16. By: Lawrence E. Blume; William A. Brock; Steven N. Durlauf; Yannis M. Ioannides
    Abstract: While interest in social determinants of individual behavior has led to a rich theoretical literature and many efforts to measure these influences, a mature "social econometrics" has yet to emerge. This chapter provides a critical overview of the identification of social interactions. We consider linear and discrete choice models as well as social network structures. We also consider experimental and quasi-experimental methods. In addition to describing the state of the identification literature, we indicate areas where additional research is especially needed and suggest some directions that appear to be especially promising.
    Keywords: social interactions, social networks, identification
    JEL: C21 C23 C31 C35 C72 Z13
    Date: 2010
  17. By: Petr Gapko (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic; Institute of Economic Studies, Faculty of Social Sciences, Charles University, Prague, Czech Republic); Martin Šmíd (Institute of Information Theory and Automation, Academy of Sciences of the Czech Republic)
    Abstract: One of the biggest risks arising from financial operations is the risk of counterparty default, commonly known as “credit risk”. Left unmanaged, credit risk would, with high probability, result in the collapse of a bank. In our paper, we focus on credit risk quantification methodology. We demonstrate that the current regulatory standards for credit risk management are, at the very least, imperfect, despite the fact that the regulatory framework for credit risk measurement is more developed than the systems for measuring other risks, e.g. market or operational risk. Generalizing the well-known KMV model underlying Basel II, we build a model of a loan portfolio involving dynamics of the common factor influencing the borrowers’ assets, which we allow to be non-normal. We show how the parameters of our model may be estimated from past mortgage delinquency rates. We give statistical evidence that the non-normal model is much more suitable than the one assuming a normal distribution of the risk factors. We point out how the assumption that risk factors follow a normal distribution can be dangerous: especially during volatile periods comparable to the current crisis, a methodology based on the normal distribution can underestimate the impact of changes in tail losses caused by the underlying risk factors.
    Keywords: Credit Risk, Mortgage, Delinquency Rate, Generalized Hyperbolic Distribution, Normal Distribution
    JEL: G21
    Date: 2010–09
  18. By: Arthur Grimes (Motu Economic and Public Policy Research; and University of Waikato); Chris Young (Motu Economic and Public Policy Research)
    Abstract: We propose a new method to estimate a repeat-sales house price index. Our unbalanced panel method employs an OLS panel regression to estimate the (log) house price as a function of time fixed effects and house-specific fixed effects. Comparisons are made across three repeat-sales methods using actual data, and using simulated data with both stationary and non-stationary relative price innovations. The unbalanced panel method comprehensively utilises all sale information on a house rather than splitting sales into distinct pairs. It is the simplest of the methods to implement, and possesses superior properties to the other two methods under a wide range of data generation processes.
    Keywords: Repeat-Sales, House Price Index, Case Shiller, Unbalanced Panel
    JEL: C43 R32
    Date: 2010–09

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at <>. For comments, please write to the director of NEP, Marco Novarese, at <>. Put “NEP” in the subject; otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.