nep-ecm New Economics Papers
on Econometrics
Issue of 2010‒12‒04
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. Conditional stochastic dominance tests in dynamic settings By Jesús Gonzalo; José Olmo
  2. Chow-Lin Methods in Spatial Mixed Models By Wolfgang Polasek; Richard Sellner; Carlos Llano
  3. Maintaining symmetry of simulated likelihood functions By Laura Mørch Andersen
  4. Forecasting with Medium and Large Bayesian VARs By Gary Koop
  5. Panel Estimation for Worriers By Anindya Banerjee; Markus Eberhardt; J. James Reade
  6. Weighted trimmed likelihood estimator for GARCH models By Chalabi, Yohan; Wuertz, Diethelm
  7. Estimating the effect of a variable in a high-dimensional regression model By Peter Sandholt Jensen; Allan H. Würtz
  8. Mixtures of g-priors for Bayesian model averaging with economic applications By Ley, Eduardo; Steel, Mark F. J.
  9. Conditional beta pricing models: A nonparametric approach. By Eva Ferreira; Javier Gil-Bazo; Susan Orbe
  10. A Spectral Estimation of Tempered Stable Stochastic Volatility Models and Option Pricing By Junye Li; Carlo Favero; Fulvio Ortu
  11. How to evaluate an Early Warning System? Towards a Unified Statistical Framework for Assessing Financial Crises Forecasting Methods By Candelon Bertrand; Dumitrescu Elena-Ivona; Hurlin Christophe
  12. Selecting random parameters in discrete choice experiment for environmental valuation: A simulation experiment. By Petr Mariel; Amaya De Ayala; David Hoyos; Sabah Abdullah
  13. Forecasting with mixed-frequency data By Elena Andreou; Eric Ghysels; Andros Kourtellos
  14. Inferring Fundamental Value and Crash Nonlinearity from Bubble Calibration By Wanfeng Yan; Ryan Woodard; Didier Sornette
  15. Wavelet-Based Prediction for Governance, Diversification and Value Creation Variables By Ines Kahloul; Anouar Ben Mabrouk; Slah-Eddine Hallara
  16. Measuring Spatial Dynamics in Metropolitan Areas By S. J. Rey; L. Anselin; D. C. Folch; M. L. Sastre-Gutierrez
  17. Measuring industrial agglomeration with inhomogeneous K-function: the case of ICT firms in Milan (Italy) By Giuseppe Espa; Giuseppe Arbia; Diego Giuliani

  1. By: Jesús Gonzalo; José Olmo
    Abstract: This paper proposes nonparametric consistent tests of conditional stochastic dominance of arbitrary order in a dynamic setting. The novelty of these tests resides in the nonparametric manner of incorporating the information set into the test. The test allows for general forms of unknown serial and mutual dependence between random variables, and has an asymptotic distribution under the null hypothesis that can be easily approximated by a p-value transformation method. This method has good finite-sample performance. These tests are applied to determine investment efficiency between US industry portfolios conditional on the performance of the market portfolio. Our analysis suggests that Utilities is the best-performing sector in normal as well as distress episodes of the market.
    Keywords: Empirical processes, Hypothesis testing, Lower partial moments, Martingale difference sequence, P-value transformation, Stochastic dominance
    JEL: C1 C2 G1
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1029&r=ecm
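    As a reminder of the lower-partial-moment characterization behind the keywords (generic notation, not necessarily the authors'): for order s >= 2, a return X conditionally dominates Y when, for every threshold lambda,
      \mathbb{E}\big[(\lambda - X)_{+}^{\,s-1}\mid\mathcal{F}_{t-1}\big] \;\le\; \mathbb{E}\big[(\lambda - Y)_{+}^{\,s-1}\mid\mathcal{F}_{t-1}\big], \qquad (x)_{+}=\max(x,0),
    with strict inequality for some lambda; the first-order case compares the conditional distribution functions directly. The paper's contribution is to test such inequalities while handling the conditioning on the information set nonparametrically.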
  2. By: Wolfgang Polasek (Institute for Advanced Studies, Vienna, Austria; University of Porto, Porto, Portugal; The Rimini Centre for Economic Analysis (RCEA)); Richard Sellner (Institute for Advanced Studies, Vienna, Austria); Carlos Llano (Universidad Autónoma de Madrid, Facultad de Ciencias Económicas y Empresariales, Departamento de Análisis Económico, Madrid, Spain)
    Abstract: Missing data in dynamic panel models occur quite often, since detailed recording of the dependent variable is often not possible at all observation points in time and space. In this paper we develop classical and Bayesian methods to complete missing data in panel models. The Chow-Lin (1971) method is a classical method for completing dependent disaggregated data and has been successfully applied in economics to disaggregate aggregated time series. We extend the space-time panel model in a new way to include cross-sectional and spatially correlated data. The missing disaggregated data are obtained either by point prediction or by a numerical (posterior) predictive density. Furthermore, we point out that the approach can be extended to more complex models, like flow data or systems of panel data. The panel Chow-Lin approach is demonstrated with examples involving regional growth for Spanish regions.
    Keywords: Space-time interpolation, Spatial panel econometrics, MCMC, Spatial Chow-Lin, missing regional data, Spanish provinces, NUTS: nomenclature of territorial units for statistics
    JEL: C11 C15 C52 E17 R12
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:47_10&r=ecm
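    For orientation, a minimal numerical sketch of the classical Chow-Lin GLS step that the paper generalizes to spatial panels. The AR(1) residual covariance and all names below are common textbook choices, not the authors' specification.

      import numpy as np

      def chow_lin(y_low, X_high, C, rho=0.9):
          """Distribute an aggregate series y_low across disaggregate units using
          indicator variables X_high and an aggregation matrix C (rows sum units)."""
          n = X_high.shape[0]
          # AR(1) covariance for the disaggregate disturbances (a standard assumption)
          V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
          X_low = C @ X_high
          V_low = C @ V @ C.T
          Vl_inv = np.linalg.inv(V_low)
          # GLS estimate from the aggregated regression
          beta = np.linalg.solve(X_low.T @ Vl_inv @ X_low, X_low.T @ Vl_inv @ y_low)
          # Point prediction: fitted values plus distributed aggregate residuals
          resid_low = y_low - X_low @ beta
          return X_high @ beta + V @ C.T @ Vl_inv @ resid_low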
  3. By: Laura Mørch Andersen (Institute of Food and Resource Economics, University of Copenhagen)
    Abstract: This paper suggests solutions to two different types of simulation errors related to Quasi-Monte Carlo integration. Likelihood functions which depend on standard deviations of mixed parameters are symmetric in nature. This paper shows that antithetic draws preserve this symmetry and thereby improve precision substantially. Another source of error is that models testing away mixing dimensions must replicate the relevant dimensions of the quasi-random draws in the simulation of the restricted likelihood. These simulation errors are ignored in the standard estimation procedures used today, and this paper shows that the result may be substantial estimation and inference errors within the span of draws typically applied.
    Keywords: Quasi-Monte Carlo integration; Antithetic draws; Likelihood Ratio tests; simulated likelihood; panel mixed multinomial logit; Halton draws
    JEL: C15 C25
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:foi:wpaper:2010_16&r=ecm
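    A minimal illustration of the antithetic-draw idea: pairing each quasi-random uniform u with 1-u makes the implied standard-normal draws exactly symmetric around zero, which is the symmetry the abstract refers to. Purely illustrative, not the author's implementation.

      import numpy as np
      from scipy.stats import norm, qmc

      halton = qmc.Halton(d=1, scramble=False)
      u = halton.random(n=501)[1:, 0]                  # drop the first point (0 in the unscrambled sequence)
      draws = norm.ppf(np.concatenate([u, 1.0 - u]))   # each draw paired with its antithetic counterpart

      print(draws.mean())                              # zero up to floating-point error
      print(np.allclose(np.sort(draws), -np.sort(draws)[::-1]))  # sorted draws mirror themselves around 0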
  4. By: Gary Koop (University of Strathclyde; The Rimini Centre for Economic Analysis (RCEA))
    Abstract: This paper is motivated by the recent interest in the use of Bayesian VARs for forecasting, even in cases where the number of dependent variables is large. In such cases, factor methods have traditionally been used, but recent work using a particular prior suggests that Bayesian VAR methods can forecast better. In this paper, we consider a range of alternative priors which have been used with small VARs, discuss the issues which arise when they are used with medium and large VARs, and examine their forecast performance using a US macroeconomic data set containing 168 variables. We find that Bayesian VARs do tend to forecast better than factor methods and provide an extensive comparison of the strengths and weaknesses of various approaches. Our empirical results show the importance of using forecast metrics based on the entire predictive density, instead of using only point forecasts.
    Keywords: Bayesian, Minnesota prior, stochastic search variable selection, predictive likelihood
    Date: 2010–01
    URL: http://d.repec.org/n?u=RePEc:rim:rimwps:43_10&r=ecm
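    For reference, one common parameterisation of the Minnesota prior moments mentioned in the keywords; the hyperparameter names and default values below are illustrative, not those used in the paper.

      import numpy as np

      def minnesota_prior(sigma, p, lam1=0.2, lam2=0.5, decay=1.0, random_walk=True):
          """Prior mean and standard deviation for VAR(p) slope coefficients.
          sigma: residual std. devs. from univariate AR fits, one per variable."""
          n = len(sigma)
          mean = np.zeros((n, n * p))
          sd = np.zeros((n, n * p))
          for i in range(n):                       # equation
              for lag in range(1, p + 1):
                  for j in range(n):               # variable whose lag enters
                      col = (lag - 1) * n + j
                      if i == j:
                          if lag == 1 and random_walk:
                              mean[i, col] = 1.0   # own first lag shrunk towards a random walk
                          sd[i, col] = lam1 / lag ** decay
                      else:                        # cross lags shrunk harder, scale-adjusted
                          sd[i, col] = lam1 * lam2 * sigma[i] / (lag ** decay * sigma[j])
          return mean, sd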
  5. By: Anindya Banerjee; Markus Eberhardt; J. James Reade
    Abstract: The recent blossoming of panel econometrics in general and panel time-series methods in particular has enabled many more research questions to be investigated than before. However, this development has not assuaged serious concerns over the lack of diagnostic testing procedures in panel econometrics, in particular vis-à-vis the prominence of such practices in the time-series domain: the recent introduction of residual cross-section independence tests aside, within mainstream panel empirics the combination of ‘model’, ‘specification’ and ‘testing’ typically refers to the distinction between fixed and random effects, as opposed to a rigorous investigation of residual properties. In this paper we investigate these issues in the context of non-stationary panels with multifactor error structure, employing Monte Carlo simulations to investigate the distributions and rejection frequencies of standard time-series diagnostic procedures, including tests for residual autocorrelation, ARCH, normality, heteroskedasticity and functional form.
    Keywords: Panel time-series, residual diagnostic, common factor model
    JEL: C12 C22 C23
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:oxf:wpaper:514&r=ecm
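    An illustrative Monte Carlo in the spirit of (but not reproducing) the exercise above: simulate a panel whose errors contain an unmodelled common factor, estimate the slope by pooled OLS, and record how often a standard residual diagnostic (here Jarque-Bera normality) rejects at the nominal 5% level. The dimensions, data-generating process and choice of diagnostic are assumptions for illustration; the paper works with non-stationary panels and a wider battery of tests.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      N, T, reps, rejections = 20, 50, 500, 0
      for _ in range(reps):
          f = rng.standard_normal(T)                    # common factor
          lam = rng.standard_normal(N)                  # heterogeneous loadings
          x = rng.standard_normal((N, T))
          y = 1.0 * x + lam[:, None] * f + rng.standard_normal((N, T))
          beta = (x * y).sum() / (x * x).sum()          # pooled OLS slope (no intercept in the DGP)
          resid = (y - beta * x).ravel()
          jb_stat, jb_p = stats.jarque_bera(resid)
          if jb_p < 0.05:
              rejections += 1
      print("JB rejection frequency:", rejections / reps)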
  6. By: Chalabi, Yohan / Y.; Wuertz, Diethelm
    Abstract: Generalized autoregressive conditional heteroskedasticity (GARCH) models are widely used to reproduce stylized facts of financial time series and today play an essential role in risk management and volatility forecasting. But despite extensive research, problems are still encountered during parameter estimation in the presence of outliers. Here we show how this limitation can be overcome by applying the robust weighted trimmed likelihood estimator (WTLE) to the standard GARCH model. We suggest a fast implementation and explain how the additional robust parameter can be automatically estimated. We compare our approach with other recently introduced robust GARCH estimators and show through the results of an extensive simulation study that the proposed estimator provides robust and reliable estimates at a small computational cost. Moreover, the proposed fully automatic method for selecting the trimming parameter obviates the tedious fine tuning process required by other models to obtain a “robust” parameter, which may be appreciated by practitioners.
    Keywords: GARCH Models; Robust Estimators; Outliers; Weighted Trimmed Likelihood Estimator (WTLE); Quasi Maximum Likelihood Estimator (QMLE)
    JEL: C40
    Date: 2010–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:26536&r=ecm
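    As a rough illustration of the trimming device only (not the authors' weighted estimator, whose weights and automatic trimming-parameter selection are the paper's contribution), a Gaussian GARCH(1,1) trimmed likelihood might look like this:

      import numpy as np
      from scipy.optimize import minimize

      def trimmed_garch_nll(params, r, trim=0.05):
          """Negative log-likelihood of a Gaussian GARCH(1,1) with the largest
          per-observation contributions (candidate outliers) dropped before summing."""
          omega, alpha, beta = params
          if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
              return np.inf
          h = np.empty_like(r)
          h[0] = r.var()
          for t in range(1, len(r)):
              h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
          nll_t = 0.5 * (np.log(2 * np.pi) + np.log(h) + r ** 2 / h)
          keep = int(np.floor((1 - trim) * len(r)))
          return np.sort(nll_t)[:keep].sum()

      # usage (illustrative): res = minimize(trimmed_garch_nll, x0=[0.05, 0.05, 0.9],
      #                                      args=(returns,), method="Nelder-Mead")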
  7. By: Peter Sandholt Jensen (Department of Business and Economics, University of Southern Denmark); Allan H. Würtz (School of Economics and Management, University of Aarhus and CREATES)
    Abstract: A problem encountered in some empirical research, e.g. growth empirics, is that the potential number of explanatory variables is large compared to the number of observations. This makes it infeasible to condition on all variables in order to determine whether a particular variable has an effect. We assume that the effect is identified in a high-dimensional linear model specified by unconditional moment restrictions. We consider properties of the following methods, which rely on low-dimensional models to infer the effect: Extreme bounds analysis, the minimum t-statistic over models, Sala-i-Martin’s method, BACE, BIC, AIC and general-to-specific. We propose a new method and show that it is well behaved compared to existing methods.
    Keywords: AIC, BACE, BIC, extreme bounds analysis, general-to-specific, robustness, sensitivity analysis.
    JEL: C12 C51 C52
    Date: 2010–11–24
    URL: http://d.repec.org/n?u=RePEc:aah:create:2010-73&r=ecm
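    As a concrete illustration of the first method in that list, extreme bounds analysis in its simplest form reruns the regression of interest over every small set of controls and reports the extreme ends of the focus coefficient's interval across models. The sketch below is a generic rendering; the subset size, the +/- 2 standard error bounds and all names are illustrative assumptions.

      import numpy as np
      from itertools import combinations

      def extreme_bounds(y, focus, controls, subset_size=3):
          lower, upper = np.inf, -np.inf
          n = len(y)
          for idx in combinations(range(controls.shape[1]), subset_size):
              X = np.column_stack([np.ones(n), focus, controls[:, idx]])
              beta, *_ = np.linalg.lstsq(X, y, rcond=None)
              resid = y - X @ beta
              sigma2 = resid @ resid / (n - X.shape[1])
              se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])   # s.e. of the focus coefficient
              lower = min(lower, beta[1] - 2 * se)
              upper = max(upper, beta[1] + 2 * se)
          return lower, upper   # the effect is deemed "robust" if this interval excludes zero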
  8. By: Ley, Eduardo; Steel, Mark F. J.
    Abstract: We examine the issue of variable selection in linear regression modeling, where we have a potentially large number of possible covariates and economic theory offers insufficient guidance on how to select the appropriate subset. Bayesian Model Averaging presents a formal Bayesian solution to dealing with model uncertainty. Our main interest here is the effect of the prior on the results, such as posterior inclusion probabilities of regressors and predictive performance. We combine a Binomial-Beta prior on model size with a g-prior on the coefficients of each model. In addition, we assign a hyperprior to g, as the choice of g has been found to have a large impact on the results. For the prior on g, we examine the Zellner-Siow prior and a class of Beta shrinkage priors, which covers most choices in the recent literature. We propose a benchmark Beta prior, inspired by earlier findings with fixed g, and show it leads to consistent model selection. Inference is conducted through a Markov chain Monte Carlo sampler over model space and g. We examine the performance of the various priors in the context of simulated and real data. For the latter, we consider two important applications in economics, namely cross-country growth regression and returns to schooling. Recommendations to applied users are provided.
    Keywords: Consistency; Model uncertainty; Posterior odds; Prediction; Robustness
    JEL: O47 C11
    Date: 2010–11–22
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:26941&r=ecm
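    For reference, the fixed-g building block over which the hyperprior is placed: in one standard form (intercept treated as common, flat priors on the intercept and error scale), Zellner's g-prior on the slopes of model M_j with k_j regressors and coefficient of determination R_j^2 gives the Bayes factor against the null (intercept-only) model
      BF(M_j : M_0) \;=\; (1+g)^{(n-1-k_j)/2}\,\big[\,1 + g\,(1 - R_j^2)\,\big]^{-(n-1)/2},
    and mixing over a hyperprior p(g) amounts to averaging this expression over g; posterior model probabilities then also involve the Binomial-Beta prior on model size.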
  9. By: Eva Ferreira (Universidad del País Vasco/Euskal Herriko Unibertsitatea); Javier Gil-Bazo (Universidad Pompeu Fabra); Susan Orbe (Universidad del País Vasco/Euskal Herriko Unibertsitatea)
    Abstract: We propose a two-stage procedure to estimate conditional beta pricing models that allow for flexibility in the dynamics of assets' covariances with risk factors and market prices of risk (MPR). First, conditional covariances are estimated nonparametrically for each asset and period using the time-series of previous data. Then, time-varying MPR are estimated from the cross-section of returns and covariances using the entire sample. We prove the consistency and asymptotic normality of the estimators. Results from a Monte Carlo simulation for the three-factor model of Fama and French (1993) suggest that nonparametrically estimated betas outperform rolling betas under different specifications of beta dynamics. Using return data on the 25 size and book-to-market sorted portfolios, we find that MPR associated with the three Fama-French factors exhibit substantial variation through time. Finally, the flexible version of the three-factor model beats alternative parametric specifications in terms of forecasting future returns.
    Keywords: Kernel estimation; Conditional asset pricing models; Fama-French three-factor model; Locally stationary processes
    JEL: G12 C14 C32
    Date: 2010–11–24
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:201010&r=ecm
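    A stylised rendering of the two-stage logic: the one-sided Gaussian kernel, the univariate covariance/variance betas and the single-period cross-sectional regression are simplifying assumptions; the paper's estimator and its asymptotic theory are considerably more general.

      import numpy as np

      def kernel_cov(r, f, t, bandwidth):
          """One-sided kernel estimate of cov(r, f) at time t using past observations only."""
          s = np.arange(t)
          w = np.exp(-0.5 * ((t - s) / bandwidth) ** 2)        # Gaussian kernel weights
          w /= w.sum()
          rm, fm = w @ r[:t], w @ f[:t]
          return w @ ((r[:t] - rm) * (f[:t] - fm))

      def market_prices_of_risk(R, F, t, bandwidth=20):
          """Stage 2: cross-sectional regression of period-t returns on estimated betas."""
          N, K = R.shape[1], F.shape[1]
          var_f = np.array([kernel_cov(F[:, k], F[:, k], t, bandwidth) for k in range(K)])
          betas = np.array([[kernel_cov(R[:, i], F[:, k], t, bandwidth) / var_f[k]
                             for k in range(K)] for i in range(N)])
          X = np.column_stack([np.ones(N), betas])
          lam, *_ = np.linalg.lstsq(X, R[t], rcond=None)
          return lam[1:]            # estimated market prices of risk for the K factors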
  10. By: Junye Li; Carlo Favero; Fulvio Ortu
    Abstract: A characteristic function-based method is proposed to estimate the time-changed Lévy models, which take into account both stochastic volatility and infinite-activity jumps. The method facilitates computation and overcomes problems related to the discretization error and to the non-tractable probability density. Estimation results and option pricing performance indicate that the infinite-activity model performs better than the finite-activity one. By introducing a jump component in the volatility process, a double-jump model is also investigated.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:igi:igierp:370&r=ecm
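    In generic terms (not necessarily the authors' exact procedure), characteristic-function methods exploit the fact that the characteristic function of such models is tractable even when the density is not, e.g. by choosing the parameter vector theta to match the model and empirical characteristic functions:
      \hat{\theta} \;=\; \arg\min_{\theta} \int \Big|\, \tfrac{1}{T}\textstyle\sum_{t=1}^{T} e^{\,\mathrm{i}\,u\,y_t} \;-\; \psi_{\theta}(u) \,\Big|^{2} w(u)\,\mathrm{d}u,
    where psi_theta is the model characteristic function and w a weighting function.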
  11. By: Candelon Bertrand; Dumitrescu Elena-Ivona; Hurlin Christophe (METEOR)
    Abstract: This paper proposes a new statistical framework, originating from the traditional credit-scoring literature, to evaluate currency crisis Early Warning Systems (EWS). An assessment of the predictive power of panel logit and Markov frameworks shows that the panel logit model outperforms the Markov switching specifications. Furthermore, the introduction of forward-looking variables clearly improves the forecasting properties of the EWS. This improvement confirms the adequacy of the second-generation crisis models in explaining the occurrence of crises.
    Keywords: macroeconomics
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2010046&r=ecm
  12. By: Petr Mariel (Department of Applied Economics III (Econometrics and Statistics), University of the Basque Country); Amaya De Ayala (Department of Applied Economics III (Econometrics and Statistics), University of the Basque Country); David Hoyos (Department of Applied Economics III (Econometrics and Statistics), University of the Basque Country); Sabah Abdullah (Environmental Economics Unit, Institute for Public Economics, University of the Basque Country)
    Abstract: This paper examines the various tests commonly used to select random parameters in choice modelling. The most common procedures for selecting random parameters are the Lagrange Multiplier test proposed by McFadden and Train (2000), the t-statistic of the deviation of the random parameter, and the log-likelihood ratio test. The identification of random parameters, in other words the recognition of preference heterogeneity in the population, is based on the fact that an individual makes a choice depending on her/his tastes, perceptions and experiences. A simulation experiment based on a real choice experiment was carried out, and the results indicate that the power of these three tests depends strongly on the spread and type of the tested parameter distribution.
    Keywords: choice experiment, preference heterogeneity, random parameter logit, simulation, tests for selecting random parameters
    JEL: Q51
    Date: 2010–11–23
    URL: http://d.repec.org/n?u=RePEc:ehu:biltok:201009&r=ecm
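    A small sketch of the likelihood-ratio comparison mentioned above, given fitted log-likelihoods from mixed-logit estimations with and without the candidate random coefficient; the estimation routine itself is assumed.

      from scipy.stats import chi2

      def lr_test(ll_fixed, ll_mixed, df=1):
          """LR statistic and chi-square p-value for adding one random coefficient.
          Note: a zero variance lies on the parameter boundary, so the plain
          chi-square(1) p-value is conservative."""
          lr = 2.0 * (ll_mixed - ll_fixed)
          return lr, chi2.sf(lr, df)

      # usage (placeholder values): stat, pval = lr_test(ll_fixed=-1012.4, ll_mixed=-1005.9)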
  13. By: Elena Andreou; Eric Ghysels; Andros Kourtellos
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:ucy:cypeua:10-2010&r=ecm
  14. By: Wanfeng Yan; Ryan Woodard; Didier Sornette
    Abstract: Identifying unambiguously the presence of a bubble in an asset price remains an unsolved problem in standard econometric and financial economic approaches. A large part of the problem is that the fundamental value of an asset is, in general, not directly observable and is poorly constrained when calculated. Further, it is not possible to distinguish between an exponentially growing fundamental price and an exponentially growing bubble price. We present a series of new models based on the Johansen-Ledoit-Sornette (JLS) model, which is a flexible tool to detect bubbles and predict changes of regime in financial markets. Our new models identify the fundamental value of an asset price and the crash nonlinearity from a bubble calibration. In addition to forecasting the time of the end of a bubble, the new models can also estimate the fundamental value and the crash nonlinearity. Moreover, the crash nonlinearity obtained from the new models offers a new approach for identifying the dynamics of a crash after a bubble. We test the models using data from three historical bubbles ending in crashes from different markets: the Hong Kong Hang Seng index 1997 crash, the S&P 500 index 1987 crash and the Shanghai Composite index 2009 crash. All results suggest that the new models perform very well in describing bubbles, forecasting their ending times and estimating the fundamental value and the crash nonlinearity. The performance of the new models is tested under both Gaussian and non-Gaussian residual assumptions. Under the Gaussian residual assumption, nested hypotheses with the Wilks statistic are used, and the p-values suggest that models with more parameters are necessary. Under the non-Gaussian residual assumption, we use a bootstrap method to obtain the type I and II errors of the hypothesis tests. All tests confirm that the generalized JLS models provide useful improvements over the standard JLS model.
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1011.5343&r=ecm
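    For context, the standard JLS (log-periodic power law) specification that these generalizations build on is commonly written as
      \ln p(t) \;=\; A + B\,(t_c - t)^{m} + C\,(t_c - t)^{m}\cos\!\big(\omega \ln(t_c - t) - \phi\big), \qquad t < t_c,
    with t_c the critical (crash) time, 0 < m < 1 and B < 0 during a bubble.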
  15. By: Ines Kahloul; Anouar Ben Mabrouk; Slah-Eddine Hallara
    Abstract: We study the possibility of completing databases of governance, diversification and value-creation variables by providing a well-adapted method to reconstruct the missing parts, so as to obtain a complete sample for testing the ownership-structure/diversification relationship. The method consists of a dynamic procedure based on wavelets. A comparison with neural networks, the most widely used method, is provided to demonstrate the efficiency of the method developed here. The empirical tests are conducted on a set of French firms.
    Date: 2010–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1011.5020&r=ecm
  16. By: S. J. Rey; L. Anselin; D. C. Folch; M. L. Sastre-Gutierrez
    Abstract: This paper introduces a new approach to measuring neighborhood change. Instead of the traditional method of identifying “neighborhoods” a priori and then studying how resident attributes change over time, our approach looks at the neighborhood more intrinsically as a unit that has both a geographic footprint and a socioeconomic composition. Therefore, change is identified when both aspects of a neighborhood transform from one period to the next. Our approach is based on a spatial clustering algorithm that identifies neighborhoods at two points in time for one city. We also develop indicators of spatial change at both the macro (city) level and the local (neighborhood) scale. We illustrate these methods in an application to an extensive database of time-consistent census tracts for 359 of the largest metropolitan areas in the US for the period 1990-2000.
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:asg:wpaper:5&r=ecm
  17. By: Giuseppe Espa; Giuseppe Arbia; Diego Giuliani
    Abstract: Why do industrial clusters occur in space? Is it because industries need to stay close together to interact or, conversely, because they concentrate in certain portions of space to exploit favourable conditions like public incentives, proximity to communication networks or to large population concentrations, or lower transport costs? This is a fundamental question, and the attempt to answer it using empirical data is a challenging statistical task. In economic geography, scientists refer to this dichotomy using the two categories of spatial interaction and spatial reaction to common factors. In economics we can refer to a distinction between exogenous causes and endogenous effects. In spatial econometrics and statistics we use the terms spatial dependence and spatial heterogeneity. A series of recent papers introduced exploratory methods to analyse the spatial patterns of firms using micro data, characterizing each firm by its spatial coordinates. In such a setting the spatial distribution of firms is seen as a point pattern, and an industrial cluster as the phenomenon of extra-concentration of one industry with respect to the concentration of a benchmark spatial distribution. Often the benchmark distribution is that of the whole economy, on the grounds that exogenous factors affect all branches in the same way. Using such an approach, a positive (or negative) spatial dependence between firms is detected when the pattern of a specific sector is more aggregated (or more dispersed) than that of the whole economy. In this paper we suggest a parametric approach to the analysis of spatial heterogeneity, based on the so-called inhomogeneous K-function (Baddeley et al., 2000). We present an empirical application of the method to the spatial distribution of high-tech industries in Milan (Italy) in 2001. We consider the economic space to be non-homogeneous, estimate the pattern of inhomogeneity and use it to separate spatial heterogeneity from spatial dependence.
    JEL: C15 C21 C59 R12
    Date: 2010
    URL: http://d.repec.org/n?u=RePEc:trn:utwpde:1014&r=ecm
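    In one common form, the inhomogeneous K-function of Baddeley et al. (2000) is estimated over an observation window A, given an estimated intensity surface lambda-hat, as
      \hat{K}_{\mathrm{inhom}}(r) \;=\; \frac{1}{|A|}\sum_{i}\sum_{j\neq i}\frac{\mathbf{1}\{\|x_i - x_j\|\le r\}}{\hat{\lambda}(x_i)\,\hat{\lambda}(x_j)}\;e_{ij},
    where e_ij is an edge-correction weight; under an inhomogeneous Poisson benchmark K_inhom(r) = pi r^2, so values in excess of this signal dependence beyond the estimated heterogeneity.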

This nep-ecm issue is ©2010 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.