nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒10‒13
seventeen papers chosen by
Sune Karlsson
Orebro University

  1. Fourier-type estimation of the power GARCH model with stable-Paretian innovations By Francq, Christian; Meintanis, Simos
  2. Risk-parameter estimation in volatility models By Francq, Christian; Zakoian, Jean-Michel
  3. Spatial Filtering and Model Interpretation for Spatial Durbin Models By Matthias Koch
  4. The Viability of Global Optimization for Parameter Estimation in Spatial Econometrics Models By Mark Wachowiak; Renata Wachowiak-Smolikova; Jonathan Zimmerling
  5. Identification and Estimation of Games with Incomplete Information Using Excluded Regressors By Arthur Lewbel; Xun Tang
  6. Forecasting with Unobserved Heterogeneity By Matteo G. Richiardi
  7. Portfolio risk evaluation: An approach based on dynamic conditional correlations models and wavelet multiresolution analysis By Khalfaoui, R; Boutahar, M
  8. Structural gravity estimation & agriculture By Prehn, Sören; Brümmer, Bernhard; Glauben, Thomas
  9. Limit theorems for non-degenerate U-statistics of continuous semimartingales By Mark Podolskij; Christian Schmidt; Johanna Fasciati Ziegel
  10. Probabilistic Bounded Relative Error Property for Learning Rare Event Simulation Techniques By Ad Ridder; Bruno Tuffin
  11. The Speed of Income Convergence in Europe: A case for Bayesian Model Averaging with Eigenvector Filtering By Florian Schoiswohl; Philipp Piribauer; Michael Gmeinder; Matthias Koch; Manfred Fischer
  12. Microstructure effect on firm’s volatility risk By Flavia Barsotti; Simona Sanfelici
  13. An Overview of the Special Regressor Method By Arthur Lewbel
  14. A Dynamic Bivariate Poisson Model for Analysing and Forecasting Match Results in the English Premier League By Siem Jan Koopman; Rutger Lit
  15. A Bayesian approach to identifying and interpreting regional convergence clubs in Europe By Manfred M. Fischer; James P. LeSage
  16. Understanding DSGE Filters in Forecasting and Policy Analysis By Andrle, Michal
  17. Additive decomposition in two-stage DEA: An alternative approach By Despotis, Dimitris; Koronakos, Gregory; Sotiros, Dimitris

  1. By: Francq, Christian; Meintanis, Simos
    Abstract: We consider estimation for general power GARCH models under stable-Paretian innovations. Exploiting the simple structure of the conditional characteristic function of the observations driven by these models, we propose minimum distance estimation based on the empirical characteristic function of the corresponding residuals. Consistency of the estimators is proved, and we obtain a singular asymptotic distribution which is concentrated on a hyperplane. Efficiency issues are explored, and finite-sample results are presented, as well as applications of the proposed procedures to real data from financial markets. A multivariate extension is also considered.
    Keywords: GARCH model; Minimum distance estimation; Heavy-tailed distribution; Empirical characteristic function
    JEL: C32 C13 C22
    Date: 2012–10–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41667&r=ecm
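    A minimal numerical sketch of the characteristic-function idea: given standardized GARCH residuals, estimate the stable tail index by minimizing a weighted distance between the empirical characteristic function and the symmetric alpha-stable one. Unit scale is assumed here, and the paper estimates volatility and innovation parameters jointly, so this is only illustrative; the grid and weight choices below are hypothetical.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def ecf(t, x):
          # empirical characteristic function of sample x on grid t
          return np.exp(1j * np.outer(t, x)).mean(axis=1)

      def md_alpha(resid, t=np.linspace(0.1, 2.0, 50)):
          # minimum-distance estimate of the stable index alpha:
          # match the ECF of the residuals to exp(-|t|^alpha)
          w = np.exp(-t ** 2)                    # integrable weight function
          phi_hat = ecf(t, resid)
          obj = lambda a: np.sum(w * np.abs(phi_hat - np.exp(-np.abs(t) ** a)) ** 2)
          return minimize_scalar(obj, bounds=(0.5, 2.0), method="bounded").x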
  2. By: Francq, Christian; Zakoian, Jean-Michel
    Abstract: This paper introduces the concept of risk parameter in conditional volatility models of the form $\epsilon_t=\sigma_t(\theta_0)\eta_t$ and develops statistical procedures to estimate this parameter. For a given risk measure $r$, the risk parameter is expressed as a function of the volatility coefficients $\theta_0$ and the risk, $r(\eta_t)$, of the innovation process. A two-step method is proposed to successively estimate these quantities. An alternative one-step approach is also developed, relying on a reparameterization of the model and the use of a non-Gaussian QML. Asymptotic results are established for smooth risk measures as well as for the Value-at-Risk (VaR). Asymptotic comparisons of the two approaches for VaR estimation suggest a superiority of the one-step method when the innovations are heavy-tailed. For standard GARCH models, the comparison only depends on characteristics of the innovations distribution, not on the volatility parameters. Monte Carlo experiments and an empirical study illustrate these findings.
    Keywords: GARCH; Quantile Regression; Quasi-Maximum Likelihood; Risk measures; Value-at-Risk
    JEL: C13 C22
    Date: 2012–10–04
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41713&r=ecm
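    A schematic of the two-step procedure for the Value-at-Risk case, assuming the volatility path has already been estimated by Gaussian QML in a first step; the variable names are hypothetical, and sigma_hat[-1] stands in for the one-step-ahead volatility forecast.
      import numpy as np

      def two_step_var(returns, sigma_hat, level=0.01):
          # step 1 (done upstream): estimate sigma_t by Gaussian QML
          # step 2: estimate the innovation risk from standardized residuals
          resid = returns / sigma_hat            # standardized residuals
          q = np.quantile(resid, level)          # empirical eta-quantile
          return -sigma_hat[-1] * q              # conditional VaR forecast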
  3. By: Matthias Koch
    Abstract: Spatial filters for spatial autoregressive models such as the spatial Durbin model have attracted great interest in the recent literature. Pace et al. (2011) show that the spatial filtering methods developed by Griffith (2000) have desirable estimation properties for some parameters associated with spatial autoregressive models. However, spatial filtering faces two conceptual weaknesses. First, the estimated parameters in general lack a proper interpretation, especially for the spatial Durbin model. Second, there exists an inherent tradeoff between the estimator's bias and its efficiency, depending on the spectrum of the spatial weight matrix used. This paper tackles both problems by introducing a new four-step estimation procedure based on the eigenvectors of the spatial weight matrix. This new estimation procedure estimates all parameters of interest in a spatial Durbin model and thus allows for a proper model interpretation. Additionally, the estimation procedure's efficiency is only marginally influenced by the number of added eigenvectors, which allows us to use approximately 95% of the available eigenvectors. In Monte Carlo simulations we observe that the estimation procedure has a lower (or equal) bias and a smaller (or equal) sample variance than the corresponding maximum likelihood estimator based on normality.
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa12p1021&r=ecm
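    A compact sketch of the eigenvector-augmentation pattern, assuming a row-standardized W and OLS estimation; Griffith-style filtering typically uses eigenvectors of a centered transformation of W, and the paper's four-step procedure is more elaborate, so take this only as the generic idea.
      import numpy as np

      def spatial_filter_ols(y, X, W, k):
          # augment a Durbin-type regression (X and WX) with the k
          # leading eigenvectors of W, then estimate by OLS
          eigval, eigvec = np.linalg.eig(W)
          E = eigvec[:, np.argsort(-eigval.real)[:k]].real
          Z = np.column_stack([np.ones(len(y)), X, W @ X, E])
          beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
          return beta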
  4. By: Mark Wachowiak; Renata Wachowiak-Smolikova; Jonathan Zimmerling
    Abstract: This paper addresses parameter estimation of spatial regression models incorporating spatial lag. These models are very important in spatial econometrics, where spatial interaction and structure are introduced into regression analysis. Because of spatial interactions, observations are not truly independent, and traditional regression techniques fail. Estimation techniques include maximum likelihood estimation, ordinary least squares, and the method of moments. However, parameters of spatial lag models are difficult to estimate due to the simultaneity bias (Ord, 1975). These estimation problems are generally intractable by standard numerical methods, and, consequently, robust and efficient optimization techniques are needed. In the case of simple general spatial regressive models (GSRMs), standard local optimization methods, such as Newton-Raphson iteration (as suggested by Ord), converge to high-quality solutions. Unfortunately, a good initial guess of the parameters is required for these local methods to succeed. In more complex autoregressive spatial models, an analytic expression for good initial guesses is not available, and, consequently, local methods generally fail. In this paper, global optimization (specifically, particle swarm optimization, or PSO) is used to estimate parameters of spatial autoregressive models. PSO is an iterative, stochastic population-based technique that is increasingly used in a variety of fields to solve complex continuous- and discrete-valued problems. In contrast to genetic algorithms and evolutionary strategies, PSO exploits cooperative and social behavior among members of a population of agents, or particles, each of which represents a point in the search space. This paper first motivates the need for global methods by demonstrating that GSRM parameters can be estimated with PSO even without a good initial guess, while the local Newton-Raphson and Nelder-Mead approaches have a greater failure rate. Next, PSO is tested with an autoregressive spatial model, for which no analytic initial guess can be computed and no analytic parameter estimation method is known. Simulated data were generated to provide ground-truth values to assess the viability of PSO. The global PSO method was found to successfully estimate the parameters using two different MLE approximation techniques for trials with 10, 20, and 40 samples (R2 > 0.867 for all trials). These results indicate that global optimization is a viable approach to estimating the parameters of spatial autoregressive models, and suggest that future directions should focus on more advanced global techniques, such as branch-and-bound, dividing rectangles, and differential evolution, which may further improve parameter estimation in spatial econometrics applications.
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa12p598&r=ecm
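    For concreteness, a bare-bones global-best particle swarm optimizer of the kind the paper applies; plugging in the negative (approximate) log-likelihood of a spatial lag model as obj would mimic the estimation setup. The parameter defaults are conventional textbook values, not the authors'.
      import numpy as np

      def pso(obj, lb, ub, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          # minimize obj over the box [lb, ub] with a global-best swarm
          rng = np.random.default_rng(seed)
          x = rng.uniform(lb, ub, size=(n_particles, len(lb)))
          v = np.zeros_like(x)
          pbest = x.copy()
          pval = np.apply_along_axis(obj, 1, x)
          g = pbest[np.argmin(pval)]
          for _ in range(n_iter):
              r1, r2 = rng.random((2,) + x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lb, ub)
              f = np.apply_along_axis(obj, 1, x)
              better = f < pval
              pbest[better], pval[better] = x[better], f[better]
              g = pbest[np.argmin(pval)]
          return g, pval.min()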
  5. By: Arthur Lewbel (Boston College); Xun Tang (University of Pennsylvania)
    Abstract: The existing literature on binary games with incomplete information assumes that either payoff functions or the distribution of private information are finitely parameterized to obtain point identification. In contrast, we show that, given excluded regressors, payoff functions and the distribution of private information can both be nonparametrically point identified. An excluded regressor for player i is a sufficiently varying state variable that does not affect other players' utility and is additively separable from other components in i's payoff. We show how excluded regressors satisfying these conditions arise in contexts such as entry games between firms, as variation in observed components of fixed costs. Our identification proofs are constructive, so consistent nonparametric estimators can be readily based on them. For a semiparametric model with linear payoffs, we propose root-N consistent and asymptotically normal estimators for parameters in players' payoffs. Finally, we extend our approach to accommodate the existence of multiple Bayesian Nash equilibria in the data-generating process without assuming equilibrium selection rules.
    Keywords: Games with Incomplete Information, Excluded Regressors, Nonparametric Identification, Semiparametric Estimation, Multiple Equilibria.
    JEL: C14 C51 D43
    Date: 2012–08–21
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:808&r=ecm
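    Schematically, and only as a reading aid (the paper's exact specification may differ), the excluded-regressor structure for player i's binary choice can be written as $u_i(a_i = 1) = X_i + f_i(S) + \delta_i \Pr(a_j = 1 \mid S, X_j) - \varepsilon_i$ with $u_i(a_i = 0) = 0$, where the excluded regressor $X_i$ enters i's payoff additively, varies sufficiently, and is absent from player j's payoff; this exclusion and additive separability are what drive the nonparametric identification argument.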
  6. By: Matteo G. Richiardi
    Abstract: Forecasting based on random intercepts models requires imputing the individual permanent effects to the simulated individuals. When these individuals enter the simulation with a history of past outcomes, this involves sampling from conditional distributions, which might be infeasible. I present a method for drawing individual permanent effects from a conditional distribution which only requires inverting the corresponding estimated unconditional distribution. While the algorithms currently available in the literature require polynomial time, the proposed method only requires matching two ranks and therefore runs in O(N log N) time.
    Keywords: forecasting, microsimulation, random intercept models, unobserved heterogeneity
    JEL: C15 C53 C63
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:cca:wplabo:123&r=ecm
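    The rank-matching step admits a very short implementation; here assuming a scalar statistic summarizing each simulated individual's outcome history and effect draws taken from the estimated unconditional distribution (both inputs are hypothetical names, and the paper's exact construction may differ).
      import numpy as np

      def rank_match_effects(outcome_stat, effect_draws):
          # give the individual with the r-th smallest outcome statistic
          # the r-th smallest effect draw; sorting is the O(N log N) cost
          ranks = np.argsort(np.argsort(outcome_stat))
          return np.sort(effect_draws)[ranks]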
  7. By: Khalfaoui, R; Boutahar, M
    Abstract: We analyze the volatility dynamics of three developed markets (U.K., U.S. and Japan) during the period 2003-2011 by comparing the performance of several multivariate volatility models, namely the Constant Conditional Correlation (CCC), Dynamic Conditional Correlation (DCC) and consistent DCC (cDCC) models. To evaluate the performance of the models, we use four statistical loss functions on the daily Value-at-Risk (VaR) estimates of a portfolio diversified across three stock indices (FTSE 100, S&P 500 and Nikkei 225), based on one-day-ahead conditional variance forecasts. To assess the performance of the abovementioned models and to measure risks over different time scales, we propose a wavelet-based approach which decomposes a given time series over different time horizons. Wavelet multiresolution analysis and multivariate conditional volatility models are combined for volatility forecasting, to measure the comovement between stock market returns and to estimate daily VaR in the time-frequency space. Empirical results show that the asymmetric cDCC model of Aielli (2008) is the most preferable according to the statistical loss functions under raw data. The results also suggest that wavelet-based models increase the predictive performance of financial forecasting at low scales, according to the number of violations and failure probabilities of the VaR models.
    Keywords: Dynamic conditional correlations; Value-at-Risk; wavelet decomposition; Stock prices
    JEL: D53 C53 G11 C52
    Date: 2012–09–24
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41624&r=ecm
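    The wavelet step can be illustrated with PyWavelets; the paper combines multiresolution analysis with the DCC-type models above, whereas this sketch only shows how the variance of a return series splits across time scales (the wavelet family and decomposition level here are arbitrary choices, not the paper's).
      import numpy as np
      import pywt

      def scale_variances(returns, wavelet="db4", level=4):
          # discrete wavelet decomposition: coeffs[0] is the smooth
          # approximation, coeffs[1:] the details from coarse to fine
          coeffs = pywt.wavedec(returns, wavelet, level=level)
          return {f"D{level - i}": np.var(c) for i, c in enumerate(coeffs[1:])}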
  8. By: Prehn, Sören; Brümmer, Bernhard; Glauben, Thomas
    Abstract: Recently, discussion about the appropriate estimation of gravity trade models has started in agriculture. Here we review recent developments in the literature. It appears that fixed-effects Poisson Pseudo Maximum Likelihood is not only the only consistent estimator [Santos Silva and Tenreyro, 2006] but also already allows for a structural fit [Fally, 2012]. Fixed effects, in conjunction with the adding-up property of Poisson Pseudo Maximum Likelihood - which has so far been neglected - can be harnessed to deduce multilateral resistance indexes (i.e. general equilibrium effects) directly from reduced-form estimation. This innovation by Fally will ease comparative statics and incidence analysis, making Poisson Pseudo Maximum Likelihood even more preferable in practice.
    Keywords: Gravity Estimation,Poisson Pseudo Maximum Likelihood,Adding-up,Structural Fit
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:zbw:daredp:1209&r=ecm
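    A minimal fixed-effects PPML sketch in the spirit of the estimator discussed (the data-frame column names are hypothetical). By the adding-up property stressed by Fally (2012), the fitted exporter and importer effects can then be read as estimates of the multilateral resistance terms.
      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      def ppml_gravity(df):
          # Poisson pseudo-ML of trade flows on log distance, with
          # exporter and importer dummies absorbing multilateral resistance
          X = pd.get_dummies(df[["exporter", "importer"]], drop_first=True)
          X["log_dist"] = np.log(df["dist"])
          X = sm.add_constant(X).astype(float)
          return sm.GLM(df["flow"], X, family=sm.families.Poisson()).fit()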
  9. By: Mark Podolskij (Heidelberg University and CREATES); Christian Schmidt (Heidelberg University); Johanna Fasciati Ziegel (University of Bern)
    Abstract: This paper presents the asymptotic theory for non-degenerate U-statistics of high frequency observations of continuous Itô semimartingales. We prove uniform convergence in probability and show a functional stable central limit theorem for the standardized version of the U-statistic. The limiting process in the central limit theorem turns out to be conditionally Gaussian with mean zero. Finally, we indicate potential statistical applications of our probabilistic results.
    Keywords: High frequency data, Limit theorems, Semimartingales, Stable convergence, U-statistics
    JEL: C10 C13 C14
    Date: 2012–10–02
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-40&r=ecm
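    Schematically, the object studied is of the form $U(H)^n_t = \binom{\lfloor nt \rfloor}{2}^{-1} \sum_{1 \le i < j \le \lfloor nt \rfloor} H(\sqrt{n}\,\Delta^n_i X, \sqrt{n}\,\Delta^n_j X)$ with $\Delta^n_i X = X_{i/n} - X_{(i-1)/n}$, i.e. a U-statistic built from normalized high-frequency increments of the semimartingale X; the exact normalization and the conditions on the kernel H are as in the paper, so this display is only indicative.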
  10. By: Ad Ridder (VU University Amsterdam); Bruno Tuffin (Inria Rennes Bretagne Atlantique)
    Abstract: In rare event simulation, we look for estimators such that the relative accuracy of the output is "controlled" when the rarity becomes more and more critical. Various robustness properties that an estimator is expected to satisfy have been defined in the literature. However, those properties are not suited to estimators that come from a parametric family whose optimal parameter is learned and hence random. For this reason, we motivate in this paper the need to define probabilistic robustness properties, since the accuracy of the resulting estimator is itself random. We focus especially on the so-called probabilistic bounded relative error property. We additionally provide sufficient conditions, in both general and Markov settings, for such a property to hold, illustrate them on simple but standard examples, and hope that this will foster discussion and new work in the area.
    Keywords: Rare event probability; Importance sampling; Probabilistic robustness; Markov chains
    JEL: C6
    Date: 2012–10–01
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20120103&r=ecm
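    A textbook importance-sampling example of the kind such robustness properties are stated for: estimating a Gaussian tail probability by exponential tilting and reporting the estimated relative error. In the learning setting of the paper the tilting parameter would itself be random; here it is fixed, so this sketch only illustrates the estimator whose relative error is being controlled.
      import numpy as np

      def is_tail_prob(c, theta, n=100_000, seed=0):
          # estimate p = P(X > c) for X ~ N(0,1), sampling from N(theta,1)
          rng = np.random.default_rng(seed)
          y = rng.normal(theta, 1.0, n)
          lr = np.exp(-theta * y + 0.5 * theta ** 2)   # N(0,1)/N(theta,1) density ratio
          z = lr * (y > c)
          p_hat = z.mean()
          return p_hat, z.std(ddof=1) / np.sqrt(n) / p_hat   # estimate, relative error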
  11. By: Florian Schoiswohl; Philipp Piribauer; Michael Gmeinder; Matthias Koch; Manfred Fischer
    Abstract: The speed of income convergence in Europe remains one of the hot topics in regional economics. Recently, Bayesian Model Averaging (BMA) applied to spatial autoregressive models seems to have gained popularity. BMA averages over some predetermined number of so-called top models, ranked by the model's posterior likelihood. We regard two approaches as especially noteworthy. First, Crespo-Cuaresma and Feldkircher (2012) apply BMA to a spatial autoregressive model, where spatial eigenvector filtering is used to tackle the econometric problems caused by the spatial lag. However, spatial filtering has its drawbacks: it relies on a model approximation, and no partial derivatives of interest associated with the model can be computed. This means that it is impossible to derive direct and indirect effects. Second, LeSage and Fischer (2008) rely on BMA applied to a spatial Durbin model (SDM), where the model posterior is calculated without any model approximation. Although computationally burdensome, this allows for a proper model interpretation if the underlying data generating process (DGP) is of SDM form. One virtue of spatial filtering, as shown by Pace et al. (2011), is that it estimates some of the model coefficients efficiently for various spatially autocorrelated DGPs. Hence, the likelihoods associated with spatial filtering are more robust against model misspecification. Since our preliminary results show that the top models' (posterior) likelihoods obtained from a spatial filtering BMA exercise differ from those of a (non-filtering) BMA applied to a spatial Durbin model, it is most likely that the DGP is not of SDM form, i.e. misspecified. This leads us to the conclusion that, even though the methodology employed by Crespo-Cuaresma and Feldkircher (2012) cannot be used for a proper model interpretation, the results obtained by spatial filtering BMA do not suffer from model misspecification. References: Crespo-Cuaresma, J., Feldkircher, M. (2012), 'Spatial Filtering, Model Uncertainty and the Speed of Income Convergence in Europe', Journal of Applied Econometrics, forthcoming. LeSage, J., Fischer, M. (2008), 'Spatial Growth Regressions: Model Specification, Estimation, and Interpretation', Spatial Economic Analysis 3, 275-304. Pace, R., LeSage, J., Zhu, S. (2011), 'Interpretation and Computation of Estimates from Regression Models using Spatial Filtering', written for the Spatial Econometrics Association 2011.
    Keywords: Model uncertainty, spatial filtering, determinants of economic growth, European regions
    JEL: C11 C21 O11 R11
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa12p744&r=ecm
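    The model-averaging step itself is generic; a standard BIC-based approximation to posterior model probabilities is sketched below (the papers cited use exact or more refined marginal likelihoods, so this is illustrative only).
      import numpy as np

      def bma_weights(log_liks, n_params, n_obs):
          # posterior model probabilities under a flat model prior,
          # approximating each marginal likelihood by its BIC
          bic = -2 * np.asarray(log_liks) + np.asarray(n_params) * np.log(n_obs)
          w = np.exp(-0.5 * (bic - bic.min()))
          return w / w.sum()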
  12. By: Flavia Barsotti (ISFA, University Lyon 1, France); Simona Sanfelici (Dipartimento di Economia, Universita' di Parma)
    Abstract: Equity returns and a firm's default probability are closely interrelated financial measures capturing the credit risk profile of a firm. Following the idea proposed in [20], we use high-frequency equity prices in order to estimate the volatility risk component of a firm within the Merton [17] structural model. Differently from [20], we consider a more general framework by introducing market microstructure noise as a direct effect of using noisy high-frequency data, and we propose the use of non-parametric estimation techniques in order to estimate equity volatility. We conduct a simulation analysis to compare the performance of different non-parametric volatility estimators in their capability of i) filtering out the market microstructure noise, ii) extracting the (unobservable) true underlying asset volatility level, and iii) predicting default probabilities calibrated from the Merton [17] model.
    Keywords: market microstructure noise, high-frequency data, non-parametric volatility estimation, Merton model, default probabilities, volatility risk
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:flo:wpaper:2012-05&r=ecm
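    One of the standard noise-robust estimators in this literature (not necessarily among those the paper compares) is the two-scale realized variance; a sketch:
      import numpy as np

      def tsrv(prices, K=5):
          # two-scale realized variance (Zhang, Mykland and Ait-Sahalia,
          # 2005): average subsampled RVs, then bias-correct with the
          # full-grid RV, which is dominated by microstructure noise
          logp = np.log(prices)
          n = len(logp) - 1
          rv_all = np.sum(np.diff(logp) ** 2)
          rv_avg = np.mean([np.sum(np.diff(logp[k::K]) ** 2) for k in range(K)])
          return rv_avg - ((n - K + 1) / (K * n)) * rv_all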
  13. By: Arthur Lewbel (Boston College)
    Abstract: This chapter provides background for understanding and applying special regressor methods. It is intended for inclusion in the "Handbook of Applied Nonparametric and Semiparametric Econometrics and Statistics," co-edited by Aman Ullah, Jeffrey Racine, and Liangjun Su, to be published by Oxford University Press.
    Keywords: special regressor method
    JEL: C14 D12 D13 C21
    Date: 2012–09–15
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:810&r=ecm
  14. By: Siem Jan Koopman (VU University Amsterdam); Rutger Lit (VU University Amsterdam)
    Abstract: Attack and defense strengths of football teams vary over time due to changes in a team's players or its manager. We develop a statistical model for the analysis and forecasting of football match results, which are assumed to come from a bivariate Poisson distribution with intensity coefficients that change stochastically over time. This development presents a novelty in the statistical time series analysis of match results from football or other team sports. Our treatment is based on state space and importance sampling methods which are computationally efficient. The out-of-sample performance of our methodology is verified in a betting strategy applied to the match outcomes from the 2010/11 and 2011/12 seasons of the English Premier League. We show that our statistical modeling framework can produce a significant positive return over the bookmaker's odds.
    Keywords: Betting; Importance sampling; Kalman filter smoother; Non-Gaussian multivariate time series models; Sport statistics
    JEL: C32 C35
    Date: 2012–09–27
    URL: http://d.repec.org/n?u=RePEc:dgr:uvatin:20120099&r=ecm
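    The bivariate Poisson device can be illustrated by simulation: each team's goal count shares a common Poisson shock, which induces the positive score correlation the model exploits. In the paper the intensities additionally evolve stochastically over time via a state-space model; they are fixed constants in this sketch.
      import numpy as np

      def simulate_match(lam_home, lam_away, lam_common, rng):
          # trivariate reduction: goals = own shock + shared shock
          z0 = rng.poisson(lam_common)
          return rng.poisson(lam_home) + z0, rng.poisson(lam_away) + z0

      rng = np.random.default_rng(1)
      print(simulate_match(1.4, 1.1, 0.2, rng))   # one simulated score line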
  15. By: Manfred M. Fischer; James P. LeSage
    Abstract: This study suggests a two-step approach to identifying and interpreting regional convergence clubs in Europe. The first step involves identifying the number and composition of clubs using a space-time panel data model for annual income growth rates in conjunction with Bayesian model comparison methods. A second step uses a Bayesian space-time panel data model to assess how changes in the initial endowments of variables (that explain growth) impact regional income levels over time. These dynamic trajectories of changes in regional income levels over time allow us to draw inferences regarding the timing and magnitude of regional income responses to changes in the initial conditions for the clubs identified in the first step. This is in contrast to conventional practice, which involves setting the number of clubs ex ante, selecting the composition of the potential convergence clubs according to some a priori criterion (such as initial per capita income thresholds), and using cross-sectional growth regressions for estimation and interpretation purposes.
    Keywords: Dynamic space-time panel data model, Bayesian model comparison, European regions
    JEL: C11 C23 O47 O52
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:wiw:wiwrsa:ersa12p217&r=ecm
  16. By: Andrle, Michal
    Abstract: The paper introduces methods that allow analysts to (i) decompose the estimates of unobserved quantities into the contributions of observed data and (ii) impose subjective prior constraints on the path estimates of unobserved shocks in structural economic models. For instance, the output gap estimate can be decomposed into the contributions of output, inflation, interest rates and other observables. The intuitive nature and analytical clarity of the suggested procedures make them appealing for policy-related and forecasting models. The paper brings some of the power embodied in the theory of linear multivariate filters, namely the relationship between Kalman and Wiener-Kolmogorov filtering, into the area of structural multivariate models expressed in linear state-space form.
    Keywords: filter; DSGE; state-space; observables decomposition; judgement
    JEL: C10 E50
    Date: 2012–10
    URL: http://d.repec.org/n?u=RePEc:cpm:dynare:016&r=ecm
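    The decomposition rests on the linearity of the Kalman (equivalently, Wiener-Kolmogorov) smoother in the data: with zero-mean initial conditions, any smoothed state estimate can be written as $\hat{x}_{t|T} = \sum_{j=1}^{m} \sum_{s=1}^{T} w^{(j)}_{t,s}\, y_{j,s}$, so the contribution of observable j (say, inflation) to the output-gap estimate is simply the inner sum over s. This schematic form is implied by the abstract rather than quoted from the paper.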
  17. By: Despotis, Dimitris; Koronakos, Gregory; Sotiros, Dimitris
    Abstract: Typically, a two-stage production process assumes that the first stage transforms external inputs into a number of intermediate measures, which are then used as inputs to the second stage that produces the final outputs. The three fundamental approaches to efficiency assessment in the context of two-stage DEA are the simple (or independent), the multiplicative and the additive. The simple approach does not assume any relationship between the two stages and estimates the overall efficiency and the individual efficiencies of the two stages independently, with typical DEA models. The other two approaches assume a series relationship between the two stages and differ in the way they conceptualize the decomposition of the overall efficiency into the efficiencies of the individual stages. This paper presents an alternative approach to additive efficiency decomposition in two-stage DEA. We show that, using the intermediate measures as pivot, it is possible to aggregate the efficiency assessment models of the two individual stages into a single linear program. We test our models on data sets taken from previous studies and compare the results with those reported in the literature.
    Keywords: Data envelopment analysis (DEA); Efficiency; Decomposition; Two-stage DEA
    JEL: C6
    Date: 2012–07–13
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:41724&r=ecm
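    For reference, the single-stage building block that the two-stage models aggregate is the standard multiplier-form DEA program; below is a sketch of the input-oriented CCR efficiency of one DMU. The paper's contribution is combining two such stage models into one linear program via the intermediate measures, which this sketch does not attempt.
      import numpy as np
      from scipy.optimize import linprog

      def ccr_efficiency(X, Y, j):
          # max u'y_j  s.t.  v'x_j = 1,  u'Y_k - v'X_k <= 0,  u, v >= 0
          # X is (n_dmu, n_inputs), Y is (n_dmu, n_outputs)
          n, m = X.shape
          s = Y.shape[1]
          c = np.concatenate([np.zeros(m), -Y[j]])          # maximize u'y_j
          A_ub = np.hstack([-X, Y])                         # one row per DMU k
          A_eq = np.concatenate([X[j], np.zeros(s)])[None, :]
          res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                        A_eq=A_eq, b_eq=[1.0],
                        bounds=[(0, None)] * (m + s))
          return -res.fun                                   # efficiency score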

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.