nep-ecm New Economics Papers
on Econometrics
Issue of 2005‒11‒12
eleven papers chosen by
Sune Karlsson
Örebro University

  1. Trimmed likelihood-based estimation in binary regression models By Cizek,Pavel
  2. Propagation of Memory Parameter from Durations to Counts By Rohit Deo; Clifford Hurvich; Philippe Soulier; Yi Wang
  3. A NOTE ON INFLUENCE ASSESSMENT IN SCORE TESTS By J.M.C. Santos Silva
  4. The variance of some common estimators and its components under nonresponse By Tångdahl, Sara
  5. A Rank Approach to Equity Forecast Construction By S.E. Satchell; S.M. Wright
  6. Nonparametric Risk Management with Generalized Hyperbolic Distributions By Michal Benko; Alois Kneip
  7. Can the SupLR test discriminate between different switching models? By CHARFEDDINE Lanouar
  8. A Rank-order Analysis of Learning Models for Regional Labor Market Forecasting By Roberto Patuelli; Simonetta Longhi; Aura Reggiani; Peter Nijkamp; Uwe Blien
  9. Multicriteria Analysis of Neural Network Forecasting Models: An Application to German Regional Labour Markets By Roberto Patuelli; Simonetta Longhi; Aura Reggiani; Peter Nijkamp
  10. DSFM fitting of Implied Volatility Surfaces By Szymon Borak; Matthias Fengler; Wolfgang Härdle
  11. Bibliographic portrait of the Gini concentration ratio By Giovanni Maria Giorgi

  1. By: Cizek,Pavel (Tilburg University, Center for Economic Research)
    Abstract: Binary-choice regression models such as probit and logit are typically estimated by maximum likelihood. To improve robustness, various M-estimation based procedures have been proposed; these, however, require bias corrections to achieve consistency, and their resistance to outliers is relatively low. By contrast, traditional high-breakdown-point methods such as the maximum trimmed likelihood estimator are not applicable, since by trimming observations they induce separation of the data and thus non-identification of the estimates. We propose a new robust estimator of binary-choice models based on a maximum symmetrically trimmed likelihood estimator. It is proved to be identified and consistent and, unlike the existing maximum trimmed likelihood estimator, it does not create separation in the space of explanatory variables. We also discuss the asymptotic and robust properties of the proposed method and compare all methods by means of Monte Carlo simulations.
    Keywords: binary-choice regression; robust estimation; trimming; regression analysis; maximum likelihood
    JEL: C13 C25
    Date: 2005
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2005108&r=ecm
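    To give a concrete feel for the trimming idea, the following minimal Python sketch fits a logit model by maximizing the sum of the largest log-likelihood contributions (plain trimming only; the paper's symmetric trimming scheme differs in detail, and the data, trimming fraction and optimizer are illustrative assumptions, not taken from the paper).

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Simulated logit data with a handful of contaminated responses.
      n = 300
      X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
      beta_true = np.array([0.2, 1.5, -1.0])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true))).astype(float)
      y[:10] = 1 - y[:10]                      # gross outliers in the response

      def neg_trimmed_loglik(beta, X, y, h):
          """Negative sum of the h largest per-observation logit log-likelihoods."""
          eta = X @ beta
          ll = y * eta - np.logaddexp(0, eta)  # logit log-likelihood contributions
          return -np.sort(ll)[-h:].sum()

      h = int(0.9 * n)                         # keep 90% of the observations
      fit = minimize(neg_trimmed_loglik, x0=np.zeros(3), args=(X, y, h),
                     method="Nelder-Mead")
      print("trimmed-likelihood estimate:", fit.x.round(2))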
  2. By: Rohit Deo (New York University); Clifford Hurvich (New York University); Philippe Soulier (University of Paris X); Yi Wang (New York University)
    Abstract: We establish sufficient conditions on durations that are stationary with finite variance and memory parameter $d \in [0,1/2)$ to ensure that the corresponding counting process $N(t)$ satisfies $\mathrm{Var}\, N(t) \sim C t^{2d+1}$ ($C>0$) as $t \rightarrow \infty$, with the same memory parameter $d \in [0,1/2)$ that was assumed for the durations. Thus, these conditions ensure that the memory in durations propagates to the same memory parameter in counts and therefore in realized volatility. We then show that any Autoregressive Conditional Duration (ACD(1,1)) model with a sufficient number of finite moments yields short memory in counts, while any Long Memory Stochastic Duration model with $d>0$ and all finite moments yields long memory in counts, with the same $d$. Finally, we present a result implying that the only way for a series of counts aggregated over a long time period to have nontrivial autocorrelation is for the short-term counts to have long memory. In other words, aggregation ultimately destroys all autocorrelation in counts, if and only if the counts have short memory.
    Keywords: Long Memory Stochastic Duration, Autoregressive Conditional Duration, Rosenthal-type Inequality.
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–11–08
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511010&r=ecm
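    As an informal illustration of the memory-propagation claim, the sketch below simulates durations from a stylized long-memory stochastic duration mechanism (a truncated ARFIMA(0,d,0) factor times exponential noise; all settings are illustrative assumptions) and checks that log Var N(t) grows roughly linearly in log t with slope near 2d+1.

      import numpy as np

      rng = np.random.default_rng(1)

      def fractional_noise(n, d, n_lags=1000):
          """Approximate ARFIMA(0,d,0) noise via a truncated MA(infinity) expansion."""
          psi = np.empty(n_lags)
          psi[0] = 1.0
          for k in range(1, n_lags):
              psi[k] = psi[k - 1] * (k - 1 + d) / k
          eps = rng.standard_normal(n + n_lags)
          return np.convolve(eps, psi, mode="valid")[:n]

      def lmsd_durations(n, d, sigma=0.5):
          """Stylized long-memory durations: tau_i = exp(sigma * x_i) * e_i."""
          return np.exp(sigma * fractional_noise(n, d)) * rng.exponential(size=n)

      d, ts, reps = 0.3, np.array([200.0, 400.0, 800.0, 1600.0]), 200
      counts = []
      for _ in range(reps):
          arrival_times = np.cumsum(lmsd_durations(4000, d))
          counts.append(np.searchsorted(arrival_times, ts))  # N(t) for each t
      var_n = np.array(counts).var(axis=0)

      # If memory propagates from durations to counts, the slope of
      # log Var N(t) against log t should exceed 1 and sit near 2d + 1 (here 1.6).
      slope = np.polyfit(np.log(ts), np.log(var_n), 1)[0]
      print("estimated slope:", round(slope, 2), "  theoretical 2d+1:", 2 * d + 1)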
  3. By: J.M.C. Santos Silva (ISEG/Universidade Tecnica de Lisboa)
    Abstract: Building on the work of Lustbader and Moolgavkar (1985, 'A Diagnostic Statistic for the Score Test', Journal of the American Statistical Association 80, 375-379), this paper studies influence diagnostics for score tests. The diagnostic proposed by Lustbader and Moolgavkar is reassessed and alternative diagnostics are proposed. The new diagnostics are based on one-step approximations to the change in the parameter estimates resulting from deleting one observation. The use of the different diagnostics is illustrated with simulations and with an example.
    Keywords: artificial regressions; Newton's algorithm; one-step approximations.
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–11–07
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511008&r=ecm
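    The one-step deletion idea can be illustrated in a few lines: for a fitted logit model, the change in the maximum-likelihood estimate from dropping observation i is approximated by a single Newton step using the full-sample Hessian. This is a generic sketch of one-step case deletion (simulated data, no leverage correction), not the specific diagnostics proposed in the paper.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 200
      X = sm.add_constant(rng.standard_normal((n, 3)))
      beta_true = np.array([0.5, 1.0, -1.0, 0.3])
      y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))

      res = sm.Logit(y, X).fit(disp=0)
      p = res.predict(X)

      # One-step approximation to the change in the MLE when observation i is
      # deleted: delta_i ~ -(X'WX)^{-1} x_i (y_i - p_i), with W = diag(p(1-p)).
      H = X.T @ (X * (p * (1 - p))[:, None])
      deltas = -np.linalg.solve(H, (X * (y - p)[:, None]).T).T   # n x k matrix of changes

      influence = np.linalg.norm(deltas, axis=1)
      print("most influential observations:", np.argsort(influence)[-5:])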
  4. By: Tångdahl, Sara (Department of Business, Economics, Statistics and Informatics)
    Abstract: In most surveys, the risk of nonresponse is a factor taken into account at the planning stage. Commonly, resources are set aside for a follow-up procedure which aims at reducing the nonresponse rate. However, we should pay attention to the effect of nonresponse, rather than the nonresponse rate itself. When considering nonresponse error, i.e. bias and variance, it is not obvious that the resources spent on reducing the nonresponse rate are well spent. In this paper we address this issue, continuing the work begun in Tångdahl (2004), now focusing on the effect of follow-ups on estimator variance. The components of the variance of some common estimators are derived under a setup that allows us to take into account the data collection process, and follow-up efforts in particular.
    Keywords: nonresponse variance; response distribution; RHG model; calibration for nonresponse; variance components; resource allocation
    JEL: C13 C42
    Date: 2005–11–04
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2005_009&r=ecm
  5. By: S.E. Satchell; S.M. Wright
    Abstract: The purpose of this paper is to present a rank-based approach to cross-sectional linear factor modelling. The emphasis is on approximating factor exposures in a consistent manner in order to facilitate the merging of subjective information (from professional investors) with objective information (from accounting data and/or state-of-the-art quantitative models) in a statistically rigorous way, without needing to impose the unrealistic simplifying assumptions typical of more standard time-series models. We deal with the problems of identifying country and sector returns by an innovative hierarchical factor structure. This is all discussed from the perspective that investment models are not immutable but rather need to be designed with characteristics that are fit for their purpose; for example, returning aggregate country and sector forecasts that are consistent by construction.
    Keywords: Linear Factor Models, Ranking, Robustness Exposures, Forecasting.
    JEL: G11
    Date: 2005–11
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0553&r=ecm
  6. By: Michal Benko; Alois Kneip
    Abstract: Functional data analysis (FDA) has become a popular technique in applied statistics. In particular, this methodology has received considerable attention in recent studies in empirical finance. In this talk we discuss selected topics of functional principal components analysis that are motivated by financial data.
    Keywords: nonparametric risk management, generalized hyperbolic distribution, functional data analysis
    JEL: C13 G19
    Date: 2005–03
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-016&r=ecm
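    For readers unfamiliar with functional principal components, the sketch below shows the basic computation on simulated curves observed on a dense common grid, where FPCA reduces to the singular value decomposition of the centered data matrix (all data and dimensions are illustrative assumptions, not taken from the paper).

      import numpy as np

      rng = np.random.default_rng(3)

      # Simulated functional data: n curves observed on a common grid of m points.
      n, m = 50, 100
      grid = np.linspace(0, 1, m)
      scores_true = rng.standard_normal((n, 2)) * np.array([2.0, 0.7])
      curves = (scores_true[:, [0]] * np.sin(2 * np.pi * grid)
                + scores_true[:, [1]] * np.cos(2 * np.pi * grid)
                + 0.1 * rng.standard_normal((n, m)))

      # On a dense grid, functional PCA amounts to the SVD of the centered curves.
      centered = curves - curves.mean(axis=0)
      U, s, Vt = np.linalg.svd(centered, full_matrices=False)

      eigenfunctions = Vt[:2]                    # leading principal component functions
      pc_scores = centered @ eigenfunctions.T    # scores of each curve on them
      explained = (s[:2] ** 2).sum() / (s ** 2).sum()
      print("variance explained by two components:", round(explained, 3))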
  7. By: CHARFEDDINE Lanouar (University of Paris II, Centre de recherche ERMES; doctoral researcher at ENS Cachan)
    Abstract: In recent years two classes of switching models have been proposed: the Markov switching models of Hamilton (1989) and the Threshold Autoregressive (TAR) models of Tong and Lim (1980). These models have the advantage of being able to capture the asymmetry, sudden changes and time irreversibility observed in many economic and financial time series. Despite these similarities and common points, the two models have evolved largely independently in the literature. In this paper, using the $SupLR$ test, we study the possibility of discriminating between these two models. This approach is motivated by the fact that, in applications, the majority of authors use switching models without any statistical justification. We show that when the null hypothesis is rejected, different switching models can appear significant. Then, using simulation experiments, we show that it is very difficult to differentiate between MSAR and SETAR models, especially with large samples. The power of the $SupLR$ test appears to be sensitive to the mean, the noise variance and the delay parameter of each model. Finally, we apply this methodology to the US GNP growth rate and the US/UK exchange rate.
    Keywords: Switching Models, SETAR processes, SupLR test, Empirical power, exchange rates
    JEL: C12 C15 F31
    Date: 2005–11–04
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpif:0511002&r=ecm
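    A minimal version of the SupLR construction for a linear AR(1) null against a two-regime SETAR(1) alternative is sketched below: because the threshold is not identified under the null, the likelihood-ratio statistic is maximized over a trimmed grid of candidate thresholds, and critical values have to come from simulation or bootstrap rather than a chi-square table. The data-generating process and grid settings are illustrative assumptions, not the paper's designs.

      import numpy as np

      rng = np.random.default_rng(4)

      def suplr_setar(y, trim=0.15, n_grid=50):
          """SupLR statistic: linear AR(1) null vs. two-regime SETAR(1) with delay 1."""
          x, yt = y[:-1], y[1:]
          n = len(yt)
          X0 = np.column_stack([np.ones(n), x])
          rss0 = np.sum((yt - X0 @ np.linalg.lstsq(X0, yt, rcond=None)[0]) ** 2)

          lr = []
          for c in np.quantile(x, np.linspace(trim, 1 - trim, n_grid)):
              low = (x <= c)[:, None]
              X1 = np.column_stack([X0 * low, X0 * ~low])   # regime-specific regressors
              rss1 = np.sum((yt - X1 @ np.linalg.lstsq(X1, yt, rcond=None)[0]) ** 2)
              lr.append(n * np.log(rss0 / rss1))
          return max(lr)

      # Under a linear AR(1) the statistic has a nonstandard null distribution,
      # so p-values would be obtained by simulating this statistic many times.
      y = np.zeros(300)
      for t in range(1, 300):
          y[t] = 0.5 * y[t - 1] + rng.standard_normal()
      print("SupLR statistic on linear AR(1) data:", round(suplr_setar(y), 2))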
  8. By: Roberto Patuelli (Vrije Universiteit); Simonetta Longhi (University of Essex); Aura Reggiani (University of Bologna); Peter Nijkamp (Vrije Universiteit); Uwe Blien (Institut fuer Arbeitsmarkt und Berufsforschung)
    Abstract: Using a panel of 439 German regions we evaluate and compare the performance of various Neural Network (NN) models as forecasting tools for regional employment growth. Because of relevant differences in data availability between the former East and West Germany, NN models are computed separately for the two parts of the country. The comparisons of the models and their ex-post forecasts have been carried out by means of a non-parametric test: viz. the Friedman statistic. The Friedman statistic tests the consistency of model results obtained in terms of their rank order. Since there is no normal distribution assumption, this methodology is an interesting substitute for a standard analysis of variance. Furthermore, the Friedman statistic is indifferent to the scale on which the data are measured. The evaluation of the ex-post forecasts suggests that NN models are generally able to correctly identify the fastest-growing and the slowest-growing regions, and hence predict rather well the correct ranking of regions in terms of their employment growth. The comparison among NN models – on the basis of several criteria – suggests that the choice of the variables used in the model may influence the model’s performance and the reliability of its forecasts.
    Keywords: forecasts, regional employment, learning algorithms, rank order test
    JEL: C23 E27 R12
    Date: 2005–11–08
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpur:0511004&r=ecm
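    The Friedman rank test used here for model comparison is easy to reproduce; the sketch below applies it to hypothetical absolute forecast errors of three competing models across 40 regions (all numbers are made up for illustration and are not the paper's results).

      import numpy as np
      from scipy.stats import friedmanchisquare

      rng = np.random.default_rng(5)

      # Hypothetical absolute forecast errors: each row is a region (block),
      # each column a competing model (treatment).
      n_regions = 40
      errors_a = np.abs(rng.normal(1.0, 0.3, n_regions))
      errors_b = np.abs(rng.normal(1.1, 0.3, n_regions))
      errors_c = np.abs(rng.normal(1.4, 0.3, n_regions))

      # The Friedman statistic ranks the models within each region and tests
      # whether average ranks differ; no normality assumption is needed and the
      # result is invariant to the scale on which the errors are measured.
      stat, p_value = friedmanchisquare(errors_a, errors_b, errors_c)
      print(f"Friedman chi-square = {stat:.2f}, p-value = {p_value:.4f}")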
  9. By: Roberto Patuelli (Vrije Universiteit); Simonetta Longhi (University of Essex); Aura Reggiani (University of Bologna); Peter Nijkamp (Vrije Universiteit)
    Abstract: This paper develops a flexible multi-dimensional assessment method for the comparison of different statistical-econometric techniques based on learning mechanisms, with a view to analysing and forecasting regional labour markets. The aim of this paper is twofold. A first major objective is to explore the use of a standard choice tool, namely Multicriteria Analysis (MCA), in order to cope with the intrinsic methodological uncertainty on the choice of a suitable statistical-econometric learning technique for regional labour market analysis. MCA is applied here to support choices on the performance of various models – based on classes of Neural Network (NN) techniques – that serve to generate employment forecasts in West Germany at a regional/district level. A second objective of the paper is to analyse the methodological potential of a blend of approaches (NN-MCA) in order to extend the analysis framework to other economic research domains, where formal models are not available, but where a variety of statistical data is present. The paper offers a basis for a more balanced judgement of the performance of rival statistical tests.
    Keywords: multicriteria analysis; neural networks; regional labour markets
    JEL: C9
    Date: 2005–11–08
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpex:0511001&r=ecm
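    A very simple multicriteria scheme, a weighted sum over min-max normalized criteria, is sketched below to show how model performance can be aggregated across several criteria into one ranking; the models, criteria, weights and figures are illustrative assumptions, and the paper's MCA machinery is richer than this.

      import numpy as np

      # Hypothetical performance matrix: rows are candidate NN models, columns are
      # criteria (e.g. MSE, MAPE, rank correlation with actual growth, CPU time).
      models = ["NN-1", "NN-2", "NN-3", "NN-4"]
      performance = np.array([
          [0.021, 3.1, 0.82, 120.0],
          [0.019, 3.4, 0.85, 240.0],
          [0.025, 2.9, 0.80,  90.0],
          [0.020, 3.2, 0.84, 300.0],
      ])
      benefit = np.array([False, False, True, False])   # True: higher is better
      weights = np.array([0.35, 0.25, 0.30, 0.10])

      # Min-max normalization to [0, 1], flipping cost criteria so 1 is always best.
      lo, hi = performance.min(axis=0), performance.max(axis=0)
      norm = (performance - lo) / (hi - lo)
      norm[:, ~benefit] = 1 - norm[:, ~benefit]

      for model, score in sorted(zip(models, norm @ weights), key=lambda t: -t[1]):
          print(f"{model}: composite score {score:.3f}")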
  10. By: Szymon Borak; Matthias Fengler; Wolfgang Härdle
    Abstract: Implied volatility has become one of the key issues in modern quantitative finance, since plain vanilla option prices contain vital information for the pricing and hedging of exotic and illiquid options. European plain vanilla options are nowadays widely traded, which results in a great amount of high-dimensional data, especially at the intraday level. The data reveal a degenerated string structure. Dynamic Semiparametric Factor Models (DSFM) are tailored to handle complex, degenerated data and yield a low-dimensional representation of the implied volatility surface (IVS). We discuss estimation issues of the model and apply it to DAX option prices.
    Keywords: dynamic semiparametric factor model, implied volatility, vanilla options, DAX option prices
    JEL: C14
    Date: 2005–04
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2005-022&r=ecm
  11. By: Giovanni Maria Giorgi (University of Siena)
    Keywords: Inference and bounds on the Gini index, decompositions, new interpretations and extensions of the concentration ratio
    JEL: C1 C2 C3 C4 C5 C8
    Date: 2005–11–04
    URL: http://d.repec.org/n?u=RePEc:wpa:wuwpem:0511004&r=ecm

This nep-ecm issue is ©2005 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.