nep-ecm New Economics Papers
on Econometrics
Issue of 2009‒01‒10
fifteen papers chosen by
Sune Karlsson
Orebro University

  1. A NONPARAMETRIC TEST FOR EQUALITY OF DISTRIBUTIONS WITH MIXED CATEGORICAL AND CONTINUOUS DATA By Qi Li; Esfandiar Maasoumi; Jeffrey S. Racine
  2. Estimation with inequality constraints on the parameters: dealing with truncation of the sampling distribution. By Barnett, William A.; Seck, Ousmane
  3. Identifying Financial Time Series with Similar Dynamic Conditional Correlation By Edoardo Otranto
  4. A functional data based method for time series classification By Andres M. Alonso; David Casado; Sara Lopez Pintado; Juan Romo
  5. A multivariate generalized independent factor GARCH model with an application to financial stock returns By Antonio Garcia-Ferrer; Ester Gonzalez-Prieto; Daniel Pena
  6. Admissible clustering of aggregator components: a necessary and sufficient stochastic semi-nonparametric test for weak separability By Barnett, William A.; de Peretti, Philippe
  7. A Robust Entropy-Based Test of Asymmetry for Discrete and Continuous Processes By Esfandiar Maasoumi; Jeffrey S. Racine
  8. A Pure-Jump Transaction-Level Price Model Yielding Cointegration, Leverage, and Nonsynchronous Trading Effects By Hurvich, Clifford; Wang, Yi
  9. Realized volatility By Torben G. Andersen; Luca Benzoni
  10. Multi-Factor Policy Evaluation and Selection in the One-Sample Situation By Chen, C.M.
  11. Measuring financial risk : comparison of alternative procedures to estimate VaR and ES By Maria Rosa Nieto; Esther Ruiz
  12. The fragility of sensitivity analysis: an encompassing perspective By Neil R. Ericsson
  13. Locally linear approximation for Kernel methods : the Railway Kernel By Javier Gonzalez; Alberto Munoz
  14. The Perils of the Learning Model For Modeling Endogenous Technological Change By William D. Nordhaus
  15. Measuring Productivity By Massimo Del Gatto; Adriana Di Liberto; Carmelo Petraglia

  1. By: Qi Li; Esfandiar Maasoumi; Jeffrey S. Racine
    Abstract: In this paper we consider the problem of testing for equality of two density or two conditional density functions defined over mixed discrete and continuous variables. We smooth both the discrete and continuous variables, with the smoothing parameters chosen via least-squares cross-validation. The test statistics are shown to have (asymptotic) normal null distributions. However, we advocate the use of bootstrap methods in order to better approximate their null distribution in finite-sample settings. Simulations show that the proposed tests have better power than both conventional frequency-based tests and smoothing tests based on ad hoc smoothing parameter selection, while a demonstrative empirical application to the joint distribution of earnings and educational attainment underscores the utility of the proposed approach in mixed data settings.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:emo:wp2003:0805&r=ecm
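    The entry above describes a kernel-based two-sample test whose null distribution is approximated by the bootstrap. The following Python sketch illustrates the general idea in a deliberately simplified, continuous-only setting: it compares two kernel density estimates through an integrated squared difference and bootstraps from the pooled sample. The smoothing choices and the statistic are placeholders, not the authors' estimator, which also smooths discrete variables and selects bandwidths by least-squares cross-validation.

      # Simplified two-sample density-equality test (continuous data only).
      # Statistic: integrated squared difference between two kernel density
      # estimates; null distribution approximated by a pooled-sample bootstrap.
      import numpy as np
      from scipy.stats import gaussian_kde

      def isd_statistic(x, y, grid):
          """Integrated squared difference between KDEs of x and y on a grid."""
          fx = gaussian_kde(x)(grid)
          fy = gaussian_kde(y)(grid)
          return np.trapz((fx - fy) ** 2, grid)

      def bootstrap_test(x, y, n_boot=499, seed=0):
          rng = np.random.default_rng(seed)
          pooled = np.concatenate([x, y])
          grid = np.linspace(pooled.min(), pooled.max(), 256)
          t_obs = isd_statistic(x, y, grid)
          t_boot = np.empty(n_boot)
          for b in range(n_boot):
              xb = rng.choice(pooled, size=len(x), replace=True)
              yb = rng.choice(pooled, size=len(y), replace=True)
              t_boot[b] = isd_statistic(xb, yb, grid)
          return t_obs, np.mean(t_boot >= t_obs)   # statistic and bootstrap p-value

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          x = rng.normal(0.0, 1.0, 200)
          y = rng.normal(0.5, 1.0, 200)            # shifted mean: densities differ
          print(bootstrap_test(x, y))              # small p-value expected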
  2. By: Barnett, William A.; Seck, Ousmane
    Abstract: Theoretical constraints on economic-model parameters often take the form of inequality restrictions; for example, many theoretical results are monotonicity or nonnegativity restrictions. Inequality constraints can truncate the sampling distributions of parameter estimators, so that asymptotic normality no longer is possible. Sampling-theoretic asymptotic inference is thereby greatly complicated or compromised. We use numerical methods to investigate the resulting sampling properties of inequality-constrained estimators produced by popular methods of imposing inequality constraints. In particular, we investigate the possible bias in the asymptotic standard errors of inequality-constrained estimators when the constraint is imposed by the popular method of squaring. That approach is known to violate a regularity condition in the available asymptotic proofs regarding the unconstrained estimator, since the sign of the unconstrained estimator, prior to squaring, is not identified.
    Keywords: inequality constraints; truncation of sampling distribution; asymptotics; constrained estimation
    JEL: C13 C16 C15
    Date: 2008–08–28
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:12500&r=ecm
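    The truncation effect discussed in this entry can be seen in the simplest possible case: estimating a nonnegative location parameter by writing it as a square. The Monte Carlo sketch below is a hypothetical illustration (not the authors' experiments); it shows that when the true parameter sits on the boundary, the squared-parameter estimator piles up mass at zero, so normal asymptotics for the unconstrained estimator no longer describe its sampling distribution.

      # Imposing a nonnegativity constraint by squaring: theta = gamma**2.
      # For the location model y_i = theta + eps_i, minimizing sum((y - gamma**2)**2)
      # over real gamma gives theta_hat = gamma_hat**2 = max(mean(y), 0),
      # i.e. the unconstrained estimator truncated at zero.
      import numpy as np

      rng = np.random.default_rng(0)
      n, n_rep, theta_true = 100, 20000, 0.0   # true parameter on the boundary

      theta_hat = np.empty(n_rep)
      for r in range(n_rep):
          y = theta_true + rng.normal(size=n)
          theta_hat[r] = max(y.mean(), 0.0)    # constrained estimate via squaring

      print("share of replications exactly at zero :", np.mean(theta_hat == 0.0))
      print("Monte Carlo std. dev. of the estimator:", theta_hat.std())
      print("naive asymptotic std. error 1/sqrt(n) :", 1 / np.sqrt(n))
      # Roughly half the replications hit the boundary, and the Monte Carlo
      # spread differs from the naive normal-theory standard error.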
  3. By: Edoardo Otranto
    Abstract: One of the main problems in modelling multivariate conditional covariance time series is the parameterization of the correlation structure because, if no constraints are imposed, it implies a large number of unknown coefficients. The most popular models propose parsimonious representations, imposing similar correlation structures on all the series or on groups of time series, but the choice of these groups is quite subjective. In this paper we propose a statistical approach to detect groups of time series that are homogeneous in terms of correlation dynamics. The approach is based on a clustering algorithm, which uses the idea of distance between dynamic conditional correlations, and on the classical Wald test to compare the coefficients of two groups of dynamic conditional correlations. The proposed approach is evaluated in simulation experiments and applied to a set of financial time series.
    Keywords: Multivariate GARCH, DCC, distance, Wald test, clustering.
    JEL: C10 C32 G11
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:200817&r=ecm
  4. By: Andres M. Alonso; David Casado; Sara Lopez Pintado; Juan Romo
    Abstract: We propose using the integrated periodogram to classify time series. The method assigns a new element to the group minimizing the distance from the integrated periodogram of the element to the group mean of integrated periodograms. Local computation of these periodograms allows the application of the approach to nonstationary time series. Since the integrated periodograms are functional data, we apply depth-based techniques to make the classification robust. The method provides small error rates with both simulated and real data, and shows good computational behaviour.
    Keywords: Time series, Classification, Integrated periodogram, Data depth
    JEL: C14 C22
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087427&r=ecm
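    The classification rule described in this entry can be sketched as follows: compute normalized integrated periodograms and assign a new series to the group whose mean integrated periodogram is closest. The Python snippet below is a simplified illustration with a global periodogram and an L1 distance; it ignores the local-window and data-depth refinements mentioned in the abstract, and all names and the AR(1) example data are hypothetical.

      # Classify a time series by the distance between its integrated periodogram
      # and each group's mean integrated periodogram (simplified version).
      import numpy as np

      def integrated_periodogram(x):
          """Normalized cumulative periodogram of a demeaned series."""
          x = np.asarray(x, dtype=float) - np.mean(x)
          pgram = np.abs(np.fft.rfft(x)) ** 2 / len(x)   # raw periodogram
          cum = np.cumsum(pgram[1:])                     # drop the zero frequency
          return cum / cum[-1]                           # normalize to end at 1

      def classify(new_series, groups):
          """groups: dict mapping label -> list of equal-length training series."""
          ip_new = integrated_periodogram(new_series)
          dists = {
              label: np.abs(ip_new -
                            np.mean([integrated_periodogram(s) for s in members],
                                    axis=0)).sum()
              for label, members in groups.items()
          }
          return min(dists, key=dists.get)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)

          def ar1(phi, n=300):
              e = rng.normal(size=n)
              x = np.zeros(n)
              for t in range(1, n):
                  x[t] = phi * x[t - 1] + e[t]
              return x

          groups = {"persistent": [ar1(0.8) for _ in range(10)],
                    "noisy": [ar1(0.1) for _ in range(10)]}
          print(classify(ar1(0.8), groups))   # expected: "persistent"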
  5. By: Antonio Garcia-Ferrer; Ester Gonzalez-Prieto; Daniel Pena
    Abstract: We propose a new multivariate factor GARCH model, the GICA-GARCH model, in which the data are assumed to be generated by a set of independent components (ICs). The model applies independent component analysis (ICA) to search for the conditionally heteroskedastic latent factors. We use two ICA approaches to estimate the ICs: the first estimates the components by maximizing their non-Gaussianity, and the second exploits the temporal structure of the data. After estimating the ICs, we fit a univariate GARCH model to the volatility of each IC. Thus, the GICA-GARCH model reduces the complexity of estimating a multivariate GARCH model by transforming it into a small number of univariate volatility models. We report simulation experiments that show the ability of ICA to discover leading factors in a multivariate vector of financial data. An empirical application to the Madrid stock market is presented, in which we compare the forecasting accuracy of the GICA-GARCH model with that of the orthogonal GARCH model.
    Keywords: ICA, Multivariate GARCH, Factor models, Forecasting volatility
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087528&r=ecm
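    A minimal sketch of the two-step idea described in this entry, assuming the scikit-learn and arch packages are available: extract independent components from a panel of returns with FastICA (the non-Gaussianity-based route mentioned in the abstract) and then fit a univariate GARCH(1,1) to each component. The data are synthetic, and the identification of "leading" factors and the forecasting comparison in the paper are not reproduced.

      # Two-step GICA-GARCH-style sketch: ICA on a return panel, then a
      # univariate GARCH(1,1) on each estimated independent component.
      import numpy as np
      from sklearn.decomposition import FastICA   # assumed available
      from arch import arch_model                 # assumed available

      rng = np.random.default_rng(0)
      T, N, k = 1000, 5, 2

      # Purely synthetic stand-in for a (T x N) matrix of stock returns.
      returns = rng.standard_t(df=5, size=(T, N)) * 0.01

      # Step 1: estimate k independent components (non-Gaussianity criterion).
      ica = FastICA(n_components=k, random_state=0)
      components = ica.fit_transform(returns)      # (T x k) estimated ICs

      # Step 2: fit a univariate GARCH(1,1) to each (rescaled) component.
      for j in range(k):
          ic = 100 * components[:, j]              # rescale for numerical stability
          res = arch_model(ic, vol="GARCH", p=1, q=1).fit(disp="off")
          print(f"IC {j}: omega={res.params['omega']:.4f}, "
                f"alpha={res.params['alpha[1]']:.3f}, "
                f"beta={res.params['beta[1]']:.3f}")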
  6. By: Barnett, William A.; de Peretti, Philippe
    Abstract: In aggregation theory, the admissibility condition for clustering together components to be aggregated is blockwise weak separability, which is also the condition needed to separate out sectors of the economy. Although weak separability is thereby of central importance in aggregation and index number theory and in econometrics, prior attempts to produce statistical tests of weak separability have performed poorly in Monte Carlo studies. This paper deals with semi-nonparametric tests for weak separability. It introduces a necessary and sufficient test, together with a fully stochastic procedure that takes measurement error into account. Simulations show that the test performs well, even for large measurement errors.
    Keywords: weak separability; quantity aggregation; clustering; sectors; index number theory; semi-nonparametrics
    JEL: C43 D12 C14 C12
    Date: 2008–11–03
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:12503&r=ecm
  7. By: Esfandiar Maasoumi; Jeffrey S. Racine
    Abstract: We consider a metric entropy capable of detecting deviations from symmetry that is suitable for both discrete and continuous processes. A test statistic is constructed from an integrated normed difference between nonparametric estimates of two density functions. The null distribution (symmetry) is obtained by resampling from an artificially lengthened series constructed from a rotation of the original series about its mean (median, mode). Simulations demonstrate that the test has correct size and good power in the direction of interesting alternatives, while applications to updated Nelson & Plosser (1982) data demonstrate its potential power gains relative to existing tests.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:emo:wp2003:0806&r=ecm
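    The resampling scheme described in this entry can be sketched in simplified form: rotate the series about its mean, measure the distance between the density estimates of the original and rotated series, and obtain the null distribution by resampling from the artificially lengthened, symmetrized series. In the Python illustration below an integrated squared difference stands in for the paper's metric entropy; it is a hypothetical sketch, not the authors' test.

      # Simplified symmetry test: compare the density of x with the density of
      # its rotation about the mean, 2*mean(x) - x; bootstrap the null by
      # resampling from the lengthened (symmetrized) series.  An integrated
      # squared difference replaces the paper's metric entropy.
      import numpy as np
      from scipy.stats import gaussian_kde

      def asym_statistic(x, grid):
          rotated = 2 * np.mean(x) - x
          return np.trapz((gaussian_kde(x)(grid) - gaussian_kde(rotated)(grid)) ** 2,
                          grid)

      def symmetry_test(x, n_boot=299, seed=0):
          rng = np.random.default_rng(seed)
          lengthened = np.concatenate([x, 2 * np.mean(x) - x])  # symmetric by construction
          grid = np.linspace(lengthened.min(), lengthened.max(), 256)
          t_obs = asym_statistic(x, grid)
          t_boot = np.array([asym_statistic(rng.choice(lengthened, size=len(x)), grid)
                             for _ in range(n_boot)])
          return t_obs, np.mean(t_boot >= t_obs)   # statistic, bootstrap p-value

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          skewed = rng.chisquare(3, size=400)      # asymmetric example series
          print(symmetry_test(skewed))             # small p-value expected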
  8. By: Hurvich, Clifford; Wang, Yi
    Abstract: We propose a new transaction-level bivariate log-price model, which yields fractional or standard cointegration. The model provides a link between market microstructure and lower-frequency observations. The two ingredients of our model are a Long Memory Stochastic Duration process for the waiting times between trades, and a pair of stationary noise processes which determine the jump sizes in the pure-jump log-price process. Our model includes feedback between the disturbances of the two log-price series at the transaction level, which induces standard or fractional cointegration for any fixed sampling interval. We prove that the cointegrating parameter can be consistently estimated by the ordinary least-squares estimator, and obtain a lower bound on the rate of convergence. We propose transaction-level method-of-moments estimators of the other parameters in our model and discuss the consistency of these estimators. We then use simulations to argue that suitably-modified versions of our model are able to capture a variety of additional properties and stylized facts, including leverage, and portfolio return autocorrelation due to nonsynchronous trading. The ability of the model to capture these effects stems in most cases from the fact that the model treats the (stochastic) intertrade durations in a fully endogenous way.
    Keywords: Tick Time; Long Memory Stochastic Duration; Information Share.
    JEL: C32
    Date: 2009–01–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:12575&r=ecm
  9. By: Torben G. Andersen; Luca Benzoni
    Abstract: Realized volatility is a nonparametric ex-post estimate of the return variation. The most obvious realized volatility measure is the sum of finely-sampled squared return realizations over a fixed time interval. In a frictionless market the estimate achieves consistency for the underlying quadratic return variation when returns are sampled at increasingly higher frequency. We begin with an account of how and why the procedure works in a simplified setting and then extend the discussion to a more general framework. Along the way we clarify how the realized volatility and quadratic return variation relate to the more commonly applied concept of conditional return variance. We then review a set of related and useful notions of return variation along with practical measurement issues (e.g., discretization error and microstructure noise) before briefly touching on the existing empirical applications.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-08-14&r=ecm
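    As a quick numerical illustration of the estimator surveyed in this entry, the sketch below sums squared intraday returns over a simulated frictionless price path and shows that finer sampling brings the realized variance closer to the quadratic variation of the path. The simulation setup is hypothetical and far simpler than the settings treated in the paper.

      # Realized variance: the sum of squared intraday returns.  With
      # frictionless, finely sampled prices it approximates the integrated
      # (quadratic) return variation over the day.
      import numpy as np

      rng = np.random.default_rng(0)
      n_fine = 23400                      # one-second grid over a 6.5-hour day
      iv = 1.0                            # integrated daily variance (known here)
      log_price = np.cumsum(rng.normal(0.0, np.sqrt(iv / n_fine), n_fine))

      print("target integrated variance:", iv)
      for step in (1800, 300, 60, 1):     # 30-min, 5-min, 1-min, 1-sec sampling
          returns = np.diff(log_price[::step])
          print(f"sampling every {step:>4} s -> realized variance "
                f"{np.sum(returns ** 2):.4f}")
      # Finer sampling yields a realized variance closer to the quadratic
      # variation of the simulated path, and hence to the integrated variance.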
  10. By: Chen, C.M. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University)
    Abstract: Firms nowadays need to make decisions under rapid information obsolescence. In this paper I deal with one class of decision problems in this situation, called the “one-sample” problems: we have finitely many options and a single sample of the multiple criteria that we use to evaluate those options. I develop evaluation procedures based on bootstrapping DEA (Data Envelopment Analysis) and related decision-making methods. This paper improves the bootstrap procedure proposed by Simar and Wilson (1998) and shows how to exploit information from bootstrap outputs for decision-making.
    Keywords: multiple criteria;bootstrap;data envelopment analysis;parametric transformation;R&D project;supplier selection
    Date: 2008–12–11
    URL: http://d.repec.org/n?u=RePEc:dgr:eureri:1765014275&r=ecm
  11. By: Maria Rosa Nieto; Esther Ruiz
    Abstract: We review several procedures for estimating and backtesting two of the most important measures of risk, the Value at Risk (VaR) and the Expected Shortfall (ES). The alternative estimators differ in the way they specify and estimate the conditional mean and variance and the conditional distribution of returns. The results are illustrated by estimating the VaR and ES of daily S&P500 returns.
    Keywords: Backtesting, Extreme value, GARCH models, Leverage effect
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087326&r=ecm
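    The procedures compared in this entry differ in how the conditional mean, variance, and error distribution are specified. As a baseline, the sketch below computes one-day VaR and ES from a GARCH(1,1) fit combined with the empirical quantile of the standardized residuals, in the spirit of filtered historical simulation; the arch package and the synthetic return series are assumptions of this illustration, not the paper's procedures.

      # One-day-ahead VaR and ES at level alpha from a GARCH(1,1) filter and
      # empirical quantiles of the standardized residuals.
      import numpy as np
      from arch import arch_model          # assumed available

      rng = np.random.default_rng(0)
      returns = rng.standard_t(df=6, size=2000)   # synthetic daily returns, in percent

      alpha = 0.01
      res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
      sigma_next = np.sqrt(res.forecast(horizon=1).variance.values[-1, 0])
      mu = res.params["mu"]

      z = (returns - mu) / res.conditional_volatility   # standardized residuals
      q = np.quantile(z, alpha)                         # empirical alpha-quantile
      var_next = -(mu + sigma_next * q)                 # reported as a positive loss
      es_next = -(mu + sigma_next * z[z <= q].mean())

      print(f"1-day {100 * (1 - alpha):.0f}% VaR: {var_next:.2f}%   ES: {es_next:.2f}%")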
  12. By: Neil R. Ericsson
    Abstract: Robustness and fragility in Leamer's sense are defined with respect to a particular coefficient over a class of models. This paper shows that inclusion of the data generation process in that class of models is neither necessary nor sufficient for robustness. This result holds even if the properly specified model has well-determined, statistically significant coefficients. The encompassing principle explains how this result can occur. Encompassing also provides a link to a more common-sense notion of robustness, which is still a desirable property empirically; and encompassing clarifies recent discussion on model averaging and the pooling of forecasts.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:fip:fedgif:959&r=ecm
  13. By: Javier Gonzalez; Alberto Munoz
    Abstract: In this paper we present a new kernel, the Railway Kernel, that works well for general (nonlinear) classification problems and has the interesting property of acting locally as a linear kernel. In this way we avoid potential problems that arise with general-purpose kernels such as the RBF kernel, for instance the high dimension of the induced feature space. As a consequence, with our methodology the number of support vectors is much lower and, therefore, the generalization capability of the proposed kernel is higher than that obtained using RBF kernels. Experimental work is presented to support the theoretical results.
    Keywords: Support vector machines, Kernel Methods, Classification problems
    Date: 2008–12
    URL: http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087024&r=ecm
  14. By: William D. Nordhaus (Dept. of Economics, Yale University)
    Abstract: Learning or experience curves are widely used to estimate cost functions in manufacturing modeling. They have recently been introduced in policy models of energy and global warming economics to make the process of technological change endogenous. It is not widely appreciated that this is a dangerous modeling strategy. The present note has three points. First, it shows that there is a fundamental statistical identification problem in trying to separate learning from exogenous technological change and that the estimated learning coefficient will generally be biased upwards. Second, we present two empirical tests that illustrate the potential bias in practice and show that learning parameters are not robust to alternative specifications. Finally, we show that an overestimate of the learning coefficient will provide incorrect estimates of the total marginal cost of output and will therefore bias optimization models to tilt toward technologies that are incorrectly specified as having high learning coefficients.
    Keywords: Learning by doing, Experience curves, Energy models, Technological change
    JEL: O3 O13 D83
    Date: 2009–01
    URL: http://d.repec.org/n?u=RePEc:cwl:cwldpp:1685&r=ecm
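    The identification problem flagged in this entry can be illustrated with a stylized learning-curve cost equation; the specification below is an illustrative reconstruction, not the paper's exact model. Suppose unit cost c_t depends on cumulative output Y_t (learning) and on calendar time t (exogenous technological change):

        ln c_t = ln a - b ln Y_t - h t + e_t.

    If cumulative output grows at a roughly constant rate g, then ln Y_t is approximately ln Y_0 + g t, so the regressors ln Y_t and t are nearly collinear and the learning coefficient b cannot be separated from the exogenous rate h. Substituting this approximation and omitting the time trend gives, approximately,

        ln c_t = const - (b + h/g) ln Y_t + e_t,

    so a learning-curve regression that ignores exogenous technological change recovers b + h/g rather than b: the estimated learning coefficient is biased upward, consistent with the identification and cost-misestimation concerns described in the abstract.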
  15. By: Massimo Del Gatto; Adriana Di Liberto; Carmelo Petraglia
    Abstract: Quantifying productivity is a conditio sine qua non for empirical analysis in a number of research fields. The identification of the measure that best fits the specific goals of the analysis, while also being data-driven, is currently complicated by the fact that an array of methodologies is available. This paper provides economic researchers with an up-to-date overview of the issues and relevant solutions associated with this choice. Methods of productivity measurement are surveyed and classified according to three main criteria: i) macro/micro; ii) frontier/non-frontier; iii) deterministic/econometric.
    Keywords: productivity measurement, TFP, Solow residual, endogeneity, simultaneity, selection bias, Stochastic Frontier Analysis, DEA, Growth accounting, GMM, Olley-Pakes, firm heterogeneity, price dispersion.
    JEL: O40 O33 O47 C14 C33 C43
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:cns:cnscwp:200818&r=ecm

This nep-ecm issue is ©2009 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.