
on Econometrics 
By:  Qi Li; Esfandiar Maasoumi; Jeffrey S. Racine 
Abstract:  In this paper we consider the problem of testing for equality of two density or two conditional density functions defined over mixed discrete and continuous variables. We smooth both the discrete and continuous variables, with the smoothing parameters chosen via least-squares cross-validation. The test statistics are shown to have (asymptotic) normal null distributions. However, we advocate the use of bootstrap methods in order to better approximate their null distribution in finite-sample settings. Simulations show that the proposed tests have better power than both conventional frequency-based tests and smoothing tests based on ad hoc smoothing parameter selection, while a demonstrative empirical application to the joint distribution of earnings and educational attainment underscores the utility of the proposed approach in mixed data settings. 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:emo:wp2003:0805&r=ecm 
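The bootstrap approach described in this abstract can be illustrated with a deliberately simplified sketch: continuous data only, a Gaussian kernel with a rule-of-thumb bandwidth rather than the authors' least-squares cross-validation, an integrated squared difference statistic, and the null imposed by resampling from the pooled sample. All names are illustrative, not the paper's notation.

```python
import math
import random
import statistics

def kde(sample, x, h):
    # Gaussian kernel density estimate at point x with bandwidth h
    return sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample) / (
        len(sample) * h * math.sqrt(2 * math.pi))

def isd_statistic(a, b, grid, h):
    # integrated squared difference between the two density estimates
    dx = grid[1] - grid[0]
    return sum((kde(a, x, h) - kde(b, x, h)) ** 2 for x in grid) * dx

def bootstrap_test(a, b, n_boot=99, seed=0):
    # p-value for H0: both samples come from the same density
    rng = random.Random(seed)
    pooled = a + b
    h = 1.06 * statistics.stdev(pooled) * len(pooled) ** -0.2  # rule of thumb
    lo, hi = min(pooled) - 3 * h, max(pooled) + 3 * h
    grid = [lo + i * (hi - lo) / 100 for i in range(101)]
    t_obs = isd_statistic(a, b, grid, h)
    exceed = 0
    for _ in range(n_boot):
        # under the null, both bootstrap samples are drawn from the pool
        a_star = [rng.choice(pooled) for _ in a]
        b_star = [rng.choice(pooled) for _ in b]
        if isd_statistic(a_star, b_star, grid, h) >= t_obs:
            exceed += 1
    return (exceed + 1) / (n_boot + 1)
```

The pooled resampling is what imposes the null of equal densities; the paper's contribution is handling mixed discrete/continuous data and data-driven bandwidths, neither of which this sketch attempts.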
By:  Barnett, William A.; Seck, Ousmane 
Abstract:  Theoretical constraints on economic-model parameters often are in the form of inequality restrictions. For example, many theoretical results are in the form of monotonicity or nonnegativity restrictions. Inequality constraints can truncate sampling distributions of parameter estimators, so that asymptotic normality no longer is possible. Sampling-theoretic asymptotic inference is thereby greatly complicated or compromised. We use numerical methods to investigate the resulting sampling properties of inequality-constrained estimators produced by popular methods of imposing inequality constraints. In particular, we investigate the possible bias in the asymptotic standard errors of inequality-constrained estimators, when the constraint is imposed by the popular method of squaring. That approach is known to violate a regularity condition in the available asymptotic proofs regarding the unconstrained estimator, since the sign of the unconstrained estimator, prior to squaring, is not identified. 
Keywords:  inequality constraints; truncation of sampling distribution; asymptotics; constrained estimation 
JEL:  C13 C16 C15 
Date:  2008–08–28 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:12500&r=ecm 
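A minimal Monte Carlo sketch of the truncation effect this abstract describes (illustrative only, not the authors' numerical experiments): imposing theta >= 0 by the squaring reparameterization theta = gamma**2 makes the least-squares estimate of theta equal to max(sample mean, 0), so when the true parameter sits on the boundary, half the sampling distribution piles up at zero and asymptotic normality fails.

```python
import random
import statistics

def constrained_mean(xs):
    # Impose theta >= 0 via theta = gamma**2; minimizing the least-squares
    # criterion over gamma yields theta_hat = max(sample mean, 0).
    return max(statistics.fmean(xs), 0.0)

def monte_carlo(theta=0.0, n=50, reps=2000, seed=0):
    # sampling distribution of the constrained estimator when the true
    # parameter sits exactly on the boundary of the inequality constraint
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        xs = [rng.gauss(theta, 1.0) for _ in range(n)]
        estimates.append(constrained_mean(xs))
    mass_at_zero = sum(e == 0.0 for e in estimates) / reps
    return estimates, mass_at_zero
```

With theta = 0, roughly half the replications hit the boundary, so a normal approximation (and the usual standard errors) misstates the estimator's variability.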
By:  Edoardo Otranto 
Abstract:  One of the main problems in modelling multivariate conditional covariance time series is the parameterization of the correlation structure because, if no constraints are imposed, it implies a large number of unknown coefficients. The most popular models propose parsimonious representations, imposing similar correlation structures on all the series or on groups of time series, but the choice of these groups is quite subjective. In this paper we propose a statistical approach to detect groups of homogeneous time series in terms of correlation dynamics. The approach is based on a clustering algorithm, which uses the idea of distance between dynamic conditional correlations, and the classical Wald test to compare the coefficients of two groups of dynamic conditional correlations. The proposed approach is evaluated in terms of simulation experiments and applied to a set of financial time series. 
Keywords:  Multivariate GARCH, DCC, distance, Wald test, clustering. 
JEL:  C10 C32 G11 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:200817&r=ecm 
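The grouping step can be sketched as follows. The distance below is a plain root-mean-square metric between two dynamic-correlation paths and the grouping is a greedy threshold rule; the paper's actual procedure combines a clustering algorithm with Wald tests on DCC coefficients, so treat this as a stand-in.

```python
import math

def corr_distance(r1, r2):
    # root mean squared distance between two correlation paths
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(r1, r2)) / len(r1))

def greedy_groups(paths, threshold):
    # assign each path to the first group whose founding member is close
    # enough; otherwise start a new group
    groups = []
    for p in paths:
        for g in groups:
            if corr_distance(p, g[0]) < threshold:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups
```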
By:  Andres M. Alonso; David Casado; Sara Lopez Pintado; Juan Romo 
Abstract:  We propose using the integrated periodogram to classify time series. The method assigns a new element to the group minimizing the distance from the integrated periodogram of the element to the group mean of integrated periodograms. Local computation of these periodograms allows the application of the approach to nonstationary time series. Since the integrated periodograms are functional data, we apply depth-based techniques to make the classification robust. The method provides small error rates with both simulated and real data, and shows good computational behaviour. 
Keywords:  Time series, Classification, Integrated periodogram, Data depth 
JEL:  C14 C22 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087427&r=ecm 
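The core classification rule described above can be sketched directly: compute each series' normalized cumulative periodogram and assign a new series to the group with the nearest mean curve. This omits the paper's depth-based robustification and local computation for nonstationary series; the L1 curve distance is an assumption of this sketch.

```python
import math

def periodogram(x):
    # raw periodogram at the Fourier frequencies, via a direct DFT
    # (O(n^2), kept simple for clarity)
    n = len(x)
    out = []
    for j in range(1, n // 2 + 1):
        re = sum(x[t] * math.cos(2 * math.pi * j * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * j * t / n) for t in range(n))
        out.append((re * re + im * im) / n)
    return out

def integrated_periodogram(x):
    # normalized cumulative periodogram: a nondecreasing curve ending at 1
    pg = periodogram(x)
    total = sum(pg) or 1.0
    curve, acc = [], 0.0
    for p in pg:
        acc += p
        curve.append(acc / total)
    return curve

def classify(x, groups):
    # assign x to the group whose mean integrated periodogram is nearest in L1
    fx = integrated_periodogram(x)
    best_label, best_dist = None, float("inf")
    for label, members in groups.items():
        curves = [integrated_periodogram(m) for m in members]
        mean_curve = [sum(c[i] for c in curves) / len(curves)
                      for i in range(len(fx))]
        dist = sum(abs(a - b) for a, b in zip(fx, mean_curve))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```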
By:  Antonio Garcia-Ferrer; Ester Gonzalez-Prieto; Daniel Pena 
Abstract:  We propose a new multivariate factor GARCH model, the GICA-GARCH model, where the data are assumed to be generated by a set of independent components (ICs). This model applies independent component analysis (ICA) to search for the conditionally heteroskedastic latent factors. We use two ICA approaches to estimate the ICs. The first estimates the components by maximizing their non-Gaussianity, and the second exploits the temporal structure of the data. After estimating the ICs, we fit a univariate GARCH model to the volatility of each IC. Thus, the GICA-GARCH model reduces the complexity of estimating a multivariate GARCH model by transforming it into a small number of univariate volatility models. We report simulation experiments that show the ability of ICA to discover leading factors in a multivariate vector of financial data. An empirical application to the Madrid stock market is presented, where we compare the forecasting accuracy of the GICA-GARCH model with that of the orthogonal GARCH model. 
Keywords:  ICA, Multivariate GARCH, Factor models, Forecasting volatility 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087528&r=ecm 
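The final stage of this approach, fitting a univariate volatility model to each estimated component, rests on the GARCH(1,1) variance recursion. The ICA extraction step is omitted here; the sketch below is only the filtering recursion, with illustrative parameter names.

```python
def garch11_variance(returns, omega, alpha, beta):
    # GARCH(1,1) conditional variance recursion:
    #   h_t = omega + alpha * r_{t-1}**2 + beta * h_{t-1}
    # initialized at the unconditional variance omega / (1 - alpha - beta)
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h
```

Applying this recursion to each IC separately is what replaces one large multivariate estimation with a handful of univariate ones.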
By:  Barnett, William A.; de Peretti, Philippe 
Abstract:  In aggregation theory, the admissibility condition for clustering together components to be aggregated is blockwise weak separability, which also is the condition needed to separate out sectors of the economy. Although weak separability is thereby of central importance in aggregation and index number theory and in econometrics, prior attempts to produce statistical tests of weak separability have performed poorly in Monte Carlo studies. This paper deals with semi-nonparametric tests for weak separability. It introduces both a necessary and sufficient test, and a fully stochastic procedure that takes measurement error into account. Simulations show that the test performs well, even for large measurement errors. 
Keywords:  weak separability; quantity aggregation; clustering; sectors; index number theory; semi-nonparametrics 
JEL:  C43 D12 C14 C12 
Date:  2008–11–03 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:12503&r=ecm 
By:  Esfandiar Maasoumi; Jeffrey S. Racine 
Abstract:  We consider a metric entropy capable of detecting deviations from symmetry that is suitable for both discrete and continuous processes. A test statistic is constructed from an integrated normed difference between nonparametric estimates of two density functions. The null distribution (symmetry) is obtained by resampling from an artificially lengthened series constructed from a rotation of the original series about its mean (median, mode). Simulations demonstrate that the test has correct size and good power in the direction of interesting alternatives, while applications to updated Nelson & Plosser (1982) data demonstrate its potential power gains relative to existing tests. 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:emo:wp2003:0806&r=ecm 
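The rotation-and-resample scheme described above can be sketched in a simplified iid form. Two caveats: the paper's resampling must also respect serial dependence, and its metric-entropy statistic is replaced here by plain sample skewness, purely for illustration.

```python
import random
import statistics

def skewness(x):
    # standardized third moment, used here as a simple asymmetry statistic
    m = statistics.fmean(x)
    s = statistics.pstdev(x)
    return sum(((v - m) / s) ** 3 for v in x) / len(x)

def symmetry_test(x, n_boot=199, seed=0):
    # p-value for H0: the distribution of x is symmetric about its mean
    rng = random.Random(seed)
    m = statistics.fmean(x)
    # artificially lengthened series: the data plus its rotation about the
    # mean, which is exactly symmetric around m by construction
    pooled = list(x) + [2.0 * m - v for v in x]
    t_obs = abs(skewness(x))
    exceed = sum(
        abs(skewness([rng.choice(pooled) for _ in x])) >= t_obs
        for _ in range(n_boot))
    return (exceed + 1) / (n_boot + 1)
```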
By:  Hurvich, Clifford; Wang, Yi 
Abstract:  We propose a new transaction-level bivariate log-price model, which yields fractional or standard cointegration. The model provides a link between market microstructure and lower-frequency observations. The two ingredients of our model are a Long Memory Stochastic Duration process for the waiting times between trades, and a pair of stationary noise processes which determine the jump sizes in the pure-jump log-price process. Our model includes feedback between the disturbances of the two log-price series at the transaction level, which induces standard or fractional cointegration for any fixed sampling interval. We prove that the cointegrating parameter can be consistently estimated by the ordinary least-squares estimator, and obtain a lower bound on the rate of convergence. We propose transaction-level method-of-moments estimators of the other parameters in our model and discuss the consistency of these estimators. We then use simulations to argue that suitably modified versions of our model are able to capture a variety of additional properties and stylized facts, including leverage, and portfolio return autocorrelation due to non-synchronous trading. The ability of the model to capture these effects stems in most cases from the fact that the model treats the (stochastic) inter-trade durations in a fully endogenous way. 
Keywords:  Tick Time; Long Memory Stochastic Duration; Information Share. 
JEL:  C32 
Date:  2009–01–05 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:12575&r=ecm 
By:  Torben G. Andersen; Luca Benzoni 
Abstract:  Realized volatility is a nonparametric ex-post estimate of the return variation. The most obvious realized volatility measure is the sum of finely-sampled squared return realizations over a fixed time interval. In a frictionless market the estimate achieves consistency for the underlying quadratic return variation when returns are sampled at increasingly higher frequency. We begin with an account of how and why the procedure works in a simplified setting and then extend the discussion to a more general framework. Along the way we clarify how the realized volatility and quadratic return variation relate to the more commonly applied concept of conditional return variance. We then review a set of related and useful notions of return variation along with practical measurement issues (e.g., discretization error and microstructure noise) before briefly touching on the existing empirical applications. 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:fip:fedhwp:wp0814&r=ecm 
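The basic estimator is easy to state in code. The sketch below simulates a frictionless constant-volatility log-price diffusion, whose quadratic variation over the unit interval is sigma**2, and sums squared log returns; all names are illustrative.

```python
import math
import random

def realized_variance(prices):
    # realized variance: the sum of squared intraperiod log returns
    return sum(math.log(prices[i + 1] / prices[i]) ** 2
               for i in range(len(prices) - 1))

def simulate_diffusion(sigma=0.2, n_steps=10000, seed=0):
    # frictionless log-price diffusion with constant volatility over [0, 1];
    # its quadratic variation over the interval is sigma**2
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    prices = [1.0]
    for _ in range(n_steps):
        step = (-0.5 * sigma ** 2 * dt
                + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        prices.append(prices[-1] * math.exp(step))
    return prices
```

Sampling ever more finely drives realized variance toward the quadratic variation; in practice microstructure noise breaks this at the highest frequencies, which is among the measurement issues the survey discusses.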
By:  Chen, C.M. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) 
Abstract:  Firms nowadays need to make decisions under rapid information obsolescence. In this paper I deal with one class of decision problems in this situation, called the “one-sample” problems: we have a finite set of options and a single sample of the multiple criteria used to evaluate those options. I develop evaluation procedures based on bootstrapping DEA (Data Envelopment Analysis) and related decision-making methods. This paper improves the bootstrap procedure proposed by Simar and Wilson (1998) and shows how to exploit information from bootstrap outputs for decision-making. 
Keywords:  multiple criteria; bootstrap; data envelopment analysis; parametric transformation; R&D project; supplier selection 
Date:  2008–12–11 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:1765014275&r=ecm 
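In the simplest single-input, single-output case, DEA efficiency reduces to each unit's output/input ratio relative to the best observed ratio, and a naive bootstrap resamples units to gauge frontier uncertainty. This is only a toy version: the Simar and Wilson procedure the paper builds on smooths the resampling precisely because the naive bootstrap is inconsistent for DEA frontiers.

```python
import random

def dea_efficiency(inputs, outputs):
    # single-input single-output efficiency: each unit's output/input ratio
    # relative to the best observed ratio (the frontier)
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

def bootstrap_mean_efficiency(inputs, outputs, n_boot=200, seed=0):
    # naive bootstrap: resample units, rebuild the frontier, and re-score
    # every original unit against it (capped at 1 when the resample
    # misses the best unit)
    rng = random.Random(seed)
    n = len(inputs)
    sums = [0.0] * n
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        best = max(outputs[j] / inputs[j] for j in idx)
        for k in range(n):
            sums[k] += min((outputs[k] / inputs[k]) / best, 1.0)
    return [s / n_boot for s in sums]
```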
By:  Maria Rosa Nieto; Esther Ruiz 
Abstract:  We review several procedures for estimating and backtesting two of the most important measures of risk, the Value at Risk (VaR) and the Expected Shortfall (ES). The alternative estimators differ in the way they specify and estimate the conditional mean and variance and the conditional distribution of returns. The results are illustrated by estimating the VaR and ES of daily S&P500 returns. 
Keywords:  Backtesting, Extreme value, GARCH models, Leverage effect 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087326&r=ecm 
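The simplest estimator in this family, historical simulation, can be sketched directly; the more elaborate alternatives the review covers layer GARCH dynamics and extreme-value tails on top of the same idea. Function and parameter names are illustrative.

```python
import math

def var_es(returns, alpha=0.05):
    # Historical-simulation VaR and ES at level alpha: VaR is the loss
    # exceeded with probability alpha, ES the average loss beyond it.
    losses = sorted(-r for r in returns)  # losses in ascending order
    k = max(int(math.ceil((1.0 - alpha) * len(losses))) - 1, 0)
    var = losses[k]
    tail = losses[k:]
    es = sum(tail) / len(tail)
    return var, es
```

ES is always at least as large as VaR, since it averages the losses in the tail beyond the VaR threshold.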
By:  Neil R. Ericsson 
Abstract:  Robustness and fragility in Leamer's sense are defined with respect to a particular coefficient over a class of models. This paper shows that inclusion of the data generation process in that class of models is neither necessary nor sufficient for robustness. This result holds even if the properly specified model has welldetermined, statistically significant coefficients. The encompassing principle explains how this result can occur. Encompassing also provides a link to a more commonsense notion of robustness, which is still a desirable property empirically; and encompassing clarifies recent discussion on model averaging and the pooling of forecasts. 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:fip:fedgif:959&r=ecm 
By:  Javier Gonzalez; Alberto Munoz 
Abstract:  In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capability of the proposed kernel is higher than that obtained using RBF kernels. Experimental work is presented to support the theoretical issues. 
Keywords:  Support vector machines, Kernel Methods, Classification problems 
Date:  2008–12 
URL:  http://d.repec.org/n?u=RePEc:cte:wsrepe:ws087024&r=ecm 
By:  William D. Nordhaus (Dept. of Economics, Yale University) 
Abstract:  Learning or experience curves are widely used to estimate cost functions in manufacturing modeling. They have recently been introduced in policy models of energy and global warming economics to make the process of technological change endogenous. It is not widely appreciated that this is a dangerous modeling strategy. The present note has three points. First, it shows that there is a fundamental statistical identification problem in trying to separate learning from exogenous technological change and that the estimated learning coefficient will generally be biased upwards. Second, we present two empirical tests that illustrate the potential bias in practice and show that learning parameters are not robust to alternative specifications. Finally, we show that an overestimate of the learning coefficient will provide incorrect estimates of the total marginal cost of output and will therefore bias optimization models to tilt toward technologies that are incorrectly specified as having high learning coefficients. 
Keywords:  Learning by doing, Experience curves, Energy models, Technological change 
JEL:  O3 O13 D83 
Date:  2009–01 
URL:  http://d.repec.org/n?u=RePEc:cwl:cwldpp:1685&r=ecm 
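The identification problem described in this abstract can be reproduced with a few lines of simulation (illustrative parameter values, not the paper's empirical tests): when log cost falls with both cumulative output (learning, coefficient b) and calendar time (exogenous technical change, coefficient c), regressing log cost on log cumulative output alone attributes the time trend to learning and overstates b.

```python
import math
import random
import statistics

def ols_slope(x, y):
    # simple-regression OLS slope: cov(x, y) / var(x)
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def simulate_learning(b=0.10, c=0.03, periods=40, seed=0):
    # true model: log cost = -b * log(cumulative output) - c * t + noise
    rng = random.Random(seed)
    cum_q = 1.0
    log_q, log_cost = [], []
    for t in range(periods):
        cum_q += 10.0  # steady output growth, so cum_q trends with t
        log_q.append(math.log(cum_q))
        log_cost.append(-b * math.log(cum_q) - c * t + rng.gauss(0.0, 0.01))
    return log_q, log_cost
```

Omitting the time trend and regressing log cost on log cumulative output alone recovers a "learning coefficient" well above the true b = 0.10, because the omitted trend is positively correlated with cumulative output.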
By:  Massimo Del Gatto; Adriana Di Liberto; Carmelo Petraglia 
Abstract:  Quantifying productivity is a conditio sine qua non for empirical analysis in a number of research fields. The identification of the measure that best fits the specific goals of the analysis, as well as being data-driven, is currently complicated by the fact that an array of methodologies is available. This paper provides economic researchers with an up-to-date overview of issues and relevant solutions associated with this choice. Methods of productivity measurement are surveyed and classified according to three main criteria: i) macro/micro; ii) frontier/non-frontier; iii) deterministic/econometric. 
Keywords:  productivity measurement, TFP, Solow residual, endogeneity, simultaneity, selection bias, Stochastic Frontier Analysis, DEA, Growth accounting, GMM, Olley-Pakes, firm heterogeneity, price dispersion. 
JEL:  O40 O33 O47 C14 C33 C43 
Date:  2008 
URL:  http://d.repec.org/n?u=RePEc:cns:cnscwp:200818&r=ecm 
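Among the macro, non-frontier, deterministic methods surveyed, growth accounting is the most compact: TFP growth is the Solow residual, output growth minus factor-share-weighted input growth. A one-line sketch, with an assumed capital share of 0.3 under constant returns to scale:

```python
def solow_residual(dy, dk, dl, alpha=0.3):
    # Solow residual: dlogA = dlogY - alpha * dlogK - (1 - alpha) * dlogL,
    # where alpha is the capital share under constant returns to scale
    return dy - alpha * dk - (1.0 - alpha) * dl
```

With 4% output growth, 5% capital growth, and 1% labour growth, the residual attributes 1.8 percentage points of growth to TFP.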