nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒06‒13
twenty papers chosen by
Sune Karlsson
Örebro University

  1. Nonparametric estimation and inference for Granger causality measures By Abderrahim Taamouti; Taoufik Bouezmarni; Anouar El Ghouch
  2. A Note on the Asymptotic Variance of Sample By Timothy Halliday
  3. Consistent estimation of pseudo panels in the presence of selection bias By Mora Rodriguez, Jhon James; Muro, Juan
  4. Identification, Estimation and Specification in a Class of Semi-Linear Time Series Models By Gao, Jiti
  5. Kriging in Multi-response Simulation, including a Monte Carlo Laboratory By Kleijnen, Jack P.C.; Mehdad, E.
  6. Bridging DSGE models and the raw data By Fabio Canova
  7. Optimal Tests for the Two-Sample Spherical Location Problem By Christophe Ley; Yvik Swan; Yves-Caoimhin Swan; Thomas Verdebout
  8. Testing for Spatial Error Dependence in Probit Models By Pedro V. Amaral; Luc Anselin; Daniel Arribas-Bel
  9. Dealing with small samples and dimensionality issues in data envelopment analysis By Zervopoulos, Panagiotis
  10. The perils of aggregating foreign variables in panel data models By Michele Ca' Zorzi; Alexander Chudik; Alistair Dieppe
  11. Finite Sample Properties of Moran's I Test for Spatial Autocorrelation in Probit and Tobit Models - Empirical Evidence By P. Amaral; L. Anselin
  12. Estimating Overidentified, Nonrecursive Time-Varying Coefficients Structural VARs By Fabio Canova; Fernando J. Pérez Forero
  13. Enhancing the Jacquez k Nearest Neighbor Test for Space-Time Interaction By Nicholas Malizia; Elizabeth A. Mack
  14. Spatial Fixed Effects and Spatial Dependence By Luc Anselin; Daniel Arribas-Bel
  15. Selection criteria for overlapping binary Models By M. T. Aparicio; I. Villanúa
  16. Fat tails in small samples By Candelon B.; Straetmans S.
  17. TailCoR By Lorenzo Ricci; David Veredas
  18. On the scaling ranges of detrended fluctuation analysis for long-memory correlated short series of data By Dariusz Grech; Zygmunt Mazur
  19. Identifying spikes and seasonal components in electricity spot price data: A guide to robust modeling By Janczura, Joanna; Trueck, Stefan; Weron, Rafal; Wolff, Rodney
  20. Empirical simultaneous prediction regions for path-forecasts By Òscar Jordá; Malte Knuppel; Massimiliano Marcellino

  1. By: Abderrahim Taamouti; Taoufik Bouezmarni; Anouar El Ghouch
    Abstract: We propose a nonparametric estimator and a nonparametric test for Granger causality measures that quantify linear and nonlinear Granger causality in distribution between random variables. We first show how to write the Granger causality measures in terms of copula densities. We suggest a consistent estimator for these causality measures based on nonparametric estimators of copula densities. Further, we prove that the nonparametric estimators are asymptotically normally distributed and we discuss the validity of a local smoothed bootstrap that we use in finite sample settings to compute a bootstrap bias-corrected estimator and test for our causality measures. A simulation study reveals that the bias-corrected bootstrap estimator of causality measures behaves well and the corresponding test has quite good finite sample size and power properties for a variety of typical data generating processes and different sample sizes. Finally, we illustrate the practical relevance of nonparametric causality measures by quantifying the Granger causality between S&P500 Index returns and many exchange rates (US/Canada, US/UK and US/Japan exchange rates).
    Keywords: Causality measures, Nonparametric estimation, Time series, Copulas, Bernstein copula density, Local bootstrap, Conditional distribution function, Stock returns
    JEL: C12 C14 C15 C19 G1 G12 E3 E4
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1217&r=ecm
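The copula-based measures above generalize the familiar linear Granger causality test. As a point of reference only — this is not the authors' nonparametric estimator — a minimal linear Granger non-causality F-test can be sketched in Python; the function name and the toy data-generating process are illustrative:

```python
import numpy as np

def granger_f(y, x, p=1):
    """F-statistic for H0: lags of x add no linear predictive power for y."""
    n = len(y)
    Y = y[p:]
    ylags = [y[p - j:n - j] for j in range(1, p + 1)]
    xlags = [x[p - j:n - j] for j in range(1, p + 1)]
    Zr = np.column_stack([np.ones(n - p)] + ylags)           # restricted model
    Zu = np.column_stack([np.ones(n - p)] + ylags + xlags)   # unrestricted model
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    df2 = (n - p) - Zu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df2)

# Toy DGP in which x Granger-causes y through one lag
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
e = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + e[t]
F = granger_f(y, x, p=1)
```

A large F (relative to the F(p, df2) distribution) rejects linear non-causality; the paper's copula measures additionally capture nonlinear dependence that such a linear test misses.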
  2. By: Timothy Halliday (Department of Economics, University of Hawaii at Manoa)
    Abstract: We derive the asymptotic distribution of the eigenvalues of a sample covariance matrix with distinct roots. Our theorem can accommodate the situation in which the population covariance matrix is estimated via its sample analogue as well as the more general case in which it is estimated via a √N-consistent extremum estimator. The sample roots will have a Normal distribution in a large sample with a covariance matrix that is easy to compute. We conduct Monte Carlo experiments that show that standard errors based on our derived asymptotic distribution accurately approximate standard errors in the empirical distribution.
    Keywords: Principal Components Analysis, Asymptotic Distribution, Extremum Estimation
    JEL: C01 C02
    Date: 2012–06–01
    URL: http://d.repec.org/n?u=RePEc:hai:wpaper:201209&r=ecm
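For i.i.d. Gaussian data the textbook version of this result says that, with distinct population roots, √N(l_i − λ_i) is asymptotically N(0, 2λ_i²). A quick Monte Carlo sketch of that baseline case — not the paper's extremum-estimator extension — with illustrative parameter choices:

```python
import numpy as np

# Check that the empirical sd of the largest sample eigenvalue matches the
# asymptotic standard error lambda_1 * sqrt(2/N) for Gaussian data.
rng = np.random.default_rng(42)
lam = np.array([4.0, 2.0, 1.0])      # distinct population eigenvalues
N, reps = 500, 2000
top = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((N, 3)) * np.sqrt(lam)  # covariance = diag(lam)
    S = np.cov(X, rowvar=False)
    top[r] = np.linalg.eigvalsh(S)[-1]              # largest sample root
asy_sd = lam[0] * np.sqrt(2.0 / N)   # asymptotic sd of the largest root
```

The ratio `top.std() / asy_sd` should be close to one, which is the kind of accuracy the paper's Monte Carlo experiments document for its more general estimators.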
  3. By: Mora Rodriguez, Jhon James; Muro, Juan
    Abstract: In the presence of selection bias, traditional estimators of pseudo panel data are inconsistent. In this paper, the authors derive the conditions under which consistency is achieved in pseudo-panel estimation and propose a simple test of selection bias. Specifically, they propose a Wald test for the null hypothesis that there is no selection bias. Under rejection of the null hypothesis, the authors can consistently estimate pseudo-panel parameters. They use cross sections and pseudo-panel regressions to test for selection bias and estimate the returns to education in Colombia. The authors corroborate the existence of selection bias and find that returns to education are around twenty percent.
    Keywords: Repeated cross-section models,selectivity bias testing,human capital
    JEL: C23 C52
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:zbw:ifwedp:201226&r=ecm
  4. By: Gao, Jiti
    Abstract: In this paper, we consider some identification, estimation and specification problems in a class of semi-linear time series models. Existing studies for the stationary time series case have been reviewed and discussed. We also establish some new results for the integrated time series case. Moreover, we propose a new estimation method and establish a new theory for a class of semi-linear nonstationary autoregressive models. In addition, we discuss certain directions for further research.
    Keywords: Asymptotic theory; departure function; kernel method; nonlinearity; nonstationarity; semiparametric model; stationarity; time series
    JEL: C14 C22
    Date: 2012–04–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:39256&r=ecm
  5. By: Kleijnen, Jack P.C.; Mehdad, E. (Tilburg University, Center for Economic Research)
    Abstract: To analyze the input/output behavior of simulation models with multiple responses, we may apply either univariate or multivariate Kriging (Gaussian Process) models. Univariate Kriging may use a popular MATLAB Kriging toolbox called 'DACE'. Multivariate Kriging faces a major problem: its covariance matrix should remain positive-definite; this problem may be solved through a nonseparable dependence model. To evaluate the performance of these two Kriging models, we develop a Monte Carlo 'laboratory' that simulates Gaussian Processes. To verify that this laboratory works correctly, we derive statistics that test whether the Kriging parameters have the correct values. Our Monte Carlo results demonstrate that in general DACE gives smaller Mean Squared Error (MSE); we also explain these results.
    Keywords: positive-definite covariance matrix; nonseparable dependence model; Gaussian process; verification
    JEL: C0 C1 C9 C15 C44
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:dgr:kubcen:2012039&r=ecm
  6. By: Fabio Canova
    Abstract: A method to estimate DSGE models using the raw data is proposed. The approach links the observables to the model counterparts via a flexible specification which does not require the model-based component to be solely located at business cycle frequencies, allows the non-model-based component to take various time series patterns, and permits model misspecification. Applying standard data transformations induces biases in structural estimates and distortions in the policy conclusions. The proposed approach recovers important model-based features in selected experimental designs. Two widely discussed issues are used to illustrate its practical use.
    Keywords: DSGE models, Filters, Structural estimation, Business cycles
    JEL: E3 C3
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:635&r=ecm
  7. By: Christophe Ley; Yvik Swan; Yves-Caoimhin Swan; Thomas Verdebout
    Abstract: We tackle the classical two-sample spherical location problem for directional data by having recourse to the Le Cam methodology, habitually used in classical linear multivariate analysis. More precisely, we construct locally and asymptotically optimal (in the maximin sense) parametric tests, which we then turn into semi-parametric ones in two distinct ways. First, by using a studentization argument; this leads to so-called pseudo-FvML tests. Second, by resorting to the invariance principle; this leads to efficient rank-based tests. Within each construction, the semi-parametric tests inherit optimality under a given distribution (the FvML in the first case, any rotationally symmetric one in the second) from their parametric counterparts and also improve on the latter by being valid under the whole class of rotationally symmetric distributions. Asymptotic relative efficiencies are calculated and the finite-sample behavior of the proposed tests is investigated by means of a Monte Carlo simulation.
    Keywords: Directional Statistics; Local Asymptotic normality; pseudo-FvML tests; rank based inference; two-sample spherical location problem
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/119308&r=ecm
  8. By: Pedro V. Amaral (GeoDa Center for Geospatial Analysis and Computation; Arizona State University); Luc Anselin (GeoDa Center for Geospatial Analysis and Computation; Arizona State University); Daniel Arribas-Bel
    Abstract: In this note, we compare three test statistics that have been suggested to assess the presence of spatial error autocorrelation in probit models. We highlight the formal differences between the tests proposed by Pinkse and Slade (1998), Pinkse (1999, 2004) and Kelejian and Prucha (2001), and compare their properties in an extensive set of Monte Carlo simulation experiments both under the null and under the alternative. We also assess the conjecture by Pinkse (1999) that the usefulness of these test statistics is limited when the explanatory variables are spatially correlated. The Kelejian and Prucha (2001) generalized Moran’s I statistic turns out to perform best, even in medium-sized samples of several hundred observations. The other two tests are acceptable in very large samples.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:asg:wpaper:1051&r=ecm
  9. By: Zervopoulos, Panagiotis
    Abstract: Data Envelopment Analysis (DEA) is a widely applied nonparametric method for the comparative evaluation of firms’ efficiency. A deficiency of DEA is that the efficiency scores assigned to each firm are sensitive to sampling variations, particularly when small samples are used. In addition, an upward bias is present due to dimensionality issues when the sample size is limited relative to the number of inputs and outputs. As a result, in the case of small samples, DEA efficiency scores cannot be considered reliable measures. The DEA Bootstrap addresses this limitation of the DEA method, as it provides the efficiency scores with stochastic properties. However, the DEA Bootstrap is still inappropriate in the presence of small samples. In this context, we introduce a new method that draws on random data generation procedures and Monte Carlo simulations, unlike the Bootstrap, which is based on resampling.
    Keywords: Data envelopment analysis; Data generation process; Random data; Bootstrap; Bias correction; Efficiency
    JEL: C14 C15 C1
    Date: 2012–02–05
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:39226&r=ecm
  10. By: Michele Ca' Zorzi; Alexander Chudik; Alistair Dieppe
    Abstract: The curse of dimensionality refers to the difficulty of including all relevant variables in empirical applications due to the lack of sufficient degrees of freedom. A common solution to alleviate the problem in the context of open economy models is to aggregate foreign variables by constructing trade-weighted cross-sectional averages. This paper provides two key contributions in the context of static panel data models. The first is to show under what conditions the aggregation of foreign variables (AFV) leads to consistent estimates (as the time dimension T is fixed and the cross-section dimension N → ∞). The second is to design a formal test to assess the admissibility of the AFV restriction and to evaluate the small sample properties of the test by undertaking Monte Carlo experiments. Finally, we illustrate an application in the context of the current account empirical literature where the AFV restriction is rejected.
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:feddgw:111&r=ecm
  11. By: P. Amaral; L. Anselin
    Abstract: In this paper, we investigate the finite sample properties of Moran’s I test statistic for spatial autocorrelation in limited dependent variable models suggested by Kelejian and Prucha (2001). We analyze the socio-economic determinants of the availability of dialysis equipment in 5,507 Brazilian municipalities in 2009 by means of a probit and tobit specification. We assess the extent to which evidence of spatial autocorrelation can be remedied by the inclusion of spatial fixed effects. We find spatial autocorrelation in both model specifications. For the probit model, a spatial fixed effects approach removes evidence of spatial autocorrelation. However, this is not the case for the tobit specification. We further fill a void in the theoretical literature by investigating the finite sample properties of these test statistics in a series of Monte Carlo simulations, using data sets ranging from 49 to 15,625 observations. We find that the tests are unbiased and have considerable power even for medium-sized samples. Under the null hypothesis of no spatial autocorrelation, their empirical distribution cannot be distinguished from the asymptotic normal distribution, empirically confirming the theoretical results of Kelejian and Prucha (2001), although the sample size required to achieve this result is larger in the tobit case than in the probit case.
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:asg:wpaper:1048&r=ecm
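The statistic at the heart of these tests is the classical Moran's I, I = (n/S0) · z'Wz / z'z, for demeaned residuals z and spatial weights W. A plain-vanilla sketch on a rook-contiguity lattice — the Kelejian–Prucha probit/tobit generalization itself is beyond this sketch — might look like:

```python
import numpy as np

def morans_i(z, W):
    """Classical Moran's I for a variable z and binary spatial weights W."""
    z = z - z.mean()
    n, s0 = len(z), W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

def rook_weights(r, c):
    """Binary rook-contiguity weights for an r x c lattice, row-major order."""
    n = r * c
    W = np.zeros((n, n))
    for i in range(r):
        for j in range(c):
            k = i * c + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < r and 0 <= jj < c:
                    W[k, ii * c + jj] = 1
    return W

# A row gradient on a 5x5 lattice is strongly positively autocorrelated
W = rook_weights(5, 5)
smooth = np.repeat(np.arange(5.0), 5)
I = morans_i(smooth, W)
```

For this gradient pattern I works out to exactly 0.75; values near zero indicate no spatial autocorrelation, and the papers above study how the test behaves when z comes from probit or tobit residuals.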
  12. By: Fabio Canova; Fernando J. Pérez Forero
    Abstract: This paper provides a method to estimate time varying coefficients structural VARs which are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
    Keywords: Non-recursive overidentified SVARs, Time-varying coefficient models, Bayesian methods, Monetary transmission mechanism
    JEL: C11 E51 E52
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:637&r=ecm
  13. By: Nicholas Malizia (GeoDa Center for Geospatial Analysis and Computation; Arizona State University); Elizabeth A. Mack (GeoDa Center for Geospatial Analysis and Computation; Arizona State University)
    Abstract: The Jacquez k nearest neighbor test, originally developed to improve upon shortcomings of existing tests for space-time interaction, has been shown to be a robust and powerful method of detecting interaction. Despite its flexibility and power, however, the test has three main shortcomings: (1) it discards important information regarding the spatial and temporal scale at which detected interaction takes place; (2) the results of the test have not been visualized; (3) recent research demonstrates the test to be susceptible to population shift bias. This study presents enhancements to the Jacquez k nearest neighbors test with the goal of addressing each of these three shortcomings and improving the utility of the test. Data on Burkitt’s lymphoma cases in Uganda between 1961 and 1975 are employed to illustrate the modifications and enhance the visual output of the test. Output from the enhanced test is compared to that provided by alternative tests of space-time interaction. Results show the enhancements presented in this study transform the Jacquez test into a complete, descriptive, and informative metric that can be used as a stand-alone measure of global space-time interaction.
    Keywords: space-time interaction, Jacquez k nearest neighbor, visualization, space-time cube, population shift bias
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:asg:wpaper:1052&r=ecm
  14. By: Luc Anselin (GeoDa Center for Geospatial Analysis and Computation; Arizona State University); Daniel Arribas-Bel
    Abstract: We investigate the common conjecture in applied econometric work that the inclusion of spatial fixed effects in a regression specification removes spatial dependence. We demonstrate analytically and by means of a series of simulation experiments how evidence of the removal of spatial autocorrelation by spatial fixed effects may be spurious when the true DGP takes the form of a spatial lag or spatial error dependence. In addition, we show that only in the special case where the dependence is group-wise, with all observations in the same group as neighbors of each other, do spatial fixed effects correctly remove spatial correlation.
    Keywords: spatial autocorrelation, spatial econometrics, spatial externalities, spatial fixed effects, spatial interaction, spatial weights
    Date: 2011
    URL: http://d.repec.org/n?u=RePEc:asg:wpaper:1045&r=ecm
  15. By: M. T. Aparicio (University of Zaragoza); I. Villanúa (University of Zaragoza)
    Abstract: This paper concerns binary model selection procedures. Model selection has often been studied for linear and nested competing models, but much less in the framework of nonlinear and non-nested models. Using the classification of Vuong (1989) into nested, overlapping and strictly non-nested models, we focus on overlapping models. We study the special situation in which the competing models do not include the true model, together with the usual case in which the true model is among the compared models. We carry out the analysis both theoretically and through a Monte Carlo experiment.
    Keywords: logit and probit, selection criteria, overlapping models, true model, convergence, asymptotic behaviour.
    JEL: C15 C25 C44
    Date: 2012–01
    URL: http://d.repec.org/n?u=RePEc:zar:wpaper:dt2012-01&r=ecm
  16. By: Candelon B.; Straetmans S. (METEOR)
    Abstract: The tail of financial returns is typically governed by a power law (i.e. “fat tails”). However, the constancy of the so-called tail index α which dictates the tail decay has hardly been investigated. We study the finite sample properties of some recently proposed endogenous tests for structural change in α. Given that the finite sample critical values strongly depend on the tail parameters of the return distribution, we propose a bootstrap-based version of the structural change test. Our empirical application spans a wide variety of long-term developed and emerging financial asset returns. Somewhat surprisingly, the tail behavior of emerging stock markets is not more strongly inclined to structural change than that of their developed counterparts. Emerging currencies, on the contrary, are more prone to shifts in tail behavior than developed currencies. Our results suggest that extreme value theory (EVT) applications in hedging tail risks or in assessing the (changing) propensity to financial crises can assume stationary tail behavior over long time spans, provided one considers portfolios that solely consist of stocks or bonds. However, our break results also indicate that it is advisable to use shorter estimation windows when applying EVT methods to emerging currency portfolios.
    Keywords: Economics
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:dgr:umamet:2012014&r=ecm
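The tail index α discussed above is usually estimated by the Hill estimator, which averages log-spacings of the k largest order statistics. A minimal sketch on exact Pareto data with a constant α — the structural-change tests themselves are not implemented here, and the choice of k is illustrative:

```python
import numpy as np

def hill(x, k):
    """Hill estimator of the tail index alpha from the k largest order statistics."""
    xs = np.sort(x)
    logs = np.log(xs[-k:]) - np.log(xs[-k - 1])  # log-exceedances over X_(n-k)
    return 1.0 / logs.mean()

# Exact Pareto tail: if U ~ Uniform(0,1), then U^(-1/3) has alpha = 3
rng = np.random.default_rng(1)
pareto = rng.uniform(size=20000) ** (-1.0 / 3.0)
est = hill(pareto, k=1000)
```

The estimate should land close to the true α = 3; the paper's tests ask whether such an estimate computed over subsamples is stable over time.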
  17. By: Lorenzo Ricci; David Veredas
    Abstract: We introduce TailCoR, a new measure of tail correlation that is a function of linear and non-linear contributions, the latter characterized by the tails. TailCoR can be exploited in a number of financial applications, such as portfolio selection where the investor faces risks of a linear and tail nature, a case that we cover in detail. Moreover, TailCoR has the following advantages: i) it is exact for any probability level, as it is not based on tail asymptotic arguments (contrary to tail dependence coefficients), ii) it is distribution free, and iii) it is simple and no optimizations are needed. Monte Carlo simulations and calibrations confirm its good performance in finite samples. An empirical illustration on a panel of European sovereign bonds shows that prior to 2009 linear correlations were in the vicinity of one and non-linear correlations were nonexistent. Since the beginning of the crisis the linear correlations have decreased sharply and non-linear correlations appeared and increased significantly in 2010–2011.
    JEL: C32 C51
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/119282&r=ecm
  18. By: Dariusz Grech; Zygmunt Mazur
    Abstract: We examine the scaling regime for the detrended fluctuation analysis (DFA) - the most popular method used to detect the presence of long memory in data and the fractal structure of time series. First, the scaling range for DFA is studied for uncorrelated data as a function of the length $L$ of the time series and the regression line coefficient $R^2$ at various confidence levels. Next, an analysis of artificial short series with long memory is performed. In both cases the scaling range $\lambda$ is found to change linearly -- both with $L$ and $R^2$. We show how this dependence can be generalized to a simple unified model describing the relation $\lambda=\lambda(L, R^2, H)$ where $H$ ($1/2\leq H \leq 1$) stands for the Hurst exponent of long range autocorrelated data. Our findings should be useful in all applications of the DFA technique, particularly for instantaneous (local) DFA, where an enormous number of short time series have to be examined at once, without the possibility of a preliminary check of the scaling range of each series separately.
    Date: 2012–06
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1206.1007&r=ecm
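The DFA procedure itself is short to sketch: cumulate the demeaned series into a profile, detrend it in boxes of size s, average the residual variance into a fluctuation function F(s), and read the exponent off a log-log regression. A minimal version with linear detrending and illustrative dyadic scales:

```python
import numpy as np

def dfa(x, scales):
    """DFA-1: returns the scaling exponent of F(s) ~ s^H."""
    y = np.cumsum(x - x.mean())                 # profile of the series
    F = []
    for s in scales:
        nseg = len(y) // s
        t = np.arange(s)
        msr = []
        for i in range(nseg):
            seg = y[i * s:(i + 1) * s]
            coef = np.polyfit(t, seg, 1)        # linear detrending per box
            msr.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(msr)))
    slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return slope

# Uncorrelated data should give an exponent near 0.5
rng = np.random.default_rng(7)
H = dfa(rng.standard_normal(4096), scales=[16, 32, 64, 128, 256])
```

The paper's question is precisely over which range of s this log-log regression is actually linear, so in practice the choice of `scales` matters far more than this sketch suggests.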
  19. By: Janczura, Joanna; Trueck, Stefan; Weron, Rafal; Wolff, Rodney
    Abstract: An important issue in fitting stochastic models to electricity spot prices is the estimation of a component to deal with trends and seasonality in the data. Unfortunately, estimation routines for the long-term and short-term seasonal pattern are usually quite sensitive to extreme observations, known as electricity price spikes. Improved robustness of the model can be achieved by (a) filtering the data with some reasonable procedure for outlier detection, and then (b) using estimation and testing procedures on the filtered data. In this paper we examine the effects of different treatment of extreme observations on model estimation and on determining the number of spikes (outliers). In particular we compare results for the estimation of the seasonal and stochastic components of electricity spot prices using either the original or filtered data. We find significant evidence for a superior estimation of both the seasonal short-term and long-term components when the data have been treated carefully for outliers. Overall, our findings point out the substantial impact the treatment of extreme observations may have on these issues and, therefore, also on the pricing of electricity derivatives like futures and option contracts. An added value of our study is the ranking of different filtering techniques used in the energy economics literature, suggesting which methods could be and which should not be used for spike identification.
    Keywords: Electricity spot price; Outlier treatment; Price spike; Robust modeling; Seasonality
    JEL: C51 C52 Q47 C80
    Date: 2012–06–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:39277&r=ecm
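As an illustration of filtering step (a), one simple robust detector flags observations that lie far from the median in MAD units. This is a generic sketch with an illustrative threshold, not one of the specific filters the paper ranks:

```python
import numpy as np

def flag_spikes(p, thresh=3.0):
    """Flag spikes as points beyond thresh robust standard deviations from the median."""
    med = np.median(p)
    mad = np.median(np.abs(p - med))
    robust_sd = 1.4826 * mad          # MAD-to-sd factor under normality
    return np.abs(p - med) > thresh * robust_sd

# Smooth seasonal price level with two injected spikes
t = np.arange(365)
prices = 40 + 5 * np.sin(2 * np.pi * t / 365)
prices[[50, 200]] += 100
mask = flag_spikes(prices)
```

Both injected spikes are flagged and the smooth seasonal variation is not, because the median and MAD are barely affected by a handful of extreme observations — the robustness property the paper exploits.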
  20. By: Òscar Jordá; Malte Knuppel; Massimiliano Marcellino
    Abstract: This paper investigates the problem of constructing prediction regions for forecast trajectories 1 to H periods into the future - a path forecast. We take the more general view that the null model is only approximate and in some cases it may be altogether unavailable. As a consequence, one cannot derive the usual analytic expressions nor resample from the null model as is usually done when bootstrap methods are used. The paper derives methods to construct approximate rectangular regions for simultaneous probability coverage which correct for serial correlation. The techniques appear to work well in simulations and in an application to the Greenbook path-forecasts of growth and inflation.
    Keywords: Forecasting
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedfwp:2012-05&r=ecm

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.