nep-ecm New Economics Papers
on Econometrics
Issue of 2012‒05‒15
23 papers chosen by
Sune Karlsson
Orebro University

  1. Simultaneous Statistical Inference in Dynamic Factor Models By Thorsten Dickhaus
  2. Robust Standard Errors in Transformed Likelihood Estimation of Dynamic Panel Models By Hayakawa, K.; Pesaran, M.H.
  3. Stein-Rule Estimation and Generalized Shrinkage Methods for Forecasting Using Many Predictors By Eric Hillebrand; Tae-Hwy Lee
  4. Nonparametric estimation and inference for Granger causality measures By Abderrahim Taamouti; Taoufik Bouezmarni; Anouar El Ghouch
  5. Nonparametric tests for conditional independence using conditional distributions By Taoufik Bouezmarni; Abderrahim Taamouti
  6. Indirect estimation of GARCH models with alpha-stable innovations By Parrini, Alessandro
  7. Multilevel structured additive regression By Stefan Lang; Nikolaus Umlauf; Peter Wechselberger; Kenneth Harttgen; Thomas Kneib
  8. Bootstrapping factor-augmented regression models By Sílvia Gonçalves; Benoit Perron
  9. Robustness for Dummies By Vincenzo Verardi; Marjorie Gassner; Darwin Ugarte Ontiveros
  10. A Multivariate Random Walk Model with Slowly Changing Drift and Cross-correlation Applied to Finance By Yuanhua Feng; David Hand; Yuanhua Feng
  11. Oracle Inequalities for High Dimensional Vector Autoregressions By Anders Bredahl Kock; Laurent A.F. Callot
  12. Bernstein estimator for unbounded density copula By Taoufik Bouezmarni; Anouar El Ghouch; Abderrahim Taamouti
  13. The Estimation of Multi-dimensional Fixed Effects Panel Data Models By László Mátyás; László Balázsi
  14. Bayesian Estimation of a Dynamic Game with Endogenous, Partially Observed, Serially Correlated State By A. Ronald Gallant; Han Hong; Ahmed Khwaja
  15. Estimation of Public Goods Game Data By Merrett, Danielle
  16. Decomposition of non-linear models using simulated residuals By François-Charles Wolff
  17. A multivariate piecing-together approach with an application to operational loss data By Stefan Aulbach; Verena Bayer; Michael Falk
  18. An endogenously clustered factor approach to international business cycles By Neville Francis; Michael T. Owyang; Özge Savascin
  19. Using of Non-Numeric, Non-Exact and Non-Complete Information for Alternatives’ Probabilities Estimation By Nikolai V. Hovanov; Maria S. Yudaeva
  20. Overlapping sub-sampling and invariance to initial conditions By Kyriacou, Maria
  21. The Contributions of Rare Objects in Correspondence Analysis By Michael Greenacre
  22. Testing for linear and threshold cointegration under the spatial equilibrium condition By Araujo-Enciso, Sergio Rene
  23. Interaction effects in econometrics By Balli, Hatice Ozer; Sorensen, Bent E.

  1. By: Thorsten Dickhaus
    Abstract: Based on the theory of multiple statistical hypothesis testing, we elaborate simultaneous statistical inference methods in dynamic factor models. In particular, we employ structural properties of multivariate chi-squared distributions in order to construct critical regions for vectors of likelihood ratio statistics in such models. In doing so, we make use of the asymptotic distribution of the vector of test statistics for large sample sizes, assuming that the model is identified and model restrictions are testable. Examples of important multiple test problems in dynamic factor models demonstrate the relevance of the proposed methods for practical applications. [See the illustrative sketch at the end of this entry.]
    Keywords: family-wise error rate, false discovery rate, likelihood ratio statistic, multiple hypothesis testing, multivariate chi-squared distribution, time series regression, Wald statistic
    JEL: C12 C32 C52
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2012-033&r=ecm
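    Sketch: a minimal Python illustration (not from the paper) of why exploiting the joint multivariate chi-squared law of several test statistics yields a less conservative simultaneous critical value than Bonferroni; the correlation matrix below is hypothetical, standing in for the one implied by the factor model.
      import numpy as np
      from scipy.stats import chi2

      rng = np.random.default_rng(0)
      m, level, reps = 5, 0.05, 100_000

      # hypothetical correlation of the underlying Gaussian scores; in the
      # paper it would come from the factor model's asymptotic covariance
      R = 0.5 * np.ones((m, m)) + 0.5 * np.eye(m)
      Z = rng.normal(size=(reps, m)) @ np.linalg.cholesky(R).T
      stats = Z ** 2                     # m correlated chi-squared(1) statistics

      c_joint = np.quantile(stats.max(axis=1), 1 - level)   # exact FWER control
      c_bonf = chi2.ppf(1 - level / m, df=1)                # Bonferroni bound
      print("joint critical value:     ", round(c_joint, 2))
      print("Bonferroni critical value:", round(c_bonf, 2))  # larger, more conservative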
  2. By: Hayakawa, K.; Pesaran, M.H.
    Abstract: This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models by Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial due to the incidental parameters problem that arises, and its implications for estimation and inference. We approach the problem by working with a mis-specified homoskedastic model. It is shown that the transformed maximum likelihood estimator continues to be consistent even in the presence of cross-sectional heteroskedasticity. We also obtain standard errors that are robust to cross-sectional heteroskedasticity of unknown form. By means of Monte Carlo simulation, we investigate the finite sample behavior of the transformed maximum likelihood estimator and compare it with various GMM estimators proposed in the literature. Simulation results reveal that, in terms of median absolute errors and accuracy of inference, the transformed likelihood estimator outperforms the GMM estimators in almost all cases.
    Keywords: Dynamic Panels, Cross-sectional heteroskedasticity, Monte Carlo simulation, GMM estimation
    JEL: C12 C13 C23
    Date: 2012–05–09
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1224&r=ecm
  3. By: Eric Hillebrand (Aarhus University and CREATES); Tae-Hwy Lee (University of California, Riverside)
    Abstract: We examine the Stein-rule shrinkage estimator for possible improvements in estimation and forecasting when there are many predictors in a linear time series model. We consider the Stein-rule estimator of Hill and Judge (1987) that shrinks the unrestricted unbiased OLS estimator towards a restricted biased principal component (PC) estimator. Since the Stein-rule estimator combines the OLS and PC estimators, it is a model-averaging estimator and produces a combined forecast. The conditions under which the improvement can be achieved depend on several unknown parameters that determine the degree of the Stein-rule shrinkage. We conduct Monte Carlo simulations to examine these parameter regions. The overall picture that emerges is that the Stein-rule shrinkage estimator can dominate both OLS and principal components estimators within an intermediate range of the signal-to-noise ratio. If the signal-to-noise ratio is low, the PC estimator is superior. If the signal-to-noise ratio is high, the OLS estimator is superior. In out-of-sample forecasting with AR(1) predictors, the Stein-rule shrinkage estimator can dominate both OLS and PC estimators when the predictors exhibit low persistence. [See the illustrative sketch at the end of this entry.]
    Keywords: Stein-rule, shrinkage, risk, variance-bias tradeoff, OLS, principal components.
    JEL: C1 C2 C5
    Date: 2012–04–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-18&r=ecm
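    Sketch: a minimal numpy illustration of the Stein-rule idea of shrinking OLS towards a principal-component estimator; the F-statistic-based weight is a hypothetical stand-in for the exact shrinkage constant of Hill and Judge (1987).
      import numpy as np

      rng = np.random.default_rng(0)
      T, k, r = 200, 40, 3                      # sample size, predictors, retained PCs
      F = rng.normal(size=(T, r))               # latent factors driving the predictors
      X = F @ rng.normal(size=(r, k)) + rng.normal(size=(T, k))
      beta = np.zeros(k); beta[:5] = 0.5
      y = X @ beta + rng.normal(size=T)

      b_ols = np.linalg.lstsq(X, y, rcond=None)[0]          # unrestricted OLS

      # restricted principal-component estimator: regress y on leading PCs of X
      Xc, yc = X - X.mean(0), y - y.mean()
      P = np.linalg.svd(Xc, full_matrices=False)[2][:r].T   # PC loadings
      b_pc = P @ np.linalg.lstsq(Xc @ P, yc, rcond=None)[0]

      # Stein-rule combination: shrink OLS towards the PC estimator; the weight
      # below is an illustrative F-statistic-based rule, not the paper's constant
      e = y - X @ b_ols
      s2 = (e @ e) / (T - k)
      d = b_ols - b_pc
      Fstat = (d @ Xc.T @ Xc @ d) / (k * s2)
      a = min(1.0, 1.0 / max(Fstat, 1e-12))     # more shrinkage when F is small
      b_stein = a * b_pc + (1 - a) * b_ols
      print("weight on PC estimator:", round(a, 3))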
  4. By: Abderrahim Taamouti; Taoufik Bouezmarni; Anouar El Ghouch
    Abstract: We propose a nonparametric estimator and a nonparametric test for Granger causality measures that quantify linear and nonlinear Granger causality in distribution between random variables. We first show how to write the Granger causality measures in terms of copula densities. We suggest a consistent estimator for these causality measures based on nonparametric estimators of copula densities. Further, we prove that the nonparametric estimators are asymptotically normally distributed and we discuss the validity of a local smoothed bootstrap that we use in finite sample settings to compute a bootstrap bias-corrected estimator and test for our causality measures. A simulation study reveals that the bias-corrected bootstrap estimator of causality measures behaves well and the corresponding test has quite good finite sample size and power properties for a variety of typical data generating processes and different sample sizes. Finally, we illustrate the practical relevance of nonparametric causality measures by quantifying the Granger causality between S&P500 Index returns and many exchange rates (US/Canada, US/UK and US/Japan exchange rates).
    Keywords: Causality measures, Nonparametric estimation, Time series, Copulas, Bernstein copula density, Local bootstrap, Conditional distribution function, Stock returns
    JEL: C12 C14 C15 C19 G1 G12 E3 E4
    Date: 2012–03
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1212&r=ecm
  5. By: Taoufik Bouezmarni; Abderrahim Taamouti
    Abstract: The concept of causality is naturally defined in terms of conditional distribution; however, almost all empirical work focuses on causality in mean. This paper aims to propose a nonparametric statistic to test the conditional independence and Granger non-causality between two variables conditionally on another one. The test statistic is based on the comparison of conditional distribution functions using an L2 metric. We use the Nadaraya-Watson method to estimate the conditional distribution functions. We establish the asymptotic size and power properties of the test statistic and we motivate the validity of the local bootstrap. Further, we run a simulation experiment to investigate the finite sample properties of the test and we illustrate its practical relevance by examining the Granger non-causality between S&P 500 Index returns and the VIX volatility index. Contrary to the conventional t-test, which is based on a linear mean-regression model, we find that the VIX index predicts excess returns both at short and long horizons. [See the illustrative sketch at the end of this entry.]
    Keywords: Nonparametric tests, Time series, Conditional independence, Granger non-causality, Nadaraya-Watson estimator, Conditional distribution function, VIX volatility index, S&P500 index
    JEL: C12 C14 C15 C19 G1 G12 E3 E4
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1211&r=ecm
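    Sketch: a schematic numpy version of the test's main ingredients, comparing Nadaraya-Watson estimates of F(y|Z) and F(y|Z,X) by an L2-type distance; the bandwidths and the unnormalized statistic are illustrative, and the paper's local bootstrap for critical values is omitted.
      import numpy as np

      def nw_cdf(y_grid, y, cond, bw):
          """Nadaraya-Watson estimate of F(y_grid | cond_i) at each observation i."""
          d = (cond[:, None, :] - cond[None, :, :]) / bw
          w = np.exp(-0.5 * (d ** 2).sum(-1))          # Gaussian product kernel
          w /= w.sum(axis=1, keepdims=True)
          ind = (y[:, None] <= y_grid[None, :]).astype(float)   # 1{Y_j <= y}
          return w @ ind                               # (n, grid) matrix

      rng = np.random.default_rng(1)
      n = 300
      z = rng.normal(size=n)
      x = 0.5 * z + rng.normal(size=n)                 # X depends on Z only
      y = 0.5 * z + rng.normal(size=n)                 # so Y is independent of X given Z

      grid = np.linspace(-3, 3, 25)
      F_z  = nw_cdf(grid, y, z[:, None], bw=0.5)                 # F(y | Z)
      F_zx = nw_cdf(grid, y, np.column_stack([z, x]), bw=0.5)    # F(y | Z, X)

      # L2-type distance between the two conditional CDF estimates
      stat = ((F_zx - F_z) ** 2).mean()
      print("L2 distance statistic:", round(stat, 5))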
  6. By: Parrini, Alessandro
    Abstract: Several studies have highlighted the fact that heavy-tailedness of asset returns can be the consequence of conditional heteroskedasticity. GARCH models have thus become very popular, given their ability to account for volatility clustering and, implicitly, heavy tails. However, these models encounter some difficulties in handling financial time series, as they respond equally to positive and negative shocks and their tail behavior remains too thin even with Student-t error terms. To overcome these weaknesses we apply GARCH-type models with alpha-stable innovations. The stable family of distributions constitutes a generalization of the Gaussian distribution that has intriguing theoretical and practical properties. Indeed, it is stable under addition and, having four parameters, it allows for asymmetry and heavy tails. Unfortunately, stable models do not have a closed-form likelihood function, but since simulated values from alpha-stable distributions can be straightforwardly obtained, the indirect inference approach is particularly suited to the situation at hand. In this work we provide a description of how to estimate a GARCH(1,1) and a TGARCH(1,1) with symmetric stable shocks, using as auxiliary model a GARCH(1,1) with skew-t innovations. Monte Carlo simulations, conducted using GAUSS, are presented, and finally the proposed models are fitted to the IBM weekly return series as an illustration of how they perform on real data. [See the illustrative sketch at the end of this entry.]
    Keywords: GARCH; alpha-stable distribution; indirect estimation; skew-t distribution; Monte Carlo simulations
    JEL: C13 C32 C87 C15 C01
    Date: 2012–04–18
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:38544&r=ecm
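    Sketch: a stylized indirect-inference loop in Python, simulating GARCH(1,1) paths with alpha-stable innovations via scipy and matching auxiliary statistics with common random numbers; the quantile/autocorrelation auxiliary used here is a simple stand-in for the skew-t GARCH auxiliary model of the paper.
      import numpy as np
      from scipy.stats import levy_stable
      from scipy.optimize import minimize

      def sim_garch_stable(omega, a1, b1, alpha, n, seed):
          """Simulate GARCH(1,1) returns with symmetric alpha-stable innovations."""
          rng = np.random.default_rng(seed)
          z = levy_stable.rvs(alpha, 0.0, size=n, random_state=rng)
          h = omega / max(1.0 - a1 - b1, 1e-6)
          r = np.empty(n)
          for t in range(n):
              if t > 0:
                  h = omega + a1 * r[t - 1] ** 2 + b1 * h
              r[t] = np.sqrt(h) * z[t]
          return r

      def aux(r):
          """Auxiliary statistics: tail quantiles plus volatility clustering."""
          q = np.quantile(r, [0.01, 0.05, 0.25, 0.75, 0.95, 0.99])
          r2 = np.minimum(r ** 2, np.quantile(r ** 2, 0.99))   # trimmed for stability
          return np.append(q, np.corrcoef(r2[1:], r2[:-1])[0, 1])

      s_obs = aux(sim_garch_stable(0.1, 0.1, 0.8, 1.8, 3000, seed=0))  # "observed" data

      def objective(theta):
          omega, a1, b1, alpha = theta
          if omega <= 0 or min(a1, b1) < 0 or a1 + b1 >= 1 or not 1.2 < alpha <= 2:
              return 1e6
          # fixed simulation seeds (common random numbers) keep the objective smooth
          s_sim = np.mean([aux(sim_garch_stable(omega, a1, b1, alpha, 3000, seed=s))
                           for s in range(1, 6)], axis=0)
          return float(((s_sim - s_obs) ** 2).sum())

      res = minimize(objective, x0=[0.2, 0.15, 0.7, 1.7], method='Nelder-Mead',
                     options={'maxiter': 150})      # kept short for illustration
      print("indirect estimates (omega, a1, b1, alpha):", np.round(res.x, 3))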
  7. By: Stefan Lang; Nikolaus Umlauf; Peter Wechselberger; Kenneth Harttgen; Thomas Kneib
    Abstract: Models with structured additive predictor provide a very broad and rich framework for complex regression modeling. They can deal simultaneously with nonlinear covariate effects and time trends, unit- or cluster-specific heterogeneity, spatial heterogeneity and complex interactions between covariates of different type. In this paper, we propose a hierarchical or multilevel version of regression models with structured additive predictor where the regression coefficients of a particular nonlinear term may obey another regression model with structured additive predictor. In that sense, the model is composed of a hierarchy of complex structured additive regression models. The proposed model may be regarded as an extended version of a multilevel model with nonlinear covariate terms in every level of the hierarchy. The model framework is also the basis for generalized random slope modeling based on multiplicative random effects. Inference is fully Bayesian and based on Markov chain Monte Carlo simulation techniques. We provide an in-depth description of several highly efficient sampling schemes that allow us to estimate complex models with several hierarchy levels and a large number of observations within a couple of minutes (often even seconds). We demonstrate the practicability of the approach in a complex application on childhood undernutrition with large sample size and three hierarchy levels.
    Keywords: Bayesian hierarchical models, kriging, Markov random fields, MCMC, multiplicative random effects, P-splines
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:inn:wpaper:2012-07&r=ecm
  8. By: Sílvia Gonçalves; Benoit Perron
    Abstract: The main contribution of this paper is to propose and theoretically justify bootstrap methods for regressions where some of the regressors are factors estimated from a large panel of data. We derive our results under the assumption that √T/N→c, where 0≤c<∞ (N and T are the cross-sectional and the time series dimensions, respectively), thus allowing for the possibility that factor estimation error enters the limiting distribution of the OLS estimator. We consider general residual-based bootstrap methods and provide a set of high-level conditions on the bootstrap residuals and on the idiosyncratic errors such that the bootstrap distribution of the OLS estimator is consistent. We subsequently verify these conditions for a simple wild bootstrap residual-based procedure. Our main results can be summarized as follows. When c=0, as in Bai and Ng (2006), the crucial condition for bootstrap validity is the ability of the bootstrap regression scores to mimic the serial dependence of the original regression scores. Mimicking the cross-sectional and/or serial dependence of the idiosyncratic errors in the panel factor model is asymptotically irrelevant in this case, since the limiting distribution of the original OLS estimator does not depend on these dependencies. Instead, when c>0, a two-step residual-based bootstrap is required to capture the factor estimation uncertainty, which shows up as an asymptotic bias term (as we show here and as was recently discussed by Ludvigson and Ng (2009b)). Because the bias depends on the cross-sectional dependence of the idiosyncratic error term, bootstrap validity depends crucially on the ability of the bootstrap panel factor model to capture this cross-sectional dependence. [See the illustrative sketch at the end of this entry.]
    Keywords: factor model, bootstrap, asymptotic bias
    Date: 2012–05–01
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2012s-12&r=ecm
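    Sketch: a wild residual bootstrap for a factor-augmented regression with PC-estimated factors, i.e. the c = 0 case described above; the dimensions and Rademacher weights are illustrative choices, not the paper's full two-step procedure.
      import numpy as np

      rng = np.random.default_rng(3)
      N, T, r = 100, 200, 2
      F = rng.normal(size=(T, r))                    # latent factors
      Lam = rng.normal(size=(N, r))                  # loadings
      X = F @ Lam.T + rng.normal(size=(T, N))        # large panel
      y = F @ np.array([1.0, -0.5]) + rng.normal(size=T)

      # principal-component factor estimates (identified up to rotation/sign)
      Xc = X - X.mean(0)
      Fhat = np.sqrt(T) * np.linalg.svd(Xc, full_matrices=False)[0][:, :r]

      Z = np.column_stack([np.ones(T), Fhat])
      b_hat = np.linalg.lstsq(Z, y, rcond=None)[0]
      e_hat = y - Z @ b_hat

      # wild bootstrap on the regression residuals, re-using the estimated
      # factors: in the c = 0 case the idiosyncratic panel errors do not
      # enter the limiting distribution (see the abstract above)
      B = 499
      boot = np.empty((B, b_hat.size))
      for b in range(B):
          y_star = Z @ b_hat + e_hat * rng.choice([-1.0, 1.0], size=T)
          boot[b] = np.linalg.lstsq(Z, y_star, rcond=None)[0]
      print("bootstrap SEs:", np.round(boot.std(axis=0), 3))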
  9. By: Vincenzo Verardi; Marjorie Gassner; Darwin Ugarte Ontiveros
    Abstract: In the robust statistics literature, a wide variety of models have been developed to cope with outliers in a rather large number of scenarios. Nevertheless, a recurrent problem for the empirical implementation of these estimators is that optimization algorithms generally do not perform well when dummy variables are present. What we propose in this paper is a simple solution to this problem, involving the replacement of the sub-sampling step of the maximization procedures by a projection-based method. This allows us to propose robust estimators involving categorical variables, be they explanatory or dependent. Some Monte Carlo simulations are presented to illustrate the good behavior of the method.
    Keywords: S-estimators; Robust Regression; Dummy Variables; Outliers
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:eca:wpaper:2013/117087&r=ecm
  10. By: Yuanhua Feng (University of Paderborn); David Hand (Imperial College); Yuanhua Feng (Brunel University)
    Abstract: A new multivariate random walk model with slowly changing drift and cross-correlations for multivariate processes is introduced and investigated in detail. In the model, not only the drifts and the cross-covariances but also the cross-correlations between single series are allowed to change slowly over time. The model can accommodate any number of components, such as a large number of assets. The model is particularly useful for modelling and forecasting the value of financial portfolios under very complex market conditions. Kernel estimation of the local covariance matrix is used. The integrated effect of the estimation errors involved in estimating the integrated processes is derived. The practical relevance of the model and estimation is illustrated by application to several foreign exchange rates. [See the illustrative sketch at the end of this entry.]
    Keywords: Forecasting, Kernel estimation, Multivariate time series analysis, Portfolio return, Slowly changing multivariate random walk
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:pdn:wpaper:50&r=ecm
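    Sketch: kernel estimation of a local drift vector and local covariance/correlation matrix for a bivariate series of increments, in the spirit of the model above; the Gaussian kernel and bandwidth are hypothetical choices.
      import numpy as np

      def local_moments(X, t0, bw):
          """Kernel-weighted local drift and covariance around time index t0."""
          u = (np.arange(len(X)) - t0) / bw
          w = np.exp(-0.5 * u ** 2)                # Gaussian kernel in rescaled time
          w /= w.sum()
          mu = w @ X                               # local drift estimate
          Xc = X - mu
          cov = (w[:, None] * Xc).T @ Xc           # local covariance estimate
          return mu, cov

      rng = np.random.default_rng(4)
      T = 500
      t_grid = np.linspace(0, 3, T)
      drift = 0.05 * np.column_stack([np.sin(t_grid), np.cos(t_grid)])
      dX = drift + rng.normal(scale=0.5, size=(T, 2))      # increments of the walk

      mu, cov = local_moments(dX, t0=250, bw=40.0)
      corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
      print("local drift:", np.round(mu, 4), " local correlation:", round(corr, 3))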
  11. By: Anders Bredahl Kock (Aarhus University and CREATES); Laurent A.F. Callot (Aarhus University and CREATES)
    Abstract: This paper establishes non-asymptotic oracle inequalities for the prediction error and estimation accuracy of the LASSO in stationary vector autoregressive models. These inequalities are used to establish consistency of the LASSO even when the number of parameters is of a much larger order of magnitude than the sample size. Furthermore, it is shown that under suitable conditions the number of variables selected is of the right order of magnitude and that no relevant variables are excluded. Next, non-asymptotic probabilities are given for the Adaptive LASSO to select the correct sign pattern (and hence the correct sparsity pattern). Finally, conditions under which the Adaptive LASSO reveals the correct sign pattern with probability tending to one are given. Again, the number of parameters may be much larger than the sample size. Some maximal inequalities for vector autoregressions which might be of independent interest are contained in the appendix. [See the illustrative sketch at the end of this entry.]
    Keywords: Vector autoregression, LASSO, Adaptive LASSO, Oracle inequality, Variable selection.
    JEL: C01 C02 C13 C32
    Date: 2012–04–30
    URL: http://d.repec.org/n?u=RePEc:aah:create:2012-16&r=ecm
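    Sketch: equation-by-equation LASSO estimation of a sparse VAR(1) in Python; the penalty level is an arbitrary illustrative choice, whereas the paper's oracle inequalities concern theoretically tuned penalties.
      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(5)
      k, T = 10, 200
      A = 0.5 * np.eye(k)
      A[0, 1] = 0.3                                # sparse VAR(1) coefficient matrix
      y = np.zeros((T, k))
      for t in range(1, T):
          y[t] = y[t - 1] @ A.T + rng.normal(scale=0.5, size=k)

      # LASSO applied equation by equation to y_t = A y_{t-1} + e_t
      X, Y = y[:-1], y[1:]
      A_hat = np.vstack([Lasso(alpha=0.05, fit_intercept=False).fit(X, Y[:, i]).coef_
                         for i in range(k)])
      print("nonzero coefficients per equation:", (np.abs(A_hat) > 1e-8).sum(axis=1))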
  12. By: Taoufik Bouezmarni; Anouar El Ghouch; Abderrahim Taamouti
    Abstract: We study the asymptotic properties of the Bernstein estimator for unbounded copula density functions. We show that the estimator converges to infinity at the corner. We establish its relative convergence when the copula density is unbounded and we provide the uniform strong consistency of the estimator on every compact set in the interior region. We also check the finite sample performance of the estimator via an extensive simulation study and compare it with other well-known nonparametric methods. Finally, we consider an empirical application in which the asymmetric dependence between international equity markets (US, Canada, UK, and France) is re-examined. [See the illustrative sketch at the end of this entry.]
    Keywords: Unbounded copula, Nonparametric estimation, Bernstein polynomial, Asymptotic properties, Uniform strong consistency, Relative convergence, Boundary bias
    Date: 2011–10
    URL: http://d.repec.org/n?u=RePEc:cte:werepe:we1143&r=ecm
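    Sketch: a direct numpy implementation of the bivariate Bernstein copula density estimator, evaluated along the diagonal towards a corner; the data-generating process, polynomial order m and evaluation points are illustrative (the paper's asymptotics concern copulas whose density is genuinely unbounded at the corner).
      import numpy as np
      from scipy.stats import binom, rankdata

      def bernstein_copula_density(u, v, U, V, m):
          """Bernstein estimator of the copula density at (u, v), built from
          pseudo-observations (U, V) with m polynomial terms per margin."""
          grid = np.arange(m + 1) / m
          # empirical copula on an (m+1) x (m+1) grid
          C = np.array([[np.mean((U <= a) & (V <= b)) for b in grid] for a in grid])
          p = np.diff(np.diff(C, axis=0), axis=1)     # cell probabilities, m x m
          wu = binom.pmf(np.arange(m), m - 1, u)      # Bernstein weights in u
          wv = binom.pmf(np.arange(m), m - 1, v)
          return m * m * (wu @ p @ wv)

      rng = np.random.default_rng(6)
      n = 500
      x = rng.normal(size=n)
      y = 0.7 * x + rng.normal(scale=0.7, size=n)     # positively dependent pair
      U, V = rankdata(x) / (n + 1), rankdata(y) / (n + 1)

      for t in (0.5, 0.9, 0.99):                      # estimate grows towards the corner
          print(t, round(bernstein_copula_density(t, t, U, V, m=20), 3))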
  13. By: László Mátyás; László Balázsi
    Abstract: The paper introduces the appropriate within estimators for the most frequently used three-dimensional fixed effects panel data models. It analyzes the behaviour of these estimators in the case of no-self-flow data, unbalanced data, and dynamic autoregressive models. The main results are then generalised to higher-dimensional panel data sets. [See the illustrative sketch at the end of this entry.]
    Date: 2012–04–23
    URL: http://d.repec.org/n?u=RePEc:ceu:econwp:2012_2&r=ecm
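    Sketch: the within estimator for the simplest three-dimensional specification, with purely additive fixed effects swept out by iterated demeaning; the paper's interacted-effects variants require different within transformations.
      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(7)
      I, J, T = 10, 10, 5
      df = pd.MultiIndex.from_product([range(I), range(J), range(T)],
                                      names=['i', 'j', 't']).to_frame(index=False)
      df['x'] = rng.normal(size=len(df))
      a_i, g_j, l_t = rng.normal(size=I), rng.normal(size=J), rng.normal(size=T)
      df['y'] = (2.0 * df['x'] + a_i[df['i']] + g_j[df['j']] + l_t[df['t']]
                 + rng.normal(size=len(df)))                  # true slope beta = 2

      # within transformation: iterated demeaning sweeps out all three
      # additive fixed effects
      for _ in range(20):
          for g in ['i', 'j', 't']:
              for v in ['y', 'x']:
                  df[v] = df[v] - df.groupby(g)[v].transform('mean')

      beta_hat = (df['x'] * df['y']).sum() / (df['x'] ** 2).sum()
      print("within estimate of beta:", round(beta_hat, 3))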
  14. By: A. Ronald Gallant; Han Hong; Ahmed Khwaja
    Abstract: We consider dynamic games that can have state variables that are partially observed, serially correlated, endogenous, and heterogeneous. We propose a Bayesian method that uses a particle filter to compute an unbiased estimate of the likelihood within a Metropolis chain. Unbiasedness guarantees that the stationary density of the chain is the exact posterior, not an approximation. The number of particles required is easily determined. The regularity conditions are weak. Results are verified by simulation from two dynamic oligopolistic games with endogenous state. One is an entry game with feedback to costs based on past entry and the other a model of an industry with a large number of heterogeneous firms that compete on product quality. [See the illustrative sketch at the end of this entry.]
    Keywords: Dynamic Games, Partially Observed State, Endogenous State, Serially Correlated State, Particle Filter
    JEL: E00 G12 C51 C52
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:duk:dukeec:12-01&r=ecm
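    Sketch: a toy particle-marginal Metropolis sampler for a linear-Gaussian state-space model, illustrating the key device above of plugging an unbiased particle-filter likelihood estimate into the acceptance ratio; the model, particle count and proposal scale are hypothetical.
      import numpy as np

      rng = np.random.default_rng(8)

      def simulate(theta, T):
          """Toy state space: x_t = theta*x_{t-1} + w_t, y_t = x_t + v_t."""
          x, y = 0.0, np.empty(T)
          for t in range(T):
              x = theta * x + rng.normal()
              y[t] = x + rng.normal()
          return y

      def pf_loglik(theta, y, M=200):
          """Bootstrap particle filter: unbiased estimate of the likelihood."""
          x, ll = rng.normal(size=M), 0.0
          for t in range(len(y)):
              x = theta * x + rng.normal(size=M)             # propagate particles
              logw = -0.5 * (y[t] - x) ** 2                  # N(0,1) measurement, up to const
              w = np.exp(logw - logw.max())
              ll += logw.max() + np.log(w.mean())
              x = x[rng.choice(M, M, p=w / w.sum())]         # multinomial resampling
          return ll - 0.5 * len(y) * np.log(2 * np.pi)

      y = simulate(0.7, T=100)

      # particle-marginal Metropolis: the unbiased likelihood estimate in the
      # acceptance ratio leaves the exact posterior invariant
      theta, ll = 0.5, pf_loglik(0.5, y)
      draws = []
      for _ in range(1500):
          prop = theta + 0.1 * rng.normal()
          if abs(prop) < 1.0:                                # flat prior on (-1, 1)
              ll_prop = pf_loglik(prop, y)
              if np.log(rng.uniform()) < ll_prop - ll:
                  theta, ll = prop, ll_prop
          draws.append(theta)
      print("posterior mean of theta:", round(np.mean(draws[300:]), 3))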
  15. By: Merrett, Danielle
    Abstract: This paper compares the performance of alternative estimation approaches for Public Goods Game data. A leave-one-out cross validation was applied to test the performance of five estimation approaches. Random effects is revealed as the best estimation approach because of its unbiased and precise estimates and its ability to estimate time-invariant demographics. Surprisingly, approaches that treat the choice variable as continuous out-perform those that treat the choice variable as discrete. Correcting for censoring is shown to induce biased estimates. A finite Poisson mixture model produced relatively unbiased estimates but lacked the precision of fixed and random effects estimation.
    Keywords: finite mixture models; ordered logit; fixed effects; random effects; economic experiments; voluntary contributions mechanism; public goods
    Date: 2012–04
    URL: http://d.repec.org/n?u=RePEc:syd:wpaper:2123/8256&r=ecm
  16. By: François-Charles Wolff (LEMNA - Laboratoire d'économie et de management de Nantes Atlantique - Université de Nantes : EA4272, INED - Institut National d'Etudes Démographiques Paris - INED)
    Abstract: This paper proposes to decompose non-linear models derived from a latent regression framework by applying the Oaxaca-Blinder decomposition technique with the latent outcome as the dependent variable. Values of the unobserved latent outcome are obtained using simulated residuals. [See the illustrative sketch at the end of this entry.]
    Keywords: Blinder-Oaxaca ; non-linear models ; simulated residuals
    Date: 2012–05–04
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:hal-00694421&r=ecm
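    Sketch: one way to implement the idea for a probit model, drawing the latent outcome from a suitably truncated normal (the simulated residual) and applying a standard Oaxaca-Blinder decomposition; the two-group design and reference coefficients are hypothetical.
      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import truncnorm

      rng = np.random.default_rng(9)
      n = 2000
      X_a = np.column_stack([np.ones(n), rng.normal(1.0, 1.0, n)])
      X_b = np.column_stack([np.ones(n), rng.normal(0.5, 1.0, n)])
      y_a = (X_a @ [0.2, 0.8] + rng.normal(size=n) > 0).astype(int)
      y_b = (X_b @ [0.0, 0.5] + rng.normal(size=n) > 0).astype(int)

      def latent_draw(y, xb):
          """Simulated residuals: draw e from N(0,1) truncated to agree with
          the observed probit outcome, and return the latent y* = xb + e."""
          lo = np.where(y == 1, -xb, -np.inf)
          hi = np.where(y == 1, np.inf, -xb)
          return xb + truncnorm.rvs(lo, hi, random_state=rng)

      b_a = sm.Probit(y_a, X_a).fit(disp=0).params
      b_b = sm.Probit(y_b, X_b).fit(disp=0).params
      ystar_a, ystar_b = latent_draw(y_a, X_a @ b_a), latent_draw(y_b, X_b @ b_b)

      # standard Oaxaca-Blinder decomposition on the simulated latent outcome
      gap = ystar_a.mean() - ystar_b.mean()
      explained = (X_a.mean(0) - X_b.mean(0)) @ b_b     # reference group: b
      print("gap:", round(gap, 3), " explained:", round(explained, 3),
            " unexplained:", round(gap - explained, 3))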
  17. By: Stefan Aulbach; Verena Bayer; Michael Falk
    Abstract: The univariate piecing-together approach (PT) fits a univariate generalized Pareto distribution (GPD) to the upper tail of a given distribution function in a continuous manner. We propose a multivariate extension. First it is shown that an arbitrary copula is in the domain of attraction of a multivariate extreme value distribution if and only if its upper tail can be approximated by the upper tail of a multivariate GPD with uniform margins. The multivariate PT then consists of two steps: The upper tail of a given copula C is cut off and substituted by a multivariate GPD copula in a continuous manner. The result is again a copula. The other step consists of the transformation of each margin of this new copula by a given univariate distribution function. This provides, altogether, a multivariate distribution function with prescribed margins whose copula coincides in its central part with C and in its upper tail with a GPD copula. When applied to data, this approach also enables the evaluation of a wide range of rational scenarios for the upper tail of the underlying distribution function in the multivariate case. We apply this approach to operational loss data in order to evaluate the range of operational risk. [See the illustrative sketch at the end of this entry.]
    Date: 2012–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1205.1617&r=ecm
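    Sketch: the univariate piecing-together construction that the paper extends to the multivariate case: an empirical body spliced continuously with a GPD tail above a high threshold. The lognormal "loss" data and the 95% threshold are illustrative.
      import numpy as np
      from scipy.stats import genpareto

      rng = np.random.default_rng(10)
      x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)   # stand-in loss data

      u = np.quantile(x, 0.95)                            # high threshold
      xi, _, beta = genpareto.fit(x[x > u] - u, floc=0.0) # GPD fit to exceedances
      x_sorted = np.sort(x)
      F_u = np.mean(x <= u)

      def pt_cdf(t):
          """Piecing-together CDF: empirical body below u, GPD tail above u,
          spliced continuously at the threshold."""
          t = np.atleast_1d(t).astype(float)
          emp = np.searchsorted(x_sorted, t, side='right') / x_sorted.size
          tail = F_u + (1.0 - F_u) * genpareto.cdf(t - u, xi, loc=0.0, scale=beta)
          return np.where(t <= u, emp, tail)

      print("PT CDF at u, 2u, 5u:", np.round(pt_cdf([u, 2 * u, 5 * u]), 4))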
  18. By: Neville Francis; Michael T. Owyang; Özge Savascin
    Abstract: Factor models have become useful tools for studying international business cycles. Block factor models [e.g., Kose, Otrok, and Whiteman (2003)] can be especially useful, as the zero restrictions on the loadings of some factors may provide some economic interpretation of the factors. These models, however, require the econometrician to predefine the blocks, leading to potential misspecification. In Monte Carlo experiments, we show that even small misspecification can lead to substantial declines in fit. We propose an alternative model in which the blocks are chosen endogenously. The model is estimated in a Bayesian framework using a hierarchical prior, which allows us to incorporate series-level covariates that may influence and explain how the series are grouped. Using similar international business cycle data as Kose, Otrok, and Whiteman, we find our country clusters differ in important ways from those identified by geography alone. In particular, we find that similarities in institutions (e.g., legal systems, language diversity) may be just as important as physical proximity for analyzing business cycle comovements.
    Keywords: Business cycles ; Economic conditions
    Date: 2012
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:2012-014&r=ecm
  19. By: Nikolai V. Hovanov; Maria S. Yudaeva
    Abstract: A method for estimating alternatives' probabilities under a deficiency of numeric information (obtained from different sources) is proposed. The method is based on the well-known Bayesian model of uncertainty randomization. Additional non-numeric, non-exact, and non-complete information about the sources' significance is used for the final estimation of the alternatives' probabilities. Some examples of applying the method to forecasting the dynamics of commodity prices and currency rates are presented.
    Keywords: Ordinal and Interval Information; Randomization of Uncertainty; Random Probabilities
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:deg:conpap:c016_010&r=ecm
  20. By: Kyriacou, Maria
    Abstract: This paper studies the use of the overlapping blocking scheme in unit root autoregression. When the underlying process is that of a random walk, the blocks' initial conditions are not fixed, but are equal to the sum of all the previous observations' error terms. When non-overlapping subsamples are used, as first shown by Chambers and Kyriacou (2010), these initial conditions do not disappear asymptotically. In this paper we show that a simple way of overcoming this issue is to use overlapping blocks. By doing so, the effect of these initial conditions vanishes asymptotically. An application of these findings to jackknife estimators indicates that an estimator based on moving blocks is able to provide clear reductions in mean squared error. [See the illustrative sketch at the end of this entry.]
    Date: 2012–05–01
    URL: http://d.repec.org/n?u=RePEc:stn:sotoec:1203&r=ecm
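    Sketch: a simulation contrasting the bias of full-sample OLS with a jackknife built on overlapping blocks for random-walk data; the weights shown are the standard non-overlapping jackknife weights, used here purely for illustration rather than the weights derived in the paper.
      import numpy as np

      def ar1_ols(y):
          """OLS estimate of rho in y_t = rho * y_{t-1} + e_t."""
          return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

      def jackknife_overlapping(y, m):
          """Combine the full-sample estimator with estimators on overlapping
          (50%-shifted) blocks, using the non-overlapping-case weights."""
          T = len(y)
          ell = T // m
          starts = range(0, T - ell + 1, max(ell // 2, 1))
          subs = [ar1_ols(y[s:s + ell]) for s in starts]
          return (m / (m - 1)) * ar1_ols(y) - (1 / (m - 1)) * np.mean(subs)

      rng = np.random.default_rng(11)
      reps, T, m = 2000, 240, 4
      ols, jack = [], []
      for _ in range(reps):
          y = np.cumsum(rng.normal(size=T))          # pure random walk
          ols.append(ar1_ols(y))
          jack.append(jackknife_overlapping(y, m))
      print("mean bias  OLS:", round(np.mean(ols) - 1, 4),
            "  jackknife:", round(np.mean(jack) - 1, 4))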
  21. By: Michael Greenacre
    Abstract: Correspondence analysis, when used to visualize relationships in a table of counts (for example, abundance data in ecology), has been frequently criticized as being too sensitive to objects (for example, species) that occur with very low frequency or in very few samples. In this statistical report we show that this criticism is generally unfounded. We demonstrate this in several data sets by calculating the actual contributions of rare objects to the results of correspondence analysis and canonical correspondence analysis, both to the determination of the principal axes and to the chi-square distance. It is a fact that rare objects are often positioned as outliers in correspondence analysis maps, which gives the impression that they are highly influential, but their low weight offsets their distant positions and reduces their effect on the results. An alternative scaling of the correspondence analysis solution, the contribution biplot, is proposed as a way of mapping the results in order to avoid the problem of outlying and low contributing rare objects. [See the illustrative sketch at the end of this entry.]
    Keywords: Biplot, canonical correspondence analysis, contribution, correspondence analysis, influence, outlier, scaling
    JEL: C19 C88
    Date: 2011–09
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:571&r=ecm
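    Sketch: correspondence analysis via the SVD of standardized residuals, computing each column's contribution to the first axis; it shows how a column's low mass enters the contribution and offsets an outlying position, the point made above. The abundance-style counts are simulated.
      import numpy as np

      rng = np.random.default_rng(12)
      counts = rng.poisson(rng.gamma(1.0, 3.0, size=(20, 8)))   # abundance-style table
      counts += (counts.sum(axis=1, keepdims=True) == 0)        # guard against empty rows

      P = counts / counts.sum()
      r, c = P.sum(axis=1), P.sum(axis=0)
      # CA solution: SVD of the matrix of standardized residuals
      S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
      V = np.linalg.svd(S, full_matrices=False)[2]

      # the contribution of column j to axis 1 is the squared singular-vector
      # entry, its share of that axis's inertia; the low mass c_j enters here
      # and offsets the outlying standard coordinate v_j1 / sqrt(c_j)
      print("column masses:        ", np.round(c, 3))
      print("contributions, axis 1:", np.round(V[0] ** 2, 3))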
  22. By: Araujo-Enciso, Sergio Rene
    Abstract: Economic theory states that the spatial equilibrium condition defines a region within which prices may or may not be cointegrated: when prices lie strictly inside this region they are not cointegrated, whereas when they lie on its boundaries they are not only cointegrated but also fulfil the Law of One Price (LOP). Nonetheless, the econometric techniques for testing cointegration, whether linear or non-linear, assume a mean-reverting process. This research shows that in the absence of such a mean-reverting process, using prices in pure equilibrium, cointegration (linear and non-linear) is often rejected. These findings are in line with the Band Threshold Autoregressive Model, in which the neutral band is a region of no cointegration. Furthermore, it can be concluded that the economic concept of perfect market integration (LOP) by itself is not sufficient for cointegration to be detected by some of the current econometric methods. [See the illustrative sketch at the end of this entry.]
    Keywords: Spatial Equilibrium Condition, Testing Cointegration, Demand and Price Analysis, Risk and Uncertainty
    JEL: C15 E37
    Date: 2012–02–23
    URL: http://d.repec.org/n?u=RePEc:ags:eaa123:122545&r=ecm
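    Sketch: a simulation in the spirit of the paper: two prices whose spread stays inside a neutral band, with no mean reversion while strictly inside it, to which the Engle-Granger cointegration test is applied; the band width and innovation scales are arbitrary choices.
      import numpy as np
      from statsmodels.tsa.stattools import coint

      rng = np.random.default_rng(13)
      T, band = 500, 2.0

      # price in market 1: a random walk
      p1 = np.cumsum(rng.normal(size=T))
      # spread kept inside the neutral band (transfer costs), with no mean
      # reversion while strictly inside it -- "pure equilibrium" prices
      s = np.zeros(T)
      for t in range(1, T):
          s[t] = np.clip(s[t - 1] + rng.normal(scale=0.3), -band, band)
      p2 = p1 + s

      # Engle-Granger test: under this DGP linear cointegration is often
      # rejected even though the spatial equilibrium condition holds
      tstat, pvalue, _ = coint(p1, p2)
      print("Engle-Granger p-value:", round(pvalue, 3))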
  23. By: Balli, Hatice Ozer; Sorensen, Bent E.
    Abstract: We provide practical advice for applied economists regarding robust specification and interpretation of linear regression models with interaction terms. We replicate a number of prominent published results using interaction effects and examine if they are robust to reasonable specification permutations. [See the illustrative sketch at the end of this entry.]
    Keywords: Non-Linear Regression; Interaction Terms
    JEL: C13 C12
    Date: 2012–04–10
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:38608&r=ecm
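    Sketch: a small statsmodels example of the kind of specification permutation at issue: the same interaction model estimated with raw and with demeaned regressors, which changes what the main-effect coefficients measure while leaving the interaction coefficient unchanged. The data-generating process is hypothetical.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(14)
      n = 1000
      x1 = rng.normal(size=n)
      x2 = rng.normal(size=n)
      y = 1 + 0.5 * x1 + 0.5 * x2 + 0.8 * x1 * x2 + rng.normal(size=n)

      # raw interaction: the x1 and x2 coefficients are effects evaluated at
      # the other variable's zero, which may be far from its sample mean
      X_raw = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
      # demeaned interaction: main effects become average marginal effects
      x1d, x2d = x1 - x1.mean(), x2 - x2.mean()
      X_dem = sm.add_constant(np.column_stack([x1d, x2d, x1d * x2d]))

      for X in (X_raw, X_dem):
          print(np.round(sm.OLS(y, X).fit().params, 3))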

This nep-ecm issue is ©2012 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.