nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒10‒28
twenty-six papers chosen by
Sune Karlsson
Orebro University

  1. Using Samples of Unequal Length in Generalized Method of Moments Estimation By Anthony W. Lynch; Jessica A. Wachter
  2. The Information Content of Implied Probabilities to Detect Structural Change By Alain Guay; Jean-Francois Lamarche
  3. Copula-Based Nonlinear Quantile Autoregression By Xiaohong Chen; Roger Koenker; Zhijie Xiao
  4. Optimal Bandwidth Selection for Conditional Efficiency Measures: a Data-driven Approach By Luiza Badin; Cinzia Daraio; Léopold Simar
  5. Identifying Adjustment Costs of Net and Gross Employment Changes By Joao Santos Silva; Geert Dhaene
  6. How Are Shocks to Trend and Cycle Correlated? A Simple Methodology for Unidentified Unobserved Components Models By Daisuke Nagakura
  7. Dynamic distributions and changing copulas By Harvey, A.
  8. Identification with Imperfect Instruments By Aviv Nevo; Adam M. Rosen
  9. Combining Multivariate Density Forecasts using Predictive Criteria By Hugo Gerard; Kristoffer Nimark
  10. Beta-t-(E)GARCH By Harvey, A.; Chakravarty, T.
  11. A Partially Linear Censored Quantile Regression Model for Unemployment Duration By Neocleous, Tereza; Portnoy, Stephen
  12. Optimal testing of multiple hypotheses with common effect direction By Richard M. Bittman; Joseph P. Romano; Carlos Vallarino; Michael Wolf
  13. Sequential Testing with Uniformly Distributed Size By Stanislav Anatolyev; Grigory Kosenok
  14. Forecasting Exchange Rates with a Large Bayesian VAR By Andrea Carriero; George Kapetanios; Massimiliano Marcellino
  15. Local Lyapunov exponents: Zero plays no role in forecasting chaotic systems By Dominique Guégan; Justin Leroux
  16. When is a copula constant? A test for changing relationships By Busetti, F.; Harvey, A.
  17. Testing For Asymmetric Information In Insurance Markets With Unobservable Types By Dardanoni, V; Li Donni, P
  18. On the Correlation Structure of Microstructure Noise in Theory and Practice By Francis X. Diebold; Georg H. Strasser
  19. Analysing CPI inflation by the fractionally integrated ARFIMA-STVGARCH model By Mustapha Belkhouja; Imene Mootamri; Mohamed Boutahar
  20. A note on the model selection risk for ANOVA based adaptive forecasting of the EURIBOR swap term structure. By Oliver Blaskowitz; Helmut Herwartz
  21. Model Selection Criteria for the Leads-and-Lags Cointegrating Regression By In Choi; Eiji Kurozumi
  22. Are Structural VARs with Long-Run Restrictions Useful in Developing Business Cycle Theory? By V. V. Chari; Patrick J. Kehoe; Ellen R. McGrattan
  23. Is the consumption-income ratio stationary? Evidence from a nonlinear panel unit root test for OECD and non-OECD countries By Mario Cerrato; Christian de Peretti; Chris Stewart
  24. "Do the Innovations in a Monetary VAR Have Finite Variances?" By Greg Hannsgen
  25. Sector Classification through non-Gaussian Similarity By Maximilian Vermorken; Ariane Szafarz; Hugues Pirotte
  26. Incorporating Cost in Power Analysis for Three-Level Cluster Randomized Designs By Konstantopoulos, Spyros

  1. By: Anthony W. Lynch; Jessica A. Wachter
    Abstract: Many applications in financial economics use data series with different starting or ending dates. This paper describes estimation methods, based on the generalized method of moments (GMM), which make use of all available data for each moment condition. We introduce two asymptotically equivalent estimators that are consistent, asymptotically normal, and asymptotically more efficient than standard GMM. We apply these methods to estimating predictive regressions in international data and show that the use of the full sample affects point estimates and standard errors both for assets with data available for the full period and for assets with data available for a subset of the period. Monte Carlo experiments demonstrate that the reductions in standard errors hold in small samples as well as asymptotically. These methods are extended to more general patterns of missing data, and are shown to be more efficient than estimators that ignore intervals of the data, and thus more efficient than standard GMM.
    JEL: C32 G12
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14411&r=ecm
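    A minimal numerical sketch of the core idea above, namely that each moment condition is averaged over all observations available to it rather than over the common overlap. The series, names and exactly identified setup are hypothetical; this is not the paper's adjusted estimator.

```python
import numpy as np

# Hypothetical two-moment example: x has T observations, y only the last T2.
# Each sample moment uses every observation available for that condition,
# instead of truncating both series to the common overlap.
rng = np.random.default_rng(0)
T, T2 = 400, 150
x = rng.normal(1.0, 1.0, T)                     # long-history series
y = 0.5 * x[-T2:] + rng.normal(0.0, 1.0, T2)    # short-history series

def gbar(theta):
    mu_x, beta = theta
    g1 = np.mean(x - mu_x)              # averaged over the full sample
    g2 = np.mean(y - beta * x[-T2:])    # averaged over the short sample
    return np.array([g1, g2])

# Exactly identified, so the GMM estimate solves gbar(theta) = 0:
theta_hat = [x.mean(), y.mean() / x[-T2:].mean()]
print(gbar(theta_hat))   # ~ [0, 0]
```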
  2. By: Alain Guay (Department of Economics, Universite du Quebec a Montreal); Jean-Francois Lamarche (Department of Economics, Brock University)
    Abstract: This paper proposes Pearson-type statistics based on implied probabilities to detect structural change. The class of generalized empirical likelihood estimators (see Smith, 1997) assigns a set of probabilities to each observation such that moment conditions are satisfied. These restricted probabilities are called implied probabilities. Implied probabilities may also be constructed for the standard GMM (see Back and Brown, 1993). The proposed test statistics for structural change are based on the information content in these implied probabilities. We consider cases of structural change with unknown breakpoint which can occur in the parameters of interest or in the overidentifying restrictions used to estimate these parameters. The test statistics considered here have good size and power properties.
    Keywords: Generalized empirical likelihood, generalized method of moments, parameter instability, structural change
    JEL: C12 C32
    Date: 2005–07
    URL: http://d.repec.org/n?u=RePEc:brk:wpaper:0804&r=ecm
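    As a reminder of the construction these statistics build on (notation mine; shown for the empirical-likelihood member of the GEL class): given moment functions $g(x_i, \theta)$, the implied probabilities are

    $$\hat{\pi}_i = \frac{1}{n\left(1 + \hat{\lambda}' g(x_i, \hat{\theta})\right)}, \qquad \sum_{i=1}^{n} \hat{\pi}_i \, g(x_i, \hat{\theta}) = 0,$$

    so departures of the $\hat{\pi}_i$ from the uniform weights $1/n$ carry the information about misspecification and instability that the Pearson-type statistics exploit.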
  3. By: Xiaohong Chen (Yale University); Roger Koenker (University of Illinois at Urbana-Champaign); Zhijie Xiao (Boston College)
    Abstract: Parametric copulas are shown to be attractive devices for specifying quantile autoregressive models for nonlinear time-series. Estimation of local, quantile-specific copula-based time series models offers some salient advantages over classical global parametric approaches. Consistency and asymptotic normality of the proposed quantile estimators are established under mild conditions, allowing for global misspecification of parametric copulas and marginals, and without assuming any mixing rate condition. These results lead to a general framework for inference and model specification testing of extreme conditional value-at-risk for financial time series data.
    Keywords: Quantile autoregression, Copula, Ergodic nonlinear Markov models
    JEL: C10 C13 C22
    Date: 2008–10–08
    URL: http://d.repec.org/n?u=RePEc:boc:bocoec:691&r=ecm
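    One standard way to see the link between a copula and a quantile autoregression (notation mine): if $(Y_{t-1}, Y_t)$ has copula $C(u, v)$ with common marginal $F$, the conditional distribution is $F_{Y_t \mid Y_{t-1}}(y \mid x) = \partial C(u, v) / \partial u$ evaluated at $u = F(x)$, $v = F(y)$, so the $\tau$-th conditional quantile is

    $$Q_{Y_t}(\tau \mid x) = F^{-1}\!\left( C_{2|1}^{-1}\big(\tau \mid F(x)\big) \right),$$

    which is nonlinear in $x$ for any non-Gaussian parametric copula.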
  4. By: Luiza Badin; Cinzia Daraio; Léopold Simar
    Abstract: In productivity analysis an important issue is to detect how external (environmental) factors, exogenous to the production process and not under the control of the producer, might influence the production process and the resulting efficiency of the firms. Most of the traditional approaches proposed in the literature have serious drawbacks. An alternative approach is to describe the production process as being conditioned by a given value of the environmental variables (Cazals, Florens and Simar, 2002, Daraio and Simar, 2005). This defines conditional efficiency measures where the production set in the input × output space may depend on the value of the external variables. The statistical properties of nonparametric estimators of these conditional measures are now established (Jeong, Park and Simar, 2008). These involve the estimation of a nonstandard conditional distribution function which requires the specification of a smoothing parameter (a bandwidth). So far, only the asymptotic optimal order of this bandwidth has been established. This is of little interest for the practitioner. In this paper we fill this gap and we propose a data-driven technique for selecting this parameter in practice. The approach, based on a Least Squares Cross Validation procedure (LSCV), provides an optimal bandwidth that minimizes an appropriate integrated Mean Squared Error (MSE). The method is carefully described and exemplified with some simulated data with univariate and multivariate environmental factors. An application on real data (performances of Mutual Funds) illustrates how this new optimal method of bandwidth selection outperforms former methods.
    Keywords: Nonparametric efficiency estimation, conditional efficiency measures, environmental factors, conditional distribution function, bandwidth.
    JEL: C14 C40 C60 D20
    Date: 2008–10–24
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2008/22&r=ecm
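    A sketch of the generic least-squares cross-validation principle the paper adapts. This minimizes the usual unconditional density criterion with a Gaussian kernel, whereas the paper's criterion targets a conditional distribution function inside an efficiency estimator; data and grid are synthetic placeholders.

```python
import numpy as np

# Generic least-squares cross-validation for a kernel bandwidth:
# CV(h) = integral of fhat_h^2 - (2/n) * sum_i fhat_{h,-i}(x_i).
rng = np.random.default_rng(1)
x = rng.normal(size=200)
n = x.size

def lscv(h):
    d = (x[:, None] - x[None, :]) / h
    K = np.exp(-0.5 * d**2) / np.sqrt(2.0 * np.pi)      # Gaussian kernel
    Kc = np.exp(-0.25 * d**2) / np.sqrt(4.0 * np.pi)    # its self-convolution
    int_f2 = Kc.sum() / (n**2 * h)                      # integral of fhat^2
    loo = (K.sum(axis=0) - np.diag(K)) / ((n - 1) * h)  # leave-one-out fits
    return int_f2 - 2.0 * loo.mean()

grid = np.linspace(0.05, 1.0, 40)
h_opt = grid[np.argmin([lscv(h) for h in grid])]
print(h_opt)
```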
  5. By: Joao Santos Silva; Geert Dhaene
    Abstract: This paper suggests a simple specification test to check the adequacy of the assumptions made about the distribution of individual effects in models where these unobservable random variables are integrated out by quadrature methods. Because the proposed test checks the specification of the finite-mixture analogue of the model of interest, it also has power to detect other forms of misspecification. Additionally, it is shown that it is easy to increase the flexibility of models with unobserved individual effects. The results of a Monte Carlo study and an application using a well known data set are presented to illustrate the finite sample properties of the proposed methods and their implementation in practice.
    Date: 2008–10–20
    URL: http://d.repec.org/n?u=RePEc:esx:essedp:661&r=ecm
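    A minimal sketch of the quadrature step whose adequacy the proposed test checks: integrating a normal individual effect out of the likelihood by Gauss-Hermite quadrature. The Poisson example and all names are illustrative, not the paper's application.

```python
import numpy as np
from math import exp, factorial

# Gauss-Hermite: integral of f(y|b) phi(b; 0, s^2) db
#   ~ (1/sqrt(pi)) * sum_k w_k * f(y | sqrt(2)*s*x_k)
nodes, weights = np.polynomial.hermite.hermgauss(15)

def integrated_lik(f_cond, y, s):
    b = np.sqrt(2.0) * s * nodes    # change of variables b = sqrt(2)*s*x
    return sum(w * f_cond(y, bk) for w, bk in zip(weights, b)) / np.sqrt(np.pi)

# Illustrative conditional density: Poisson with rate exp(b).
f = lambda y, b: exp(-exp(b)) * exp(b) ** y / factorial(y)
print(integrated_lik(f, 2, 0.5))
```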
  6. By: Daisuke Nagakura (Institute for Monetary and Economic Studies, Bank of Japan (E-mail: daisuke.nagakura@boj.or.jp))
    Abstract: In this paper, we propose a simple methodology for investigating how shocks to trend and cycle are correlated in unidentified unobserved components models, in which the correlation is not identified. The proposed methodology is applied to U.S. and U.K. real GDP data. We find that the correlation parameters are negative for both countries. We also investigate how changing the identification restriction results in different trend and cycle estimates. It is found that estimates of the trend and cycle can vary substantially depending on the identification restrictions imposed.
    Keywords: Business Cycle Analysis, Trend, Cycle, Permanent Component, Transitory Component, Unobserved Components Model
    JEL: C01 E32
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:ime:imedps:08-e-24&r=ecm
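    A standard correlated unobserved-components setup (notation mine) makes the issue concrete:

    $$y_t = \tau_t + c_t, \qquad \tau_t = \mu + \tau_{t-1} + \eta_t, \qquad c_t = \phi_1 c_{t-1} + \phi_2 c_{t-2} + \varepsilon_t,$$

    with $\mathrm{corr}(\eta_t, \varepsilon_t) = \rho$. In the unidentified cases the paper studies, the likelihood cannot pin down $\rho$, so the smoothed trend and cycle estimates depend on which restriction (e.g. $\rho = 0$) is imposed.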
  7. By: Harvey, A.
    Abstract: A copula models the relationships between variables independently of their marginal distributions. When the variables are time series, the copula may change over time. A statistical framework is suggested for tracking these changes over time. When the marginal distributions change, pre-filtering is necessary before constructing the indicator variables on which the tracking of the copula is based. This entails solving an even more basic problem, namely estimating time-varying quantiles. The methods are applied to the Hong Kong and Korean stock market indices. Some interesting movements are detected, particularly after the attack on the Hong Kong dollar in 1997.
    Keywords: Concordance; contagion; exponentially weighted moving average; quantiles; signal extraction; tail dependence.
    JEL: C14 C22
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0839&r=ecm
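    A minimal sketch of the indicator-plus-smoothing idea for tracking a changing copula, assuming the series have already been pre-filtered to approximate uniform ranks. The function name, quantile and smoothing constant are illustrative; the paper's signal-extraction machinery is richer than a plain EWMA.

```python
import numpy as np

def ewma_tail_concordance(u, v, q=0.25, lam=0.94):
    """Track P(U<=q, V<=q) over time from rank data u, v in (0,1)."""
    ind = ((u <= q) & (v <= q)).astype(float)  # joint lower-tail indicator
    track = np.empty_like(ind)
    track[0] = q * q                           # independence benchmark C(q,q)
    for t in range(1, len(ind)):
        track[t] = lam * track[t - 1] + (1.0 - lam) * ind[t]
    return track  # drifting above q^2 signals growing lower-tail dependence
```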
  8. By: Aviv Nevo; Adam M. Rosen
    Abstract: Dealing with endogenous regressors is a central challenge of applied research. The standard solution is to use instrumental variables that are assumed to be uncorrelated with unobservables. We instead assume (i) the correlation between the instrument and the error term has the same sign as the correlation between the endogenous regressor and the error term, and (ii) that the instrument is less correlated with the error term than is the endogenous regressor. Using these assumptions, we derive analytic bounds for the parameters. We demonstrate the method in two applications.
    JEL: C30 C31 C33
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14434&r=ecm
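    Formally, with structural error $\varepsilon$, endogenous regressor $x$ and imperfect instrument $z$, the two assumptions read

    $$\mathrm{corr}(z, \varepsilon) \cdot \mathrm{corr}(x, \varepsilon) \;\ge\; 0, \qquad |\mathrm{corr}(z, \varepsilon)| \;\le\; |\mathrm{corr}(x, \varepsilon)|,$$

    replacing the exact exclusion restriction $\mathrm{corr}(z, \varepsilon) = 0$ and delivering set rather than point identification.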
  9. By: Hugo Gerard; Kristoffer Nimark
    Abstract: This paper combines multivariate density forecasts of output growth, inflation and interest rates from a suite of models. An out-of-sample weighting scheme based on the predictive likelihood as proposed by Eklund and Karlsson (2005) and Andersson and Karlsson (2007) is used to combine the models. Three classes of models are considered: a Bayesian vector autoregression (BVAR), a factor-augmented vector autoregression (FAVAR) and a medium-scale dynamic stochastic general equilibrium (DSGE) model. Using Australian data, we find that, at short forecast horizons, the Bayesian VAR model is assigned the most weight, while at intermediate and longer horizons the factor model is preferred. The DSGE model is assigned little weight at all horizons, a result that can be attributed to the DSGE model producing density forecasts that are very wide when compared with the actual distribution of observations. While a density forecast evaluation exercise reveals little formal evidence that the optimally combined densities are superior to those from the best-performing individual model, or a simple equal-weighting scheme, this may be a result of the short sample available.
    Keywords: Density forecasts, combining forecasts, predictive criteria
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1117&r=ecm
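    A generic sketch of predictive-likelihood weighting in the spirit of Eklund and Karlsson (2005); all names are mine and this is not the paper's code.

```python
import numpy as np

def combination_weights(logpl):
    """logpl[m]: sum of out-of-sample log predictive likelihoods of model m."""
    logpl = np.asarray(logpl, dtype=float)
    logpl -= logpl.max()              # stabilize the exponentials
    w = np.exp(logpl)
    return w / w.sum()

def combined_density(weights, densities, y):
    # densities: callables returning each model's forecast density at y
    return sum(w * f(y) for w, f in zip(weights, densities))

print(combination_weights([-310.2, -305.7, -330.0]))  # illustrative values
```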
  10. By: Harvey, A.; Chakravarty, T.
    Abstract: The GARCH-t model is widely used to predict volatility. However, modeling the conditional variance as a linear combination of past squared observations may not be the best approach if the standardized observations are non-Gaussian. A simple modification lets the conditional variance, or its logarithm, depend on past values of the score of a t-distribution. The fact that the transformed variable has a beta distribution makes it possible to derive the properties of the resulting models. A practical consequence is that the conditional variance is more resistant to extreme observations. Extensions to deal with leverage and more than one component are discussed, as are the implications of distributions other than Student's t.
    Keywords: Conditional heteroskedasticity; leverage; robustness; score; Student's t; volatility.
    JEL: C22 G10
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0840&r=ecm
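    One way to write the score-driven recursion (schematic; see the paper for the exact parameterization): with $y_t = \varepsilon_t \exp(\lambda_{t|t-1})$ and $\varepsilon_t \sim t_\nu$,

    $$u_t = \frac{(\nu + 1)\, y_t^2}{\nu\, e^{2\lambda_{t|t-1}} + y_t^2} - 1, \qquad \lambda_{t+1|t} = \omega + \phi\, \lambda_{t|t-1} + \kappa\, u_t.$$

    Because $y_t^2 / (\nu e^{2\lambda_{t|t-1}} + y_t^2)$ has a Beta$(1/2, \nu/2)$ distribution under the model, the recursion's properties can be derived analytically; and since $u_t$ is bounded, a single extreme observation cannot explode the conditional variance.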
  11. By: Neocleous, Tereza (University of Glasgow); Portnoy, Stephen (University of Illinois)
    Abstract: Censored Regression Quantile (CRQ) methods provide a powerful and flexible approach for the analysis of censored survival data when standard linear models are felt to be appropriate. In many cases, however, greater flexibility is desired to go beyond the usual multiple regression paradigm. One area of common interest is that of partially linear models, where one (or more) of the explanatory variables is assumed to act on the response through a non-linear function. Here the CRQ approach (Portnoy, 2003) is extended to this partially linear setting. Basic consistency results are presented. A simulation experiment and an analysis of unemployment duration data justify the use of the partially linear approach over methods based on the Cox proportional hazards regression model and methods not permitting nonlinearity.
    Keywords: quantile regression; partially linear models ; B-splines ; censored data ; unemployment duration
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:irs:iriswp:2008-07&r=ecm
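    Schematically (notation mine), the model extends the linear censored quantile regression to

    $$Q_{\tau}(T \mid x, z) = x'\beta(\tau) + g_{\tau}(z), \qquad g_{\tau}(z) \approx \sum_{k=1}^{K} \gamma_{k}(\tau)\, B_{k}(z),$$

    with the nonlinear component approximated by a B-spline basis $\{B_k\}$, as the keywords indicate.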
  12. By: Richard M. Bittman; Joseph P. Romano; Carlos Vallarino; Michael Wolf
    Abstract: We present a theoretical basis for testing related endpoints. Typically, it is known how to construct tests of the individual hypotheses, and the problem is how to combine them into a multiple test procedure that controls the familywise error rate. Using the closure method, we emphasize the role of consonant procedures, from an interpretive as well as a theoretical viewpoint. Surprisingly, even if each intersection test has an optimality property, the overall procedure obtained by applying closure to these tests may be inadmissible. We introduce a new procedure, which is consonant and has a maximin property under the normal model. The results are then applied to PROactive, a clinical trial designed to investigate the effectiveness of a glucose-lowering drug on macrovascular outcomes among patients with type 2 diabetes.
    Keywords: Closure Method, Consonance, Familywise Error Rate, Multiple Endpoints, Multiple Testing, O’Brien’s method.
    JEL: C12 C14
    Date: 2008–07
    URL: http://d.repec.org/n?u=RePEc:zur:iewwpx:307&r=ecm
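    A bare-bones illustration of the closure method the paper builds on, using Bonferroni local tests purely for concreteness; the paper's point is precisely that better, consonant local tests exist.

```python
from itertools import combinations

def closed_testing(pvals, alpha=0.05):
    """Reject H_i iff every intersection hypothesis containing i is
    rejected by its local test (here: Bonferroni, for illustration)."""
    m = len(pvals)
    def local_reject(S):
        return min(pvals[j] for j in S) <= alpha / len(S)
    rejected = []
    for i in range(m):
        if all(local_reject(S)
               for r in range(1, m + 1)
               for S in combinations(range(m), r) if i in S):
            rejected.append(i)
    return rejected

print(closed_testing([0.01, 0.04, 0.30]))  # -> [0]; equivalent to Holm here
```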
  13. By: Stanislav Anatolyev (New Economic School); Grigory Kosenok (New Economic School)
    Abstract: Sequential procedures of testing for structural stability do not provide enough guidance on the shape of boundaries that are used to decide on acceptance or rejection, requiring only that the overall size of the test is asymptotically controlled. We introduce and motivate a reasonable criterion for a shape of boundaries which requires that the test size be uniformly distributed over the testing period. Under this criterion, we numerically construct boundaries for most popular sequential tests that are characterized by a test statistic behaving asymptotically either as a Wiener process or Brownian bridge. We handle this problem both in a context of retrospecting a historical sample and in a context of monitoring newly arriving data. We tabulate the boundaries by fitting them to certain flexible but parsimonious functional forms. Interesting patterns emerge in an illustrative application of sequential tests to the Phillips curve model.
    Keywords: Structural stability; sequential tests; CUSUM; retrospection; monitoring; boundaries; asymptotic size
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:cfr:cefirw:w0123&r=ecm
  14. By: Andrea Carriero (Queen Mary, University of London); George Kapetanios (Queen Mary, University of London); Massimiliano Marcellino (European University Institute and Bocconi University)
    Abstract: Models based on economic theory have serious difficulty forecasting exchange rates better than simple univariate driftless random walk models, especially at short horizons. Multivariate time series models suffer from the same problem. In this paper, we propose to forecast exchange rates with a large Bayesian VAR (BVAR), using a panel of 33 exchange rates vis-a-vis the US Dollar. Since exchange rates tend to co-move, a large set of them can contain useful information for forecasting. In addition, we adopt a driftless random walk prior, so that cross-dynamics matter for forecasting only if there is strong evidence of them in the data. We produce forecasts for all 33 exchange rates in the panel, and show that our model produces systematically better forecasts than a random walk for most of the countries, and at any forecast horizon, including 1-step ahead.
    Keywords: Exchange rates, Forecasting, Bayesian VAR
    JEL: C53 C11 F31
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp634&r=ecm
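    Schematically (my gloss; see the paper for the exact prior), the driftless random walk prior centers the VAR $y_t = A_1 y_{t-1} + \dots + A_p y_{t-p} + e_t$ on $E[A_1] = I$ and $E[A_j] = 0$ for $j > 1$, with no drift term favored a priori; Minnesota-style shrinkage of this kind is what keeps a 33-variable system estimable while letting cross-dynamics enter only where the data insist.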
  15. By: Dominique Guégan; Justin Leroux (IEA, HEC Montréal)
    Abstract: We propose a novel methodology for forecasting chaotic systems which uses information on local Lyapunov exponents (LLEs) to improve upon existing predictors by correcting for their inevitable bias. Using simulated data on the nearest-neighbor predictor, we show that accuracy gains can be substantial and that the candidate selection problem identified in Guégan and Leroux (2009) can be solved irrespective of the value of LLEs. An important corollary follows: the focal value of zero, which traditionally distinguishes order from chaos, plays no role whatsoever when forecasting deterministic systems.
    Keywords: Chaos theory, Lyapunov exponent, Lorenz attractor, Rössler attractor, Monte Carlo Simulations.
    JEL: C15 C22 C53 C65
    Date: 2008–09
    URL: http://d.repec.org/n?u=RePEc:iea:carech:0810&r=ecm
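    To fix ideas, a local Lyapunov exponent for a one-dimensional map, averaging $\log|f'(x_t)|$ over a short horizon. The logistic map stands in for the systems studied in the paper, which works with attractors such as Lorenz and Rössler and a nearest-neighbor predictor.

```python
import numpy as np

f = lambda x: 4.0 * x * (1.0 - x)    # logistic map, chaotic regime
fprime = lambda x: 4.0 - 8.0 * x

def local_lyapunov(x0, h):
    """Average log-derivative along the next h steps from x0."""
    x, s = x0, 0.0
    for _ in range(h):
        s += np.log(abs(fprime(x)))
        x = f(x)
    return s / h

# Short-horizon LLEs fluctuate around the global exponent log(2) ~ 0.693
# and can be negative for some starting points, which is the sense in
# which the focal value of zero loses its special status locally.
print(local_lyapunov(0.2, 5), local_lyapunov(0.2, 500))
```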
  16. By: Busetti, F.; Harvey, A.
    Abstract: A copula defines the probability that observations from two time series lie below given quantiles. It is proposed that stationarity tests constructed from indicator variables be used to test against the hypothesis that the copula is changing over time. Tests associated with different quantiles may point to changes in different parts of the copula, with the lower quantiles being of particular interest in financial applications concerned with risk. Tests located at the median provide an overall test of a changing relationship. The properties of various tests are compared and it is shown that they are still effective if pre-filtering is carried out to correct for changing volatility or, more generally, changing quantiles. Applying the tests to daily stock return indices in Korea and Thailand over the period 1995-9 indicates that the relationship between them is not constant over time.
    Keywords: Concordance; quantile; rank correlation; stationarity test; tail dependence.
    Date: 2008–08
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:0841&r=ecm
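    The building block, schematically (notation mine): for a chosen quantile $q$, form indicators $i_t = \mathbf{1}(u_t \le q,\, v_t \le q)$ from the (pre-filtered) series and apply a stationarity statistic of the familiar partial-sum form

    $$\eta = \frac{1}{T^{2}\hat{\sigma}^{2}} \sum_{t=1}^{T} \Big( \sum_{s=1}^{t} \big( i_s - \bar{i}\, \big) \Big)^{2},$$

    so that persistent drift in the indicator mean, i.e. a changing copula at that quantile, inflates the partial sums.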
  17. By: Dardanoni, V; Li Donni, P
    Abstract: In two important recent papers, Finkelstein and McGarry [25] and Finkelstein and Poterba [28] propose a new test for asymmetric information in insurance markets that considers explicitly unobserved heterogeneity in insurance demand. In this paper we propose an alternative implementation of the Finkelstein-McGarry-Poterba test based on the identification of unobservable types by use of finite mixture models. The actual implementation of our test follows some recent advances on marginal modelling as applied to latent class analysis; formal testing procedures for the null of asymmetric information and for the hypothesis that private information is indeed multidimensional can be performed by imposing restrictions on the behavior of these unobservable types. To show the potential applicability of our approach, we look at the long term insurance market as analyzed in Finkelstein and McGarry [25], where we also find strong evidence for both asymmetric information and multidimensional unobserved heterogeneity.
    Keywords: Asymmetric Information, Unobservable Types, Latent Class Analysis, Long Term Insurance Market.
    JEL: D82 G22 I11
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:08/26&r=ecm
  18. By: Francis X. Diebold (Department of Economics, University of Pennsylvania); Georg H. Strasser (Department of Economics, Boston College)
    Abstract: We argue for incorporating the financial economics of market microstructure into the financial econometrics of asset return volatility estimation. In particular, we use market microstructure theory to derive the cross-correlation function between latent returns and market microstructure noise, which feature prominently in the recent volatility literature. The cross-correlation at zero displacement is typically negative, and cross-correlations at nonzero displacements are positive and decay geometrically. If market makers are sufficiently risk averse, however, the cross-correlation pattern is inverted. Our results are useful for assessing the validity of the frequently assumed independence of latent price and microstructure noise, for explaining observed cross-correlation patterns, for predicting as-yet undiscovered patterns, and for making informed conjectures as to improved volatility estimation methods.
    Keywords: Realized volatility, Market microstructure theory, High-frequency data, Financial econometrics
    JEL: G14 G20 D82 D83 C51
    Date: 2008–10–09
    URL: http://d.repec.org/n?u=RePEc:pen:papers:08-038&r=ecm
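    In the standard notation of this literature: the observed log-price is $p_t = p_t^{*} + u_t$, so observed returns are $r_t = r_t^{*} + u_t - u_{t-1}$, and the object derived here from microstructure theory is the cross-correlation function $\mathrm{corr}(r_t^{*}, u_{t+k})$ across displacements $k$, rather than assuming $u$ independent of $p^{*}$ as much of the realized-volatility literature does.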
  19. By: Mustapha Belkhouja (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille - Université de la Méditerranée - Aix-Marseille II - Université Paul Cézanne - Aix-Marseille III - Ecole des Hautes Etudes en Sciences Sociales - CNRS : UMR6579); Imene Mootamri (GREQAM); Mohamed Boutahar (GREQAM)
    Abstract: The aim of this paper is to study the dynamic evolution of the inflation rate. The model is constructed by extending the ARFIMA-GARCH model to an ARFIMA model with a time-varying GARCH component, in which the transition from one regime to another evolves smoothly over time. We show by Monte Carlo experiments that the parameter constancy tests perform well. We then apply this new model to eight countries from Europe, Japan and Canada and find that it is appropriate for six of these countries.
    Keywords: ARFIMA model, Generalised autoregressive conditional heteroscedasticity model, Inflation rate, Long memory process, Nonlinear time series, Time-varying parameter model
    Date: 2008–10–20
    URL: http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-00331986_v1&r=ecm
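    A schematic version of the model, assuming (as the smooth-transition label suggests) a logistic transition in rescaled time; the exact specification is in the paper. The mean follows an ARFIMA process, $\phi(L)(1-L)^{d}(y_t - \mu) = \theta(L)\varepsilon_t$, while the conditional variance is GARCH with a smoothly time-varying level,

    $$h_t = \omega_0 + \omega_1 G(t/T; \gamma, c) + \alpha\, \varepsilon_{t-1}^{2} + \beta\, h_{t-1}, \qquad G(s; \gamma, c) = \left(1 + e^{-\gamma(s - c)}\right)^{-1},$$

    so the variance regime shifts gradually rather than at a sharp break date.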
  20. By: Oliver Blaskowitz; Helmut Herwartz
    Abstract: The paper proposes a data-driven adaptive model selection strategy. The selection criterion measures economic ex-ante forecasting content by means of trading implied cash flows. Empirical evidence suggests that the proposed strategy is neither exposed to selection bias nor to the risk of choosing excessively poor models from a parameterized class of candidate specifications.
    Keywords: Model selection, Principal components, Factor analysis, Ex-ante forecasting, EURIBOR swap term structure, Trading strategies.
    JEL: C32 C53 E43 G29
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-064&r=ecm
  21. By: In Choi; Eiji Kurozumi
    Abstract: In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task since the true model is of infinite dimension. This paper justifies using the conventional formulas of those model selection criteria for the leads-and-lags cointegrating regression. The numbers of leads and lags can then be selected in a principled way using the model selection criteria. Simulation results regarding the bias and mean squared error of the long-run coefficient estimates are reported. It is found that the model selection criteria are successful in reducing bias and mean squared error relative to conventional, fixed selection rules. Among the model selection criteria, the BIC appears to be most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, selection rules without the restriction that the numbers of leads and lags be the same have an advantage over those with it.
    Keywords: Cointegration, Leads-and-lags regression, AIC, Corrected AIC, BIC, Cp
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08-006&r=ecm
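    The leads-and-lags (dynamic OLS) regression in question, in standard notation:

    $$y_t = \beta' x_t + \sum_{j=-K_1}^{K_2} \pi_j' \Delta x_{t-j} + u_t,$$

    where the leads and lags of $\Delta x_t$ soak up the endogeneity of the I(1) regressors; the criteria studied here choose $K_1$ and $K_2$, and the results favor allowing $K_1 \ne K_2$.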
  22. By: V. V. Chari; Patrick J. Kehoe; Ellen R. McGrattan
    Abstract: The central finding of the recent structural vector autoregression (SVAR) literature with a differenced specification of hours is that technology shocks lead to a fall in hours. Researchers have used this finding to argue that real business cycle models are unpromising. We subject this SVAR specification to a natural economic test by showing that when applied to data generated from a multiple-shock business cycle model, the procedure incorrectly concludes that the model could not have generated the data as long as demand shocks play a nontrivial role. We also test another popular specification, which uses the level of hours, and show that with nontrivial demand shocks, it cannot distinguish between real business cycle models and sticky price models. The crux of the problem for both SVAR specifications is that available data necessitate a VAR with a small number of lags and, when demand shocks play a nontrivial role, such a VAR is a poor approximation to the model's infinite order VAR.
    JEL: C32 C51 E13 E2 E3 E32 E37
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:14430&r=ecm
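    The identification scheme under scrutiny, schematically: writing the MA representation $\Delta y_t = C(L)\varepsilon_t$ for, say, productivity growth and hours, the long-run restriction makes the cumulative impact matrix $C(1)$ lower triangular, so that only the technology shock moves labor productivity in the long run. The paper's critique, per the abstract, is not that this restriction is false but that a short-lag VAR approximates the model's infinite-order VAR poorly when demand shocks matter.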
  23. By: Mario Cerrato; Christian de Peretti; Chris Stewart
    Abstract: This paper applies recently developed time series and heterogeneous panel nonlinear unit root tests to 24 OECD and 33 non-OECD countries’ consumption-income ratios over the period 1951–2003. This extends evidence provided in the recent literature to consider nonlinear adjustment in time series and panel unit root tests, and substantially expands both time series and cross sectional dimensions of data analysed. We find that there is nonlinear reversion to a mean or trend for just over half of OECD countries and just under half of non-OECD countries.
    Keywords: consumption-income ratio, heterogeneous panel nonlinear unit root test
    JEL: C12 C33 D12
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:gla:glaewp:2008_27&r=ecm
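    The workhorse nonlinear unit root regression in this literature (whether in exactly this form here, see the paper) is of the Kapetanios-Shin-Snell type,

    $$\Delta y_t = \delta\, y_{t-1}^{3} + u_t, \qquad H_0\!: \delta = 0 \ \text{(unit root)} \quad \text{vs.} \quad H_1\!: \delta < 0,$$

    the cubic term arising from a Taylor approximation to an ESTAR model, with panel versions combining the individual-country statistics.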
  24. By: Greg Hannsgen
    Abstract: Since Christopher Sims's "Macroeconomics and Reality" (1980), macroeconomists have used structural VARs, or vector autoregressions, for policy analysis. Constructing the impulse-response functions and variance decompositions that are central to this literature requires factoring the variance-covariance matrix of innovations from the VAR. This paper presents evidence consistent with the hypothesis that at least some elements of this matrix are infinite for one monetary VAR, as the innovations have stable, non-Gaussian distributions, with characteristic exponents ranging from 1.5504 to 1.7734 according to ML estimates. Hence, Cholesky and other factorizations that would normally be used to identify structural residuals from the VAR are impossible.
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:lev:wrkpap:wp_546&r=ecm
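    For context: an $\alpha$-stable random variable with characteristic exponent $\alpha < 2$ has $E|X|^{p} < \infty$ only for $p < \alpha$, so estimates of $\alpha$ between roughly 1.55 and 1.77 imply innovations with finite means but infinite variances, which is why Cholesky-style factorizations of the innovation covariance matrix become unavailable.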
  25. By: Maximilian Vermorken (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels.); Ariane Szafarz (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels and DULBEA, Université Libre de Bruxelles, Brussels.); Hugues Pirotte (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels)
    Abstract: Standard sector classification frameworks present drawbacks that might hinder portfolio managers. This paper introduces a new non-parametric approach to equity classification. Returns are decomposed into their fundamental drivers through Independent Component Analysis (ICA). Stocks are then classified according to the relative importance of the identified fundamental drivers for their returns. A method is developed permitting the quantification of these dependencies, using a similarity index. Hierarchical clustering allows for grouping the stocks into new classes. The resulting classes are compared with those from the 2-digit GICS system for U.S. blue chip companies. It is shown that specific relations between stocks are not captured by the GICS framework. The method is applied to two different samples and tested for robustness.
    Keywords: equity sectors, industry classification, portfolio management
    JEL: G11 G19
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:sol:wpaper:08-032&r=ecm
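    A compressed sketch of the pipeline described above, with the similarity index simplified to a plain distance between normalized ICA loadings (the paper's non-Gaussian similarity index is more refined); data and dimensions are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
drivers = rng.laplace(size=(500, 5))                  # non-Gaussian factors
R = drivers @ rng.normal(size=(5, 20)) + 0.1 * rng.normal(size=(500, 20))

ica = FastICA(n_components=5, random_state=0)
S = ica.fit_transform(R)        # T x 5 estimated independent components
A = np.abs(ica.mixing_)         # N x 5 loadings of each stock on the drivers
W = A / A.sum(axis=1, keepdims=True)   # relative importance per stock

Z = linkage(pdist(W), method="average")          # hierarchical clustering
labels = fcluster(Z, t=4, criterion="maxclust")  # 4 return-driven classes
print(labels)
```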
  26. By: Konstantopoulos, Spyros (Boston College)
    Abstract: In experimental designs with nested structures entire groups (such as schools) are often assigned to treatment conditions. Key aspects of the design in these cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. However, the units at each level of the hierarchy have a cost associated with them and thus researchers need to decide on sample sizes given a certain budget when designing their studies. This paper provides methods for computing power within an optimal design framework (that incorporates costs of units in all three levels) for three-level cluster randomized balanced designs with two levels of nesting. The optimal sample sizes are a function of the variances at each level and the cost of each unit. Overall, larger effect sizes, smaller intraclass correlations at the second and third level, and lower cost of level-3 and level-2 units result in higher estimates of power.
    Keywords: experimental design, statistical power, optimal sampling
    JEL: I20
    Date: 2008–10
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3753&r=ecm
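    For intuition, the familiar two-level version of the cost-constrained allocation (the paper derives the three-level analogues): with cost $c_1$ per individual, cost $c_2$ per cluster, and variance components $\sigma_1^2$ (within) and $\sigma_2^2$ (between), the power-maximizing cluster size for a fixed budget is

    $$n^{*} = \sqrt{\frac{c_2}{c_1} \cdot \frac{\sigma_1^{2}}{\sigma_2^{2}}},$$

    after which the budget determines the number of clusters; applying the same logic at two levels of nesting yields the optimal level-1 and level-2 sample sizes in the three-level design.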

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.