
New Economics Papers on Econometrics 
By:  Anthony W. Lynch; Jessica A. Wachter 
Abstract:  Many applications in financial economics use data series with different starting or ending dates. This paper describes estimation methods, based on the generalized method of moments (GMM), which make use of all available data for each moment condition. We introduce two asymptotically equivalent estimators that are consistent, asymptotically normal, and more efficient asymptotically than standard GMM. We apply these methods to estimating predictive regressions in international data and show that the use of the full sample affects point estimates and standard errors both for assets with data available for the full period and for assets with data available for a subset of the period. Monte Carlo experiments demonstrate that the reductions hold for small-sample standard errors as well as asymptotic ones. These methods are extended to more general patterns of missing data, and are shown to be more efficient than estimators that ignore intervals of the data, and thus more efficient than standard GMM. 
JEL:  C32 G12 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14411&r=ecm 
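The core idea of using the longer series to sharpen estimates from the shorter one can be illustrated with a small sketch. This is a hedged simplification, not the paper's exact estimator: `adjusted_mean` and the helper functions are hypothetical names, and the adjustment shown is just the regression-based shift of a short-sample mean toward the information in the overlapping long sample.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def _cov(xs, ys):
    # sample covariance with the (n - 1) divisor
    mx, my = _mean(xs), _mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def adjusted_mean(long_series, short_series):
    """Estimate the mean of the short series using the long series too.

    The short series is assumed to overlap the last len(short_series)
    observations of the long series.  The short-sample mean is shifted by
    the regression-weighted gap between the full-sample and overlap-sample
    means of the long series, so information from the longer history is
    not thrown away.
    """
    k = len(short_series)
    x_over = long_series[-k:]
    beta = _cov(x_over, short_series) / _cov(x_over, x_over)
    return _mean(short_series) + beta * (_mean(long_series) - _mean(x_over))
```

When the two series are uncorrelated over the overlap, the adjustment vanishes and the estimator reduces to the ordinary short-sample mean.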
By:  Alain Guay (Department of Economics, Universite du Quebec a Montreal); Jean-François Lamarche (Department of Economics, Brock University) 
Abstract:  This paper proposes Pearson-type statistics based on implied probabilities to detect structural change. The class of generalized empirical likelihood estimators (see Smith, 1997) assigns a set of probabilities to each observation such that moment conditions are satisfied. These restricted probabilities are called implied probabilities. Implied probabilities may also be constructed for standard GMM (see Back and Brown, 1993). The proposed test statistics for structural change are based on the information content of these implied probabilities. We consider cases of structural change with an unknown breakpoint, which can occur in the parameters of interest or in the overidentifying restrictions used to estimate these parameters. The test statistics considered here have good size and power properties. 
Keywords:  Generalized empirical likelihood, generalized method of moments, parameter instability, structural change 
JEL:  C12 C32 
Date:  2005–07 
URL:  http://d.repec.org/n?u=RePEc:brk:wpaper:0804&r=ecm 
By:  Xiaohong Chen (Yale University); Roger Koenker (University of Illinois at Urbana-Champaign); Zhijie Xiao (Boston College) 
Abstract:  Parametric copulas are shown to be attractive devices for specifying quantile autoregressive models for nonlinear time series. Estimation of local, quantile-specific copula-based time series models offers some salient advantages over classical global parametric approaches. Consistency and asymptotic normality of the proposed quantile estimators are established under mild conditions, allowing for global misspecification of parametric copulas and marginals, and without assuming any mixing rate condition. These results lead to a general framework for inference and model specification testing of extreme conditional value-at-risk for financial time series data. 
Keywords:  Quantile autoregression, Copula, Ergodic nonlinear Markov models 
JEL:  C10 C13 C22 
Date:  2008–10–08 
URL:  http://d.repec.org/n?u=RePEc:boc:bocoec:691&r=ecm 
By:  Luiza Badin; Cinzia Daraio; Léopold Simar 
Abstract:  In productivity analysis an important issue is to detect how external (environmental) factors, exogenous to the production process and not under the control of the producer, might influence the production process and the resulting efficiency of the firms. Most of the traditional approaches proposed in the literature have serious drawbacks. An alternative approach is to describe the production process as being conditioned by a given value of the environmental variables (Cazals, Florens and Simar, 2002, Daraio and Simar, 2005). This defines conditional efficiency measures where the production set in the input × output space may depend on the value of the external variables. The statistical properties of nonparametric estimators of these conditional measures are now established (Jeong, Park and Simar, 2008). These involve the estimation of a nonstandard conditional distribution function, which requires the specification of a smoothing parameter (a bandwidth). So far, only the asymptotic optimal order of this bandwidth has been established, which is of little use to the practitioner. In this paper we fill this gap and propose a data-driven technique for selecting this parameter in practice. The approach, based on a Least Squares Cross Validation (LSCV) procedure, provides an optimal bandwidth that minimizes an appropriate integrated Mean Squared Error (MSE). The method is carefully described and exemplified with some simulated data with univariate and multivariate environmental factors. An application on real data (performances of Mutual Funds) illustrates how this new optimal method of bandwidth selection outperforms former methods. 
Keywords:  Nonparametric efficiency estimation, conditional efficiency measures, environmental factors, conditional distribution function, bandwidth. 
JEL:  C14 C40 C60 D20 
Date:  2008–10–24 
URL:  http://d.repec.org/n?u=RePEc:ssa:lemwps:2008/22&r=ecm 
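The least-squares cross-validation principle underlying the paper's bandwidth choice can be sketched for the simplest case, an ordinary Gaussian kernel density estimate (the paper's setting, a conditional distribution function for efficiency estimation, is more involved). The function names and the grid-search selection are illustrative assumptions.

```python
import math

def lscv_score(data, h):
    """LSCV criterion for a univariate Gaussian kernel density estimate:
    integral of the squared estimate minus twice the mean leave-one-out
    density, the standard unbiased-risk approximation."""
    n = len(data)
    # integral of fhat^2: the Gaussian kernel convolved with itself is a
    # Gaussian with twice the variance, hence the /4 in the exponent
    int_sq = sum(math.exp(-((xi - xj) / h) ** 2 / 4.0)
                 for xi in data for xj in data) / (n * n * h * 2.0 * math.sqrt(math.pi))
    # sum of leave-one-out density estimates at the observations
    loo = 0.0
    for i, xi in enumerate(data):
        s = sum(math.exp(-((xi - xj) / h) ** 2 / 2.0)
                for j, xj in enumerate(data) if j != i)
        loo += s / ((n - 1) * h * math.sqrt(2.0 * math.pi))
    return int_sq - 2.0 * loo / n

def select_bandwidth(data, grid):
    """Pick the bandwidth on the grid that minimizes the LSCV score."""
    return min(grid, key=lambda h: lscv_score(data, h))
```

In practice one would minimize over a fine grid or with a one-dimensional optimizer rather than the coarse grid shown here.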
By:  Joao Santos Silva; Geert Dhaene 
Abstract:  This paper suggests a simple specification test to check the adequacy of the assumptions made about the distribution of individual effects in models where these unobservable random variables are integrated out by quadrature methods. Because the proposed test checks the specification of the finite-mixture analogue of the model of interest, it also has power to detect other forms of misspecification. Additionally, it is shown that it is easy to increase the flexibility of models with unobserved individual effects. The results of a Monte Carlo study and an application using a well-known data set are presented to illustrate the finite-sample properties of the proposed methods and their implementation in practice. 
Date:  2008–10–20 
URL:  http://d.repec.org/n?u=RePEc:esx:essedp:661&r=ecm 
By:  Daisuke Nagakura (Institute for Monetary and Economic Studies, Bank of Japan (Email: daisuke.nagakura@boj.or.jp)) 
Abstract:  In this paper, we propose a simple methodology for investigating how shocks to trend and cycle are correlated in unidentified unobserved components models, in which the correlation is not identified. The proposed methodology is applied to U.S. and U.K. real GDP data. We find that the correlation parameters are negative for both countries. We also investigate how changing the identification restriction results in different trend and cycle estimates. It is found that estimates of the trend and cycle can vary substantially depending on the identification restrictions imposed. 
Keywords:  Business Cycle Analysis, Trend, Cycle, Permanent Component, Transitory Component, Unobserved Components Model 
JEL:  C01 E32 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:ime:imedps:08e24&r=ecm 
By:  Harvey, A. 
Abstract:  A copula models the relationships between variables independently of their marginal distributions. When the variables are time series, the copula may change over time. A statistical framework is suggested for tracking these changes over time. When the marginal distributions change, prefiltering is necessary before constructing the indicator variables on which the tracking of the copula is based. This entails solving an even more basic problem, namely estimating time-varying quantiles. The methods are applied to the Hong Kong and Korean stock market indices. Some interesting movements are detected, particularly after the attack on the Hong Kong dollar in 1997. 
Keywords:  Concordance; contagion; exponentially weighted moving average; quantiles; signal extraction; tail dependence. 
JEL:  C14 C22 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0839&r=ecm 
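The tracking idea can be sketched with an exponentially weighted moving average of a joint tail indicator. This is a hedged illustration of the mechanism (an EWMA applied to indicator variables built from quantiles), not the paper's exact estimator; the function name, the smoothing constant, and the initialization at the independence value are assumptions.

```python
def ewma_concordance(u, v, lam=0.94, q=0.5):
    """Track P(U <= q, V <= q) over time with an EWMA of the joint
    indicator.  u and v are series already transformed to approximate
    uniforms, e.g. probability-integral transforms of prefiltered returns.

    Under independence the tracked value is q*q; persistent readings above
    that level signal positive (lower-tail) dependence.
    """
    p = q * q  # start the recursion at the independence value
    path = []
    for ut, vt in zip(u, v):
        ind = 1.0 if (ut <= q and vt <= q) else 0.0
        p = lam * p + (1.0 - lam) * ind
        path.append(p)
    return path
```

Choosing q below 0.5 focuses the tracker on the lower tail, which is the region of interest in the contagion applications the abstract mentions.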
By:  Aviv Nevo; Adam M. Rosen 
Abstract:  Dealing with endogenous regressors is a central challenge of applied research. The standard solution is to use instrumental variables that are assumed to be uncorrelated with unobservables. We instead assume (i) the correlation between the instrument and the error term has the same sign as the correlation between the endogenous regressor and the error term, and (ii) that the instrument is less correlated with the error term than is the endogenous regressor. Using these assumptions, we derive analytic bounds for the parameters. We demonstrate the method in two applications. 
JEL:  C30 C31 C33 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14434&r=ecm 
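In the simplest one-regressor case, the flavor of the bounding argument can be sketched by bracketing the slope between the OLS and IV estimates. This is a deliberately simplified illustration under the paper's sign and relative-strength assumptions, not the paper's exact analytic bounds; all function names are hypothetical.

```python
def _mean(xs):
    return sum(xs) / len(xs)

def _cov(xs, ys):
    mx, my = _mean(xs), _mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def slope_ols(x, y):
    """OLS slope: biased when x is endogenous."""
    return _cov(x, y) / _cov(x, x)

def slope_iv(z, x, y):
    """IV slope using z: biased in the same direction when z is
    imperfect, but less so under the relative-correlation assumption."""
    return _cov(z, y) / _cov(z, x)

def illustrative_bounds(z, x, y):
    """Report the interval spanned by the two estimates as an
    illustrative identified set for the slope."""
    b_ols, b_iv = slope_ols(x, y), slope_iv(z, x, y)
    return min(b_ols, b_iv), max(b_ols, b_iv)
```

With a perfectly valid instrument the two estimates coincide and the interval collapses to a point, which is the intuition for why weaker assumptions yield an interval rather than a point estimate.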
By:  Hugo Gerard; Kristoffer Nimark 
Abstract:  This paper combines multivariate density forecasts of output growth, inflation and interest rates from a suite of models. An out-of-sample weighting scheme based on the predictive likelihood, as proposed by Eklund and Karlsson (2005) and Andersson and Karlsson (2007), is used to combine the models. Three classes of models are considered: a Bayesian vector autoregression (BVAR), a factor-augmented vector autoregression (FAVAR) and a medium-scale dynamic stochastic general equilibrium (DSGE) model. Using Australian data, we find that, at short forecast horizons, the Bayesian VAR model is assigned the most weight, while at intermediate and longer horizons the factor model is preferred. The DSGE model is assigned little weight at all horizons, a result that can be attributed to the DSGE model producing density forecasts that are very wide when compared with the actual distribution of observations. While a density forecast evaluation exercise reveals little formal evidence that the optimally combined densities are superior to those from the best-performing individual model, or to a simple equal-weighting scheme, this may be a result of the short sample available. 
Keywords:  Density forecasts, combining forecasts, predictive criteria 
Date:  2008–08 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:1117&r=ecm 
By:  Harvey, A.; Chakravarty, T. 
Abstract:  The GARCH-t model is widely used to predict volatility. However, modeling the conditional variance as a linear combination of past squared observations may not be the best approach if the standardized observations are non-Gaussian. A simple modification lets the conditional variance, or its logarithm, depend on past values of the score of a t-distribution. The fact that the transformed variable has a beta distribution makes it possible to derive the properties of the resulting models. A practical consequence is that the conditional variance is more resistant to extreme observations. Extensions to deal with leverage and more than one component are discussed, as are the implications of distributions other than Student's t. 
Keywords:  Conditional heteroskedasticity; leverage; robustness; score; Student's t; volatility. 
JEL:  C22 G10 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0840&r=ecm 
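The score-driven recursion the abstract describes can be sketched in a few lines: log-volatility is updated by the score of a Student-t density, which is bounded, so a single extreme observation cannot blow up the variance the way a squared return can in a standard GARCH. This is a hedged sketch of the idea; the function name, parameter values, and initialization are assumptions, not the authors' specification.

```python
import math

def score_driven_logvol(y, omega=0.0, phi=0.95, kappa=0.05, nu=5.0):
    """Update log-scale lam_t with the score of a Student-t density.

    The score term u is bounded in [-1, nu], so the recursion is robust
    to outliers: even y_t -> infinity moves lam by at most kappa * nu.
    """
    lam = omega  # start at the unconditional level
    path = [lam]
    for yt in y:
        e2 = math.exp(2.0 * lam)
        # bounded score of the t density with respect to lam
        u = (nu + 1.0) * yt * yt / (nu * e2 + yt * yt) - 1.0
        lam = omega * (1.0 - phi) + phi * lam + kappa * u
        path.append(lam)
    return path
```

Compare this with a plain GARCH update, where the term driving the variance is the raw squared return and a single outlier can dominate the recursion for many periods.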
By:  Neocleous, Tereza (University of Glasgow); Portnoy, Stephen (University of Illinois) 
Abstract:  Censored Regression Quantile (CRQ) methods provide a powerful and flexible approach for the analysis of censored survival data when standard linear models are felt to be appropriate. In many cases, however, greater flexibility is desired to go beyond the usual multiple regression paradigm. One area of common interest is that of partially linear models, where one (or more) of the explanatory variables is assumed to act on the response through a nonlinear function. Here the CRQ approach (Portnoy, 2003) is extended to such a partially linear setting. Basic consistency results are presented. A simulation experiment and an analysis of unemployment data justify the use of the partially linear approach over methods based on the Cox proportional hazards regression model and methods not permitting nonlinearity. 
Keywords:  quantile regression; partially linear models; B-splines; censored data; unemployment duration 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:irs:iriswp:200807&r=ecm 
By:  Richard M. Bittman; Joseph P. Romano; Carlos Vallarino; Michael Wolf 
Abstract:  We present a theoretical basis for testing related endpoints. Typically, it is known how to construct tests of the individual hypotheses, and the problem is how to combine them into a multiple test procedure that controls the familywise error rate. Using the closure method, we emphasize the role of consonant procedures, from an interpretive as well as a theoretical viewpoint. Surprisingly, even if each intersection test has an optimality property, the overall procedure obtained by applying closure to these tests may be inadmissible. We introduce a new procedure which is consonant and has a maximin property under the normal model. The results are then applied to PROactive, a clinical trial designed to investigate the effectiveness of a glucose-lowering drug on macrovascular outcomes among patients with type 2 diabetes. 
Keywords:  Closure Method, Consonance, Familywise Error Rate, Multiple Endpoints, Multiple Testing, O’Brien’s method. 
JEL:  C12 C14 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:zur:iewwpx:307&r=ecm 
By:  Stanislav Anatolyev (New Economic School); Grigory Kosenok (New Economic School) 
Abstract:  Sequential procedures of testing for structural stability do not provide enough guidance on the shape of boundaries that are used to decide on acceptance or rejection, requiring only that the overall size of the test is asymptotically controlled. We introduce and motivate a reasonable criterion for a shape of boundaries which requires that the test size be uniformly distributed over the testing period. Under this criterion, we numerically construct boundaries for most popular sequential tests that are characterized by a test statistic behaving asymptotically either as a Wiener process or Brownian bridge. We handle this problem both in a context of retrospecting a historical sample and in a context of monitoring newly arriving data. We tabulate the boundaries by fitting them to certain flexible but parsimonious functional forms. Interesting patterns emerge in an illustrative application of sequential tests to the Phillips curve model. 
Keywords:  Structural stability; sequential tests; CUSUM; retrospection; monitoring; boundaries; asymptotic size 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:cfr:cefirw:w0123&r=ecm 
By:  Andrea Carriero (Queen Mary, University of London); George Kapetanios (Queen Mary, University of London); Massimiliano Marcellino (European University Institute and Bocconi University) 
Abstract:  Models based on economic theory have serious problems forecasting exchange rates better than simple univariate driftless random walk models, especially at short horizons. Multivariate time series models suffer from the same problem. In this paper, we propose to forecast exchange rates with a large Bayesian VAR (BVAR), using a panel of 33 exchange rates vis-à-vis the US Dollar. Since exchange rates tend to comove, a large set of them can contain useful information for forecasting. In addition, we adopt a driftless random walk prior, so that cross-dynamics matter for forecasting only if there is strong evidence of them in the data. We produce forecasts for all 33 exchange rates in the panel, and show that our model produces systematically better forecasts than a random walk for most of the countries, at any forecast horizon, including 1-step ahead. 
Keywords:  Exchange rates, Forecasting, Bayesian VAR 
JEL:  C53 C11 F31 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:qmw:qmwecw:wp634&r=ecm 
By:  Dominique Guégan; Justin Leroux (IEA, HEC Montréal) 
Abstract:  We propose a novel methodology for forecasting chaotic systems which uses information on local Lyapunov exponents (LLEs) to improve upon existing predictors by correcting for their inevitable bias. Using simulated data on the nearest-neighbor predictor, we show that accuracy gains can be substantial and that the candidate selection problem identified in Guégan and Leroux (2009) can be solved irrespective of the value of LLEs. An important corollary follows: the focal value of zero, which traditionally distinguishes order from chaos, plays no role whatsoever when forecasting deterministic systems. 
Keywords:  Chaos theory, Lyapunov exponent, Lorenz attractor, Rössler attractor, Monte Carlo simulations. 
JEL:  C15 C22 C53 C65 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:iea:carech:0810&r=ecm 
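The baseline nearest-neighbor predictor the paper starts from is simple enough to sketch on a classic chaotic system, the logistic map. This shows only the uncorrected predictor, not the LLE-based bias correction that is the paper's contribution; the function names are illustrative.

```python
def logistic_series(x0, n, r=4.0):
    """Generate n points of the logistic map x -> r*x*(1-x), a standard
    chaotic test system."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def nn_predict(history):
    """One-step nearest-neighbor forecast: find the past point closest to
    the last observation and return that point's successor."""
    last = history[-1]
    # candidates exclude the final point, which has no observed successor
    i = min(range(len(history) - 1), key=lambda j: abs(history[j] - last))
    return history[i + 1]
```

The predictor's bias comes from the gap between the last observation and its nearest neighbor being amplified by the local dynamics, which is exactly what a local Lyapunov exponent measures, hence the paper's correction.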
By:  Busetti, F.; Harvey, A. 
Abstract:  A copula defines the probability that observations from two time series lie below given quantiles. It is proposed that stationarity tests constructed from indicator variables be used to test against the hypothesis that the copula is changing over time. Tests associated with different quantiles may point to changes in different parts of the copula, with the lower quantiles being of particular interest in financial applications concerned with risk. Tests located at the median provide an overall test of a changing relationship. The properties of various tests are compared and it is shown that they are still effective if prefiltering is carried out to correct for changing volatility or, more generally, changing quantiles. Applying the tests to daily stock return indices in Korea and Thailand over the period 1995–99 indicates that the relationship between them is not constant over time. 
Keywords:  Concordance; quantile; rank correlation; stationarity test; tail dependence. 
Date:  2008–08 
URL:  http://d.repec.org/n?u=RePEc:cam:camdae:0841&r=ecm 
By:  Dardanoni, V; Li Donni, P 
Abstract:  In two important recent papers, Finkelstein and McGarry [25] and Finkelstein and Poterba [28] propose a new test for asymmetric information in insurance markets that explicitly considers unobserved heterogeneity in insurance demand. In this paper we propose an alternative implementation of the Finkelstein-McGarry-Poterba test based on the identification of unobservable types by use of finite mixture models. The actual implementation of our test follows some recent advances in marginal modelling as applied to latent class analysis; formal testing procedures for the null of asymmetric information and for the hypothesis that private information is indeed multidimensional can be performed by imposing restrictions on the behavior of these unobservable types. To show the potential applicability of our approach, we look at the long-term insurance market as analyzed in Finkelstein and McGarry [25], where we also find strong evidence for both asymmetric information and multidimensional unobserved heterogeneity. 
Keywords:  Asymmetric Information, Unobservable Types, Latent Class Analysis, Long-Term Insurance Market. 
JEL:  D82 G22 I11 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:yor:hectdg:08/26&r=ecm 
By:  Francis X. Diebold (Department of Economics, University of Pennsylvania); Georg H. Strasser (Department of Economics, Boston College) 
Abstract:  We argue for incorporating the financial economics of market microstructure into the financial econometrics of asset return volatility estimation. In particular, we use market microstructure theory to derive the cross-correlation function between latent returns and market microstructure noise, which features prominently in the recent volatility literature. The cross-correlation at zero displacement is typically negative, and cross-correlations at nonzero displacements are positive and decay geometrically. If market makers are sufficiently risk averse, however, the cross-correlation pattern is inverted. Our results are useful for assessing the validity of the frequently assumed independence of latent price and microstructure noise, for explaining observed cross-correlation patterns, for predicting as-yet-undiscovered patterns, and for making informed conjectures as to improved volatility estimation methods. 
Keywords:  Realized volatility, Market microstructure theory, High-frequency data, Financial econometrics 
JEL:  G14 G20 D82 D83 C51 
Date:  2008–10–09 
URL:  http://d.repec.org/n?u=RePEc:pen:papers:08038&r=ecm 
By:  Mustapha Belkhouja (GREQAM - Groupement de Recherche en Économie Quantitative d'Aix-Marseille, Université de la Méditerranée Aix-Marseille II, Université Paul Cézanne Aix-Marseille III, École des Hautes Études en Sciences Sociales, CNRS: UMR 6579); Imene Mootamri (GREQAM); Mohamed Boutahar (GREQAM) 
Abstract:  The aim of this paper is to study the dynamic evolution of the inflation rate. The model is constructed by extending the ARFIMA-GARCH model to an ARFIMA model with a time-varying GARCH component, in which the transition from one regime to another evolves smoothly over time. We show by Monte Carlo experiments that the parameter constancy tests perform well. We then apply this new model to eight countries from Europe, Japan and Canada, and find that it is appropriate for six of these countries. 
Keywords:  ARFIMA model, Generalised autoregressive conditional heteroscedasticity model, Inflation rate, Long memory process, Nonlinear time series, Time-varying parameter model 
Date:  2008–10–20 
URL:  http://d.repec.org/n?u=RePEc:hal:wpaper:halshs00331986_v1&r=ecm 
By:  Oliver Blaskowitz; Helmut Herwartz 
Abstract:  The paper proposes a data-driven adaptive model selection strategy. The selection criterion measures economic ex-ante forecasting content by means of trading-implied cash flows. Empirical evidence suggests that the proposed strategy is neither exposed to selection bias nor to the risk of choosing excessively poor models from a parameterized class of candidate specifications. 
Keywords:  Model selection, Principal components, Factor analysis, Ex–ante forecasting, EURIBOR swap term structure, Trading strategies. 
JEL:  C32 C53 E43 G29 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008064&r=ecm 
By:  In Choi; Eiji Kurozumi 
Abstract:  In this paper, Mallows' (1973) Cp criterion, Akaike's (1973) AIC, Hurvich and Tsai's (1989) corrected AIC and the BIC of Akaike (1978) and Schwarz (1978) are derived for the leads-and-lags cointegrating regression. Deriving model selection criteria for the leads-and-lags regression is a nontrivial task since the true model is of infinite dimension. This paper justifies using the conventional formulas of those model selection criteria for the leads-and-lags cointegrating regression. The numbers of leads and lags can then be selected in a principled way using the model selection criteria. Simulation results regarding the bias and mean squared error of the long-run coefficient estimates are reported. It is found that the model selection criteria are successful in reducing bias and mean squared error relative to the conventional, fixed selection rules. Among the model selection criteria, the BIC appears to be most successful in reducing MSE, and Cp in reducing bias. We also observe that, in most cases, selection rules that do not restrict the numbers of leads and lags to be equal have an advantage over those that do. 
Keywords:  Cointegration, Leads-and-lags regression, AIC, Corrected AIC, BIC, Cp 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:hst:ghsdps:gd08006&r=ecm 
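The selection step the paper justifies can be sketched with the conventional BIC formula applied across candidate lead/lag configurations. The regression fitting itself is omitted here; the sketch assumes each candidate pair has already been estimated, yielding a residual sum of squares and a parameter count, and the function names are illustrative.

```python
import math

def bic(rss, n, k):
    """Conventional BIC for a regression with n observations, residual
    sum of squares rss, and k estimated parameters."""
    return n * math.log(rss / n) + k * math.log(n)

def select_leads_lags(results, n):
    """results maps (n_leads, n_lags) -> (rss, k) for each candidate
    configuration; return the pair with the smallest BIC."""
    return min(results, key=lambda pair: bic(results[pair][0], n, results[pair][1]))
```

Because the BIC penalty grows with log(n) per parameter, a richer lead/lag configuration is chosen only when it reduces the residual sum of squares enough to pay for the extra terms, which is how the criterion trades bias against variance in the long-run coefficient estimate.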
By:  V. V. Chari; Patrick J. Kehoe; Ellen R. McGrattan 
Abstract:  The central finding of the recent structural vector autoregression (SVAR) literature with a differenced specification of hours is that technology shocks lead to a fall in hours. Researchers have used this finding to argue that real business cycle models are unpromising. We subject this SVAR specification to a natural economic test by showing that when applied to data generated from a multipleshock business cycle model, the procedure incorrectly concludes that the model could not have generated the data as long as demand shocks play a nontrivial role. We also test another popular specification, which uses the level of hours, and show that with nontrivial demand shocks, it cannot distinguish between real business cycle models and sticky price models. The crux of the problem for both SVAR specifications is that available data necessitate a VAR with a small number of lags and, when demand shocks play a nontrivial role, such a VAR is a poor approximation to the model's infinite order VAR. 
JEL:  C32 C51 E13 E2 E3 E32 E37 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:nbr:nberwo:14430&r=ecm 
By:  Mario Cerrato; Christian de Peretti; Chris Stewart 
Abstract:  This paper applies recently developed time series and heterogeneous panel nonlinear unit root tests to 24 OECD and 33 non-OECD countries' consumption-income ratios over the period 1951–2003. This extends evidence provided in the recent literature by considering nonlinear adjustment in time series and panel unit root tests, and substantially expands both the time series and cross-sectional dimensions of the data analysed. We find that there is nonlinear reversion to a mean or trend for just over half of the OECD countries and just under half of the non-OECD countries. 
Keywords:  consumption-income ratio, heterogeneous panel nonlinear unit root test 
JEL:  C12 C33 D12 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:gla:glaewp:2008_27&r=ecm 
By:  Greg Hannsgen 
Abstract:  Since Christopher Sims's "Macroeconomics and Reality" (1980), macroeconomists have used structural VARs, or vector autoregressions, for policy analysis. Constructing the impulse-response functions and variance decompositions that are central to this literature requires factoring the variance-covariance matrix of innovations from the VAR. This paper presents evidence consistent with the hypothesis that at least some elements of this matrix are infinite for one monetary VAR, as the innovations have stable, non-Gaussian distributions, with characteristic exponents ranging from 1.5504 to 1.7734 according to ML estimates. Hence, Cholesky and other factorizations that would normally be used to identify structural residuals from the VAR are impossible. 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:lev:wrkpap:wp_546&r=ecm 
By:  Maximilian Vermorken (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels.); Ariane Szafarz (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels and DULBEA, Université Libre de Bruxelles, Brussels.); Hugues Pirotte (Centre Emile Bernheim, Solvay Business School, Université Libre de Bruxelles, Brussels) 
Abstract:  Standard sector classification frameworks present drawbacks that might hinder portfolio managers. This paper introduces a new nonparametric approach to equity classification. Returns are decomposed into their fundamental drivers through Independent Component Analysis (ICA). Stocks are then classified according to the relative importance of the identified fundamental drivers for their returns. A method is developed that quantifies these dependencies using a similarity index. Hierarchical clustering then groups the stocks into new classes. The resulting classes are compared with those of the 2-digit GICS system for U.S. blue-chip companies. It is shown that specific relations between stocks are not captured by the GICS framework. The method is applied to two different samples and tested for robustness. 
Keywords:  equity sectors, industry classification, portfolio management 
JEL:  G11 G19 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:sol:wpaper:08032&r=ecm 
By:  Konstantopoulos, Spyros (Boston College) 
Abstract:  In experimental designs with nested structures, entire groups (such as schools) are often assigned to treatment conditions. Key aspects of the design in these cluster randomized experiments include knowledge of the intraclass correlation structure and the sample sizes necessary to achieve adequate power to detect the treatment effect. However, the units at each level of the hierarchy have a cost associated with them, and thus researchers need to decide on sample sizes given a certain budget when designing their studies. This paper provides methods for computing power within an optimal design framework (one that incorporates the costs of units at all three levels) for three-level cluster randomized balanced designs with two levels of nesting. The optimal sample sizes are a function of the variances at each level and the cost of each unit. Overall, larger effect sizes, smaller intraclass correlations at the second and third levels, and lower costs of level-3 and level-2 units result in higher estimates of power. 
Keywords:  experimental design, statistical power, optimal sampling 
JEL:  I20 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:iza:izadps:dp3753&r=ecm 
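The cost-variance trade-off behind the optimal sample sizes can be sketched with the classic two-level simplification of this problem (the paper treats the three-level case). Under the standard result, the optimal number of units per cluster depends only on the cost ratio and the intraclass correlation; the function name and parameterization here are illustrative.

```python
import math

def optimal_cluster_size(cost_cluster, cost_unit, rho):
    """Optimal units per cluster in a two-level cluster-randomized design:
    n* = sqrt((c_cluster / c_unit) * (1 - rho) / rho),
    where rho is the intraclass correlation.  A standard simplification
    of the three-level problem treated in the paper."""
    return math.sqrt((cost_cluster / cost_unit) * (1.0 - rho) / rho)
```

The formula captures the abstract's qualitative findings in miniature: as the intraclass correlation rises, extra units within a cluster add less information, so the optimal design shifts the budget toward recruiting more clusters instead.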