
New Economics Papers on Econometrics 
By:  Jean-Marie Dufour; Lynda Khalaf; Maral Kichian 
Abstract:  Weak identification is likely to be prevalent in multi-equation macroeconomic models such as dynamic stochastic general equilibrium setups. Identification difficulties cause the breakdown of standard asymptotic procedures, making inference unreliable. While the extensive econometric literature now includes a number of identification-robust methods that are valid regardless of the identification status of models, these are mostly limited-information-based approaches, and applications have accordingly been made to single-equation models such as the New Keynesian Phillips Curve. In this paper, we develop a set of identification-robust econometric tools that, regardless of the model's identification status, are useful for estimating and assessing the fit of a system of structural equations. In particular, we propose a vector autoregression (VAR) based estimation and testing procedure that relies on inverting identification-robust multivariate statistics. The procedure is valid in the presence of endogeneity, structural constraints, identification difficulties, or any combination of these, and also provides summary measures of fit. Furthermore, it has the additional desirable feature of being robust to missing instruments, errors-in-variables, the specification of the data generating process, and the presence of contemporaneous correlation in the disturbances. We apply our methodology, using U.S. data, to the standard New Keynesian model such as the one studied in Clarida, Gali, and Gertler (1999). We find that, despite the presence of identification difficulties, our proposed method is able to shed some light on the fit of the considered model and, particularly, on the nature of the NKPC. 
Notably, our results show that (i) confidence intervals obtained using our system-based approach are generally tighter than their single-equation counterparts, and thus are more informative, (ii) most model coefficients are significant at conventional levels, and (iii) the NKPC is preponderantly forward-looking, though not purely so. 
Keywords:  Inflation and prices; Econometric and statistical methods 
JEL:  C52 C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:bca:bocawp:0919&r=ecm 
By:  Krajina, A. (Tilburg University, Center for Economic Research) 
Abstract:  An elliptical copula model is a distribution function whose copula is that of an elliptical distribution. The tail dependence function in such a bivariate model has a parametric representation with two parameters: a tail parameter and a correlation parameter. The correlation parameter can be estimated by robust methods based on the whole sample. Using the estimated correlation parameter as a plug-in estimator, we then estimate the tail parameter applying a modification of the method of moments approach proposed in the paper by J.H.J. Einmahl, A. Krajina and J. Segers [Bernoulli 14(4), 2008, 1003–1026]. We show that such an estimator is consistent and asymptotically normal. Also, we derive the joint limit distribution of the estimators of the two parameters. By a simulation study, we illustrate the small sample behavior of the estimator of the tail parameter and we compare its performance to that of the estimator proposed in the paper by C. Klüppelberg, G. Kuhn and L. Peng [Scandinavian Journal of Statistics 35(4), 2008, 701–718]. 
Keywords:  asymptotic normality; elliptical copula; elliptical distribution; meta-elliptical model; method of moments; semiparametric model; tail dependence 
JEL:  C13 C14 C16 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:dgr:kubcen:200942&r=ecm 
By:  Pierre Perron (Boston University); Yohei Yamamoto (Boston University) 
Abstract:  Elliott and Müller (2006) considered the problem of testing for general types of parameter variations, including infrequent breaks. They developed a framework that yields optimal tests, in the sense that they nearly attain some local Gaussian power envelope. The main ingredient in their setup is that the variance of the process generating the changes in the parameters must go to zero at a fast rate. They recommended the so-called qLL test, a partial-sums-type test based on the residuals obtained from the restricted model. We show that for breaks that are very small, its power is indeed higher than that of other tests, including the popular sup-Wald test. However, the differences are very minor. When the magnitude of change is moderate to large, the power of the test is very low in the context of a regression with lagged dependent variables or when a correction is applied to account for serial correlation in the errors. In many cases, the power goes to zero as the magnitude of change increases. The power of the sup-Wald test does not show this non-monotonicity, and it is far superior to the qLL test when the break is not very small. We claim that the optimality of the qLL test comes not from the properties of the test statistic but from the criterion adopted, which is not useful for analyzing structural change tests. Instead, we use the concept of relative approximate Bahadur slopes to assess the relative efficiency of two tests. When doing so, it is shown that the sup-Wald test indeed dominates the qLL test and, in many cases, the latter has zero relative asymptotic efficiency. 
Keywords:  structural change, local asymptotics, Bahadur efficiency, hypothesis testing, parameter variations 
JEL:  C22 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008006&r=ecm 
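The sup-Wald statistic the abstract above compares against can be sketched in its simplest form, a single break in the mean, by maximizing a Wald statistic over trimmed candidate break dates. This is a hedged illustration only (the function name, the mean-shift setup, and the trimming fraction are my own choices, not the paper's specification):

```python
import numpy as np

def sup_wald_mean_break(y, trim=0.15):
    """Sup-Wald statistic for a single break in the mean of y.

    For each candidate break date k in the trimmed interior of the
    sample, compute the Wald statistic for H0: no shift at k, then
    take the supremum over k. Illustrative sketch only.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    best = -np.inf
    for k in range(lo, hi):
        m1, m2 = y[:k].mean(), y[k:].mean()
        # Residual variance under the alternative (break at k)
        resid = np.concatenate([y[:k] - m1, y[k:] - m2])
        s2 = resid @ resid / (n - 2)
        wald = (m1 - m2) ** 2 / (s2 * (1.0 / k + 1.0 / (n - k)))
        best = max(best, wald)
    return best
```

A large statistic for a series with a sizable mid-sample shift, relative to a series without one, illustrates the monotone power behavior the paper attributes to the sup-Wald test.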
By:  Eickmeier, Sandra; Ng, Tim 
Abstract:  We look at how large international datasets can improve forecasts of national activity. We use the case of New Zealand, an archetypal small open economy. We apply “data-rich” factor and shrinkage methods to tackle the problem of efficiently handling hundreds of predictor data series from many countries. The methods covered are principal components, targeted predictors, weighted principal components, partial least squares, elastic net and ridge regression. Using these methods, we assess the marginal predictive content of international data for New Zealand GDP growth. We find that exploiting a large number of international predictors can improve forecasts of our target variable, compared to more traditional models based on small datasets. This is in spite of New Zealand survey data capturing a substantial proportion of the predictive information in the international data. The largest forecasting accuracy gains from including international predictors are at longer forecast horizons. The forecasting performance achievable with the data-rich methods differs widely, with shrinkage methods and partial least squares performing best. We also assess the type of international data that contains the most predictive information for New Zealand growth over our sample. 
Keywords:  Forecasting, factor models, shrinkage methods, principal components, targeted predictors, weighted principal components, partial least squares 
JEL:  C33 C53 F47 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7580&r=ecm 
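Of the shrinkage methods named in the abstract above, ridge regression is the easiest to write down: it adds a penalty lambda to the normal equations, which stabilizes the fit when predictors are numerous relative to the sample. A minimal numpy-only sketch (variable names and the closed-form solution are my own illustration, not the authors' code; in practice lambda would be chosen by cross-validation):

```python
import numpy as np

def ridge_forecast(X, y, x_new, lam=1.0):
    """One-step forecast from a ridge regression with many predictors.

    Solves (X'X + lam*I) beta = X'y, shrinking coefficients toward
    zero -- the device that lets "data-rich" methods handle hundreds
    of predictor series. Illustrative sketch only.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    k = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)
    return float(x_new @ beta)
```

With a negligible penalty and an exactly linear target, the forecast reproduces the true linear combination, which is a quick sanity check on the algebra.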
By:  VICTOR CHERNOZHUKOV (Massachusetts Institute of Technology, Department of Economics & Operations Research Center; and University College London, CEMMAP); IVAN FERNANDEZVAL (Boston University, Department of Economics); BLAISE MELLY (Massachusetts Institute of Technology, Department of Economics) 
Abstract:  In this paper we develop procedures to make inference in regression models about how potential policy interventions affect the entire distribution of an outcome variable of interest. These policy interventions consist of counterfactual changes in the distribution of covariates related to the outcome. Under the assumption that the conditional distribution of the outcome is unaltered by the intervention, we obtain uniformly consistent estimates for functionals of the marginal distribution of the outcome before and after the policy intervention. Simultaneous confidence sets for these functionals are also constructed, which take into account the sampling variation in the estimation of the relationship between the outcome and covariates. This estimation can be based on several principal approaches for conditional quantile and distribution functions, including quantile regression and proportional hazard models. Our procedures are general and accommodate both simple unitary changes in the values of a given covariate and changes in the distribution of the covariates of general form. An empirical application and a Monte Carlo example illustrate the results. 
Date:  2008–05 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008005&r=ecm 
By:  Jing Zhou (BlackRock, Inc.); Pierre Perron (Boston University) 
Abstract:  In a companion paper, Perron and Zhou (2008) provided a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression model involving stationary regressors, allowing the break dates for the two components to be different or overlap. The aim of this paper is twofold. First, we present detailed simulation analyses to document various issues related to their procedures: a) the inadequacy of the two-step procedures that are commonly applied; b) which particular version of the necessary correction factor exhibits better finite sample properties; c) whether applying a correction that is valid under more general conditions than necessary is detrimental to the size and power of the tests; d) the finite sample size and power of the various tests proposed; e) the performance of the sequential method in determining the number and types of breaks present. Second, we apply their testing procedures to various macroeconomic time series studied by Stock and Watson (2002). Our results reinforce the prevalence of change in mean, persistence and variance of the shocks to these series, and the fact that for most of them an important reduction in variance occurred during the 1980s. In many cases, however, the so-called “great moderation” should instead be viewed as a “great reversion”. 
Keywords:  Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests, “Great Moderation” 
JEL:  C22 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008010&r=ecm 
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper discusses pooling versus model selection for now- and forecasting in the presence of model uncertainty with large, unbalanced datasets. Empirically, unbalanced data is pervasive in economics and typically due to different sampling frequencies and publication delays. Two model classes suited in this context are factor models based on large datasets and mixed-data sampling (MIDAS) regressions with few predictors. The specification of these models requires several choices related to, amongst others, the factor estimation method and the number of factors, lag length and indicator selection. Thus, there are many sources of misspecification when selecting a particular model, and an alternative could be pooling over a large set of models with different specifications. We evaluate the relative performance of pooling and model selection for now- and forecasting quarterly German GDP, a key macroeconomic indicator for the largest country in the euro area, with a large set of about one hundred monthly indicators. Our empirical findings provide strong support for pooling over many specifications rather than selecting a specific model. 
Keywords:  nowcasting, forecast combination, forecast pooling, model selection, mixed-frequency data, factor models, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7572&r=ecm 
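The pooling that the abstract above finds so effective can be as simple as a weighted average of the forecasts produced by each candidate specification, with equal weights as the default. A minimal sketch (illustrative only; the paper's pooling schemes may differ):

```python
import numpy as np

def pooled_forecast(forecasts, weights=None):
    """Combine point forecasts from many model specifications.

    Equal-weight pooling is the simplest scheme and is often hard to
    beat; weights could instead reflect past forecast performance.
    Illustrative sketch only.
    """
    forecasts = np.asarray(forecasts, dtype=float)
    if weights is None:
        weights = np.full(len(forecasts), 1.0 / len(forecasts))
    return float(np.dot(weights, forecasts))
```

The appeal is that averaging diversifies away specification-specific errors, so no single model choice (factors, lags, indicators) has to be exactly right.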
By:  Victor Chernozhukov (MIT); Ivan FernandezVal (BU); Jinyong Hahn (UCLA); Whitney Newey (MIT) 
Abstract:  This paper gives identification and estimation results for marginal effects in nonlinear panel models. We find that linear fixed effects estimators are not consistent, due in part to marginal effects not being identified. We derive bounds for marginal effects and show that they can tighten rapidly as the number of time series observations grows. We also show in numerical calculations that the bounds may be very tight for small numbers of observations, suggesting they may be useful in practice. We propose two novel inference methods for parameters defined as solutions to linear and nonlinear programs such as marginal effects in multinomial choice models. We show that these methods produce uniformly valid confidence regions in large samples. We give an empirical illustration. 
Date:  2009–02 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2009b&r=ecm 
By:  Schumacher, Christian 
Abstract:  This paper considers factor forecasting with national versus international data. We forecast German GDP based on a large set of about 500 time series, consisting of German data as well as data from Euro-area and G7 countries. For factor estimation, we consider standard principal components as well as variable preselection prior to factor estimation using targeted predictors following Bai and Ng [Forecasting economic time series using targeted predictors, Journal of Econometrics 146 (2008), 304–317]. The results are as follows: Forecasting without data preselection favours the use of German data only, and no additional information content can be extracted from international data. However, when using targeted predictors for variable selection, international data generally improves the forecastability of German GDP. 
Keywords:  forecasting, factor models, international data, variable selection 
JEL:  C53 E27 F47 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7579&r=ecm 
By:  Gabriele Beissel Durrant 
Abstract:  Missing data are often a problem in social science data. Imputation methods fill in the missing responses and lead, under certain conditions, to valid inference. This article reviews several imputation methods used in the social sciences and discusses advantages and disadvantages of these methods in practice. Simpler imputation methods as well as more advanced methods, such as fractional and multiple imputation, are considered. The paper introduces the reader new to the imputation literature to key ideas and methods. For those already familiar with imputation methods, the paper highlights some new developments and clarifies some recent misconceptions in the use of imputation methods. The emphasis is on efficient hot deck imputation methods, implemented in either multiple or fractional imputation approaches. Software packages for using imputation methods in practice are reviewed, highlighting newer developments. The paper discusses an example from the social sciences in detail, applying several imputation methods to a missing earnings variable. The objective is to illustrate how to choose between methods in a real data example. A simulation study evaluates various imputation methods, including predictive mean matching, fractional and multiple imputation. Certain forms of fractional and multiple hot deck methods are found to perform well with regard to bias and efficiency of a point estimator and robustness against model misspecification. Standard parametric imputation methods are not found adequate for the application considered. [NCRM WP] 
Keywords:  item non-response; imputation; fractional imputation; multiple imputation; estimation of distribution functions 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:ess:wpaper:id:2007&r=ecm 
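Predictive mean matching, one of the hot deck methods the abstract above evaluates, uses a regression only to rank donors: each missing value is replaced by the observed value of the donor whose predicted mean is closest. A hedged sketch (the OLS predictive model and single-nearest-donor rule are simplifying assumptions, not the paper's exact implementation):

```python
import numpy as np

def pmm_impute(y, X):
    """Predictive mean matching imputation for missing entries of y.

    Fit OLS on complete cases, predict for everyone, then impute each
    missing y with the observed value of the donor whose prediction is
    closest. Because imputed values are always real donor values, the
    shape of y's distribution is preserved. Illustrative sketch only.
    """
    y = np.asarray(y, dtype=float)
    X = np.asarray(X, dtype=float)
    obs = ~np.isnan(y)
    # OLS fit on complete cases supplies the predictive means
    b, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    pred = X @ b
    out = y.copy()
    donors = np.where(obs)[0]
    for i in np.where(~obs)[0]:
        j = donors[np.argmin(np.abs(pred[donors] - pred[i]))]
        out[i] = y[j]
    return out
```

Fractional or multiple variants would draw several donors per recipient instead of one; this single-donor version shows the matching mechanics.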
By:  Pierre Perron (Boston University); Jing Zhou (BlackRock, Inc.) 
Abstract:  We provide a comprehensive treatment of the problem of testing jointly for structural change in both the regression coefficients and the variance of the errors in a single equation regression involving stationary regressors. Our framework is quite general in that we allow for general mixing-type regressors, and the assumptions imposed on the errors are quite mild. The errors’ distribution can be non-normal and conditional heteroskedasticity is permissible. Extensions to the case with serially correlated errors are also treated. We provide the required tools for addressing the following testing problems, among others: a) testing for given numbers of changes in regression coefficients and variance of the errors; b) testing for some unknown number of changes less than some prespecified maximum; c) testing for changes in variance (regression coefficients) allowing for a given number of changes in regression coefficients (variance); and d) estimating the number of changes present. These testing problems are important for practical applications, as witnessed by recent interest in macroeconomics and finance in documenting structural change in the variability of shocks to simple autoregressions or vector autoregressive models. 
Keywords:  Change-point, Variance shift, Conditional heteroskedasticity, Likelihood ratio tests 
Date:  2008–07 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008011&r=ecm 
By:  Pierre Perron (Department of Economics, Boston University); Yohei Yamamoto (Department of Economics, Boston University) 
Abstract:  We consider the problem of estimating and testing for multiple breaks in a single equation framework with regressors that are endogenous, i.e., correlated with the errors. First, we show, based on standard assumptions about the regressors, instruments and errors, that the second-stage regression of the instrumental variable (IV) procedure involves regressors and errors that satisfy all the assumptions in Perron and Qu (2006), so that the results about consistency, rate of convergence and limit distributions of the estimates of the break dates, as well as the limit distributions of the tests, are obtained as simple consequences. More importantly from a practical perspective, we show that even in the presence of endogenous regressors, it is still preferable to simply estimate the break dates and test for structural change using the usual ordinary least-squares (OLS) framework. It delivers estimates of the break dates with higher precision and tests with higher power compared to those obtained using an IV method. To illustrate the relevance of our theoretical results, we consider the stability of the New Keynesian hybrid Phillips curve. IV-based methods do not indicate any instability. On the other hand, OLS-based ones strongly indicate a change in 1991:1 and that after this date the model loses all explanatory power. 
Keywords:  structural change, instrumental variables, two-stage least squares, parameter variations 
JEL:  C22 
Date:  2008–10 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008017&r=ecm 
By:  Yang K. Lu (Boston University); Pierre Perron (Boston University) 
Abstract:  We consider the estimation of a random level shift model for which the series of interest is the sum of a short memory process and a jump or level shift component. For the latter component, we specify the commonly used simple mixture model such that the component is the cumulative sum of a process which is 0 with some probability (1 − a) and is a random variable with probability a. Our estimation method transforms such a model into a linear state space with a mixture of normal innovations, so that an extension of the Kalman filter algorithm can be applied. We apply this random level shift model to the logarithm of absolute returns for the S&P 500, AMEX, Dow Jones and NASDAQ stock market return indices. Our point estimates imply few level shifts for all series. But once these are taken into account, there is little evidence of serial correlation in the remaining noise and, hence, no evidence of long memory. Once the estimated shifts are introduced to a standard GARCH model applied to the returns series, any evidence of GARCH effects disappears. We also produce rolling out-of-sample forecasts of squared returns. In most cases, our simple random level shift model clearly outperforms a standard GARCH(1,1) model and, in many cases, it also provides better forecasts than a fractionally integrated GARCH model. 
Keywords:  structural change, forecasting, GARCH models, long memory 
JEL:  C22 
Date:  2008–09 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008012&r=ecm 
By:  Zhongjun Qu (Boston University); Pierre Perron (Boston University) 
Abstract:  Empirical findings related to the time series properties of stock returns volatility indicate autocorrelations that decay slowly at long lags. In light of this, several long-memory models have been proposed. However, the possibility of level shifts has been advanced as a possible explanation for the appearance of long memory, and there is growing evidence suggesting that it may be an important feature of stock returns volatility. Nevertheless, it remains a conjecture that a model incorporating random level shifts in variance can explain the data well and produce reasonable forecasts. We show that a very simple stochastic volatility model incorporating both a random level shift and a short-memory component indeed provides a better in-sample fit of the data and produces forecasts that are no worse, and sometimes better, than standard stationary short- and long-memory models. We use a Bayesian method for inference and develop algorithms to obtain the posterior distributions of the parameters and the smoothed estimates of the two latent components. We apply the model to daily S&P 500 and NASDAQ returns over the period 1980.1–2005.12. Although the occurrence of a level shift is rare, about once every two years, the level shift component clearly contributes most to the total variation in the volatility process. The half-life of a typical shock from the short-memory component is very short, on average between 8 and 14 days. We also show that, unlike common stationary short- or long-memory models, our model is able to replicate key features of the data. For the NASDAQ series, it forecasts better than a standard stochastic volatility model, and for the S&P 500 index, it performs equally well. 
Keywords:  Bayesian estimation, Structural change, Forecasting, Long memory, State-space models, Latent process 
JEL:  C11 C12 C53 G12 
Date:  2008–06 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008007&r=ecm 
By:  Breitung, Jörg; Eickmeier, Sandra 
Abstract:  From time to time, economies undergo far-reaching structural changes. In this paper we investigate the consequences of structural breaks in the factor loadings for the specification and estimation of factor models based on principal components, and suggest test procedures for structural breaks. It is shown that structural breaks severely inflate the number of factors identified by the usual information criteria. Based on the strict factor model, the hypothesis of a structural break is tested by using Likelihood-Ratio, Lagrange-Multiplier and Wald statistics. The LM test, which is shown to perform best in our Monte Carlo simulations, is generalized to factor models where the common factors and idiosyncratic components are serially correlated. We also apply the suggested test procedure to a US dataset used in Stock and Watson (2005) and a euro-area dataset described in Altissimo et al. (2007). We find evidence that the beginning of the so-called Great Moderation in the US as well as the Maastricht treaty and the handover of monetary policy from the European national central banks to the ECB coincide with structural breaks in the factor loadings. Ignoring these breaks may yield misleading results if the empirical analysis focuses on the interpretation of common factors or on the transmission of common shocks to the variables of interest. 
Keywords:  Dynamic factor models, structural breaks, number of factors, Great Moderation, EMU 
JEL:  C01 C12 C3 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7574&r=ecm 
By:  Raymond BRUMMELHUIS; Jules SADEFO KAMDEM 
Abstract:  This paper is concerned with the efficient analytical computation of Value-at-Risk (VaR) for portfolios of assets depending quadratically on a large number of joint risk factors that follow a multivariate Generalized Laplace Distribution. Our approach is designed to supplement the usual Monte Carlo techniques, by providing an asymptotic formula for the quadratic portfolio's cumulative distribution function, together with explicit error estimates. The application of these methods is demonstrated using several examples from finance. 
Date:  2009–06 
URL:  http://d.repec.org/n?u=RePEc:lam:wpaper:0906&r=ecm 
By:  James Morley (Washington University in St. Louis); Jeremy Piger (University of Oregon); PaoLin Tien (Department of Economics, Wesleyan University) 
Abstract:  In this paper, we consider the ability of time-series models to generate simulated data that display the same business cycle features found in U.S. real GDP. Our analysis of a range of popular time-series models allows us to investigate the extent to which multivariate information can account for the apparent univariate evidence of nonlinear dynamics in GDP. We find that certain nonlinear specifications yield an improvement over linear models in reproducing business cycle features, even when multivariate information inherent in the unemployment rate, inflation, interest rates, and the components of GDP is taken into account. 
JEL:  E30 C52 
Date:  2009–05 
URL:  http://d.repec.org/n?u=RePEc:wes:weswpa:2009003&r=ecm 
By:  Samuel Bazzi 
Abstract:  Despite intense concern that many instrumental variables used in growth regressions may be weak, invalid, or both, top journals continue to publish studies of economic growth based on problematic instruments. Doing so risks pushing the entire literature closer to irrelevance. This article illustrates hidden problems with identification in recent prominently published and widely cited growth studies using their original data, and urges researchers to take steps to overcome the shortcomings. 
Keywords:  economic growth, capital, macroeconomy, macroeconomic policy, regression models for growth, Economics 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:ess:wpaper:id:2024&r=ecm 
By:  Pierre Perron (Boston University); Zhongjun Qu (Boston University) 
Abstract:  Recently, there has been an upsurge of interest in the possibility of confusing long memory and structural changes in level. Many studies have shown that when a stationary short memory process is contaminated by level shifts, the estimate of the fractional differencing parameter is biased away from zero and the autocovariance function exhibits a slow rate of decay, akin to a long memory process. Partly based on results in Perron and Qu (2007), we analyze the properties of the autocorrelation function, the periodogram and the log-periodogram estimate of the memory parameter when the level shift component is specified by a simple mixture model. Our theoretical results explain many findings reported and uncover new features. We confront our theoretical predictions using log-squared returns as a proxy for the volatility of some asset returns, including daily S&P 500 returns over the period 1928–2002. The autocorrelations and the path of the log-periodogram estimates follow patterns that would obtain if the true underlying process were one of short memory contaminated by level shifts instead of a fractionally integrated process. A simple testing procedure is also proposed, which reinforces this conclusion. 
Keywords:  structural change, jumps, long memory processes, fractional integration, frequency domain estimates 
JEL:  C22 
Date:  2008–08 
URL:  http://d.repec.org/n?u=RePEc:bos:wpaper:wp2008004&r=ecm 
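The log-periodogram estimate of the memory parameter discussed in the abstract above regresses the log periodogram on -2*log(frequency) at the first m Fourier frequencies; the slope estimates d. A hedged numpy sketch (bandwidth choice and function name are my own assumptions), which also illustrates the paper's point that a level-shift-contaminated short memory series produces a spuriously large estimate:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH-type) estimate of the memory parameter d.

    Regress log I(lambda_j) on -2*log(lambda_j) over the first m
    Fourier frequencies; the slope is the estimate of d. For true
    short memory d should be near zero. Illustrative sketch only.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    if m is None:
        m = int(n ** 0.5)  # a common default bandwidth
    j = np.arange(1, m + 1)
    lam = 2.0 * np.pi * j / n
    # Periodogram at the first m nonzero Fourier frequencies
    dft = np.fft.fft(x - x.mean())
    I = np.abs(dft[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    reg = -2.0 * np.log(lam)
    reg = reg - reg.mean()
    return float(reg @ (np.log(I) - np.log(I).mean()) / (reg @ reg))
```

Running the estimator on white noise and on the same noise with a single mid-sample level shift shows the upward bias the paper analyzes: the shifted series yields a markedly larger d-hat even though its short memory component is unchanged.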
By:  Kuzin, Vladimir; Marcellino, Massimiliano; Schumacher, Christian 
Abstract:  This paper compares the mixed-data sampling (MIDAS) and mixed-frequency VAR (MF-VAR) approaches to model specification in the presence of mixed-frequency data, e.g., monthly and quarterly series. MIDAS leads to parsimonious models based on exponential lag polynomials for the coefficients, whereas the MF-VAR does not restrict the dynamics and therefore can suffer from the curse of dimensionality. But if the restrictions imposed by MIDAS are too stringent, the MF-VAR can perform better. Hence, it is difficult to rank MIDAS and MF-VAR a priori, and their relative ranking is better evaluated empirically. In this paper, we compare their performance in a relevant case for policy making, i.e., nowcasting and forecasting quarterly GDP growth in the euro area, on a monthly basis and using a set of 20 monthly indicators. It turns out that the two approaches are more complementary than substitutes, since the MF-VAR tends to perform better for longer horizons, whereas MIDAS does for shorter horizons. 
Keywords:  nowcasting, mixed-frequency data, mixed-frequency VAR, MIDAS 
JEL:  C53 E37 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdp1:7576&r=ecm 
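The exponential lag polynomial behind MIDAS parsimony, mentioned in the abstract above, makes an entire high-frequency lag distribution a function of just two parameters. A hedged sketch of the standard exponential Almon weighting scheme (function names and the two-parameter form are the textbook version, not necessarily the authors' exact parameterization):

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regressions.

    w_k is proportional to exp(theta1*k + theta2*k^2), normalized to
    sum to one, so a whole high-frequency lag polynomial is governed
    by two parameters -- the parsimony contrasted with the
    unrestricted mixed-frequency VAR.
    """
    k = np.arange(1, n_lags + 1, dtype=float)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_monthly, theta1, theta2):
    """Collapse a vector of monthly lags into one quarterly regressor."""
    w = exp_almon_weights(len(x_monthly), theta1, theta2)
    return float(w @ np.asarray(x_monthly, dtype=float))
```

With theta1 = theta2 = 0 the weights are flat and the aggregate is a simple average; negative theta2 makes weights decay with the lag, downweighting older monthly observations.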
By:  Kagie, M.; Wezel, M.C. van; Groenen, P.J.F. (Erasmus Research Institute of Management (ERIM), RSM Erasmus University) 
Abstract:  Many content-based recommendation approaches rely on a dissimilarity measure defined over product attributes. In this paper, we evaluate four dissimilarity measures for product recommendation using an online survey. In this survey, we asked users to specify which products they considered to be relevant recommendations given a reference product. We used microwave ovens as the product category. Based on these responses, we create a relative relevance matrix that we use to evaluate the dissimilarity measures. We also use this matrix to estimate weights to be used in the dissimilarity measures. In this way, we evaluate four dissimilarity measures: the Euclidean Distance, the Hamming Distance, the Heterogeneous Euclidean-Overlap Metric, and the Adapted Gower Coefficient. The evaluation shows that these weights improve recommendation performance. Furthermore, the experiments indicate that when recommending a single product, the Heterogeneous Euclidean-Overlap Metric should be used, and when recommending more than one product the Adapted Gower Coefficient is the best alternative. Finally, we compare these dissimilarity measures with a collaborative method and show that this method performs worse than the dissimilarity-based approaches. 
Keywords:  dissimilarity; case-based recommendation; evaluation; weight estimation 
Date:  2009–05–07 
URL:  http://d.repec.org/n?u=RePEc:dgr:eureri:1765015911&r=ecm 
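The Heterogeneous Euclidean-Overlap Metric named in the abstract above handles mixed attribute types: numeric attributes contribute a range-normalized difference, categorical attributes contribute 0 on a match and 1 on a mismatch. A hedged sketch (attribute encodings, the microwave-oven example values, and the unweighted form are my own illustration, not the paper's exact setup):

```python
import numpy as np

def heom(a, b, numeric_mask, ranges):
    """Heterogeneous Euclidean-Overlap Metric between two products.

    Numeric attributes: |x - y| / range (Euclidean part).
    Categorical attributes: 0 if equal, else 1 (overlap part).
    Per-attribute distances are combined Euclidean-style.
    Illustrative, unweighted sketch only.
    """
    total = 0.0
    for x, y, is_num, rng in zip(a, b, numeric_mask, ranges):
        if is_num:
            d = abs(float(x) - float(y)) / rng if rng > 0 else 0.0
        else:
            d = 0.0 if x == y else 1.0
        total += d * d
    return float(np.sqrt(total))
```

For example, two ovens differing by 200 W (over a 400 W observed range) and made by different brands combine a numeric distance of 0.5 with a categorical distance of 1. The attribute weights the paper estimates would multiply each squared term before summing.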
By:  Frank A. Cowell (London School of Economics and STICERD); Carlo V. Fiorio (University of Milan and Econpubblica) 
Abstract:  We show how classic source-decomposition and subgroup-decomposition methods can be reconciled with the regression methodology used in the recent literature. We also highlight some pitfalls that arise from uncritical use of the regression approach. The LIS database is used to compare the approaches using an analysis of the changing contributions to inequality in the United States and Finland. 
Keywords:  Inequality, decomposition. 
JEL:  D63 
Date:  2009 
URL:  http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2009117&r=ecm 