
on Econometrics 
By:  Kripfganz, Sebastian; Schwarz, Claudia 
Abstract:  This paper considers estimation methods and inference for linear dynamic panel data models with unit-specific heterogeneity and a short time dimension. In particular, we focus on the identification of the coefficients of time-invariant variables in a dynamic version of the Hausman and Taylor (1981) model. We propose a two-stage estimation procedure to identify the effects of time-invariant regressors. We first estimate the coefficients of the time-varying regressors and subsequently regress the first-stage residuals on the time-invariant regressors to recover the coefficients of the latter. Standard errors are adjusted to take into account the first-stage estimation uncertainty. As potential first-stage estimators we discuss generalized method of moments estimators and the transformed likelihood approach of Hsiao, Pesaran, and Tahmiscioglu (2002). Monte Carlo experiments are used to compare the performance of the two-stage approach to various system GMM estimators that obtain all parameter estimates simultaneously. The results are in favor of the two-stage approach. We provide further simulation evidence that GMM estimators with a large number of instruments can be severely biased in finite samples. Reducing the instrument count by collapsing the instrument matrices strongly improves the results, while restricting the lag depth does not. Finally, we estimate a dynamic Mincer equation with data from the Panel Study of Income Dynamics to illustrate the approach. 
Keywords:  System GMM, Instrument proliferation, Maximum likelihood, Two-stage estimation, Monte Carlo simulation, Dynamic Mincer equation 
JEL:  C13 C23 J30 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:zbw:bubdps:252013&r=ecm 
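The two-stage logic above can be sketched in a deliberately simplified static setting (no lagged dependent variable, exogenous regressors, and without the paper's GMM/likelihood first stage or the standard-error correction); all variable names and parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 5

# DGP: y_it = 1.0 * x_it + 0.5 * z_i + alpha_i + e_it
x = rng.normal(size=(N, T))                 # time-varying regressor
z = rng.normal(size=N)                      # time-invariant regressor
alpha = rng.normal(scale=0.3, size=N)       # unit-specific heterogeneity
y = 1.0 * x + 0.5 * z[:, None] + alpha[:, None] + rng.normal(scale=0.5, size=(N, T))

# Stage 1: within (fixed-effects) estimator identifies the time-varying
# coefficient; the within transformation wipes out z_i and alpha_i.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()

# Stage 2: regress unit-mean first-stage residuals on z to recover its
# coefficient (valid here because z is uncorrelated with alpha).
u_bar = (y - beta_hat * x).mean(axis=1)
zc = z - z.mean()
gamma_hat = (zc * u_bar).sum() / (zc ** 2).sum()
```

The second stage is needed precisely because any transformation that removes the unit effects also removes the time-invariant regressors, so their coefficients must be recovered from the residuals.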
By:  Giuseppe Cavaliere (Università di Bologna); Luca De Angelis (Università di Bologna); Anders Rahbek (University of Copenhagen); Robert Taylor (University of Nottingham) 
Abstract:  In this paper we investigate the behaviour of a number of methods for estimating the cointegration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as AIC and BIC, with the commonly used sequential approach of Johansen (1996) based around the use of either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work done for the latter in Cavaliere, Rahbek and Taylor (2013, Econometric Reviews, forthcoming), we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct cointegration rank across different values of the cointegration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall since it avoids a significant tendency seen in the BIC-based method to overestimate the cointegration rank in relatively small sample sizes. 
Keywords:  Cointegration; Wild bootstrap; Trace statistic; Information criteria; Rank determination; Heteroskedasticity. 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:bot:quadip:121&r=ecm 
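An information-criterion rank selection of the kind compared in the paper can be sketched as follows, in a stripped-down homoskedastic setting (no deterministic terms, no lagged differences, and none of the paper's bootstrap machinery); the DGP and the BIC penalty convention pi(r) = r(2p - r) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, p = 500, 2

# Bivariate DGP with one cointegrating relation: x is a random walk and
# y error-corrects toward it, so the true cointegration rank is 1.
x = np.cumsum(rng.normal(size=T))
y = x + rng.normal(size=T)
Y = np.column_stack([x, y])

dY = np.diff(Y, axis=0)
Y1 = Y[:-1]
n = len(dY)

# Johansen product-moment matrices (no deterministics, no lagged differences)
S00 = dY.T @ dY / n
S11 = Y1.T @ Y1 / n
S01 = dY.T @ Y1 / n

# Squared sample canonical correlations between dY and Y1
M = np.linalg.solve(S11, S01.T @ np.linalg.solve(S00, S01))
lam = np.sort(np.linalg.eigvals(M).real)[::-1]

# BIC(r) = n * sum_{i<=r} log(1 - lam_i) + pi(r) * log(n), pi(r) = r(2p - r)
bic = [n * np.log(1 - lam[:r]).sum() + r * (2 * p - r) * np.log(n)
       for r in range(p + 1)]
rank_hat = int(np.argmin(bic))
```

The criterion trades off the log-likelihood gain from each additional eigenvalue against the number of free parameters in the reduced-rank matrix Pi = alpha beta'.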
By:  Jeffrey S. Racine 
Abstract:  A number of approaches to the kernel estimation of copulae have appeared in the literature. Most existing approaches use a manifestation of the copula that requires kernel density estimation of bounded variates lying on a d-dimensional unit hypercube. This gives rise to a number of issues, as it requires special treatment of the boundary and possible modifications to bandwidth selection routines, among others. Furthermore, existing kernel-based approaches are restricted to continuous data types only, though there is growing interest in copula estimation with discrete marginals (see e.g. Smith & Khaled (2012) for a Bayesian approach). We demonstrate that using a simple inversion method (cf. Nelsen (2006), Fermanian & Scaillet (2003)) can sidestep boundary issues while admitting mixed data types directly, thereby extending the reach of kernel copula estimators. Bandwidth selection proceeds by the recently proposed method of Li & Racine (2013). Furthermore, there is no curse of dimensionality for the kernel-based copula estimator (though there is for the copula density estimator, as is the case for existing kernel copula density methods). 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:mcm:deptwp:201312&r=ecm 
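The inversion idea, evaluating the copula as C(u, v) = H(F1^{-1}(u), F2^{-1}(v)), can be sketched with empirical CDFs and quantiles standing in for the kernel-smoothed ones the paper uses; the DGP below is an illustrative assumption, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

# Draws from a Gaussian copula (correlation 0.6), second margin transformed
z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
x, y = z[:, 0], np.exp(z[:, 1])     # monotone transform of the second margin

def copula_inversion(u, v, x, y):
    """C(u, v) = H(F1^{-1}(u), F2^{-1}(v)); empirical CDFs/quantiles are
    used here where the kernel estimator would use smoothed ones."""
    xq = np.quantile(x, u)          # marginal quantile F1^{-1}(u)
    yq = np.quantile(y, v)
    return np.mean((x <= xq) & (y <= yq))

c = copula_inversion(0.5, 0.5, x, y)
c_alt = copula_inversion(0.5, 0.5, x, z[:, 1])  # margin-free: same value
```

For this Gaussian copula the population value is the orthant probability 1/4 + arcsin(0.6)/(2*pi), about 0.352, and the estimate is invariant to the monotone transformation of the margin, which is the point of working through the inverse marginals.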
By:  Gabriel Rodriguez (Departamento de Economía, Pontificia Universidad Católica del Perú) 
Abstract:  In a recent paper, Fajardo et al. (2009) propose an alternative semiparametric estimator of the fractional parameter in ARFIMA models that is robust to the presence of additive outliers. The results are very interesting; however, they use samples of 300 or 800 observations, which are rarely found in macroeconomics or economics. In order to perform a comparison, I use the procedure to detect additive outliers based on the Tau_d statistic suggested by Perron and Rodríguez (2003). Further, I use dummy variables associated with the location of the selected outliers to estimate the fractional parameter. I find better results for the mean and bias of this parameter when T = 100, and the results in terms of the standard deviation and the MSE are very similar. However, for larger sample sizes such as 300 or 800, the robust procedure performs better, especially in terms of the standard deviation and MSE measures. Empirical applications to seven Latin American inflation series with very small sample sizes contaminated by additive outliers are discussed. We find that when no correction for additive outliers is performed, the fractional parameter is underestimated. 
Keywords:  Additive Outliers, ARFIMA Errors, semiparametric estimation. 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00356&r=ecm 
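A standard semiparametric estimator of the fractional parameter of the kind being compared here is the log-periodogram (GPH) regression; the sketch below simulates an ARFIMA(0, d, 0) series and applies it (this is not the robust estimator of Fajardo et al. nor the dummy-variable procedure of the paper, and the bandwidth choice is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(3)
T, d_true = 500, 0.3

# Simulate ARFIMA(0, d, 0) from its MA(inf) weights
# psi_k = psi_{k-1} * (k - 1 + d) / k, discarding a burn-in of length T.
e = rng.normal(size=2 * T)
psi = np.ones(2 * T)
for k in range(1, 2 * T):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k
y = np.convolve(e, psi)[:2 * T][T:]

def gph(y, m):
    """Log-periodogram (GPH) regression estimate of d using the first
    m Fourier frequencies."""
    n = len(y)
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(y - y.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -2 * np.log(2 * np.sin(lam / 2))          # regressor at frequency j
    Xc = X - X.mean()
    return (Xc * np.log(I)).sum() / (Xc ** 2).sum()

d_hat = gph(y, m=int(T ** 0.65))
```

Additive outliers flatten the periodogram at high frequencies, which is why uncorrected estimates of d are biased downward, as the abstract reports.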
By:  Michael Greenacre; Patrick J. F. Groenen 
Abstract:  We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots. 
Keywords:  biplot, correspondence analysis, distance, majorization, multidimensional scaling, singular-value decomposition, weighted least squares 
JEL:  C19 C88 
Date:  2013–07 
URL:  http://d.repec.org/n?u=RePEc:upf:upfgen:1380&r=ecm 
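The core of the weight-estimation idea can be illustrated in a simplified noiseless setting: squared weighted Euclidean distances are linear in the weights, so least squares recovers them exactly (the paper instead fits given, possibly non-Euclidean, dissimilarities with a majorization algorithm and enforces nonnegativity; the data and true weights below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 30, 4
X = rng.normal(size=(n, p))
w_true = np.array([2.0, 1.0, 0.5, 0.25])   # hypothetical variable weights

# All pairwise squared differences per variable: one row per pair (i, j)
i, j = np.triu_indices(n, 1)
A = (X[i] - X[j]) ** 2                     # shape (n*(n-1)/2, p)

# Target squared dissimilarities generated by the weighted metric:
# delta2_ij = sum_k w_k (x_ik - x_jk)^2, i.e. delta2 = A @ w
delta2 = A @ w_true

# Linear-in-weights structure means ordinary least squares recovers the
# weights exactly in this noiseless case.
w_hat, *_ = np.linalg.lstsq(A, delta2, rcond=None)
```

With real dissimilarities the fit is only approximate, which is where the majorization algorithm and the subsequent SVD-based biplot step come in.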
By:  Gabriel Rodriguez (Departamento de Economía, Pontificia Universidad Católica del Perú); Dionisio Ramirez (Universidad de Castilla-La Mancha) 
Abstract:  This note analyzes the empirical size of the augmented Dickey-Fuller (ADF) statistic proposed by Perron and Rodríguez (2003) when the errors are fractional. This ADF statistic is based on a procedure, named Tau_d, that searches for additive outliers in the first differences of the data. Simulations show that the empirical size of the ADF statistic is not affected by fractional errors, confirming the claim of Perron and Rodríguez (2003) that the Tau_d procedure is robust to departures from the unit root framework. In particular, the results show low sensitivity of the size of the ADF statistic with respect to the fractional parameter (d). However, as expected, when there is strong negative moving average autocorrelation or negative autoregressive autocorrelation, the ADF statistic is oversized. These difficulties are fixed when the sample size increases (from T = 100 to T = 200). An empirical application to eight quarterly Latin American inflation series is also provided, showing the importance of taking into account dummy variables for the detected additive outliers. 
Keywords:  Additive Outliers, ARFIMA Errors, ADF test 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00357&r=ecm 
By:  James J. Heckman (University of Chicago); Rodrigo Pinto (University of Chicago) 
Abstract:  This paper presents an econometric mediation analysis. It considers identification of production functions and the sources of output effects (treatment effects) from experimental interventions when some inputs are mismeasured and others are entirely omitted. 
Keywords:  Production Function, Mediation Analysis, Measurement Error, Missing Inputs 
JEL:  D24 C21 C43 C38 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:hka:wpaper:2013006&r=ecm 
By:  Gabriel Rodriguez (Departamento de Economía, Pontificia Universidad Católica del Perú); Dionisio Ramirez (Universidad de Castilla-La Mancha) 
Abstract:  Perron and Rodríguez (2003) claimed that their procedure to detect additive outliers (Tau_d) is powerful even under departures from the unit root case. In this note, we use Monte Carlo simulations to show that Tau_d is powerful when we have ARFIMA(p, d, q) errors. Using simulations, we calculate the expected number of additive outliers found in this context and the number of times that the Tau_d approach identifies the true location of the additive outliers. The results indicate that the power of the Tau_d procedure depends on the size of the additive outliers. When the DGP contains large additive outliers, the percentage of times that Tau_d correctly detects their location is 100.0%. 
Keywords:  Additive Outliers, ARFIMA Errors, Detection of Additive Outliers. 
JEL:  C2 C3 C5 
Date:  2013 
URL:  http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00355&r=ecm 
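The exact Tau_d statistic is defined in Perron and Rodríguez (2003); a hypothetical simplified rule that exploits the same signature, namely that an additive outlier leaves an offsetting +/- pair in the first differences, might look like the sketch below (the DGP, threshold, and robust scale are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 200
y = np.cumsum(rng.normal(size=T))    # unit-root series
y[100] += 10.0                       # one large additive outlier

# An AO at time t leaves a large positive/negative pair at positions
# t-1 and t of the first differences; flag offsetting exceedances of a
# robust (MAD-based) threshold.
dy = np.diff(y)                      # dy[t-1] = y[t] - y[t-1]
s = 1.4826 * np.median(np.abs(dy - np.median(dy)))   # robust scale
z = dy / s
flagged = [t for t in range(1, len(dy))
           if abs(z[t - 1]) > 3 and abs(z[t]) > 3 and z[t - 1] * z[t] < 0]
```

The dependence of detection power on outlier size is visible here too: shrinking the injected outlier toward the noise scale makes the paired exceedances disappear.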
By:  Grech, Aaron George 
Abstract:  The Hodrick-Prescott (HP) filter is a commonly used method, particularly in potential output studies. However, its suitability depends on a number of conditions. Very small open economies do not satisfy these, as their macroeconomic series exhibit pronounced trends, large fluctuations and recurrent breaks. Consequently, the use of the filter results in random changes in the output gap that are out of line with the concept of equilibrium. Two suggestions are put forward. The first involves defining the upper and lower bounds of a series and determining equilibrium as a weighted average of the filter applied separately to these bounds. The second involves integrating structural features into the standard filter, allowing researchers to set limits on the impact of structural/temporary shocks and to allow for lengthy periods of disequilibrium. This paper shows that these methods can result in a smoother output gap series for the smallest Euro Area economies. 
Keywords:  Potential output, output gap, Hodrick-Prescott filter, detrending, business cycles, small open economies 
JEL:  B41 C1 E32 F41 
Date:  2013–08 
URL:  http://d.repec.org/n?u=RePEc:pra:mprapa:48803&r=ecm 