nep-ecm New Economics Papers
on Econometrics
Issue of 2013‒08‒10
nine papers chosen by
Sune Karlsson
Orebro University

  1. Estimation of linear dynamic panel data models with time-invariant regressors By Kripfganz, Sebastian; Schwarz, Claudia
  2. A comparison of sequential and information-based methods for determining the co-integration rank in heteroskedastic VAR models By Giuseppe Cavaliere; Luca De Angelis; Anders Rahbek; Robert Taylor
  3. Mixed Data Kernel Copulas By Jeffrey S. Racine
  4. A Comparative Note About Estimation of the Fractional Parameter under Additive Outliers By Gabriel Rodriguez
  5. Weighted Euclidean biplots By Michael Greenacre; Patrick J. F. Groenen
  6. A Note on the Size of the ADF Test with Additive Outliers and Fractional Errors. A Reappraisal about the (Non) Stationarity of the Latin-American Inflation Series. By Gabriel Rodriguez; Dionisio Ramirez
  7. Econometric Mediation Analyses: Identifying the Sources of Treatment Effects from Experimentally Estimated Production Technologies with Unmeasured and Mismeasured Inputs By James J. Heckman; Rodrigo Pinto
  8. A comparison between Tau-d and the procedure TRAMO-SEATS is also included. By Gabriel Rodriguez; Dionisio Ramirez
  9. Adapting the Hodrick-Prescott Filter for Very Small Open Economies By Grech, Aaron George

  1. By: Kripfganz, Sebastian; Schwarz, Claudia
    Abstract: This paper considers estimation methods and inference for linear dynamic panel data models with unit-specific heterogeneity and a short time dimension. In particular, we focus on the identification of the coefficients of time-invariant variables in a dynamic version of the Hausman and Taylor (1981) model. We propose a two-stage estimation procedure to identify the effects of time-invariant regressors. We first estimate the coefficients of the time-varying regressors and subsequently regress the first-stage residuals on the time-invariant regressors to recover the coefficients of the latter. Standard errors are adjusted to take into account the first-stage estimation uncertainty. As potential first-stage estimators we discuss generalized method of moments (GMM) estimators and the transformed likelihood approach of Hsiao, Pesaran, and Tahmiscioglu (2002). Monte Carlo experiments are used to compare the performance of the two-stage approach to various system GMM estimators that obtain all parameter estimates simultaneously. The results are in favor of the two-stage approach. We provide further simulation evidence that GMM estimators with a large number of instruments can be severely biased in finite samples. Reducing the instrument count by collapsing the instrument matrices strongly improves the results, while restricting the lag depth does not. Finally, we estimate a dynamic Mincer equation with data from the Panel Study of Income Dynamics to illustrate the approach. A schematic code sketch of the two-stage idea follows this entry.
    Keywords: System GMM, Instrument proliferation, Maximum likelihood, Two-stage estimation, Monte Carlo simulation, Dynamic Mincer equation
    JEL: C13 C23 J30
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:252013&r=ecm
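    The two-stage procedure described in this abstract can be illustrated with a deliberately simplified sketch: the first stage here is plain pooled OLS rather than the GMM or transformed-likelihood estimators the paper uses, the second stage regresses unit-averaged residuals on the time-invariant regressors, and no correction is made for first-stage estimation uncertainty. The function name and column arguments are hypothetical.

    ```python
    # Simplified two-stage sketch (illustration only): stage 1 fits the
    # time-varying part by pooled OLS (the paper uses GMM or transformed
    # likelihood); stage 2 regresses unit-averaged residuals on the
    # time-invariant regressors. Standard errors are not adjusted here.
    import numpy as np
    import pandas as pd

    def two_stage_sketch(df, y, lag_y, x_timevarying, z_timeinvariant, unit):
        # Stage 1: regress y on its own lag and the time-varying regressors.
        X1 = np.column_stack([np.ones(len(df)),
                              df[[lag_y] + x_timevarying].to_numpy()])
        beta1, *_ = np.linalg.lstsq(X1, df[y].to_numpy(), rcond=None)
        resid = df[y].to_numpy() - X1 @ beta1

        # Stage 2: average the residuals by unit and regress them on the
        # time-invariant regressors to recover their coefficients.
        tmp = df[[unit] + z_timeinvariant].copy()
        tmp["resid"] = resid
        by_unit = tmp.groupby(unit).mean()
        X2 = np.column_stack([np.ones(len(by_unit)),
                              by_unit[z_timeinvariant].to_numpy()])
        beta2, *_ = np.linalg.lstsq(X2, by_unit["resid"].to_numpy(), rcond=None)
        return beta1, beta2
    ```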
  2. By: Giuseppe Cavaliere (Università di Bologna); Luca De Angelis (Università di Bologna); Anders Rahbek (University of Copenhagen); Robert Taylor (University of Nottingham)
    Abstract: In this paper we investigate the behaviour of a number of methods for estimating the co-integration rank in VAR systems characterized by heteroskedastic innovation processes. In particular, we compare the efficacy of the most widely used information criteria, such as AIC and BIC, with the commonly used sequential approach of Johansen (1996) based on either asymptotic or wild bootstrap-based likelihood ratio type tests. Complementing recent work on the latter in Cavaliere, Rahbek and Taylor (2013, Econometric Reviews, forthcoming), we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form. The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms of their frequency of selecting the correct co-integration rank across different values of the co-integration rank, sample size, stationary dynamics and models of heteroskedasticity. Of these, the wild bootstrap procedure is perhaps the more reliable overall, since it avoids the significant tendency of the BIC-based method to over-estimate the co-integration rank in relatively small samples. An illustrative rank-selection sketch follows this entry.
    Keywords: Co-integration; Wild bootstrap; Trace statistic; Information criteria; Rank determination; Heteroskedasticity
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:bot:quadip:121&r=ecm
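    As a rough illustration of the two rank-selection routes compared above, the sketch below computes a BIC-type criterion from the Johansen eigenvalues and, for comparison, the asymptotic sequential trace test available in statsmodels. The wild bootstrap variant studied in the paper is not implemented, the likelihood is written up to a constant common across ranks, and the parameter count covers only the rank-dependent part of Pi = alpha * beta'.

    ```python
    # Illustration only: BIC-type rank selection from Johansen eigenvalues
    # versus the asymptotic sequential trace test. The paper's wild bootstrap
    # sequential test is not implemented here.
    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import coint_johansen, select_coint_rank

    def bic_rank(endog, det_order=0, k_ar_diff=1):
        joh = coint_johansen(endog, det_order, k_ar_diff)
        T = endog.shape[0] - k_ar_diff - 1      # effective sample size (approx.)
        p = endog.shape[1]
        bic = []
        for r in range(p + 1):
            # -2 * max log-likelihood at rank r, up to a constant common to all r
            neg2ll = T * np.sum(np.log(1.0 - joh.eig[:r]))
            n_par = r * (2 * p - r)             # rank-dependent parameters in Pi
            bic.append(neg2ll + np.log(T) * n_par)
        return int(np.argmin(bic))

    def trace_rank(endog, det_order=0, k_ar_diff=1, signif=0.05):
        # Sequential trace-test route using asymptotic critical values.
        return select_coint_rank(endog, det_order, k_ar_diff,
                                 method="trace", signif=signif).rank
    ```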
  3. By: Jeffrey S. Racine
    Abstract: A number of approaches to the kernel estimation of copulas have appeared in the literature. Most existing approaches use a manifestation of the copula that requires kernel density estimation of bounded variates lying on a d-dimensional unit hypercube. This gives rise to a number of issues, as it requires special treatment of the boundary and possible modifications to bandwidth selection routines, among others. Furthermore, existing kernel-based approaches are restricted to continuous data types only, though there is growing interest in copula estimation with discrete marginals (see e.g. Smith & Khaled (2012) for a Bayesian approach). We demonstrate that using a simple inversion method (cf. Nelsen (2006), Fermanian & Scaillet (2003)) can sidestep boundary issues while admitting mixed data types directly, thereby extending the reach of kernel copula estimators. Bandwidth selection proceeds by the recently proposed method of Li & Racine (2013). Furthermore, there is no curse of dimensionality for the kernel-based copula estimator (though there is for the copula density estimator, as is the case for existing kernel copula density methods). A minimal code sketch of the inversion idea follows this entry.
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:mcm:deptwp:2013-12&r=ecm
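    A minimal bivariate, continuous-data sketch of the inversion idea C(u, v) = F(F1^{-1}(u), F2^{-1}(v)) is given below, using Gaussian-kernel CDF estimates and rule-of-thumb bandwidths. It does not implement the mixed-data kernels or the Li & Racine (2013) bandwidth selector the paper relies on; all function names are hypothetical.

    ```python
    # Inversion-method sketch for a bivariate continuous sample (x, y):
    # estimate the joint and marginal kernel CDFs, invert the marginals,
    # and evaluate the joint CDF at the marginal quantiles.
    import numpy as np
    from scipy.stats import norm

    def kernel_cdf(x, data, h):
        # Smoothed empirical CDF: average of Gaussian CDFs centred at the data.
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return norm.cdf((x[:, None] - data[None, :]) / h).mean(axis=1)

    def kernel_quantile(u, data, h, grid_size=2000):
        # Numerically invert the kernel CDF on a grid (marginal quantile step).
        grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, grid_size)
        return np.interp(np.atleast_1d(u), kernel_cdf(grid, data, h), grid)

    def kernel_copula(u, v, x, y):
        hx = 1.06 * x.std() * len(x) ** (-0.2)  # rule-of-thumb bandwidths
        hy = 1.06 * y.std() * len(y) ** (-0.2)
        qx = kernel_quantile(u, x, hx)
        qy = kernel_quantile(v, y, hy)
        # Joint (product Gaussian) kernel CDF evaluated at the quantiles.
        return (norm.cdf((qx[:, None] - x[None, :]) / hx)
                * norm.cdf((qy[:, None] - y[None, :]) / hy)).mean(axis=1)
    ```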
  4. By: Gabriel Rodriguez (Departamento de Economía - Pontificia Universidad Católica del Perú)
    Abstract: In a recent paper, Fajardo et al. (2009) propose an alternative semiparametric estimator of the fractional parameter in ARFIMA models which is robust to the presence of additive outliers. The results are very interesting; however, they use samples of 300 or 800 observations, which are rarely available in macroeconomics or economics. In order to perform a comparison, I use the procedure for detecting additive outliers based on the Tau-d statistic suggested by Perron and Rodríguez (2003). Further, I use dummy variables associated with the location of the selected outliers to estimate the fractional parameter. I find better results for the mean and bias of this parameter when T = 100, and the results in terms of the standard deviation and the MSE are very similar. However, for larger sample sizes such as 300 or 800, the robust procedure performs better, especially in terms of the standard deviation and MSE measures. Empirical applications to seven Latin American inflation series with very small sample sizes contaminated by additive outliers are discussed. We find that when no correction for additive outliers is performed, the fractional parameter is underestimated. A code sketch of semiparametric estimation of d follows this entry.
    Keywords: Additive Outliers, ARFIMA Errors, semiparametric estimation.
    JEL: C2 C3 C5
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00356&r=ecm
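    To fix ideas about semiparametric estimation of the fractional parameter, the sketch below implements the standard GPH log-periodogram regression. It is neither the outlier-robust estimator of Fajardo et al. (2009) nor the Tau-d-based dummy-variable approach compared in the paper; it only illustrates what a semiparametric estimate of d looks like.

    ```python
    # Standard GPH log-periodogram estimator of d (illustration only).
    import numpy as np

    def gph_estimate(x, power=0.5):
        x = np.asarray(x, dtype=float)
        T = x.size
        m = int(T ** power)                      # number of low frequencies used
        freqs = 2 * np.pi * np.arange(1, m + 1) / T
        # Periodogram at the first m Fourier frequencies.
        dft = np.fft.fft(x - x.mean())[1:m + 1]
        periodogram = np.abs(dft) ** 2 / (2 * np.pi * T)
        # Regress log periodogram on log(4 sin^2(freq / 2)); the slope is -d.
        regressor = np.log(4 * np.sin(freqs / 2) ** 2)
        X = np.column_stack([np.ones(m), regressor])
        coef, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
        return -coef[1]
    ```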
  5. By: Michael Greenacre; Patrick J. F. Groenen
    Abstract: We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, the method leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots. A brief code sketch of the biplot step follows this entry.
    Keywords: biplot, correspondence analysis, distance, majorization, multidimensional scaling, singular-value decomposition, weighted least squares
    JEL: C19 C88
    Date: 2013–07
    URL: http://d.repec.org/n?u=RePEc:upf:upfgen:1380&r=ecm
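    With the variable weights taken as given, the biplot step described above amounts to an SVD of the centred data matrix with columns scaled by the square roots of the weights; a minimal sketch follows. The estimation of the weights themselves, which the paper carries out by a majorization algorithm, is not shown.

    ```python
    # Biplot step for given variable weights w (the weight estimation by
    # majorization described in the paper is not implemented here).
    import numpy as np

    def weighted_biplot_coords(X, w, n_dim=2):
        # Scaling the centred cases-by-variables matrix by sqrt(w) makes
        # ordinary Euclidean distances between its rows equal the weighted
        # Euclidean distances between the original cases.
        Xc = X - X.mean(axis=0)
        Xw = Xc * np.sqrt(w)[None, :]
        U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
        rows = U[:, :n_dim] * s[:n_dim]   # principal coordinates of the cases
        cols = Vt[:n_dim].T               # directions of the weighted variables
        return rows, cols
    ```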
  6. By: Gabriel Rodriguez (Departamento de Economía - Pontificia Universidad Católica del Perú); Dionisio Ramirez (Universidad Castilla La Mancha)
    Abstract: This note analyzes the empirical size of the augmented Dickey-Fuller (ADF) statistic proposed by Perron and Rodríguez (2003) when the errors are fractional. This ADF test is based on a procedure for detecting additive outliers, named Tau-d, that uses first differences of the data. Simulations show that the empirical size of the ADF test is not affected by fractional errors, confirming the claim of Perron and Rodríguez (2003) that the Tau-d procedure is robust to departures from the unit root framework. In particular, the results show low sensitivity of the size of the ADF statistic with respect to the fractional parameter (d). However, as expected, when there is strong negative moving-average or negative autoregressive autocorrelation, the ADF statistic is oversized. These difficulties disappear as the sample size increases (from T = 100 to T = 200). An empirical application to eight quarterly Latin American inflation series is also provided, showing the importance of taking into account dummy variables for the detected additive outliers. A rough code sketch of the detect-then-test workflow follows this entry.
    Keywords: Additive Outliers, ARFIMA Errors, ADF test
    JEL: C2 C3 C5
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00357&r=ecm
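    A rough stand-in for the detect-then-test workflow described above: flag additive outliers from first differences using a robust threshold (a crude proxy for the Tau-d statistic, not the actual Perron-Rodríguez procedure), then run an ADF-type regression that includes an impulse dummy for each flagged date. The threshold and lag length are arbitrary illustration values.

    ```python
    # Crude detect-then-test sketch: robust flagging of outliers from first
    # differences (proxy for Tau-d), followed by an ADF-type regression with
    # impulse dummies at the flagged dates.
    import numpy as np
    import statsmodels.api as sm

    def adf_with_outlier_dummies(x, k_lags=4, threshold=3.0):
        x = np.asarray(x, dtype=float)
        dx = np.diff(x)
        med = np.median(dx)
        scale = 1.4826 * np.median(np.abs(dx - med))
        ao_dates = np.where(np.abs(dx - med) > threshold * scale)[0] + 1

        # ADF regression: dx_t on a constant, x_{t-1}, lagged differences and
        # one impulse dummy per flagged date.
        t_index = np.arange(k_lags + 1, x.size)
        y = dx[t_index - 1]                         # dx_t
        cols = [np.ones(t_index.size), x[t_index - 1]]
        for j in range(1, k_lags + 1):
            cols.append(dx[t_index - 1 - j])        # dx_{t-j}
        for d in ao_dates:
            if k_lags + 1 <= d < x.size:
                cols.append((t_index == d).astype(float))
        res = sm.OLS(y, np.column_stack(cols)).fit()
        return res.tvalues[1], ao_dates             # t-statistic on x_{t-1}
    ```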
  7. By: James J. Heckman (University of Chicago); Rodrigo Pinto (University of Chicago)
    Abstract: This paper presents an econometric mediation analysis. It considers identification of production functions and the sources of output effects (treatment effects) from experimental interventions when some inputs are mismeasured and others are entirely omitted.
    Keywords: Production Function, Mediation Analysis, Measurement Error, Missing Inputs
    JEL: D24 C21 C43 C38
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:hka:wpaper:2013-006&r=ecm
  8. By: Gabriel Rodriguez (Departamento de Economía - Pontificia Universidad Católica del Perú); Dionisio Ramirez (Universidad Castilla La Mancha)
    Abstract: Perron and Rodríguez (2003) claimed that their procedure for detecting additive outliers (Tau-d) is powerful even under departures from the unit root case. In this note, we use Monte Carlo simulations to show that Tau-d is powerful when the errors are ARFIMA(p, d, q). Using simulations, we calculate the expected number of additive outliers found in this context and the number of times that Tau-d identifies the true location of the additive outliers. The results indicate that the power of the Tau-d procedure depends on the size of the additive outliers. When the DGP contains large additive outliers, Tau-d identifies the correct location of the outliers 100.0% of the time. A bare-bones Monte Carlo sketch follows this entry.
    Keywords: Additive Outliers, ARFIMA Errors, Detection of Additive Outliers
    JEL: C2 C3 C5
    Date: 2013
    URL: http://d.repec.org/n?u=RePEc:pcp:pucwps:wp00355&r=ecm
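    The Monte Carlo exercise described in the abstract can be sketched as follows: simulate ARFIMA(0, d, 0) noise, plant an additive outlier of a chosen size, flag outliers from first differences (again only a crude proxy for Tau-d), and record how often the true location is recovered. All tuning values are placeholders.

    ```python
    # Bare-bones Monte Carlo of the detection rate (illustration only).
    import numpy as np

    def arfima0d0(T, d, rng):
        # Truncated MA(infinity) representation of (1 - L)^(-d) white noise.
        psi = np.ones(T)
        for k in range(1, T):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        eps = rng.standard_normal(T)
        return np.array([psi[:t + 1][::-1] @ eps[:t + 1] for t in range(T)])

    def detection_rate(T=100, d=0.3, ao_size=5.0, n_rep=200,
                       threshold=3.0, seed=0):
        rng = np.random.default_rng(seed)
        hits = 0
        for _ in range(n_rep):
            x = arfima0d0(T, d, rng)
            loc = int(rng.integers(10, T - 10))
            x[loc] += ao_size                       # plant one additive outlier
            dx = np.diff(x)
            med = np.median(dx)
            scale = 1.4826 * np.median(np.abs(dx - med))
            flagged = np.where(np.abs(dx - med) > threshold * scale)[0] + 1
            hits += int(loc in flagged)
        return hits / n_rep
    ```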
  9. By: Grech, Aaron George
    Abstract: The Hodrick-Prescott (HP) filter is a commonly used method, particularly in potential output studies. However, its suitability depends on a number of conditions. Very small open economies do not satisfy these, as their macroeconomic series exhibit pronounced trends, large fluctuations and recurrent breaks. Consequently, the use of the filter results in random changes in the output gap that are out of line with the concept of equilibrium. Two suggestions are put forward. The first involves defining the upper and lower bounds of a series and determining equilibrium as a weighted average of the filter applied separately to these bounds. The second involves integrating structural features into the standard filter to allow researchers to set limits on the impact of structural/temporary shocks and to allow for lengthy periods of disequilibria. This paper shows that these methods can result in a smoother output gap series for the smallest Euro Area economies. A short code sketch of the first suggestion follows this entry.
    Keywords: Potential output, output gap, Hodrick-Prescott filter, detrending, business cycles, small open economies
    JEL: B41 C1 E32 F41
    Date: 2013–08
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:48803&r=ecm
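    The first suggestion in the abstract (filter the upper and lower bounds of the series separately and average the two trends) can be sketched as below. The envelope construction (a centred rolling max/min) and the equal weights are placeholders; the paper defines the bounds and the weighting differently.

    ```python
    # Bounds-based HP trend: HP-filter the upper and lower envelopes and
    # average the two trends. Envelope definition and weights are placeholders.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.filters.hp_filter import hpfilter

    def bounded_hp_trend(x, window=8, lamb=1600, weight=0.5):
        s = pd.Series(np.asarray(x, dtype=float))
        upper = s.rolling(window, min_periods=1, center=True).max()
        lower = s.rolling(window, min_periods=1, center=True).min()
        _, trend_upper = hpfilter(upper, lamb=lamb)
        _, trend_lower = hpfilter(lower, lamb=lamb)
        return weight * trend_upper + (1 - weight) * trend_lower
    ```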

This nep-ecm issue is ©2013 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.